
Draft:The functional unit of neural circuits

From Wikipedia, the free encyclopedia


The functional unit of neural circuits is a hierarchical neural structure that represents a specific object, which is typically identified by a particular word. This structure can be distinguished within the human nervous system, with its higher levels located in the cerebral cortex. Within this structure occur the processes of perceiving the image of a given object, memorizing the pattern of this object, and retrieving its representation from memory.

This structure can be stimulated both by the receptors of the image and by neurons located in speech areas, which represent the word or linguistic phrase denoting the given object. Understanding the functional unit of neural circuits is key to comprehending more complex mental operations, such as imagining a desired situation and searching for a solution.

In addition to investigating the structure of this functional unit within the field of neuroscience, one may consider its counterpart composed of artificial model neurons.[1] Familiarity with the concept of the functional unit of neural circuits is important for understanding the current progress of artificial intelligence as well as further advancements in this field, particularly in the pursuit of creating self-aware systems.

Neural substrate for perceptions and imagery[edit]

The visual perceptions and the imagery of visually perceived objects are realized on the basis of the same neural substrates.[2][3][4][5] Joel Pearson and his coworkers put it in these words: "visual mental imagery is a depictive internal representation that functions like a weak form of perception".[3] Nadine Dijkstra and coworkers develop this idea, writing: "For decades, the extent to which visual imagery relies on the same neural mechanisms as visual perception has been a topic of debate".[4] These researchers review recent neuroimaging studies and conclude that "there is a large overlap in neural processing during perception and imagery", that "neural representations of imagined and perceived stimuli are similar in the visual, parietal, and frontal cortex", and that "perception and imagery seem to rely on similar top-down connectivity".[4] Rebecca Keogh and coworkers emphasize that "Visual imagery—the ability to ‘see with the mind’s eye’—is ubiquitous in daily life for many people; however, the strength and vividness with which people are able to imagine varies substantially from one individual to another".[5]

The graphic scheme of the functional unit of neuronal circuits[edit]

Figure 1. The outline of the functional unit of neural circuits, proposed as a description of its biological aspect.

The structures essential for the realization of imagery are illustrated in Figure 1. This scheme presents a theoretical model of neural circuits that realizes perceptions and enables the memorization of images and their recall from memory in the form of imagery. It also explains the difference between the perception of novel and of familiar objects. The simplest example of the recall of a mental image is stimulation coming from the speech area, which stores the verbal denotations (names) of familiar objects.

Cortical neurons have recurrent axons, which activate interneurons going to the lower levels. When excitation reaches the object neurons at subsequent moments, neurons from the lower levels are stimulated secondarily. Thus, there are conditions for the circulation of impulses between neurons of the higher and lower levels of the afferent pathways. It has been experimentally confirmed that such oscillations occur.[6][7][8][9]
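The circulation of impulses between a higher and a lower level can be sketched as a toy discrete-time simulation. The two-unit loop and its gain parameter are illustrative assumptions, not a biophysical model taken from the cited studies.

```python
# Illustrative sketch only: a brief stimulus at the lower level is passed
# up to a higher-level "object" unit and returned along a feedback pathway,
# so the impulse circulates between the two levels while decaying.
def simulate(steps=10, gain=0.9):
    lower, higher = 1.0, 0.0   # the stimulus arrives at the lower level
    trace = []
    for _ in range(steps):
        # Feed-forward and feedback transmission with a loop gain < 1.
        lower, higher = gain * higher, gain * lower
        trace.append((round(lower, 3), round(higher, 3)))
    return trace

trace = simulate()  # activity alternates between the levels and decays
```

With a loop gain below 1 the reverberation decays; the sustained oscillations described above would require continued excitation, which this sketch omits.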

Thus, if the object neurons are excited from the side of the speech area, impulses circulate between neurons of the higher and lower layers of the same hierarchical structure that was excited during the perception of this object. These oscillations are the physical substrate of imagination. During complex mental processes, such as searching for solutions, the working memory structures instantiate the necessary, useful mental images.[10] This maintenance of imaginal activity is accomplished by the cortico-hippocampal indexing loops.

When we repeatedly perceive an image, the synaptic weights of the hierarchical structure that is active during such perceptions are altered, and a learning process takes place as we remember the pattern of the learned object. If a familiar object is perceived, there is secondary activity of the cortico-hippocampal indexing loops; in this way, the structure of the known, recognized object is stimulated from two directions.
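The alteration of synaptic weights under repeated perception can be illustrated with a simple Hebbian-style update. The rule and the learning rate are assumptions made for illustration only, since the text does not specify a learning rule.

```python
# Hebbian-style sketch: a connection weight grows whenever the
# presynaptic and postsynaptic units are active at the same time.
def hebbian_update(w, pre, post, lr=0.1):
    return w + lr * pre * post

w = 0.0
for _ in range(5):   # five repeated perceptions of the same pattern
    w = hebbian_update(w, pre=1.0, post=1.0)
# the weight is strengthened by the repeated co-activation
```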

The functional unit of neuronal circuits has 'backpropagation' connections[edit]

Figure 5. An intuitive illustration of the structure of a multilayer perceptron.

The development of artificial intelligence can be traced back to the 1940s, with significant advances occurring in the subsequent decades. In 1958, Frank Rosenblatt introduced the perceptron, a pioneering model for binary classification tasks.[11] The perceptron was designed as a single-layer artificial neural network with adjustable weights that were updated through an iterative learning process. In the 1970s and 1980s, researchers developed multilayer perceptrons, which were capable of solving non-linear problems. A multilayer perceptron consists of multiple layers of interconnected neurons, with each layer performing a specific transformation on the input data. The backpropagation algorithm, which is fundamental to the training of artificial neural networks, was developed independently by multiple researchers in subsequent years. Some of the key contributors to the development of the backpropagation principle are David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams.[12]
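Rosenblatt's iterative learning process can be sketched in a few lines. The AND-gate training set below is an illustrative assumption, chosen because it is linearly separable.

```python
# Sketch of Rosenblatt's learning rule for a single-layer perceptron:
# the adjustable weights are updated iteratively from prediction errors.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    w = [0.0] * n          # adjustable connection weights
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                 # zero when the prediction is right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the linearly separable AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
```

Because the data are linearly separable, the weight updates eventually stop; a single-layer perceptron cannot learn a non-linear function such as XOR, which motivated the multilayer networks described above.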

The principle of backpropagation is now widely used and is one of the key algorithms of machine learning, in particular of artificial neural networks. It consists of propagating network output errors backwards through the network in order to determine and correct the weights of the connections between neurons.

The learning process of a neural network begins with assigning random values to the weights of the connections between neurons. The network is then trained on the training set, and the output of the network is compared with the expected result. From this comparison, the error value is determined, i.e. the difference between the result obtained by the network and the expected result.

The principle of backpropagation then consists of propagating the error values of the network output backwards through each layer of the network, from the output to the input, in order to determine and correct the weights of the connections between neurons. The connection weights are modified on the basis of the error value in such a way that the next training iteration comes closer to the expected result. This process is repeated many times until satisfactory accuracy is achieved. Thanks to the principle of backpropagation, neural networks are able to learn from data samples, and the resulting modifications of connection weights allow increasingly accurate predictions for new data.
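The steps above can be sketched as a minimal two-layer network trained by backpropagation. The 2-2-1 architecture, the sigmoid activation, and the XOR training set are illustrative assumptions rather than details taken from the cited works.

```python
import math
import random

# Toy sketch of the training loop described above: random initial weights,
# a forward pass, comparison with the expected result, and backward
# propagation of the error to correct the connection weights.
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training set: XOR, a non-linear problem (an illustrative choice).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Step 1: assign random values to the connection weights.
w_hid = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hid = [0.0, 0.0]
w_out = [random.uniform(-1, 1) for _ in range(2)]
b_out = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w_hid, b_hid)]
    o = sigmoid(sum(w * hi for w, hi in zip(w_out, h)) + b_out)
    return h, o

def mean_squared_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA) / len(DATA)

def train_epoch(lr=0.5):
    global b_out
    for x, y in DATA:
        h, o = forward(x)
        # Step 2: compare the output with the expected result.
        delta_o = (o - y) * o * (1 - o)               # output error signal
        # Step 3: propagate the error back through the hidden layer.
        delta_h = [delta_o * w * hi * (1 - hi) for w, hi in zip(w_out, h)]
        # Step 4: correct the connection weights using the error values.
        for j in range(2):
            w_out[j] -= lr * delta_o * h[j]
            b_hid[j] -= lr * delta_h[j]
            for i in range(2):
                w_hid[j][i] -= lr * delta_h[j] * x[i]
        b_out -= lr * delta_o

loss_before = mean_squared_error()
for _ in range(2000):
    train_epoch()
loss_after = mean_squared_error()
```

Each epoch performs the forward pass, the error comparison, the backward propagation of the error signal, and the weight correction described above; repeated iterations drive the error down.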

This brief description of the functioning of backward connections, typical of the considerations of the creators of artificial intelligence systems, does not, however, take into account the significance, emphasized above, of resonant oscillations during the realization of mental imagery. The maintenance of active mental images by the working memory is essential, for instance, in the search for problem-solving strategies. Oscillations, i.e. the circulation of impulses along the backward pathways indicated in Figure 1, are crucial for considering the importance of the brain's electromagnetic field.

The feedback connections in natural human neural circuits are probably important for the emergence of self-awareness[edit]

It is likely that the feedback connections in natural human neural circuits serve an additional function. During the realization of imagination, there is a recurring circulation of impulses, marked in Figure 1 with a dashed line. These recurring excitations evoke a magnetic field. It has been demonstrated that an endogenous magnetic field is created within the brain; it is recorded by routine magnetoencephalography.[13][14] Many neuroscientists are convinced that the existence of this endogenous magnetic field is essential for the emergence of consciousness.[15][16][17][18][19] These authors assume that the basic process evoking this phenomenon consists not only of actions occurring over time but also of the appearance of a certain spatial wholeness.[16][17] Johnjoe McFadden points out that when looking for a physical medium supporting something that has the feature of a spatial structure, one must appeal to physical fields.[16][17] He remarks that, taking into account the pathways running backwards to the lower levels, the propagation of neuronal potential changes (firing, action potentials) induces the formation of magnetic fields which overlap and combine to generate “the brain’s global EM field”.[16] He remarks that the human brain can be conceived as an assembly of “around 100 billion EMF transmitters”.[16] The author emphasizes, though, that consciousness occurs when there is massive synchronization of neuronal activity and when repetitive oscillations in neuronal circuits occur. He emphasizes that conscious neuronal processing should be associated with “re-entrant circuits, essentially closed loops of neuronal activity whereby neuronal outputs are fed back into input neurons”.[16]

The authors of groundbreaking works on contemporary, highly advanced artificial intelligence systems, known as "Deep Learning", "Reinforcement Learning", "Transfer Learning", "Generative Adversarial Networks", "Convolutional Neural Networks", and "Recurrent Neural Networks", emphasize that their operation is based on backpropagation algorithms.[20][21] However, it is worth noting that the diagrams illustrating their structure generally do not show these connections. One of the few exceptions is the diagram included in the paper of Mary Webb et al.[22]

The presented theoretical model of neural circuits, which realizes perceptions and enables the memorization of images and their recall from memory, facilitates the mental integration and understanding of the possible role of these circuits, and of the magnetic field they evoke, in the emergence of consciousness.[23]

Moving towards conscious artificial intelligence systems[edit]

Some researchers posit that consciousness may spontaneously arise in artificial intelligence systems as a consequence of increasing complexity.[24][25] Nonetheless, it appears that the crux of self-awareness lies in the capacity to generate a self-image.[23] Creating the "image of oneself" requires the existence of recursive feedback loops in multiple strata of the hierarchical structure.[23][26] Consequently, comprehensive knowledge of the structural organization and functional constituents of neuronal circuits is a critical prerequisite for considering artificial intelligence systems that could possess self-awareness. It also facilitates discourse on whether the phenomenon of consciousness will spontaneously emerge in artificial intelligence systems.[1][27][28]

See also[edit]

References[edit]

  1. ^ a b Brodziak, Andrzej; Romaniuk, Filip (2023). "The functional unit of neural circuits and its relations to eventual sentience of artificial intelligence systems". Qeios. doi:10.32388/82VRPG.
  2. ^ Brodziak, A (2001). "Neurophysiology of the mental image". Med. Sci. Monit. 7 (3): 534–538. PMID 11386038.
  3. ^ a b Pearson, J; et al. (2015). "Mental Imagery: Functional Mechanisms and Clinical Applications". Trends Cogn. Sci. 19 (10): 590–602. doi:10.1016/j.tics.2015.08.003. PMC 4595480. PMID 26412097.
  4. ^ a b Dijkstra, N; et al. (2019). "Shared Neural Mechanisms of Visual Perception and Imagery". Trends Cogn. Sci. 23 (5): 423–434. doi:10.1016/j.tics.2019.02.004. PMID 30876729. S2CID 75140112.
  5. ^ a b c Keogh, R; et al. (2020). "Cortical excitability controls the strength of mental imagery". eLife. 9: e50232. doi:10.7554/eLife.50232. PMC 7200162. PMID 32369016. S2CID 88893772.
  6. ^ Gilbert, CD; et al. (2007). "Brain states: Top-down influences in sensory processing". Neuron. 54 (5): 677–696. doi:10.1016/j.neuron.2007.05.019. PMID 17553419. S2CID 7662993.
  7. ^ Bressler, SL; et al. (2015). "Interareal oscillatory synchronization in top-down neocortical processing". Curr. Opin. Neurobiol. 31: 62–66. doi:10.1016/j.conb.2014.08.010. PMID 25217807. S2CID 35672746.
  8. ^ Kajal, DS; et al. (2020). "Involvement of top-down networks in the perception of facial emotions: A magnetoencephalographic investigation". NeuroImage. 222: 117075. doi:10.1016/j.neuroimage.2020.117075. PMID 32585348. S2CID 219958332.
  9. ^ Kirchberger, L; et al. (2021). "The essential role of recurrent processing for figure-ground perception in mice". Sci Adv. 7 (27): eabe1833. Bibcode:2021SciA....7.1833K. doi:10.1126/sciadv.abe1833. PMC 8245045. PMID 34193411.
  10. ^ Brodziak, A; et al. (2013). "Clinical significance of knowledge about the structure, function, and impairments of working memory". Med. Sci. Monit. 19: 327–338. doi:10.12659/MSM.883900. PMC 3659070. PMID 23645218.
  11. ^ Rosenblatt, F (1958). "The perceptron: a probabilistic model for information storage and organization in the brain". Psychol Rev. 65 (6): 386–408. doi:10.1037/h0042519. PMID 13602029. S2CID 12781225.
  12. ^ Rumelhart, DE; Hinton, GE; Williams, RJ (1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0. S2CID 205001834.
  13. ^ Proudfoot, M; et al. (2014). "Magnetoencephalography". Pract Neurol. 14 (5): 336–43. doi:10.1136/practneurol-2013-000768. PMC 4174130. PMID 24647614.
  14. ^ Gross, J (2019). "Magnetoencephalography in Cognitive Neuroscience: A Primer". Neuron. 104 (2): 189–204. doi:10.1016/j.neuron.2019.07.001. PMID 31647893. S2CID 204837789.
  15. ^ Hales, CG (2014). "The origins of the brain's endogenous electromagnetic field and its relationship to provision of consciousness". J Integr Neurosci. 13 (2): 313–61. doi:10.1142/S0219635214400056. PMID 25012714.
  16. ^ a b c d e f McFadden, J (2020). "Integrating information in the brain's EM field: The cemi field theory of consciousness". Neurosci Conscious. 2020 (1): niaa016. doi:10.1093/nc/niaa016. PMC 7507405. PMID 32995043.
  17. ^ a b c McFadden, J (2023). "Consciousness: Matter or EMF?". Front Hum Neurosci. 16: 1024934. doi:10.3389/fnhum.2022.1024934. PMC 9889563. PMID 36741784.
  18. ^ Hales, CG; Ericson, M (2022). "Electromagnetism's Bridge Across the Explanatory Gap: How a Neuroscience/Physics Collaboration Delivers Explanation Into All Theories of Consciousness". Front Hum Neurosci. 16 (836046): 836046. doi:10.3389/fnhum.2022.836046. PMC 9245352. PMID 35782039.
  19. ^ MacIver, MB (2022). "Consciousness and inward electromagnetic field interactions". Front Hum Neurosci. 16 (1032339). doi:10.3389/fnhum.2022.1032339. PMC 9714613. PMID 36466618.
  20. ^ LeCun, Y; et al. (2015). "Deep learning". Nature. 521 (7553): 436–44. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442. S2CID 3074096.
  21. ^ Heaton, J (2018). "Deep learning". Genet Program Evolvable Mach. 19: 305–307. doi:10.1007/s10710-017-9314-z. S2CID 4300434.
  22. ^ Webb, M; et al. (2021). "Machine learning for human learners: opportunities, issues, tensions and threats". Educational Technology Research and Development. 69 (2): 2109–2130. doi:10.1007/s11423-020-09858-2. S2CID 228850326.
  23. ^ a b c Rożyk-Myrta, A; Brodziak, A; Muc-Wierzgoń, M (2021). "Neural circuits, microtubule processing, brain's electromagnetic field - components of self-awareness". Brain Sciences. 11 (8): 984. doi:10.3390/brainsci11080984. PMC 8393322. PMID 34439603. This article incorporates text from this source, which is available under the CC BY 4.0 license.
  24. ^ Tononi, G; Koch, C (2015). "Consciousness: here, there and everywhere?". Philos Trans R Soc Lond B Biol Sci. 370 (1668): 20140167. doi:10.1098/rstb.2014.0167. PMC 4387509. PMID 25823865.
  25. ^ Dehaene, S; Changeux, JP (2011). "Experimental and theoretical approaches to conscious processing". Neuron. 70 (2): 200–27. doi:10.1016/j.neuron.2011.03.018. PMID 21521609. S2CID 14535322.
  26. ^ Ruan, Z (2023). "The fundamental challenge of a future theory of consciousness". Front Psychol. 13 (1029105). doi:10.3389/fpsyg.2022.1029105. PMC 9878380. PMID 36710768.
  27. ^ Hildt (2019). "Artificial Intelligence: Does Consciousness Matter?". Front Psychol. 10 (1535): 1535. doi:10.3389/fpsyg.2019.01535. PMC 6614488. PMID 31312167.
  28. ^ Pepperell, R (2022). "Does Machine Understanding Require Consciousness?". Front Syst Neurosci. 16 (788486): 788486. doi:10.3389/fnsys.2022.788486. PMC 9159796. PMID 35664685.

External links[edit]

[1] Galus W, Starzyk J (2021). Reductive Model of the Conscious Mind. IGI Global Publisher of Timely Knowledge.

[2] Imaginacion al poder (2004) - One of the early citations on the importance of recurrent axons