User:YozoraNoKen/sandbox/Neuromimetic Intelligence

From Wikipedia, the free encyclopedia

Neuromimetic intelligence is a specialized branch of artificial intelligence (AI) focused on replicating and simulating the functioning of the human nervous system and brain. Its primary goal is to build AI systems that perceive, learn, and make decisions in ways that mimic human cognition.

Distinct from computational neuroscience, which primarily seeks to understand brain function through mathematical and computational models, neuromimetic intelligence applies those insights to create AI systems capable of human-like cognition.[1]

Neuromimetic intelligence research extends into areas such as neuromorphic computing, in which specialized hardware is designed to emulate the brain's neural architecture. Neuromorphic chips and systems aim to accelerate the computation required by neuromimetic AI, promising faster and more energy-efficient simulations of human-like cognitive processes.[2]

Neuromimetic intelligence occupies a significant position in the broader landscape of AI and neuroscience. This article surveys its applications, challenges, and ongoing research efforts, including its potential to bridge the gap between artificial intelligence and human cognition.

History

1940s-1950s - Early Neural Network Models

The roots of Neuromimetic Intelligence can be traced back to the 1940s and 1950s when researchers like Warren McCulloch and Walter Pitts developed the first mathematical models of artificial neurons. They laid the foundation for artificial neural networks (ANNs) by demonstrating that simple computational elements could be connected in networks to perform complex tasks.[3]

1960s-1970s - Perceptrons and Early AI

In 1958, Frank Rosenblatt introduced the perceptron, an early form of neural network capable of supervised learning. This sparked enthusiasm for using ANNs in artificial intelligence (AI) research through the 1960s. However, the limitations of single-layer perceptron architectures, highlighted by Minsky and Papert, together with a lack of computational power, hindered progress.[4][5]

1980s-1990s - Neural Network Resurgence

The field of neural networks experienced a resurgence in the 1980s and 1990s. Researchers developed new learning algorithms, including backpropagation, which allowed for more efficient training of multi-layered ANNs. This period saw ANNs applied to various tasks, including image and speech recognition.

1990s-2000s - Connectionism and Cognitive Science

The study of neural networks became closely linked with connectionism, a cognitive science framework that emphasized the importance of neural network models for understanding human cognition. Researchers explored how neural networks could simulate aspects of human intelligence.[6]

2000s-Present - Deep Learning and Neuromorphic Hardware

The 2000s marked a significant turning point with the development of deep learning techniques. Deep neural networks (DNNs) with many layers demonstrated remarkable capabilities in image, speech, and natural language processing. This led to breakthroughs in AI applications and the resurgence of interest in Neuromimetic Intelligence.[7]

Neuromorphic hardware, designed to emulate the parallelism and efficiency of the brain, began to gain attention. Projects like SpiNNaker and IBM's TrueNorth explored hardware solutions that could accelerate neuromorphic computing and enable real-time, energy-efficient AI processing.[8]

Recent Advances

In recent years, Neuromimetic Intelligence has continued to evolve rapidly. Researchers have developed spiking neural networks (SNNs), which more closely model the temporal dynamics of biological neurons. These SNNs are used in neuromorphic hardware and event-driven AI systems.[9]

The field has expanded to include applications in robotics, brain-computer interfaces (BCIs), cognitive architectures, and neuroinformatics, showcasing its interdisciplinary nature.

Overall, neuromimetic intelligence has developed through successive periods of exploration, innovation, and breakthrough. It represents a convergence of neuroscience, computer science, and cognitive science, with the goal of achieving AI systems that emulate the information processing capabilities of the human brain.

Biological Inspiration

Neural Networks

Biological neural networks serve as the foundational inspiration for the development of neuromimetic intelligence. These networks are the fundamental building blocks of the human nervous system and brain, and they play a pivotal role in how information is processed and transmitted within our bodies.

Structure of Neurons

Neurons, also known as nerve cells, are the basic functional units of the nervous system. They are highly specialized cells designed for information processing. Neurons consist of three main components:[10]

  • Dendrites: Branched extensions that reach out to receive signals from other neurons or sensory receptors. They serve as input points for incoming information.
  • Cell Body (Soma): The cell body integrates incoming signals from dendrites and makes crucial decisions about whether to generate an output signal.
  • Axon: A long, slender projection that serves as the output channel for neurons. When an action potential is generated, it travels along the axon to transmit information to other neurons or cells.

Synaptic Communication

Neurons communicate with each other at specialized junctions known as synapses. The process of communication involves a combination of electrical and chemical signaling:[11]

  • When an action potential reaches the end of an axon, it triggers the release of neurotransmitters, which are chemical messengers.
  • These neurotransmitters traverse the synaptic gap and bind to receptors on the dendrites or cell body of the receiving neuron.
  • This binding initiates a new action potential in the receiving neuron if the signal is sufficiently strong.

Information Processing

Information processing in biological neural networks relies on the strength of synapses, influenced by the frequency and intensity of signals. Neurons have the remarkable ability to integrate signals from multiple sources, allowing for complex decision-making based on this integration.

Learning and Adaptation

Synaptic plasticity is a key feature of biological neural networks. It enables the network to adapt and learn from new information. Repeated communication between neurons can lead to the strengthening (long-term potentiation) or weakening (long-term depression) of synapses, depending on the timing and intensity of signals.[12]
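This kind of activity-dependent strengthening and weakening can be illustrated with a toy model in Python. The update rule, parameter values, and function name below are deliberately simplified illustrations (a Hebbian-style rule), not a model from the cited literature:

```python
# Toy model of activity-dependent synaptic plasticity.
# A Hebbian-style rule: coincident pre- and postsynaptic activity
# strengthens the synapse (potentiation-like), presynaptic activity
# without a postsynaptic response weakens it (depression-like).

def update_weight(w, pre, post, rate=0.1):
    """Return the new synaptic weight after one communication event."""
    if pre and post:
        return w + rate * (1.0 - w)   # potentiation, bounded above by 1
    elif pre and not post:
        return w - rate * w           # depression, bounded below by 0
    return w                          # no presynaptic activity: unchanged

w = 0.5
for _ in range(10):                   # repeated coincident firing
    w = update_weight(w, pre=True, post=True)
# w approaches 1.0 with repeated coincident activity
```

Real plasticity rules (such as spike-timing-dependent plasticity) additionally depend on the precise timing between pre- and postsynaptic spikes, which this sketch omits.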

Comparisons to Artificial Neural Networks (ANNs)

Artificial neural networks (ANNs) in neuromimetic intelligence draw inspiration from the structure and functioning of biological neural networks. ANNs use artificial neurons and synapses to simulate information processing and learning, mirroring the mechanisms found in their biological counterparts.

Artificial Neural Networks

One key aspect of neuromimetic intelligence is the development of artificial neural networks (ANNs). ANNs are computational models inspired by the structure and functioning of biological neural networks. These networks consist of interconnected nodes (neurons) organized into layers, each layer playing a specific role in information processing and decision-making.[13][14]

Neurons in Artificial Neural Networks (Artificial Neurons)

In ANNs, the basic computational units are artificial neurons or nodes, which are analogous to biological neurons. Each artificial neuron receives one or more inputs, performs a mathematical operation on them, and produces an output. This operation often involves a weighted sum of the inputs followed by the application of an activation function.
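A single artificial neuron can be sketched in a few lines of Python. The input values, weights, and bias below are arbitrary, and the sigmoid is just one common choice of activation function:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes z into (0, 1)

out = artificial_neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
```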

Connections (Synapses) in ANNs

The connections between artificial neurons in ANNs are akin to synapses in biological neural networks. Each connection is assigned a numerical value known as a "weight", which quantifies the strength or influence of the connection: how much one neuron's output affects another neuron's input. The output of the sending neuron is multiplied by the connection's weight before being passed to the receiving neuron. Adjusting these weights during training is fundamental to how ANNs learn and adapt.

Layers in ANNs

ANNs are typically structured into layers, each with a specific function:

  • Input Layer: The input layer receives the initial data or features for processing. Each neuron in this layer corresponds to an input feature.
  • Hidden Layers: These intermediate layers, if present, perform complex transformations on the input data. Hidden layers are where most of the computation occurs and are responsible for the network's ability to learn complex patterns.
  • Output Layer: The output layer produces the final result of the network's computation, often in a format suitable for the specific task (e.g., class probabilities in classification tasks).

Weighted Sum and Activation Function

Each artificial neuron calculates a weighted sum of its inputs. This sum is computed as the dot product of the input values and their corresponding weights, followed by the addition of a bias term. The result of this weighted sum is then passed through an activation function, which introduces non-linearity into the model. Common activation functions include the sigmoid, ReLU (Rectified Linear Unit), and tanh (hyperbolic tangent).[15]
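These operations can be written out directly in Python. The input values, weights, and bias below are arbitrary illustrations; which activation to use is a per-layer design choice:

```python
import math

# Three common activation functions, each mapping the weighted sum z
# to a neuron's output.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))   # squashes z into (0, 1)

def relu(z):
    return max(0.0, z)                  # passes positives, zeroes negatives

def tanh(z):
    return math.tanh(z)                 # squashes z into (-1, 1)

# Weighted sum (dot product of inputs and weights) plus a bias term,
# followed by an activation:
inputs, weights, bias = [2.0, -1.0], [0.5, 0.25], 0.1
z = sum(x * w for x, w in zip(inputs, weights)) + bias
output = relu(z)
```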

Forward Propagation

The process of computing the output of an ANN given an input is known as forward propagation. It starts at the input layer, where input values are passed through connections to neurons in the hidden layers and finally to the output layer. At each neuron, a weighted sum is computed, followed by the application of the corresponding activation function. This process continues layer by layer until the final output is obtained.
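The layer-by-layer process above can be sketched as follows. This assumes sigmoid activations throughout and an arbitrary 2-3-1 architecture with made-up weights, purely for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    """One dense layer: per neuron, a weighted sum plus bias, then sigmoid."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Forward propagation: pass the input through each layer in turn."""
    for weights, biases in layers:
        x = layer_forward(x, weights, biases)
    return x

# A 2-3-1 network: one hidden layer of 3 neurons, one output neuron.
layers = [
    ([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]], [0.0, 0.1, -0.1]),  # hidden
    ([[0.5, -0.3, 0.2]], [0.05]),                                 # output
]
y = forward([1.0, 0.5], layers)
```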

Learning and Training

ANNs are commonly trained using supervised learning. During training, the network is presented with a dataset containing input-output pairs (training examples). The network's predictions are compared to the target outputs, and an error (or loss) is computed to quantify the difference. Optimization algorithms such as gradient descent then adjust the weights and biases of connections to minimize this error. Training iterates over the dataset multiple times (epochs) until the model converges to a state where it makes accurate predictions on unseen data.
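This training loop can be illustrated with the simplest possible case: a single linear neuron fitted to toy data by stochastic gradient descent. The data, learning rate, and epoch count are arbitrary illustrations:

```python
# A single linear neuron (y = w*x + b) learns y = 2x from examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, target) pairs
w, b, lr = 0.0, 0.0, 0.05                     # initial weights, learning rate

for epoch in range(500):                       # repeated passes (epochs)
    for x, target in data:
        pred = w * x + b                       # forward pass
        err = pred - target                    # prediction error
        w -= lr * err * x                      # gradient step on the weight
        b -= lr * err                          # gradient step on the bias
# After training, w is close to 2 and b close to 0.
```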

Backpropagation

Backpropagation is a fundamental algorithm for training ANNs. It computes the gradient of the loss function with respect to the network's weights. This gradient information is used to update the weights in a way that minimizes the loss, effectively teaching the network to make better predictions. Backpropagation involves propagating the error backward through the network, adjusting the weights in each layer.
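A worked sketch of one backpropagation step for a small 2-2-1 sigmoid network with squared-error loss follows. The weights and learning rate are arbitrary, and real implementations vectorize these loops:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Forward pass of a 2-2-1 sigmoid network, keeping the hidden
    activations so gradients can be propagated backward."""
    h = [sigmoid(sum(xi * w for xi, w in zip(x, row)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(hi * w for hi, w in zip(h, W2)) + b2)
    return h, y

def backprop_step(x, target, W1, b1, W2, b2, lr=0.5):
    """One gradient-descent update via backpropagation on loss (y-t)^2/2."""
    h, y = forward(x, W1, b1, W2, b2)
    # Output-layer error term (sigmoid derivative is y*(1-y)).
    delta2 = (y - target) * y * (1.0 - y)
    # Hidden-layer error terms, propagated backward through W2.
    delta1 = [delta2 * w * hi * (1.0 - hi) for w, hi in zip(W2, h)]
    # Update output-layer weights, then hidden-layer weights.
    W2[:] = [w - lr * delta2 * hi for w, hi in zip(W2, h)]
    b2 -= lr * delta2
    for j, dj in enumerate(delta1):
        W1[j] = [w - lr * dj * xi for w, xi in zip(W1[j], x)]
        b1[j] -= lr * dj
    return b2

W1 = [[0.1, -0.2], [0.3, 0.4]]
b1 = [0.0, 0.0]
W2 = [0.5, -0.5]
b2 = 0.0
x, target = [1.0, 0.0], 1.0
_, y_before = forward(x, W1, b1, W2, b2)
b2 = backprop_step(x, target, W1, b1, W2, b2)
_, y_after = forward(x, W1, b1, W2, b2)
# After one step the prediction moves closer to the target.
```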

Types of ANNs

ANNs come in various forms, including:[16]

  • Feedforward Neural Networks (FNNs): Information flows in one direction, from input to output, without loops or feedback.
  • Recurrent Neural Networks (RNNs): These networks have connections that loop back on themselves, allowing them to handle sequences and time-dependent data.
  • Convolutional Neural Networks (CNNs): CNNs are specialized for processing grid-like data, such as images, using convolutional layers to capture local patterns.
  • Spiking Neural Networks (SNNs): Modeled after the firing of spikes (action potentials) in biological neurons, SNNs are used in neuromorphic computing and spiking neuron research.
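The spiking behavior that distinguishes SNNs from the other types can be illustrated with a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models. All parameter values below are illustrative:

```python
# A leaky integrate-and-fire (LIF) neuron: membrane potential leaks
# toward rest, integrates injected current, and emits a spike
# (then resets) whenever it crosses a threshold.

def simulate_lif(input_current, steps, dt=1.0, tau=10.0,
                 v_rest=0.0, v_threshold=1.0):
    """Simulate the neuron for `steps` time steps via Euler integration
    and return the list of time steps at which it spiked."""
    v = v_rest
    spikes = []
    for t in range(steps):
        v += dt * ((v_rest - v) / tau + input_current)  # leak + input
        if v >= v_threshold:
            spikes.append(t)   # record the spike time
            v = v_rest         # reset after firing
    return spikes

# A constant supra-threshold input produces regularly spaced spikes.
spike_times = simulate_lif(input_current=0.15, steps=100)
```

Unlike the continuous-valued neurons above, information here is carried by the timing of discrete events, which is what makes SNNs a natural fit for event-driven neuromorphic hardware.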

Applications of ANNs

ANNs have found applications in diverse fields, including:

  • Image and speech recognition
  • Natural language processing
  • Autonomous vehicles
  • Recommendation systems
  • Drug discovery and healthcare
  • Financial forecasting
  • Game playing (e.g., AlphaGo)

Neuromorphic Computing

Neuromorphic computing serves as a crucial foundation within the domain of neuromimetic intelligence, providing the indispensable hardware infrastructure necessary for the efficient implementation of neuromimetic AI algorithms. This burgeoning field represents a convergence of principles from traditional computing and insights drawn from the workings of the human brain.[2]

Key Components of Neuromorphic Hardware

Neuromorphic hardware platforms are characterized by specialized neuromorphic chips and hardware architectures designed to replicate the neural structure and functioning of the human brain. Event-based computing, which mimics the spiking activity of neurons, is a fundamental feature of such platforms.

Efficiency and Energy Considerations

One of the primary advantages of neuromorphic computing lies in its remarkable efficiency and low energy consumption. Unlike traditional computing architectures, neuromorphic hardware is tailored to operate with minimal power requirements. This energy-efficient design makes it particularly well-suited for applications demanding real-time processing and autonomy, such as robotics and edge computing.

Parallelism and Real-time Processing

Neuromorphic computing harnesses the principles of parallelism, enabling it to execute computations in a highly parallel, real-time fashion.[17] This inherent parallelism is essential for tasks that necessitate rapid decision-making and sensory processing, reflecting the capabilities of the human brain.

Applications

Neuromorphic computing finds applications across a diverse range of fields. In the context of neuromimetic intelligence, researchers leverage neuromorphic hardware platforms to construct and evaluate AI models that replicate human-like cognition. These applications extend to domains such as autonomous robotics, medical diagnostics, and natural language understanding.

Comparison with Traditional Computing

In contrast to traditional von Neumann computing architectures, neuromorphic computing stands out for its fundamentally different approach to information processing. While traditional computers rely on sequential processing, neuromorphic hardware emulates the massively parallel and event-driven nature of the human brain.

Applications

Applications of neuromimetic intelligence span various domains. In healthcare, it can be used for advanced medical image analysis, diagnosis, and drug discovery. In robotics, neuromimetic AI can enhance the capabilities of autonomous systems by enabling them to perceive and interact with their environment in a more human-like manner. Additionally, it has significant potential in natural language processing, where understanding context, semantics, and subtleties is crucial for meaningful interactions.[18]

Brain-Computer Interfaces (BCIs)

BCIs are an important application area where neuromorphic computing can be beneficial. BCIs allow direct communication between the brain and external devices. Neuromorphic hardware can be integrated into BCIs to process and interpret neural signals in real-time, enabling applications such as assistive technology for individuals with disabilities, neuroprosthetics, and cognitive enhancements.[19]

Neuroinformatics

Neuromimetic intelligence plays a crucial role in neuroinformatics by providing advanced AI techniques and models that help make sense of the vast and complex datasets generated in neuroscience research. By mimicking the brain's information processing mechanisms, neuromimetic AI contributes to our understanding of the nervous system, enhances data analysis capabilities, and supports discoveries that have the potential to advance our knowledge of brain function and neurological disorders.

Research and Ongoing Developments

IBM TrueNorth and SpiNNaker stand as notable examples of neuromorphic computing platforms that have significantly advanced the field of neuromimetic intelligence.

IBM TrueNorth

IBM's TrueNorth project is a neuromorphic hardware platform engineered to emulate the structure and functions of the human brain. It has been used in various research applications, including image recognition and sensory processing.[20]

Key Achievements

TrueNorth has achieved notable milestones, including integrating one million programmable neurons on a chip consuming roughly 65 mW, allowing it to simulate neural networks with marked energy efficiency.[20]

Applications and Use Cases

TrueNorth has found practical utility in diverse applications, such as image analysis and object recognition, where its brain-inspired architecture has allowed it to excel in tasks that demand rapid and efficient sensory processing.

Collaborations and Partnerships

IBM TrueNorth's success has often been the result of collaborations with leading research institutions, fostering a collaborative environment in the field of neuromimetic intelligence.

SpiNNaker

SpiNNaker, or Spiking Neural Network Architecture, is another noteworthy neuromorphic computing platform developed by researchers in the UK. This platform is designed with a focus on modeling large-scale spiking neural networks and executing brain-like computations in real-time.[21]

Key Achievements

SpiNNaker has demonstrated exceptional capabilities in simulating large-scale neural networks, opening the door to the study of complex brain-like computations. Its efficient, event-driven architecture has enabled real-time neural simulations.

Applications and Use Cases

SpiNNaker has been applied to various applications, including neuroscientific research and robotics, where its ability to process neural information in real-time has proven invaluable.

Impact on the Field

Both IBM TrueNorth and SpiNNaker have significantly impacted the field of neuromimetic intelligence. Their success has not only broadened our understanding of the brain but has also paved the way for the development of new neuromorphic technologies and methodologies.

Challenges and Limitations

While neuromimetic intelligence holds great promise in replicating human-like cognition and decision-making, it also faces several challenges and limitations that researchers and developers must address:[22][23][24][25]

Complexity of Biological Systems

The human brain is an incredibly complex and intricate organ with billions of neurons and trillions of synapses. Emulating its functionality in artificial systems is a monumental task. Capturing the full complexity of biological neural networks remains a significant challenge.

Scalability

Scaling neuromimetic intelligence to match the capabilities of the human brain is a daunting endeavor. Current hardware and software limitations make it difficult to create AI systems that can handle the vast amounts of data and computations required for human-level cognition.

Data Requirements

Training neuromimetic AI models often demands large datasets, sometimes beyond what is readily available. Gathering and annotating extensive datasets for training can be resource-intensive and time-consuming.

Hardware Constraints

Building neuromorphic hardware that can effectively replicate the efficiency and parallelism of biological neural networks remains a technological challenge. While progress has been made, hardware limitations, such as power consumption and processing speed, must be addressed to achieve practical applications.

Interpretability and Explainability

As neuromimetic AI models become more complex, they often become less interpretable and challenging to explain. Understanding the decision-making processes of these models is crucial, especially in applications like healthcare and autonomous systems.

Ethical and Privacy Concerns

As AI systems become more advanced, ethical considerations and concerns about privacy become increasingly important. Ensuring that neuromimetic AI systems make ethical decisions and protect sensitive information is a complex challenge.

Generalization and Transfer Learning

Achieving the ability to generalize knowledge from one domain to another, as humans do, remains an ongoing challenge in neuromimetic intelligence. Improving transfer learning capabilities is essential for creating versatile AI systems.

Computational Resources

Developing neuromimetic AI models often requires significant computational resources, making it inaccessible to many researchers and organizations. Access to high-performance computing infrastructure is crucial for advancing the field.

Integration with Traditional AI

Neuromimetic intelligence must be seamlessly integrated with traditional AI and machine learning approaches to maximize its potential. Finding effective ways to combine the strengths of different AI paradigms is a complex problem.

Long-Term Learning and Adaptation

Achieving lifelong learning and adaptation, similar to human learning over a lifetime, is a challenging problem. AI systems that can continually learn and adapt to new information are still in the early stages of development.

Neuroethics and Governance

Addressing the ethical implications of neuromimetic intelligence, such as questions about consciousness, rights, and responsibilities, requires careful consideration and governance frameworks to ensure responsible development and use.

Notable Researchers and Organizations

  1. IBM Research: IBM has been a trailblazer in neuromimetic intelligence through its groundbreaking TrueNorth project. The TrueNorth neuromorphic hardware platform, designed to emulate the structure and function of the human brain, has been instrumental in advancing the capabilities of neuromorphic computing.
  2. SpiNNaker Project: The SpiNNaker (Spiking Neural Network Architecture) project, led by researchers in the UK, is another pioneering effort in the field of neuromimetic intelligence. SpiNNaker is designed to model large-scale spiking neural networks in real-time, enabling brain-like computations and applications.
  3. HRL Laboratories: HRL Laboratories, a research institution specializing in applied research, has been actively involved in neuromorphic computing and AI research. Their work includes the development of novel neuromorphic hardware architectures and algorithms.
  4. Stanford NeuroAI: Researchers at Stanford University's NeuroAI Lab are at the forefront of neuromimetic intelligence research. They explore the intersection of neuroscience and AI, aiming to replicate human cognitive abilities in AI systems.
  5. ETH Zurich's Institute of Neuroinformatics: ETH Zurich's Institute of Neuroinformatics is a leader in the development of neuromorphic hardware and computational neuroscience. Their work focuses on creating hardware that mimics the brain's structure and function, fostering advancements in neuromimetic intelligence.
  6. University of Manchester's SpiNNaker Team: Researchers at the University of Manchester have been instrumental in the development of the SpiNNaker neuromorphic computing platform. Their work has opened new possibilities for real-time, brain-inspired computing.
  7. Neuromorphic Engineering and Robotics Group (NERG): NERG, based at the University of Zurich, conducts research in neuromorphic engineering and robotics. They explore the applications of neuromorphic intelligence in robotics and perception systems.
  8. Caltech's Computation and Neural Systems Program: The Computation and Neural Systems Program at the California Institute of Technology conducts cutting-edge research at the intersection of neuroscience and AI, contributing to the development of neuromimetic intelligence.
  9. IBM Research - Almaden: IBM's Almaden Research Center in California is involved in neuromimetic intelligence research, focusing on the creation of innovative hardware and software solutions for AI and neuromorphic computing.
  10. University of Heidelberg's Kirchhoff Institute for Physics: Researchers at the Kirchhoff Institute for Physics are engaged in neuromorphic computing and computational neuroscience, striving to bridge the gap between artificial and biological intelligence.

References

  1. ^ Kriegeskorte, Nikolaus; Douglas, Pamela K. (September 2018). "Cognitive computational neuroscience". Nature Neuroscience. 21 (9): 1148–1160. doi:10.1038/s41593-018-0210-5. ISSN 1546-1726.
  2. ^ a b Marković, Danijela; Mizrahi, Alice; Querlioz, Damien; Grollier, Julie (September 2020). "Physics for neuromorphic computing". Nature Reviews Physics. 2 (9): 499–510. doi:10.1038/s42254-020-0208-2. ISSN 2522-5820.
  3. ^ Arbib, Michael A. (2000). "Warren McCulloch's Search for the Logic of the Nervous System". Perspectives in Biology and Medicine. 43 (2): 193–216. ISSN 1529-8795.
  4. ^ Rosenblatt, F. (1958). "The perceptron: A probabilistic model for information storage and organization in the brain". Psychological Review. 65 (6): 386–408. doi:10.1037/h0042519. ISSN 1939-1471.
  5. ^ Minsky, Marvin; Papert, Seymour A. (2017). Perceptrons: An Introduction to Computational Geometry (reissue of the 1988 expanded edition, with a foreword by Léon Bottou). MIT Press. ISBN 978-0-262-53477-2.
  6. ^ Stillings, Neil A. (1995). Cognitive Science: An Introduction. MIT Press. ISBN 978-0-262-69175-8.
  7. ^ LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (May 2015). "Deep learning". Nature. 521 (7553): 436–444. doi:10.1038/nature14539. ISSN 1476-4687.
  8. ^ Hasler, Jennifer; Marr, Harry (2013). "Finding a roadmap to achieve large neuromorphic hardware systems". Frontiers in Neuroscience. 7. doi:10.3389/fnins.2013.00118. ISSN 1662-453X.
  9. ^ Tavanaei, Amirhossein; Ghodrati, Masoud; Kheradpisheh, Saeed Reza; Masquelier, Timothée; Maida, Anthony (2019). "Deep learning in spiking neural networks". Neural Networks. 111: 47–63. doi:10.1016/j.neunet.2018.12.002. ISSN 0893-6080.
  10. ^ Ludwig, Parker E.; Reddy, Vamsi; Varacallo, Matthew (2023). "Neuroanatomy, Neurons". StatPearls. Treasure Island (FL): StatPearls Publishing. PMID 28723006. Retrieved 2023-11-09.
  11. ^ Eccles, John Carew (2013). The Physiology of Synapses. Academic Press. ISBN 978-1-4832-2606-4.
  12. ^ Abbott, L. F.; Nelson, Sacha B. (November 2000). "Synaptic plasticity: taming the beast". Nature Neuroscience. 3 (11): 1178–1183. doi:10.1038/81453. ISSN 1546-1726.
  13. ^ Yegnanarayana, B. (2009). Artificial Neural Networks. PHI Learning Pvt. Ltd. ISBN 978-81-203-1253-1.
  14. ^ Yang, Guangyu Robert; Wang, Xiao-Jing (2021). "Artificial neural networks for neuroscientists: a primer". Neuron. 109 (4): 739. doi:10.1016/j.neuron.2021.01.022. ISSN 0896-6273.
  15. ^ "Activation functions and their characteristics in deep neural networks". ieeexplore.ieee.org. Retrieved 2023-11-09.
  16. ^ "Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs". ieeexplore.ieee.org. Retrieved 2023-11-09.
  17. ^ "Scalable parallel computers for real-time signal processing". ieeexplore.ieee.org. Retrieved 2023-11-09.
  18. ^ "Development and Applications of Biomimetic Neuronal Networks Toward BrainMorphic Artificial Intelligence". ieeexplore.ieee.org. Retrieved 2023-11-09.
  19. ^ Nicolas-Alonso, Luis Fernando; Gomez-Gil, Jaime (February 2012). "Brain Computer Interfaces, a Review". Sensors. 12 (2): 1211–1279. doi:10.3390/s120201211. ISSN 1424-8220.
  20. ^ Akopyan, Filipp; Sawada, Jun; Cassidy, Andrew S.; Alvarez-Icaza, Rodrigo; Arthur, John V.; Merolla, Paul; Imam, Nabil; Nakamura, Yutaka; Datta, Pallab; Nam, Gi Joon; Taba, Brian; Beakes, Michael P.; Brezzo, Bernard; Kuang, Jente B.; Manohar, Rajit (2015). "TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. doi:10.1109/TCAD.2015.2474396.
  21. ^ Furber, Steve; Bogdan, Petruț, eds. (2020). SpiNNaker: A Spiking Neural Network Architecture. Now Publishers.
  22. ^ Akil, Huda; Martone, Maryann E.; Van Essen, David C. (2011). "Challenges and Opportunities in Mining Neuroscience Data". Science. 331 (6018): 708–712. doi:10.1126/science.1199305. ISSN 0036-8075. PMC 3102049. PMID 21311009.
  23. ^ Fuchs, Thomas (2003). "The Challenge of Neuroscience: Psychiatry and Phenomenology Today". Psychopathology. 35 (6): 319–326. doi:10.1159/000068593. ISSN 0254-4962.
  24. ^ Illes, Judy; Bird, Stephanie J. (September 2006). "Neuroethics: a modern context for ethics in neuroscience". Trends in Neurosciences. 29 (9): 511–517. doi:10.1016/j.tins.2006.07.002. ISSN 0166-2236. PMC 1656950. PMID 16859760.
  25. ^ Farah, Martha J. (January 2005). "Neuroethics: the practical and the philosophical". Trends in Cognitive Sciences. 9 (1): 34–40. doi:10.1016/j.tics.2004.12.001. ISSN 1364-6613.
