Talk:Neuro-linguistic programming/Research2


This is a summary of neurological research relevant to NLP, from the paper "Putting the 'neuro' back into neuro-linguistic programming" (Bolstad, 2003). [1] (PDF)

The map is not the territory

Everything we experience of the world comes to us through the neurological channels of our sensory systems. There is no direct connection between the sense organ (the retina of the eyes, for example) and the specialised brain area which handles that sense.... Only 20% of the flow of information into the lateral geniculate body comes from the eyes. Most of the data that will be organised as seeing comes from areas such as the hypothalamus, a mid-brain centre which has a key role in the creation of emotion (Maturana and Varela, 1992, p 162). What we "see" is as much a result of the emotional state we are in as of what is in front of our eyes.

Because the brain is a system with feedback loops, this process goes both ways. What we see is affected by our emotions, and it also shapes those emotions.

Emotional information altering the perception of colour is actually fed into the visual system at the lateral geniculate body, as mentioned above. The area of the visual cortex which makes final colour decisions is very precisely located. If this area of the brain is damaged in a stroke, the person will suddenly see everything in black and white (acquired cerebral achromatopsia). At times damage results in one side of a person's vision being coloured and the other side being "black and white" (Sacks, 1995, p 152). Full colour experience [is] an illusion; but it is the same illusion that our brain performs at every moment (Sacks, 1995, p 156, based upon a demonstration by Edwin Land, inventor of the Polaroid instant camera, 1957). That is to say, the colours you are seeing right now are not the colours out there in the world; they are the colours your brain makes up.
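A loose computational analogy (offered purely as an illustration, not as a claim about the brain's actual algorithm) is the "gray-world" colour constancy trick from image processing: the reported colour of a patch is computed relative to the whole scene, so identical raw values can yield different percepts. A minimal Python sketch, with invented numbers:

    # Gray-world colour constancy: perceived colour depends on scene context.
    # Purely illustrative analogy to Land's demonstration; values are invented.
    def perceived_colour(patch_rgb, scene_avg_rgb, target_gray=128.0):
        """Rescale each channel so the scene average maps to neutral gray."""
        return tuple(
            round(min(255.0, value * target_gray / max(avg, 1e-6)), 1)
            for value, avg in zip(patch_rgb, scene_avg_rgb)
        )

    # The same raw patch "looks" different under reddish vs neutral illumination.
    print(perceived_colour((150, 100, 100), (200, 120, 120)))  # near-neutral percept
    print(perceived_colour((150, 100, 100), (128, 128, 128)))  # clearly reddish percept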

[In another experiment] a picture of a white face was sent to one eye, and a picture of a black face to the other eye, at the same time. Both English-speaking South Africans and "coloured" South Africans reported seeing a face. But the Afrikaners tested could not see the face. They saw nothing! At a level deeper than the conscious mind, they could not fuse a black face and a white face (Pettigrew et alia, 1976).

Primary representation system

A. Luria identified the separate areas associated with vision, hearing, sensory-motor activity, and speech (the latter isolated on the dominant hemisphere of the brain) as early as 1966. By the time NLP emerged in the 1970s, then, researchers already understood that each sensory system had a specialised brain area, and that people had [variable] preferences for using particular sensory systems. Robert Dilts (1983, section 3, p 1-29) showed that different brain wave (EEG) patterns were associated with visual, auditory, tactile and visceral thought.

The claim that which sensory system you talk in makes a difference to your results with specific clients was tested by Yapko (1981). He worked with 30 graduate students in counselling, having them listen to three separate taped trance inductions. Each induction used language from one of the three main sensory systems (visual, auditory and kinesthetic). Subjects were assessed beforehand to identify their preference for words from these sensory systems. After each induction, their depth of trance was measured by electromyograph and by asking them how relaxed they felt. On both measures, subjects achieved greater relaxation when their preferred sensory system was used.

Neurological basis of submodalities

Colour is one of the first fourteen visual submodalities listed by Richard Bandler (1985, p 24). (The others were distance, depth, duration, clarity, contrast, scope, movement, speed, hue, transparency, aspect ratio, orientation, and foreground/background.)

[T]here are cells which respond only to the submodality of motion. These cells were found in the prestriate visual cortex of monkeys’ brains in the early 1970s. When the monkey watched a moving object, the motion cells were activated as soon as movement began. In 1983, the first clinical cases were found of people with these specific cells damaged, resulting in central motion blindness (akinetopsia). A person with akinetopsia can see a car while it is still, but once the car moves, they see it disappear and reappear somewhere else. They see life as a series of still photos (Sacks, 1995, p 181).

Neurologically speaking, size, motion and colour are specialised functions... Many other such functions have been neurologically identified, including brightness, orientation (the tilt of the picture), and binocular disparity (depth and distance).

The first research on the neurological basis of visual submodalities was done by David Hubel and Torsten Wiesel in the 1950s and 1960s. They showed that even these core submodality distinctions are a learned result of interaction with the environment. We are not born able to discriminate colour, for example. If we lived in a world with no blues, it is possible that the ability to "see" blue would not develop. (Kalat, 1988, p 191-194)

The submodality of orientation was tested by Blakemore and Cooper (1970). Newborn cats were brought up in an environment where they could only see horizontal lines. The area of the cortex which discriminates vertical lines simply did not develop in these cats, as demonstrated by checking with electrodes, and by the cats’ tendency to walk straight into chair legs. Similarly, cats raised where they could only see vertical lines were unable to see horizontal objects, and would walk straight into a horizontal bar. These inabilities were still present months later, suggesting that a critical phase for the development of those particular areas of the brain may have passed.
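Orientation-selective cells have a standard computational analogue: convolution with oriented edge filters. The sketch below is only an analogy (Sobel kernels and an invented image, not a model of cat cortex), but it shows concretely what a cortex lacking one filter type would miss: the horizontal detector fires strongly for a horizontal bar while the vertical detector stays silent.

    import numpy as np

    # Oriented edge filters as a crude stand-in for orientation-selective
    # cortical cells (Hubel and Wiesel). Illustrative analogy only.
    HORIZONTAL = np.array([[-1, -2, -1],
                           [ 0,  0,  0],
                           [ 1,  2,  1]])  # responds to horizontal edges
    VERTICAL = HORIZONTAL.T                # responds to vertical edges

    def response(image, kernel):
        """Total rectified response of one 'cell type' across the image."""
        h, w = kernel.shape
        total = 0.0
        for i in range(image.shape[0] - h + 1):
            for j in range(image.shape[1] - w + 1):
                total += abs(np.sum(image[i:i + h, j:j + w] * kernel))
        return total

    img = np.zeros((8, 8))
    img[4, :] = 1.0  # an image containing only a horizontal bar

    print(response(img, HORIZONTAL))  # large: the horizontal detector fires
    print(response(img, VERTICAL))    # zero: the vertical detector is silent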

This relationship between submodalities and the "feeling" of an experience is likely to be the neurological basis of some important NLP processes, called submodality shifts.

Meta-analysis of sensory information[edit]

From the visual cortex, messages go on to areas where even more complex meta-analysis occurs, in the temporal cortex and parietal cortex.

There is an area of the temporal cortex which creates a sense of "familiarity" or "strangeness". When a person is looking at a picture and the "familiarity" area is stimulated, they will report that they have suddenly "understood" or reinterpreted the experience. When the "strangeness" area is stimulated, they report that something about the image has become puzzling. If you then explain to them "rationally" that the object is no more or less familiar than it was, they will argue for their new way of experiencing it. While the stimulation continues, they will tell you that it really has changed. It feels changed!

The analysis done in the parietal cortex is even more curious. This area seems to decide whether what is seen is worth paying conscious attention to. For example, there are cells here which assess whether an apparent movement in the visual image results from the eyes themselves moving, or from the object moving. If this area decides that the "movement" was just a result of the eyes moving, it ignores the movement (much as the electronic image stabiliser on a video camera does).
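In engineering terms this is an efference copy: a copy of the motor command is subtracted from the sensed motion, and only the unexplained remainder is attributed to the world. A minimal sketch of that logic (hypothetical function and numbers, illustration only):

    # Efference-copy logic: perceived motion is the retinal motion left over
    # after subtracting the motion predicted from the eye-movement command.
    def perceived_motion(retinal_shift, eye_command_shift, tolerance=0.1):
        """Attribute to the world only what the eye movement cannot explain."""
        residual = retinal_shift - eye_command_shift
        return 0.0 if abs(residual) < tolerance else residual

    print(perceived_motion(5.0, 5.0))  # image moved because the eye moved: ignored
    print(perceived_motion(5.0, 0.0))  # eye was still: motion assigned to the object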

Interestingly, if one of these meta-analysis areas is stimulated electrically, the person will report that there have been changes in their basic submodalities. Researchers have found that if they stimulate the "familiarity" area, not only do people report that they get the feeling of familiarity, but they also see objects coming nearer or receding, and other changes in the basic level submodalities (Cairns-Smith, p 168).

Remembered and Constructed Images Use The Same Pathways As Current Images

Edoardo Bisiach (1978) studied people with localised damage to an area of the posterior parietal cortex associated with "paying attention visually". When this area of the cortex is damaged on one side, a very interesting result occurs. The person will fail to pay attention to objects on the affected side of their visual field. This becomes obvious if you ask them to describe all the objects in the room they are sitting in. If the affected side is the left, for example, when they look across the room they will describe all the objects on the right of the room, but ignore everything on the left. They will be able to confirm that those objects are there on the left if asked about them, but will otherwise not report them (Kalat, 1988, p 197; Miller, 1995, p 33-34).

Bisiach quickly discovered that this damage affected more than the person's current perception. For example, he asked one patient to imagine the view of the Piazza del Duomo in Milan, a sight the man had seen every day for some years before his illness. Bisiach had him imagine standing on the Cathedral steps and describe everything that could be seen looking down from there. The man described only one half of what could be seen, while insisting that his recollection was complete. Bisiach then had him imagine the view from the opposite side of the piazza. This time the man fluently reported the other half of the details.

The man’s image of this remembered scene clearly used the same neural pathways as were used when he looked out at Dr Bisiach sitting across the room. Because those pathways were damaged, his remembered images were altered in the same way as any current image. In the same way, the depressed person can be asked to remember an enjoyable event from a time before she or he was depressed. However, the visual memory of the events is run through the current state of the person’s brain, and is distorted by this process, just as their current experience is distorted.
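The argument here has the shape of a pipeline property: perception and recall pass through one shared stage, so a lesion in that stage distorts both. A toy sketch (hypothetical scene contents, not a neural model):

    # One shared "attention" stage applied to both current and remembered
    # scenes; damaging it filters perception and imagery alike. Toy sketch
    # with invented scene contents.
    def attend(scene, left_field_damaged=True):
        """Return only the objects the damaged stage lets through."""
        return [obj for side, obj in scene
                if side == "right" or not left_field_damaged]

    piazza_from_cathedral = [("left", "cafe"), ("right", "arcade")]
    piazza_from_far_side = [("left", "arcade"), ("right", "cafe")]

    print(attend(piazza_from_cathedral))  # ['arcade']: half the remembered view
    print(attend(piazza_from_far_side))   # ['cafe']: the other half appears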

The successful artist Jonathan I. suffered damage to his colour processing areas at age 65. After this, a field of flowers appeared to him as "an unappealing assortment of greys". Worse, however, was his discovery that when he imagined or remembered flowers, these images were also only grey (Hoffman, 1998, p 108).

If we change the activity of the system for processing visual information, both current and remembered images and feelings derived from them are changed.

Cross-referencing between Modalities

Submodalities occur neurologically in every sense. For example, different kinesthetic receptors and different brain processing occur for pain, temperature, pressure, balance, vibration, movement of the skin, and movement of the skin hairs (Kalat, 1988, p 154-157).

Even in what NLP has called the auditory digital sense modality (language), there are structures similar to submodalities. For example, the class of linguistic structures comprising presuppositions, conjunctions, helper verbs, quantifiers, and tense and number endings (words such as "and", "but", "if", "not", "being") is stored separately from nouns, which are stored separately from verbs. Broca’s aphasia (Kalat, 1988, p 134) is a condition where specific brain damage leaves the ability to talk intact, but removes the ability to use the first class of words (presuppositions etc). The person with this damage will be able to read "Two bee oar knot two bee" but unable to read the identical-sounding "To be or not to be". If the person speaks sign language, their ability to make hand signs for these words will be similarly impaired.
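The claim that word classes live in separate stores can be made concrete with a toy model: put the function words in one store, content words in another, and "damage" the first. The word list below is invented for illustration; this is a caricature of the aphasia reading example, not a model of it.

    # Function words in one store, content words in another; "damaging"
    # the function-word store mimics the Broca's aphasia reading example.
    # Invented word list; a caricature for illustration only.
    FUNCTION_WORDS = {"to", "be", "or", "not", "and", "but", "if"}

    def speak(sentence, function_store_damaged=True):
        """Drop any word whose store is damaged; keep the rest."""
        return " ".join(w for w in sentence.split()
                        if not (function_store_damaged
                                and w.lower() in FUNCTION_WORDS))

    print(repr(speak("To be or not to be")))        # '' : nothing survives
    print(repr(speak("Two bee oar knot two bee")))  # homophones are content words: intact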

Changes in the visual submodalities are inseparably linked to changes in other modalities. Office workers in a room repainted blue are more likely to complain of the cold, even though the thermostat has not been touched. When the room is repainted yellow, they will believe it has warmed up, and will not complain even when the thermostat is actually set lower (Podolsky, 1938).

The TOTE Strategy (or model)

The developers of NLP used the T.O.T.E. (Test-Operate-Test-Exit) model to further explain how we sequence sensory representations. The "TOTE" was developed by neurology researchers George Miller, Eugene Galanter and Karl Pribram (1960) as a model to explain how complex behaviour occurs. The feeling of depression, for example, can be thought of as the result of repeatedly running such a strategy in a loop, a process researchers call "ruminating" (Seligman, 1997, p 82-83).

Miller, Galanter and Pribram (1960) had recognised that the simple stimulus-response model of Pavlov could not account for the complexity of brain activity. Of course, neither can their more complex TOTE model: any map is an inadequate description of the real territory. The TOTE model suggests that each action we take is the result of an orderly sequence A-B-C-D. In fact, as we go to run such a "strategy", we also respond to that strategy with other strategies.
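In program form the TOTE is simply a guarded loop: test whether the goal condition is met; if not, operate and test again; exit when the test passes. A skeletal sketch (the volume example is invented for illustration):

    # The Test-Operate-Test-Exit loop in skeletal form. 'test' and
    # 'operate' stand in for whatever sensory check and behaviour a
    # given strategy uses; the volume example is invented.
    def tote(test, operate, state, max_steps=100):
        """Operate until the test passes, then exit with the final state."""
        for _ in range(max_steps):
            if test(state):          # Test: is the goal condition met?
                return state         # Exit
            state = operate(state)   # Operate: act, then loop back to Test
        return state                 # bail out rather than loop forever

    # "Turn the volume down until it is comfortable" as a TOTE.
    comfortable = lambda volume: 4 <= volume <= 6
    turn_down = lambda volume: volume - 1
    print(tote(comfortable, turn_down, state=9))  # -> 6

Rumination, in these terms, would be a TOTE whose test never passes.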

To use another NLP term, we go "meta" (above or beyond) our original strategy. The developers of NLP noted that "A meta response is defined as a response about the step before it, rather than a continuation or reversal of the representation. These responses are more abstracted and disassociated from the representations preceding them. Getting feelings about the image (feeling that something may have been left out of the picture, for instance)... would constitute a meta response in our example." (Dilts et alia, 1980, p 90). Michael Hall has pointed out that such responses could be more usefully diagrammed using a second dimension (Hall, 1995, p 57). This emphasises that the TOTE model is only a model. Real neurological processes are more network-like (O’Connor and Van der Horst, 1994). Connections are being continuously made across levels, adding "meaning" to experiences. The advantage of the TOTE model is merely that it enables us to discuss the thought process in such a way as to make sense of it and enable someone to change it.

States

The NLP term "state", is defined by O’Connor and Seymour (1990, p 232) as "How you feel, your mood. The sum total of all neurological and physical processes within an individual at any moment in time. The state we are in affects our capabilities and interpretation of experience". A simple experiment demonstrates why this is not true. We can inject people with noradrenalin and their kinesthetic sensations will become aroused (their heart will beat faster etc). However, the emotional state they enter will vary depending on a number of other factors. They may, for example, become "angry", "frightened" or "euphoric". It depends on [...] what they tell themselves is happening, for example (Schachter and Singer, 1962). The same kinesthetic experience does not always result in the same state.

In most cases, what creates serious problems is not so much the fact that people enter such states; what creates disturbance is how people feel about having these states. Satir says "In other words, low self-worth has to do with what the individual communicates to himself about such feelings and the need to conceal rather than acknowledge them." (Satir and Baldwin, 1983, p 195). The person with high self-esteem may feel sad when someone dies, but they also feel acceptance and even esteem for their sadness. The person with low self-esteem may feel afraid or ashamed of their sadness.

Such "states about states" are generated by accessing one neural network (eg the network generating the state of acceptance) and "applying it" to the functioning of another neural network (eg the network generating the state of sadness). The result is a neural network which involves the interaction of two previous networks. Dr Michael Hall calls the resulting combinations "meta-states" (Hall, 1995), a term used within NLP for the same phenomenon. Our ability to generate meta-states gives richness to our emotional life. Feeling hurt when someone doesn't want to be with me is a primary level state that most people will experience at some time. If I feel angry about feeling hurt, then I create a meta-state (which we might call "aggrieved"). If I feel sad about feeling hurt, a completely different meta-state occurs (perhaps what we might call "self-pity"). If I feel compassionate about my hurt, the meta-state of "self-nurturing" may occur. Although in each case my initial emotional response is the same, the meta-state dramatically alters and determines the results for my life.

How Learning Affects The Brain

What does "learned" mean? The human brain itself is made up of about one hundred billion nerve cells or neurons. These cells organise themselves into networks to manage specific tasks. When any experience occurs in our life, new neural networks are laid down to record that event and its meaning. To create these networks, the neurons grow an array of new dendrites (connections to other neurons). Each neuron has up to 20,000 dendrites, connecting it simultaneously into perhaps hundreds of different neural networks. Steven Rose (1992) gives an example from his research with new-hatched chicks. After eating silver beads with a bitter coating, the chicks learn to avoid such beads. One peck is enough to cause the learning. Rose demonstrated that the chicks’ brain cells change instantly, growing 60% more dendrites in the next 15 minutes. These new connections occur in very specific areas.

California researcher Dr Marian Diamond (1988) and her Illinois colleague Dr William Greenough (1992) have demonstrated that rats in "enriched" environments grow 25% more dendrite connections than usual, as they lay down hundreds of new strategies. Autopsy studies on humans confirm the process: graduate students have 40% more dendrite connections than high school dropouts, and those students who challenged themselves more had even higher scores (Jacobs et alia, 1993).

Neural Networks Are State Dependent

How do messages get from one neuron to another in the brain? Transmission of impulses from one neuron to the next occurs via hundreds of precise chemicals called "information substances": substances such as dopamine, noradrenaline (norepinephrine), and acetylcholine. These chemicals transmit messages across the "synapse", the gap between neurons. Without these chemicals, the strategy stored in the neural network cannot run. These chemicals are also the basis for what we are calling an emotional state, and they infuse not just the nervous system but the entire body, altering every body system. A considerable amount of research suggests that strong emotional states are useful in learning new strategies. J. O’Keefe and L. Nadel found (Jensen, 1995, p 38) that emotions enhance the brain’s ability to make cognitive maps of (understand and organise) new information. Dr James McGaugh, a psychobiologist at UC Irvine, notes that even injecting rats with a blend of emotion-related hormones such as enkephalin and adrenaline means that the rats remember longer and better (Jensen, 1995, p 33-34). He says "We think these chemicals are memory fixatives. They signal the brain, 'This is important, keep this!' Emotions can and do enhance retention."

However, there is another important effect of the emotional state on the strategies we run. The particular mixture of chemicals present when a neural network is laid down must be recreated for the neural network to be fully re-activated and for the strategy it holds to run as it originally did. If someone is angry, for example, when a particular new event happens, they have higher noradrenaline levels. Future events which result in higher noradrenaline levels will re-activate this neural network and the strategy they used then. As a result, the new event will be connected by dendrites to the previous one, and there will even be a tendency to confuse the new event with the previous one. If my childhood caregiver yelled at me and told me that I was stupid, I may have entered a state of fear, and stored that memory in a very important neural network. When someone else yells at me as an adult, if I access the same state of fear, I may feel as if I am re-experiencing the original event, and may even hear a voice telling me I’m stupid.

This is called "state dependent memory and learning" or SDML. Our memories and learnings, our strategies, are dependent on the state they are created in. "Neuronal networks may be defined in terms of the activation of specifically localised areas of neurons by information substances that reach them via diffusion through the extracellular fluid. In the simplest case, a 15-square mm neuronal network could be turned on or off by the presence or absence of a specific information substance. That is, the activity of this neuronal network would be "state-dependent" on the presence or absence of that information substance." (Rossi and Cheek, 1988, p 57).

Actually, all learning is state dependent, and examples of this phenomenon have been understood for a long time. When someone is drunk, their body is flooded with alcohol and its by-products. All experiences encoded at that time are encoded in a very different state to normal. If the difference is severe enough, they may not be able to access those memories at all, until they get drunk again.

At times, the neural networks laid down in one experience or set of experiences can be quite "cut off" (due to their different neuro-chemical basis) from the rest of the person's brain. New brain scanning techniques are beginning to give us more realistic images of how this actually looks. Psychiatrist Don Condie and neurobiologist Guochuan Tsai used an fMRI scanner to study the brain patterns of a woman with "multiple personality disorder". In this disorder, the woman switched regularly between her normal personality and an alter ego called "Guardian". The two personalities had separate memory systems and quite different strategies. The fMRI brain scan showed that each of these two personalities used different neural networks (different areas of the brain lit up when each personality emerged). If the woman only pretended to be a separate person, her brain continued to use her usual neural networks; but as soon as the "Guardian" actually took over her consciousness, it activated precise, different areas of the hippocampus and surrounding temporal cortex (brain areas associated with memory and emotion) (Adler, 1999, p 29-30).

Freud based much of his approach to therapy on the idea of "repression" and an internal struggle for control of memory and thinking strategies. This explanation of the existence of "unconscious" memories and motivations ("complexes") can now be expanded by the state dependent memory hypothesis. No internal struggle is needed to account for any of the previously described phenomena. The "complex" (in Freudian terms) can be considered as simply a series of strategies being run from a neural network which is not activated by the person’s usual chemical states. Rossi and Cheek note "This leads to the provocative insight that the entire history of depth psychology and psychoanalysis now can be understood as a prolonged clinical investigation of how dissociated or state-dependent memories remain active at unconscious levels, giving rise to the "complexes"... that are the source of psychological and psychosomatic problems." (Rossi and Cheek, 1988, p 57).

Dr Lewis Baxter (1994) showed that clients with obsessive compulsive disorder have raised activity in certain specific neural networks in the caudate nucleus of the brain. He could identify these networks on PET scans, and show that, once the OCD was treated, these networks ceased to be active. Research on Post Traumatic Stress Disorder has also shown the state-dependent nature of its symptoms (van der Kolk et alia, 1996, p 291-292). Sudden re-experiencing of a traumatic event (called a flashback) is one of the key problems in PTSD. Medications which stimulate body arousal (such as lactate, a by-product of physiological stress) will produce flashbacks in people with PTSD, but not in people without the problem (Rainey et alia, 1987; Southwick et alia, 1993). Other laboratory studies show that sensory stimuli which recreate some aspect of the original trauma (such as a sudden noise) will also cause full flashbacks in people with PTSD (van der Kolk, 1994). This phenomenon is Pavlov's "classical conditioning", also known in NLP as "anchoring". State dependent learning is the biological process behind classical conditioning.
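Classical conditioning has a standard quantitative model, the Rescorla-Wagner rule: on each pairing of cue and outcome, the association V moves toward the maximum the outcome supports, ΔV = αβ(λ − V). The sketch below uses arbitrary parameter values purely for illustration:

    # Rescorla-Wagner conditioning: each cue-outcome pairing nudges the
    # association V toward lambda. alpha and beta are salience/learning
    # rates; all values here are arbitrary illustrations.
    def condition(trials, alpha=0.5, beta=0.8, lam=1.0):
        """Return the association strength V after each pairing."""
        v, history = 0.0, []
        for _ in range(trials):
            v += alpha * beta * (lam - v)   # delta-V = alpha * beta * (lambda - V)
            history.append(round(v, 3))
        return history

    print(condition(5))  # V climbs toward lambda: the cue alone now evokes the response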

[Note: Using NLP with state dependent learning problems forms a large part of the subject matter of the book 'Trance-formations']

Rapport: The Work of The Mirror Neurons

In 1995 a remarkable class of neurons was discovered by researchers working at the University of Parma in Italy (Rizzolatti et alia, 1996; Rizzolatti and Arbib, 1998). The cells, now called "mirror neurons", are found in the premotor cortex of monkeys and apes as well as humans. In humans they form part of Broca’s area, which is also involved in the creation of speech. Although the cells are related to motor activity (ie they are part of the system by which we make kinaesthetic responses such as moving an arm), they seem to be activated by visual input. When a monkey observes another monkey (or even a human) making a body movement, the mirror neurons light up. As they do, the monkey appears to involuntarily copy the same movement it has observed visually. Often this involuntary movement is inhibited by the brain (otherwise the poor monkey would be constantly copying every other monkey), but the resulting mimicry is clearly the source of the saying "monkey see, monkey do".

In human subjects, when the brain is exposed to the magnetic field of transcranial magnetic stimulation (TMS), thus reducing conscious control, merely showing a movie of a person picking up an object will cause the subject to involuntarily copy the exact action with their hand (Fadiga et alia, 1995). This ability to copy a fellow creature's actions as they do them has obviously been very important in the development of primate social intelligence. When this area of the brain is damaged in a stroke, copying another’s actions becomes almost impossible. The development of speech has clearly been a result of this copying skill. There is increasing evidence that autism and Asperger's syndrome are related to unusual activity of the mirror neurons, resulting in difficulty understanding the inner world of others, as well as a tendency to echo speech parrot-fashion and to randomly copy others’ movements (Williams et alia, 2001).

Mirror neurons respond to facial expressions as well, enabling the observer to directly experience the emotions of those they watch. This results in what researchers call "emotional contagion" (Hatfield et alia, 1994) – what NLP calls rapport.