Monday, June 4, 2007

Reflections on Neuroscience: Are We Seeing the World or Creating It? Perception & Sensory Integration

From time to time I will be posting on neuroscientific areas of interest – from published studies, more mainstream reporting on science, and the work that I’m doing at the lab or in classes. My driving interest is human behavior – how it happens, and what is behind it. How does a brain make a mind, a ‘self’? Human behavior is fascinating to me, so whether I’m teaching acting, directing theater, studying the brain’s processing of sensory information, or reading about a character in a book – that’s what motivates my interest and is, no doubt, much of what you will read at this address. The study I’m working on at the lab looks at sensory integration – the ability to take all the information from our separate senses and combine it into one experience. The lab I’m lucky enough to work for did some of the landmark conceptual work in this area. One of their important experiments helped establish the timing of this process of integration in normal adult brains. Our current experiment, a collaboration with a neuropsychologist’s lab, compares that process in “normally” developing 5-15 year olds with same-age kids who have developmental disorders like autism or Asperger’s Syndrome, attention disorders like ADHD, and sensory integration problems.

I wanted to talk about autism today because that’s what I’ve been reading about, but I think I’ll devote this post to some conceptual background on sensory integration itself and get to the subject of autism in a later post.

Think about what happens when we perceive an object. It’s amazing! Separate systems with highly specialized functions (our eyes, ears, etc.) take in very discrete aspects of that object – its features, e.g. its horizontal edges or curves. In order to recognize a word, we must perceive its letters, and in order to perceive those letters we need to construct them from lines and loops – which means that at some point they must all be put back together. The “binding problem” is how, despite the isolated processing of separate features, we keep track of those features as parts of that one object. It is thought that attention might cause the neurons processing the features in separate cognitive systems to fire in the same rhythm, flagging the features that must be put back together.
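To make the “fire in the same rhythm” idea concrete, here is a toy sketch (not a neural model): imagine each feature, processed in its own subsystem, carries a timing tag, and features whose tags line up get grouped as one object. The feature names, phase values, and tolerance below are all invented for illustration.

```python
# Toy illustration of "binding by synchrony": features processed in
# separate subsystems carry a firing-phase tag; features firing in the
# same rhythm are grouped as parts of one object. All values are made up.

from collections import defaultdict

# (subsystem, feature, phase_ms) -- phase_ms is the hypothetical timing tag
features = [
    ("shape",  "curved-edges", 12),
    ("color",  "red",          12),
    ("motion", "moving-left",  12),
    ("shape",  "vertical-bar", 37),
    ("color",  "blue",         37),
]

def bind_by_phase(features, tolerance=2):
    """Group features whose phase tags fall in the same tolerance bucket."""
    objects = defaultdict(list)
    for subsystem, feature, phase in features:
        bucket = round(phase / tolerance)   # same bucket = same rhythm
        objects[bucket].append(f"{subsystem}:{feature}")
    return list(objects.values())

for obj in bind_by_phase(features):
    print(obj)
# The three features tagged 12 ms come out bound as one object,
# the two tagged 37 ms as another.
```

The brain obviously does nothing so tidy, but the sketch shows why a shared rhythm is a cheap way to keep "red" and "curved" tied to the same car while they are processed in different places.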


Sensory Integration
Multisensory integration is the binding of parts to create the experience of a whole object, which is our usual experience. If we see and hear a red car driving by, separate subsystems of our visual apparatus deal with shape, color, movement – all the visual components – while another system processes the auditory components. At some point they must be reassembled if we are to experience a red car driving by. Once it was thought that visual and auditory information from a single object was first processed separately, in the more primary sensory areas of the brain, and combined later in higher multisensory areas, where the individual features are put back together. However, given the different lengths and thicknesses of the neurons carrying sight and sound, some of the information would have to wait for the rest – causing a traffic jam. Some of the people I work with measured electrical brain activity in order to understand when audiovisual integration happens in the process of perception. Areas of the brain that were originally thought to be devoted exclusively to early processing of just one sense turned out to be processing information from two senses just 46 milliseconds after the appearance of the object. In layman’s terms – that is really, really fast. That would be too little time for information to have been relayed all the way up the visual pathways to the higher levels where integration was thought to happen and then fed all the way back down to the auditory areas. They concluded that integration had to happen much earlier. This was supported by evidence that sensory integration remains intact even when the areas once thought to be higher multisensory processing areas are damaged. For those of you interested in a more thorough version, here’s a link to the lab’s papers on sensory integration. Molholm et al., 2002 is a seminal one.
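The timing argument can be put as back-of-the-envelope arithmetic. The per-stage delays below are invented placeholders, not measured values; the point is only that a few relays up the hierarchy and a few back down add up to far more than the 46 ms at which multisensory effects were actually seen.

```python
# Back-of-the-envelope sketch of the timing argument.
# These stage delays are hypothetical illustrations, NOT real measurements.

feedforward_ms = [30, 10, 10, 10]   # e.g. retina -> thalamus -> V1 -> higher areas
feedback_ms    = [10, 10, 10]       # back down toward the auditory areas

round_trip = sum(feedforward_ms) + sum(feedback_ms)
print(round_trip)   # the hypothetical "up-and-back" route takes 90 ms

observed_ms = 46    # multisensory activity observed this early
print(round_trip > observed_ms)   # True: the late-integration route is too slow
```

If even generous per-stage numbers make the up-and-back route slower than what was measured, integration cannot be happening only at the top – which is the lab’s conclusion.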

Early low-level multisensory integration would be helpful in conceiving a solution to the binding problem. Establishing an association between separate sensory systems that have processed information independently and at different rates would be extraordinarily difficult. Furthermore, as information moves through the system, it is recoded many times along the way (an explanation follows below). Early cross-communication between separate inputs, prior to several generations of recoding, would be more efficient (although nothing guarantees that brain processes are actually efficient, neuroscientists love to think they are). It also suggests a system that uses both feedforward AND feedback pathways – which is borne out physiologically and is another really cool aspect of perception to ponder.


Visual Processing Stream


I’ll talk just about the visual system now. It was thought, until only a few years ago, that visual information moves in only one direction. In a simplified version: it enters the retina, travels to the thalamus (a nexus in the center of the brain, beneath the outer layer known as the cortex, where connections from many areas meet), travels from there along the optic radiations to the primary visual cortex (in the occipital lobe, at the back of your head, more or less where the skull bumps out an inch above where the soft part of your neck begins), then moves up through the visual areas, higher and higher. Lower-level or primary information would be details like horizontal and vertical edges – fine details, high-resolution stuff – and the earliest areas have neurons tuned to process only that kind of information. As we proceed up the visual pathway, neurons in higher areas are sensitive to less fine-grained, more global information – e.g., color – and higher still are areas where associations between features are made. Finally, frontal lobe areas connect that information to memory areas, where matches to previously seen whole objects are made, and we can be said to have “recognized” them. What has been observed is that the connections go both ways – forward and back. In fact, between certain levels of the visual system there are MORE connections going backwards than forwards. Many people (but not all) interpret that to mean that the information travels both ways. Scientists in the lab where I work were also responsible for some of the work that proposed this reconception of perception. If you want to read one of the papers, I’d say Foxe & Simpson, 2002 is a really good one.

The term bottom-up is used to describe the flow of information from the lowest level to the higher (the object outside you, to your retina, to your visual cortex, to your frontal cortex), and top-down to describe the flow of information that proceeds from higher areas to lower. Top-down flow may sound counter-intuitive, but we’re doing it all the time. It happens, for example, when we perceive a letter as part of a word. We perceive letters in the context of words faster than letters alone (measured in milliseconds) – why? Because, having stored a word in our memory along with the many associations we have made in reference to it, the features that comprise that letter travel along a better-paved road, both speeding up the process that leads to our perceiving that letter and saving on the cognitive effort needed to do so. Indeed, according to many researchers in the area, top-down processes are a normal part of every perceptive act we perform.
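The word-context effect can be sketched as a toy evidence-accumulation model: top-down context from a stored word "pre-pays" some of the evidence a letter needs, so recognition crosses threshold sooner. The rates, thresholds, and discount factor here are all arbitrary illustrations.

```python
# Toy sketch of the top-down "better-paved road" idea: context from a
# familiar word lowers the evidence a letter must accumulate, so it is
# recognized sooner. All numbers are invented for illustration.

def time_to_recognize(evidence_per_ms, threshold):
    """Milliseconds of evidence accumulation needed to cross threshold."""
    return threshold / evidence_per_ms

EVIDENCE_RATE = 0.5          # arbitrary units of evidence per millisecond
ALONE_THRESHOLD = 20.0       # evidence needed for a letter with no context
WORD_DISCOUNT = 0.6          # a stored word "pre-pays" 40% of the evidence

letter_alone = time_to_recognize(EVIDENCE_RATE, ALONE_THRESHOLD)
letter_in_word = time_to_recognize(EVIDENCE_RATE, ALONE_THRESHOLD * WORD_DISCOUNT)

print(letter_alone, letter_in_word)   # 40.0 24.0 -- context speeds perception
```

The actual mechanism is of course far richer, but the sketch captures the direction of the effect: same bottom-up input, faster perception when memory supplies context from above.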

That two-way stream feeds information from the top down, and that has an important implication for what we’re seeing. Think about our common conception of seeing – “seeing is believing,” “I saw it with my own eyes,” “there was a witness” – all of these phrases equate seeing with truth. We value the evidence given us by our trustworthy eyes and think that what we see is an accurate representation of what is in the outside world…is it?



Recoding and what this all might mean
I look at it this way: there is only one time the object in the external world is truly unadulterated – in that external world, prior to its entry into our perceptive systems. When it enters our sensory equipment it is immediately in pieces. The whole red car does not drive onto our retina – its features are apprehended by neurons, each designed to convert light into code that our nervous system can read. As the signal travels through the system, each processing step means new neurons with slightly different specializations are at work, and the information is recoded at each step. The primary visual cortex predominantly processes features. The fourth level of the visual processing stream processes associations of many features – those neurons don’t “read” the same language, so translation, or recoding, is necessary. Finally the information is integrated, i.e. visual code, auditory code, and tactile code are translated into multisensory code. We are constantly creating new wholes that get further and further from the original red car in actual form – even though at our highest conceptual processing levels we identify the information as ‘red car.’ BUT – and this is a big but – parallel to that process, something is keeping track of all those features so that eventually they can all be put back together as parts of the same whole and referenced to other cars we have stored in memory.
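The recoding chain above can be sketched as a pipeline in which each stage speaks its own "code," so the signal is translated at every step and the original object never enters the head. The stage names and encodings below are purely illustrative, not a model of real cortical areas.

```python
# Sketch of the recoding idea: each stage re-encodes the previous stage's
# output, so the percept is a construction, not a copy of the object.
# Stages and encodings are illustrative only.

def retina(scene):
    # transduce light into discrete feature fragments (the car is already in pieces)
    return [("edge", "curved"), ("wavelength", "long"), ("motion", "leftward")]

def primary_cortex(fragments):
    # recode fragments into labeled features -- a new "language"
    return {kind: value for kind, value in fragments}

def association_areas(features):
    # recode features into a memory-referenced, whole-object description
    color = "red" if features.get("wavelength") == "long" else "other"
    return f"{color} car driving by"

percept = association_areas(primary_cortex(retina("red car in the street")))
print(percept)   # 'red car driving by' -- three recodings away from the car itself
```

Notice that the string the final stage emits shares nothing, format-wise, with what the first stage received: each translation moves further from the original form even as the identity ‘red car’ is preserved.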


So the three take-home messages would be:

  • perceiving an object requires integrating information from all the different senses
  • the pathway that leads to perception is a two-way stream - high-level information feeds down to areas that analyze basic information and vice-versa
  • as information travels the pathway it is recoded many times since information from the external world (color, three dimensional objects in space) cannot literally occupy space in our heads

Each of our experiences is unique (even twins, who share an extraordinary number of similarities, occupy different skins and different physical spaces, and therefore do not see exactly the same thing). Given that each of us experiences different things, each of our memories possesses a different store of experiences, and therefore each of our top-down processes references itself to different associations. Each person is his or her own context. If context changes meaning – and it does – then I create each time I perceive. I create perceptions even if they come from existing sensory information, because they are recoded in the context of me, in a way no one else could recode them – how I recombine them is original. The red car itself does not exist in my head; only the recoded information from the object and the referent visual, semantic, and acoustic information held in my memory do. I must create that car when cued to do so by my environment. No one else will see it with my perceptual apparatus, from the point in space and time that I saw it. That information is perceived because it is recombined in me, today instead of yesterday.

I think this is an extraordinary concept – perception is in some ways a creative act. And this is a satisfying notion for the artist/neuroscientist in me. A composer builds a piece of music from the same notes all the other composers have, but they never write the same piece. The same red car can pass both of us on the same street, yet we will create perceptions based on the context that is ourselves. The process by which our brains create visual images (internal representations experienced as visual with no external information causing them) has even more parallels with creating a work of art, but I'll save that for another day. Each of our brains is a remarkable creator, day in and day out, every time we look and see – oh my god, how exhausting!


Since this is my first neuro-post I'd appreciate feedback from you, especially if you're not in the field. Was it comprehensible to you? Was it interesting? Do you want more? Would you rather be strung up by your toenails? Let me know.

4 comments:

Anonymous said...

As you know, I can't get enough of this stuff. To discuss something as perceiving a red car is complex enough ... but how about perceiving a SELF, perceiving consciousness ... awareness, etc. How on EARTH did all of this happen?

It is truly mind-boggling to me.

Like Goldbug Variations. "What could be simpler?" And yet ....

Amazing stuff.

Anonymous said...

From a layperson's perspective (although admittedly I did physiological psychology at uni and it certainly is fascinating), I think you explained yourself very well, and even someone who hasn't thought about such processes before would get the gist of what you're driving at.

I'd like to hear more from you about the latter part of your post; I do strongly believe in each individual person bringing their own understanding of the world, developed and continually developing though their understanding is, to generate a new take on each thing they encounter - as you say, the red car doesn't drive onto the retina.

I think in part the paradigms that we develop on things that are familiar to us, like the red car, would also impact the processing speed of an object/event.

Ted said...

Shiela, the concept of self, meta awareness (awareness of being aware), and meta cognition in general are, believe it or not, fields actually studied by neuroscientists and philosophers and those riding the razor's edge between.

Ted said...

Siew - Thanks for the feedback. Familiarity does indeed have an effect on processing speed, whether we're talking about accessing memory or perceiving something. The more familiar a concept the more associations we have built for it and the more well worn the path to it and, therefore, the faster the access (to a point).

Individuals each have unique brains, built from their unique experiences and physiology/ anatomy. Since top-down processes likely influence most perceptive acts, each person is seeing things a bit differently. That's not to say I don't believe there are ways to share what we have in common. As people develop they acquire "theory of mind," the intuitive understanding that others have thoughts and feelings that are similar to our's and can have a different point of view. In kids you can test this with a little story about mary putting a pencil in a basket and then going outside. Her friend april comes in and moves the pencil to a box. When mary returns, where will she look for the pencil? Kids with a normally developed theory of mind will say that she will look in the basket, however, kids with theory of mind deficits would probably say she would look in the box. They know they pencil is in the box so they can't imagine how mary could not. Theory of mind deficits are seen in autism and are thought to be an aspect of social communication.
I think theory of mind and language skills (both develop innately in healthy kids)are adaptations that allow us to to connect, share our unique perspectives, and cooperate with each other - otherwise we would all be isolated and likely not survive.