I wanted to talk about autism today because that's what I'm reading about, but I think I'll devote this post to some conceptual information regarding sensory integration itself and get to the subject of autism in a later post.
Think about what happens when we perceive an object. It's amazing! Separate systems with highly specialized functions must somehow work in concert to produce a single experience.
Sensory Integration
Multisensory integration is the binding of parts to create the experience of a whole object, which is our usual experience. If we see and hear a red car driving by, our visual apparatus deals with shape, color, movement – all the visual components – while a separate system processes the auditory components. At some point they must be reassembled if we are to experience a red car driving by.

It was once thought that visual and auditory information from a single object was first processed separately, in the more primary sensory areas of the brain, and combined later in higher multisensory areas, where the individual features are put back together. However, given the different lengths and thicknesses of the neurons carrying sight and sound, some of the information would have to wait for the rest – causing a traffic jam. Some of the people I work with measured electrical brain activity in order to pinpoint when audiovisual integration happens in the process of perception. Areas of the brain that were originally thought to be devoted exclusively to early processing of just one sense showed responses reflecting information from two senses just 46 milliseconds after the appearance of the object. In layman's terms – that is really, really fast. That would be too little time for information to have been relayed all the way up the visual pathways to the higher levels where integration was thought to happen, and to feed all the way back down to the auditory areas. They concluded that integration had to happen much earlier. This was supported by evidence that sensory integration remains intact even when the areas once thought to be higher multisensory processing areas are damaged. For those of you interested in a more thorough version, here's a link to the lab's papers on sensory integration. Molholm et al., 2002 is a seminal one.
Early low-level multisensory integration would help in conceiving a solution to the binding problem. Establishing an association between separate sensory systems that have processed information independently and at different rates would be extraordinarily difficult. Furthermore, as information moves through the system, it is recoded many times along the way (an explanation follows below). Early cross-communication between the separate inputs, before several generations of recoding, would be more efficient.
I'll talk just about the visual system now. It was thought, until only a few years ago, that visual information moves in only one direction: it enters the retina and travels upward through successive processing areas. We now know the information flows both ways.
The term bottom-up is used to describe information flowing from the lowest levels to the higher ones; top-down describes the reverse, and together they make the pathway a two-way stream.
That two-way stream feeds information from the top down, and that has an important implication for what we're seeing. Think about our common conception of seeing – "seeing is believing," "I saw it with my own eyes," "there was a witness" – all of these phrases equate seeing with truth. We value the evidence given to us by our trustworthy eyes and think that what we see is an accurate representation of what is in the outside world…is it?
Recoding and what this all might mean
I look at it this way: there is only one time the object in the external world is truly unadulterated – in the external world itself, prior to its entry into our perceptive systems. When it enters our sensory equipment it is immediately in pieces. The whole red car does not drive onto our retina – its features are apprehended by neurons, each designed to convert light into code that our nervous system can read. As the information travels through the system, each processing step recruits new neurons with slightly different specializations, and the information is recoded at each step. The primary visual cortex predominantly processes individual features. The fourth level of the visual processing stream processes associations of many features – those neurons don't "read" the same language, so translation, or recoding, is necessary. Finally the information is integrated: visual code, auditory code, and tactile code are translated into a multisensory code. We are constantly creating new wholes that get further and further from the original red car in actual form – even though at our highest conceptual processing levels we identify the information as 'red car.' BUT – and this is a big but – parallel to that process, something keeps track of all those features so that eventually they can all be put back together as parts of the same whole and referenced to other cars we have stored in memory.
So the three take-home messages would be:
- perceiving an object requires integrating information from all the different senses
- the pathway that leads to perception is a two-way stream - high-level information feeds down to areas that analyze basic information and vice-versa
- as information travels the pathway it is recoded many times, since information from the external world (color, three-dimensional objects in space) cannot literally occupy space in our heads
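For the programmers among you, the take-home messages above can be sketched as a toy pipeline. Everything here – the function names, the dictionary "codes," the object tag – is invented purely for illustration; this is not a model of real neural coding, just a way to picture splitting, recoding, and re-binding.

```python
# Toy illustration: one object is split into modality-specific features,
# each stream is recoded into a new representation, and the results are
# re-bound into a single multisensory percept tagged as one whole.

def encode_visual(obj):
    # Early visual areas extract individual features (shape, color, motion).
    return {"shape": obj["shape"], "color": obj["color"], "motion": obj["motion"]}

def encode_auditory(obj):
    # A separate system handles the auditory components.
    return {"sound": obj["sound"]}

def recode(features):
    # Later stages re-represent ("recode") the features as associations;
    # here that is just a new string format the earlier stages don't use.
    return " + ".join(f"{k}:{v}" for k, v in sorted(features.items()))

def integrate(visual_code, auditory_code, tag):
    # Binding: the separate codes are combined and referenced to the same
    # object tag, so the parts are experienced as one whole.
    return {"object": tag, "percept": f"{visual_code} | {auditory_code}"}

red_car = {"shape": "car", "color": "red",
           "motion": "driving by", "sound": "engine hum"}

percept = integrate(recode(encode_visual(red_car)),
                    recode(encode_auditory(red_car)),
                    tag="red car")
print(percept["object"])   # prints: red car
```

Note that the original object never travels through the pipeline – only recoded descriptions of it do, which is the point of the third take-home message.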
Each of our experiences is unique
I think this is an extraordinary concept - perception is in some ways a creative act. And this is a satisfying notion for the artist/neuroscientist in me. A composer builds a piece of music from the same notes all the other composers have, but they never write the same piece. The same red car can pass both of us on the same street, yet we will create perceptions based on the context that is ourselves. The process by which our brains create visual images (internal representations experienced as visual without any external information causing them) has even more parallels with creating a work of art, but I'll save that for another day. Each of our brains is a remarkable creator, day in and day out, every time we look and see – oh my god, how exhausting!
Since this is my first neuro-post I'd appreciate feedback from you, especially if you're not in the field. Was it comprehensible to you? Was it interesting? Do you want more? Would you rather be strung up by your toenails? Let me know.
4 comments:
As you know, I can't get enough of this stuff. Discussing something like perceiving a red car is complex enough ... but how about perceiving a SELF, perceiving consciousness ... awareness, etc.? How on EARTH did all of this happen?
It is truly mind-boggling to me.
Like Goldbug Variations. "What could be simpler?" And yet ....
Amazing stuff.
From a layperson's perspective (although admittedly I did physiological psychology at uni and it certainly is fascinating), I think you explained yourself very well, and even someone who hasn't thought about such processes before would get the gist of what you're driving at.
I'd like to hear more from you about the latter part of your post; I do strongly believe in each individual person bringing their own understanding of the world, developed and continually developing though their understanding is, to generate a new take on each thing they encounter - as you say, the red car doesn't drive onto the retina.
I think in part the paradigms that we develop around things that are familiar to us, like the red car, would also affect how quickly we process an object/event.
Shiela, the concept of self, meta awareness (awareness of being aware), and meta cognition in general are, believe it or not, fields actually studied by neuroscientists and philosophers and those riding the razor's edge between.
Siew - Thanks for the feedback. Familiarity does indeed have an effect on processing speed, whether we're talking about accessing memory or perceiving something. The more familiar a concept, the more associations we have built for it and the more well-worn the path to it and, therefore, the faster the access (to a point).
Individuals each have unique brains, built from their unique experiences and physiology/anatomy. Since top-down processes likely influence most perceptive acts, each person is seeing things a bit differently. That's not to say I don't believe there are ways to share what we have in common. As people develop they acquire "theory of mind," the intuitive understanding that others have thoughts and feelings similar to ours and can have a different point of view. In kids you can test this with a little story about Mary putting a pencil in a basket and then going outside. Her friend April comes in and moves the pencil to a box. When Mary returns, where will she look for the pencil? Kids with a normally developed theory of mind will say she will look in the basket; kids with theory of mind deficits would probably say she would look in the box. They know the pencil is in the box, so they can't imagine how Mary could not. Theory of mind deficits are seen in autism and are thought to underlie some of its difficulties with social communication.
I think theory of mind and language skills (both develop innately in healthy kids) are adaptations that allow us to connect, share our unique perspectives, and cooperate with each other - otherwise we would all be isolated and likely would not survive.