Showing posts with label language and the brain.

Sunday, February 27, 2011

Can a computer be smart or can it only be programmed to act that way? (Books - Final Jeopardy by Stephen Baker)

Final Jeopardy: Man vs. Machine and the Quest to Know Everything concerns itself with the stunt planned by IBM: programming a computer to play Jeopardy against human contestants. But Stephen Baker's swift-moving book doesn't merely reveal backstage gossip like why the computer was named Watson or who designed his 'face.' The real reason to read this book is the story Baker tells about the nature of intelligence and whether machines can possess and use it.

Baker, as IBM did, used the contest as a scaffold. For IBM it narrowed the scope of the research and imposed a schedule. For Baker it lent his narrative thrust toward a conspicuous end. It's almost a shame that we know the outcome, the contest having already aired, but Baker writes well enough to squeeze suspense out of the story by connecting us to the stakes experienced by scientist David Ferrucci and his team of programmers, designers, former Jeopardy contestants, and, of course, the public relations armies of IBM and Jeopardy.

I enjoyed Baker's lay descriptions of the evolution of computing machines and how they differ from human brains: what kind of knowledge goes into them, what sorts of computations can be expected of them, what kinds of mistakes they make, how computers can learn, and how we, in turn, can learn about the nature of intelligence through the exercise of programming them.

For certain types of questions, Ferrucci said, a search engine could come up with answers. These were simple sentences with concrete results, what he and his team called factoids. For example: "What is the tallest mountain in Africa?" A search engine would pick out the three key words from that sentence and in a fraction of a second suggest Kenya's 19,340-foot-high Kilimanjaro. This worked, Ferrucci said, for about 30 percent of Jeopardy questions. But performance at that low level would condemn Watson to defeat at the hands of human amateurs.

Baker is adept at explaining how fact-based knowledge can be stored in and retrieved from the neural networks of the human brain. Key to his story about so-called artificial intelligence, he makes clear the difference between a machine that actually performs the steps of human-like cognitive processes in a silicon medium rather than an organic one, and one that looks as though it is problem-solving like a person but actually arrives at the answer via different processes.

Equally interesting is the discussion Baker's book provokes about what knowledge is for. It highlights the poverty of Ferrucci's tightly focused imagination.
"You can probably fit all the books that are on sale on about two terabytes that you can buy at OfficeMax for a couple of hundred dollars. You get every book. Every. Single. Book. Now what do you do? You can't read them all! What I want the computer to do," he went on, "is to read them for me and tell me what they're about, and answer my questions about them. I want this for all information. I want machines to read, understand, summarize, describe the themes, and do the analysis so that I can take advantage of all the knowledge that's out there. We humans need help. I know I do!"
Actually, you can't get all the information, period, and the best computer can't either, so calm down, David. Knowledge is not just possessing facts, nor is it analyzing them. Analysis takes place at multiple levels. Merely determining which units within a narrative are the facts is itself analysis. And just what are facts? What are the facts of Oliver Twist, for example? Is Fagin a fact? Is the theft of a pocket handkerchief a fact? Is "some more"? Facts, and the juicy stuff that can be derived from them, are determined by an intersection with the point of view of the individual using them.

To be fair, Ferrucci understands these limitations, and his project embraces the challenge of finding a solution. Ultimately, the machines we can imagine now will likely be better at generating lists for hypothesis development than at making inferential leaps. The gains made in problem solving by sudden departures from the knowledge tree or standard method are legendary; that departure is the 'creative' part of creative problem solving, the very stuff of the leap that so often precedes a solution.

Other scientists, such as Joshua Tenenbaum at MIT, think that one day computers will generate concepts and make inductive leaps, but that is hard to imagine after reading Baker's account of parsing the words of a single Jeopardy question well enough just to determine the category of knowledge to search (let alone to answer it). A great deal more than computing speed will be necessary before computers can accurately comprehend human emotion, make inferences, and take control from their inventors like HAL in 2001: A Space Odyssey. Tenenbaum said it best:

"If you want to compare [artificial intelligence] to the space program, we're at Galileo," Tenenbaum said. "We're not yet at Newton." 
Baker's book is thoughtful, informative, and genuinely amusing without being pseudo-science. I'm passing my copy along to my uncle, who was a contestant on Jeopardy in the 1970s. I think he'll get a kick out of it.

Sunday, June 13, 2010

Battle of the brains...

I haven't actually read Nicholas Carr's new book yet - The Shallows: What the Internet Is Doing to Our Brains - but from what I have heard in two interviews with the author, the book is an expansion of his 2008 Atlantic article "Is Google Making Us Stupid?" It explores the impact of recent information technology on our intellects. The reviews I have read stress that the book is a non-polemical and balanced argument, but that hasn't stopped others from getting their knickers twisted in a knot. Steven Pinker offered a short, worthwhile counter-argument in this week's New York Times. Yes, the brain has evolved and will continue to evolve in relation to the media in which it is steeped. We certainly cannot stop that process. Is Carr just in mourning for the change he fears because his business is narrative? Is he Chicken Little or does he have a valid point?

Brains are admittedly diverse in their ability to concentrate broadly versus deeply. That would be true with or without the internet. Most people are in the middle of the curve. At each extreme of that continuum is a cognitive style that is the hallmark of a diagnosable condition. Attention Deficit Disorder is characterized by (among other things) an inability to sustain attention on one point for an extended period of time. At the other end are Autism Spectrum Disorders, which are characterized by (among other things) a cognitive style that gets more deeply involved in details than in the gist of things; those on the spectrum generally have a harder time shifting from one point of concentration to another. Each of those cognitive styles has its assets and liabilities. Your technology-addled brain is here reading my blog, but this is a bookish blog, so you probably also manage to concentrate on a fair bit of full-length narrative. Have you read Carr's book? Will you? Personal feelings are not study data, but what do you think?

Wednesday, May 12, 2010

Reading with recycled neurons (Books - Reading in the Brain by Stanislas Dehaene)


Reading in the Brain by Stanislas Dehaene is a readable account of how the reading brain works, as well as how it doesn't. Since printed text is a relatively recent human invention, on the time scale of evolution the brain could not have evolved specialized structures for reading per se; rather, it has adapted structures that evolved for more general visual purposes and applied them to this specialized task, which combines seeing and recognizing objects with receiving the abstract thoughts of another person. The forms of letters have little to do with the meaning they ultimately communicate. As a result, it has become the work of seven to ten of our earlier years to learn by rote the relationships between our culture's symbols and the units of sound they represent (phonemes). These are combined into words, from which we generate continuous phrases and sentences that accomplish the transfer of information and point of view - at a minimum with some coherence, if not also some beauty - and the composer of those units of meaning doesn't even have to be around to explain himself. It might seem roundabout that printed text has to take a two-step journey from sound to meaning rather than going straight to meaning, but this is what allows language to communicate abstract thought:
I suspect that any radical reform whose aim would be to ensure a clear, one-to-one transcription of English speech would be bound to fail, because the role of spelling is not just to provide a faithful transcription of speech sounds. Voltaire was mistaken when he stated, elegantly but erroneously, that "writing is the painting of the voice: the more it bears a resemblance, the better." A written text is not a high-fidelity recording. Its goal is not to reproduce speech as we pronounce it, but rather to code it at a level abstract enough to allow the reader to quickly retrieve its meaning.
The brain accomplishes this remarkable feat, Dehaene tells us (without even being there), via two pathways that operate simultaneously when reading is fluent. One path transfers the letter-string to its sound content (and to the motor requirements of making that sound with our vocal apparatus) prior to its meaning; the other goes for the identity of the word first and then the sound. A great number of pages in the book are spent discussing the fruits of Dehaene and his colleague Laurent Cohen's labors identifying the left hemisphere's visual word-form area, a region of the brain whose purpose seems to be the visual analysis of the symbols that make up letters and words irrespective of their superficial differences. That is to say, we can read word, WORD, or even WoRd equally easily, and can tell the difference between ANGER and RANGE. You can see the visual word-form area in the picture representing the relative activity of regions of Dehaene's reading brain below; it's the area right above his ear. There are other areas that accomplish the conversion of printed text into units of sound, and still others that settle on the meaning of the assembly given not only its form but its context.
The typical right-handed person's brain has developed most of its key language-processing areas in the left hemisphere (left-handers are less reliable in this regard). This is true whether the person reads from left to right or right to left, and whether they read an alphabet whose symbols map to units of sound (as in these Roman letters you are reading right now) or a system whose symbols stand for whole words (as in logographic scripts like Chinese). Dehaene, in fact, explores the evolution of different writing systems from pictorial markers in depth as he builds his case for how the human brain evolved the skill of reading, a section of the book I very much enjoyed.

Much of this case centers on the brain's ability to adapt cortex to multiple functions, something he calls neuronal recycling.
We would not be able to read if our visual system did not spontaneously implement operations close to those indispensable for word recognition, and if it were not endowed with a small dose of plasticity that allows it to learn new shapes. During schooling, a part of this system rewires itself into a reasonably good device for invariant letter and word recognition.

According to this view, our cortex is not a blank slate or a wax tablet that faithfully records any cultural invention, however arbitrary. Neither is it an inflexible organ that has somehow, over the course of evolution, dedicated a "module" to reading. A better metaphor would be to liken our visual cortex to a Lego construction set, with which a child can build the standard model shown on the box, but also tinker with a variety of other inventions.

My hypothesis disagrees with the "no constraints" approach so common in the social sciences, according to which the human brain is capable of absorbing any form of culture. The truth is that nature and culture entertain much more intricate relations. Our genome, which is the product of millions of years of evolutionary history, specifies a constrained, if partially modifiable cerebral architecture that imposes severe limits on what we can learn. New cultural inventions can only be acquired insofar as they fit the constraints of our brain architecture.
In the book's final chapter, Dehaene discusses cortical plasticity - a relatively recent and much in vogue neuroscientific idea. It is the ability of the brain's neurons to adapt their function from one purpose to another - for example, when a blind person's visual cortex cells become able to decode the sensations of the fingertips as Braille letters. Dehaene makes a case for the necessity of cortical plasticity in inventing cultural forms like number systems and the arts. It's one of those fun bits of reaching for the stars that a researcher has to save for when they write a book rather than a journal article. His voice comes off a little stuffy at times, but his theory is intricate - composed of many interleaving units - so his writing must be systematic in driving home each concept and then attaching it to its predecessor. His model for how the brain accomplishes reading, I must emphasize, is one of several, but he does acknowledge alternate viewpoints along the way. The lay reader may not find this book as accessible as Maryanne Wolf's Proust and the Squid, but it goes into more depth and synthesizes a lot of information into a coherent narrative arc. His diction is clear and the reading experience fluid and even entertaining. Dehaene's work is at the cutting edge of our understanding of the relationship between language and the brain, so it was a pleasure to get the story from one of its sources.

If you are in or around NYC on Thursday, March 13, at 6pm, join me for a film about dyslexia called THE BIG PICTURE: RETHINKING DYSLEXIA. Click here for information.

Saturday, April 24, 2010

Killing giants, myth busting and other valiant deeds (Books - The Oxford Book of Modern Science Writing & Reading in the Brain)



I just came across two excerpts in Richard Dawkins's collection of science writing that reminded me what gives me pleasure about reading science. One is geneticist J. B. S. Haldane's "On Being the Right Size" and the other zoologist Mark Ridley's "On Being the Right Sized Mates," from his 1983 book The Explanation of Organic Diversity. Haldane's essay muses on how the physical structure of animals evolved to the "right" size for their makeup. As an example he uses the giants from the books of his childhood,
These monsters were not only ten times as high as Christian, but ten times as wide and ten times as thick, so that their total weight was a thousand times his, or about eighty to ninety tons. Unfortunately the cross-sections of their bones were only a hundred times those of Christian, so that every square inch of giant bones had to support ten times the weight borne by a square inch of human bone. As the human thigh-bone breaks under about ten times the human weight, Pope and Pagan would have broken their thighs every time they took a step. This was doubtless why they were sitting down in the picture I remember. But it lessens one's respect for Christian and Jack the Giant Killer.
I love the fact that Haldane uses his childhood memory of Pilgrim's Progress to communicate to his reader what he is thinking about the physiological evolution of animal life. The best teaching occurs, I think, when teachers can get back to what initially ignited their own interest in their subject and communicate from that vantage point. Haldane then goes on a musing spree that covers the structure of members of the animal kingdom ranging from insects to giraffes. An insect's structure allows it to fall without danger, but if it gets wet it is likely to drown. Tall animals require a pump of a certain strength and vessels for the circulatory system that convey blood to their extremities; however, this puts them at risk for high blood pressure or problems associated with vascular weakness. How do wings permit flight? How do different respiratory structures - those that have evolved with gills and those with lungs - accomplish the oxygenation of blood? And given these diverse means, what are the upper limits on the size of the animal that possesses them? Haldane not only informs us of the vagaries of the natural world we are a part of, he communicates the verve with which he observes that world, and with witty prose drives the reader forward. It is little wonder that he inspired Mark Ridley's observation that species have evolved to favor homogamy - that is, like mates with like. What I enjoyed about this brief essay is Ridley's debunking of the well-entrenched myth that in human affairs of the heart, opposites attract.
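Haldane's numbers for the giants are pure square-cube scaling, and it's easy to check them yourself. A quick sketch (the scale factor is Haldane's; the rest is just arithmetic):

```python
# Square-cube scaling: scale a body up by a factor k in every dimension.
# Weight grows with volume (k^3); bone cross-section grows with area (k^2),
# so the stress on each square inch of bone grows by k^3 / k^2 = k.
k = 10  # Haldane's giants: ten times as high, as wide, and as thick

weight_factor = k ** 3                             # 1000x the weight
bone_area_factor = k ** 2                          # 100x the bone cross-section
stress_factor = weight_factor / bone_area_factor   # 10x the load per square inch

print(weight_factor, bone_area_factor, stress_factor)
# Since a human thigh-bone breaks at roughly 10x body weight, a 10x increase
# in stress puts the giant at the breaking point with every step.
```

The same ratio explains the essay's other examples: any structural property that depends on area (bone strength, wing lift, gill surface) loses ground to weight as an animal scales up.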
'it is a trite proverb that tall men marry little women...a man of genius marries a fool,' a habit which Murray explained as 'the effort of nature to preserve the typical medium of the race.' The same thought was expressed by the vast intellect of Jeeves, to explain the otherwise mysterious attractions of Bertie for all those female enthusiasts of Kant and Schopenhauer. The source of this proverbial belief is not certainly known; but one possibility can be ruled out. It did not originate in observation: humans mate homogamously (or perhaps randomly) for both stature and intelligence.
Myth-busting is not just a darn good time - especially when indulged in with such gusto - but doubting our assumptions is vital to the continued development of our knowledge. This is also another of many instances in Dawkins's juicy compendium in which learning something new is married to lucid, entertaining writing. The danger of this volume, however, is its tendency to bloat the TBR list. I've made it a rule to only jot down my desired titles at this point, and not engage in any impulse buying. We'll see how long that holds! Here are my other posts related to The Oxford Book of Modern Science Writing 1, 2.

I have also begun Stanislas Dehaene's recent book Reading in the Brain and, speaking of gills, it is packed to them with information about how the brain accomplishes the act you are performing right now - reading. An accessible and engagingly written volume. I tore through the first 60 pages. More on that soon.

Tuesday, April 22, 2008

Mistakes, music, language, and individuality


It's Tuesday and you know what that means... the Science Times. A few articles caught my eye today, one by Karen Barrow about a man who lost his ability to speak following a stroke (aphasia) and was able to learn to speak again with the help of melodic intonation therapy. This therapy seems to take advantage of the fact that certain aspects of singing, especially the melodic part, are typically accomplished by the right hemisphere of the brain, while most language functions are housed on the left, where this patient experienced damage from the stroke. It's almost as if the right hemisphere is dragging the left along. It seems to be particularly helpful with speech spontaneity.

There is a little blurb by Henry Fountain describing a study by Tom Eichele, a Norwegian researcher, and his colleagues, who imaged the brain just prior to errors to learn how it differs from when we perform well. There were two things I appreciated about this article. One was the writing, which in a few simple words avoided the typical language of popular science with regard to fMRI.

** a little lecture about fMRI follows, which you are welcome to skip**
Usually authors describe the brain "lighting up" and draw a direct link between the brain's relative use of oxygen in some regions as compared to others and its activation. The premise of this type of fMRI (there are several, but this is a pretty common one) is that the 'f,' standing for functional, gives us a picture of a brain in action. It does this by measuring which regions of the brain are metabolizing the most oxygen at a given moment in time, and describes those as the regions which are most active. Not an unreasonable assumption. The fMRI scanner is a giant magnet, sensitive to the oxygen level of hemoglobin - the molecule in our blood which carries oxygen. Hemoglobin is more strongly magnetic in its deoxygenated state than when it is bound to oxygen. That difference is what the magnet senses, and tiny regions measuring only a millimeter on a side can be coded by their relative blood oxygen levels. There also has to be an adjustment for the time lag between the metabolism of the oxygen and the change in blood oxygen level, because metabolism is not directly measured; the fMRI senses the by-product that remains once the oxygen is used up. This is one of the many complications inherent in designing a meaningful fMRI experiment. There is, in addition, some disagreement as to exactly what physiological state is reflected by the difference in oxygenation level. Most fMRI literature treats blood oxygen level as directly analogous to 'activation' of brain regions, but what does that mean? The firing of a nerve cell, or of many cells within that 1mm volume (known as a voxel)? Neurons can be excitatory or inhibitory. Are those neurons all excitatory? Doesn't an inhibitory neuron firing also use up energy? Are we seeing a net measure of activation? My point is that fMRI is a complex measure. What a given series of brain images suggests should be described thoughtfully, as the brain is not actually 'lighting up' at all.
This article chooses to do that by carefully describing fMRI as a 'measure of oxygenation,' and I appreciated that. Whew, long tangent.
**Lecture ends here**
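For the curious, the subtraction logic behind those colored brain images can be sketched in a few lines. This is a toy illustration with invented numbers; real analyses model the hemodynamic lag and use proper statistics across many measurements:

```python
# Toy BOLD contrast: compare each voxel's oxygenation signal during a task
# against its signal at rest, and flag voxels whose difference exceeds a
# threshold. All values here are made up purely for illustration.
rest = [1.00, 1.02, 0.99, 1.01]  # baseline signal, one value per voxel
task = [1.01, 1.10, 0.98, 1.08]  # signal during the task

threshold = 0.05  # arbitrary cutoff for this sketch

# Per-voxel difference: task minus rest
contrast = [t - r for t, r in zip(task, rest)]

# Voxels showing a task-related oxygenation increase above threshold
active = [i for i, d in enumerate(contrast) if d > threshold]

print(active)  # voxels 1 and 3 exceed the threshold
```

Note that what gets colored in is the *difference in oxygenation between conditions*, not any direct picture of neurons firing - which is exactly the distinction the article's careful wording preserves.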

The other thing I liked about this article was the science itself. The experiment makes use of a fairly controversial idea - the so-called default mode network - a set of brain regions that are relatively more active when a person is relaxed and "at rest." Don't get me started on the scientific and philosophical problems of measuring rest; I won't go there today. This study suggests that those regions become more active just prior to an error. As problematic as the whole default-mode idea can be, I enjoy the scientific sense behind the notion that a brain preparing to rest rather than to perform an activity would perform it less well. It almost sounds like falling asleep at the wheel!

I also really enjoyed the article by Christine Kenneally on the impact of language on thought. One of my main research interests is the top-down impact on perception - how an individual brain's experiences mold perception. The article is thorough and offers a reading list on language and thought that many of you might enjoy.


Finally, Carl Zimmer writes interestingly and amusingly about diversity at the bacterial level. Given the simplicity of a bacterial life form, you wouldn't expect much possibility of individuation, but research on E. coli belies that expectation.