Sunday, February 27, 2011

Can a computer be smart or can it only be programmed to act that way? (Books - Final Jeopardy by Stephen Baker)

Final Jeopardy: Man vs. Machine and the Quest to Know Everything concerns itself with the stunt IBM planned: programming a computer to play Jeopardy against human contestants. But Stephen Baker's swift-moving book doesn't merely reveal backstage gossip, like why the computer was named Watson or who designed his 'face.' The real reason to read this book is the story Baker tells about the nature of intelligence and whether machines can possess and use it.

Baker, as IBM did, used the contest as a scaffold. For IBM it narrowed the scope of the research and imposed a schedule. For Baker it gave his narrative thrust toward a conspicuous end. It's almost a shame that we know the outcome, the contest having already aired, but Baker writes well enough to squeeze suspense out of the story by connecting us to the stakes experienced by scientist David Ferrucci and his team of programmers and designers, the former Jeopardy contestants, and, of course, the public relations armies for IBM and Jeopardy.

I enjoyed Baker's lay descriptions of the evolution of computing machines and how they differ from human brains: what kind of knowledge goes into them, what sorts of computations can be expected of them, what kinds of mistakes they make, how computers can learn, and how we, in turn, can learn about the nature of intelligence through the exercise of programming them to do so.

For certain types of questions, Ferrucci said, a search engine could come up with answers. These were simple sentences with concrete results, what he and his team called factoids. For example: "What is the tallest mountain in Africa?" A search engine would pick out the three key words from that sentence and in a fraction of a second suggest Tanzania's 19,340-foot-high Kilimanjaro. This worked, Ferrucci said, for about 30 percent of Jeopardy questions. But performance at that low level would condemn Watson to defeat at the hands of human amateurs.
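The factoid strategy is simple enough to mock up. Here is a minimal sketch in Python of the idea as Baker describes it: throw away the filler words, keep the content words, and score candidate answers by how much they overlap with a prebuilt index. The stop-word list, the toy index, and the function names are my own illustrative inventions, not anything from IBM's actual system, which was vastly more elaborate.

    # Minimal sketch of the "factoid" strategy: reduce a clue to its key words
    # and pick the candidate answer whose index entry overlaps them most.
    # The stop words and the toy index below are invented for illustration.

    STOP_WORDS = {"what", "is", "the", "in", "of", "a", "an"}

    # Toy "corpus": candidate answers mapped to terms associated with them.
    INDEX = {
        "Kilimanjaro": {"tallest", "mountain", "africa", "tanzania", "volcano"},
        "Mount Kenya": {"mountain", "africa", "kenya"},
        "Everest": {"tallest", "mountain", "asia", "nepal"},
    }

    def keywords(question: str) -> set:
        """Keep only the content-bearing words of the question."""
        words = question.lower().rstrip("?").split()
        return {w for w in words if w not in STOP_WORDS}

    def best_guess(question: str) -> str:
        """Return the candidate whose index entry overlaps the question most."""
        terms = keywords(question)
        return max(INDEX, key=lambda candidate: len(INDEX[candidate] & terms))

    print(best_guess("What is the tallest mountain in Africa?"))  # Kilimanjaro

Crude as it is, a scheme like this handles the simple factoids; the other 70 percent of Jeopardy clues, full of puns and indirection, are where the real work lay.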

Baker is adept at explaining how fact-based knowledge can be stored in and retrieved from the neural networks of the human brain. Key to his story about so-called artificial intelligence is the distinction he draws between a machine that actually performs the steps of human-like cognitive processes in a silicon medium rather than an organic one, and one that merely looks as though it is problem-solving like a person but arrives at its answers by entirely different processes.

Equally interesting is the discussion Baker's book provokes about what knowledge is for, which highlights the poverty of Ferrucci's tightly focused imagination.
"You can probably fit all the books that are on sale on about two terabytes that you can buy at OfficeMax for a couple of hundred dollars. You get every book. Every. Single. Book. Now what do you do? You can't read them all! What I want the computer to do," he went on, "is to read them for me and tell me what they're about, and answer my questions about them. I want this for all information. I want machines to read, understand, summarize, describe the themes, and do the analysis so that I can take advantage of all the knowledge that's out there. We human's need help. I know I do!" 
Actually, you can't get all the information, period, and the best computer can't either, so calm down, David. Knowledge is not just possessing facts, nor is it analyzing them. Analysis takes place at multiple levels: merely determining which units within a narrative are the facts is itself analysis. And just what are facts? What are the facts of Oliver Twist, for example? Is Fagin a fact? Is the theft of a pocket handkerchief a fact? Is "some more"? Facts, and the juicy stuff that can be derived from them, are determined by their intersection with the point of view of the individual using them.

To be fair, Ferrucci understands these limitations, and his project embraces the challenge of finding a solution. Ultimately, the machines we can imagine now will likely be better at generating lists for hypothesis development than at making inferential leaps. The gains made in problem solving by sudden departures from the knowledge tree or the standard method are legendary; that's the 'creative' part of creative problem solving, the leap that so often precedes a solution.

Other scientists, such as Joshua Tenenbaum at MIT, think that one day computers will generate concepts and make inductive leaps, but that is hard to imagine after reading Baker's account of how difficult it is just to parse the words of a single Jeopardy clue well enough to determine which category of knowledge to search (let alone to answer it). A great deal more than computing speed is necessary before computers can accurately comprehend human emotion, make inferences, and take control from their inventors like HAL in 2001: A Space Odyssey. Tenenbaum said it best:

"If you want to compare [artificial intelligence] to the space program, we're at Galileo," Tenenbaum said. "We're not yet at Newton." 
Baker's book is thoughtful, informative, and genuinely amusing without being pseudo-science. I'm passing my copy along to my uncle, who was a contestant on Jeopardy in the 1970s. I think he'll get a kick out of it.
