I have written here a number of times about the upcoming match on Jeopardy!, the popular TV game show, between IBM’s Watson computer system and past (human) champions Ken Jennings and Brad Rutter, scheduled to air February 14-16. This weekend, the Wall Street Journal site has an article by Yale computer science professor David Gelernter looking at the approach Watson uses to find answers to the Jeopardy! clues. As I’ve discussed before, Watson uses massively parallel processing, employing 2,880 processor cores and a database containing the equivalent of about 200 million pages of content, and runs many different search and selection algorithms. Prof. Gelernter likens the approach to having a team of specialists working on the problem.
Watson throws a mob of cooperating software specialists at the task of playing “Jeopardy” — the TV game show that tests your recall of millions of disconnected, generally useless facts plus your understanding of coy, arch or wrenchingly cutesy clues. Winning at “Jeopardy” is not as deep a problem as philosophy or mathematics, but it’s much closer to the demands of real life.
… as the specialists work, they dip into a massive database of electronic articles, news stories, books, screenplays, encyclopedias and whatnot that have been dumped whole into a vast boiling vat of computer memory. Thus the dirt-cheap fuel of computer power stokes the roaring blaze of artificial intelligence, and lights up the future.
The really important point that Gelernter makes is that this kind of problem solving is, in many ways, much closer to the kind of thinking that humans do than it is to, for example, the formal proofs of mathematics. This, in turn, means that the approach used in Watson has considerably more potential for real-world applications than the Deep Blue chess-playing computer that beat world champion Garry Kasparov.
One potential application, suggested by both Prof. Gelernter and David Ferrucci, the Watson team leader at IBM, is medical diagnosis. A new system (perhaps “Dr. Watson”, this time in honor of Dr. John H.) could draw on a database comprising an enormous trove of medical literature. The array of software “specialists” that process the information might include some that look for very rare or unprecedented combinations of symptoms and circumstances; in routine cases, those specialists might not have much to contribute. But, as Prof. Gelernter says, that’s OK.
… as I wrote twenty years ago, if each sees action “every five months (or every five decades), but it tells us something interesting when it does, that’s fine.”
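The basic shape of that “mob of specialists” idea is easy to sketch. The following is a toy illustration only, with hypothetical names throughout; it is not IBM’s actual DeepQA architecture, just the general pattern of many independent scorers whose evidence gets combined, including specialists that stay silent on most inputs.

```python
from typing import Callable

# A "specialist" maps (clue, candidate answer) to a confidence in [0, 1];
# returning 0.0 means "nothing to contribute" for this case.
Specialist = Callable[[str, str], float]

def keyword_overlap(clue: str, candidate: str) -> float:
    """Crude lexical specialist: fraction of candidate words found in the clue."""
    words = candidate.lower().split()
    hits = sum(1 for w in words if w in clue.lower())
    return hits / len(words) if words else 0.0

def rare_pattern(clue: str, candidate: str) -> float:
    """A specialist that fires only on unusual inputs; it is usually silent,
    like the rare-case specialists described above."""
    return 1.0 if "unprecedented" in clue.lower() else 0.0

def rank(clue: str, candidates: list[str],
         specialists: list[Specialist]) -> list[tuple[str, float]]:
    """Average the specialists' scores and sort candidates by confidence."""
    scored = [
        (c, sum(s(clue, c) for s in specialists) / len(specialists))
        for c in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The point of the structure is that adding another specialist never requires rewriting the others; a scorer that contributes nothing on routine inputs simply returns zero and drops out of the average.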
It has always seemed to me that making real progress in artificial intelligence was likely to require not more and more clever and complicated logic, but a parallel approach more similar to the kind of processing observed in actual intelligent brains. As Prof. Gelernter observes, Watson is still a long way from being able to pass the Turing test; nonetheless, Watson’s approach is probably the right one.
But when a program does pass the Turing test, it’s likely to resemble a gigantic Watson.
I’m really looking forward to watching this unfold.