Back in May of last year, I wrote a couple of posts here about an IBM project to build a software system that could be a successful contestant on Jeopardy!, the popular, long-running TV game show. IBM had already managed, in 1997, to have its software, running on its Deep Blue supercomputer, win a chess match against Garry Kasparov. Jeopardy! is in many ways a tougher nut to crack: the clues, which are grouped into categories, are given in natural language, and the contestant must come up with a question that the clue answers. For example, a recent category was “The 50 US States”, and the clue was, “The only state with a two-word name where neither word occurs in any other state name.” The correct response is, “What is Rhode Island?” (As a long-time viewer of Jeopardy!, I’d characterize that as a fairly easy question.) Another example, from many years ago, was in the category “Words”; the clue was “A moral reservation, or an apothecary’s unit”. Answer: “What is a scruple?”
The New York Times now has a magazine preview article up that gives an overview and progress update on the project. The IBM team knew from the outset that it was tackling a tough assignment.
Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.
Playing the game successfully requires not only a large store of factual knowledge, but also the ability to identify relationships and links quickly, often on the basis of word play in the categories or clues. There are existing systems that answer natural-language questions (notably Wolfram Alpha, developed by Stephen Wolfram), but they rely on carefully constructed databases crafted to include the links necessary to answer particular types of questions.
IBM apparently now thinks that the system, which runs on a Blue Gene supercomputer and is called Watson (after Thomas, not Dr. John H.), is close to being ready for a public test. The company has been running in-house matches against human contestants, and gradually refining the set of algorithms that Watson employs. One difference from some previous “artificial intelligence” approaches is that Watson uses a large number of algorithms to look for relationships, and takes a statistical view of the world, trying to determine which potential answers are most likely to be right. This kind of approach is made feasible, in part, because the development of the Internet has made an enormous body of written material, of all kinds, available in digital form. And, of course, much cheaper processing power and memory capacity mean that Watson can “learn” from a truly immense “textbook”.
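The general idea of running many independent heuristics and combining their scores statistically can be sketched in a few lines. To be clear, this is a toy illustration of that style of architecture, not IBM's actual code: the scorers, the weights, and the candidate answers below are all invented for the example.

```python
# Toy sketch: score candidate answers with several independent heuristics,
# then combine the scores into one confidence per candidate (made-up weights).

def keyword_overlap(clue, candidate):
    """Crude evidence scorer: fraction of clue words shared with the candidate."""
    clue_words = set(clue.lower().split())
    cand_words = set(candidate.lower().split())
    return len(clue_words & cand_words) / max(len(clue_words), 1)

def length_prior(clue, candidate):
    """Another toy scorer: give shorter answers a mild prior boost."""
    return 1.0 / (1.0 + len(candidate.split()))

# Each scorer gets a weight; a real system would learn these from data.
SCORERS = [(keyword_overlap, 0.7), (length_prior, 0.3)]

def rank(clue, candidates):
    """Return candidates sorted by combined confidence, highest first."""
    scored = [(cand, sum(w * f(clue, cand) for f, w in SCORERS))
              for cand in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The appeal of the approach is that no single heuristic has to be right: each contributes a weak, noisy signal, and the weighted combination picks the answer the evidence as a whole favors most.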
Unlike the Deep Blue chess software, which impressed many but had little commercial application, the technology in Watson is something IBM sees as potentially very applicable to real-world systems.
John Kelly, the head of IBM’s research labs, says that Watson could help decision-makers sift through enormous piles of written material in seconds. Kelly says that its speed and quality could make it part of rapid-fire decision-making, with users talking to Watson to guide their thinking process.
One idea mentioned in the article is a medical diagnostic “assistant”, which could help doctors cope with the constant stream of new information on diseases and treatments.
He [Kelly] imagines a hospital feeding Watson every new medical paper in existence, then having it answer questions during split-second emergency-room crises. “The problem right now is the procedures, the new procedures, the new medicines, the new capability is being generated faster than physicians can absorb on the front lines and it can be deployed.”
The producers of Jeopardy! have agreed to have a special televised match between Watson and selected former winners, possibly as early as this fall. It should be fascinating to watch.