Watson Outpaces Jeopardy Wizards in Sneak Preview

January 13th, 2011 / in research horizons / by Erwin Gianchandani

"Watson" (photo from Engadget.com)“What is Jericho?”

Those were the first words from “Watson,” the IBM supercomputer system that’s taking on Ken Jennings and Brad Rutter — the two winningest players in “Jeopardy!” history — this week.

Minutes later, with three categories of questions completed as part of this morning’s dry run, Watson was winning: the supercomputer had $4,400; Jennings trailed with $3,400; and Rutter was third with $1,200.

If those first few minutes are any indication of what the actual game shows (which will be taped beginning tomorrow) are going to be like, we could be in for a truly fascinating man v. man v. machine matchup when the shows hit the airwaves February 14-16.

As we’ve covered in this space before (here and here), “Watson” is the result of years of research in areas like natural language processing, information retrieval, machine learning, and automated reasoning. As described in an Engadget.com article recounting this morning’s sneak preview:

Watson has thousands of algorithms it runs on the questions it gets, both for comprehension and for answer formulation. The thing is, instead of running these sequentially and passing along results, Watson runs them all simultaneously and compares all the myriad results at the end, matching up a potential meaning for the question with a potential answer to the question. The algorithms are backed up by vast databases, though there’s no active connection to the internet — that seems like it would be cheating, in Jeopardy terms.

Much of the brute force of the IBM approach (and why it requires a supercomputer to run) is comparing the natural language of the questions against vast stores of literature and other info it has in its database to get a better idea of context — it has a dictionary, but dictionary definitions of words don’t go very far in Jeopardy or in regular human conversation. Watson learns over time which algorithms to trust in which situation (is this a geography question or a cute pun?), and presents its answers with a confidence level attached — if the confidence in an answer is high enough, it buzzes in and wins Trebek Dollars.
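To make that architecture concrete, here is a minimal Python sketch of the pattern Engadget describes: many independent scoring algorithms run simultaneously over a candidate answer, their results are merged into a single confidence at the end, and the system buzzes in only when that confidence clears a threshold. The scorers, weights, and threshold below are illustrative stand-ins, not IBM’s actual DeepQA components.

import concurrent.futures

# Hypothetical stand-ins for Watson's answer-scoring algorithms.
# Each takes (clue, candidate) and returns a score in [0, 1].
def keyword_overlap_score(clue, candidate):
    clue_words = set(clue.lower().split())
    cand_words = set(candidate.lower().split())
    return len(clue_words & cand_words) / max(len(clue_words | cand_words), 1)

def brevity_prior_score(clue, candidate):
    # Toy prior: short, specific responses are slightly favored.
    return 1.0 / (1.0 + len(candidate.split()))

SCORERS = [keyword_overlap_score, brevity_prior_score]
WEIGHTS = [0.7, 0.3]        # the real system learns these per question type
BUZZ_THRESHOLD = 0.5        # illustrative; tuned to risk vs. reward in practice

def confidence(clue, candidate):
    """Run every scorer concurrently and combine the results at the end,
    rather than chaining them in a sequential pipeline."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda scorer: scorer(clue, candidate), SCORERS))
    return sum(w * s for w, s in zip(WEIGHTS, scores))

def respond(clue, candidates):
    """Pick the best candidate, but buzz in only if confidence is high enough."""
    best = max(candidates, key=lambda c: confidence(clue, c))
    if confidence(clue, best) >= BUZZ_THRESHOLD:
        return best   # buzz in
    return None       # low confidence: stay silent rather than guess

if __name__ == "__main__":
    clue = "Joshua fought the battle of this city, and the walls came tumbling down"
    # With these knowledge-free toy scorers, confidence stays low and
    # respond() returns None -- the same "don't buzz" behavior Watson
    # shows on clues it isn't sure about.
    print(respond(clue, ["What is Jericho?", "What is Babylon?"]))

The point of the sketch is the shape of the computation: nothing downstream depends on the scorers finishing in any particular order, which is what lets the real system fan thousands of algorithms out across a supercomputer and reconcile their evidence only at the end.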

Be sure to check out the video from this morning’s event!

(Contributed by Erwin Gianchandani, CCC Director)
