Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


With the Score “Computer 1, Humans 0,” Focus Shifts to Practical Uses

February 18th, 2011 / in research horizons / by Erwin Gianchandani

[Photo: JEOPARDY! IBM Challenge, courtesy AP/JEOPARDY! Productions]

Decimated.

Vanquished.

Demolished.

As you probably know by now, those are but a few of the words being used to describe how “Watson” — the IBM question-answering supercomputer system — bested its two competitors in a three-part JEOPARDY! series earlier this week.  What’s more, “Watson” wasn’t battling just anyone; the machine — with a friendly voice and cool avatar for television audiences — defeated the two best humans ever to play the game, Ken Jennings and Brad Rutter.

“Watson” ended the two-game exhibition with a score of $77,147 (including a number of Daily Double and Final Jeopardy wagers that drew laughter for their oddly precise amounts).  Meanwhile, Jennings notched $24,000, and Rutter reached $21,600.  (The majority of the prize monies — $1 million to first place, $300,000 to second, and $200,000 to third — will be donated to charities.)

There’s been quite a bit of popular press about “Watson” all week — including speculation about what this victory means for AI, machine learning, natural language processing, etc., as well as various real-world settings spanning healthcare, the smart grid, the financial sector, and the like.

Yesterday, the Associated Press reported that IBM has worked out agreements with Columbia University’s Medical Center and the University of Maryland’s School of Medicine — agreements that will bring the software underlying “Watson” into clinical settings as a decision support tool.  Quoting the AP story:

…It holds promise for doctors and hedge fund managers and other industries that need to sift through large amounts of data to answer questions.

Eliot Siegel, a professor at the Maryland university’s medical school, said other artificial intelligence programs for hospitals have been slower and more limited in their responses than Watson promises to be. They have also been largely limited by a physician’s knowledge of a particular symptom or disease.

“In a busy medical practice, if you want help from the computer, you really don’t have time to manually input all that information,” he said.

Siegel says Watson could prove valuable one day in helping diagnose patients by scouring journals and other medical literature that physicians often don’t have time to keep up with.

Yet the skills Watson showed in easily winning the three-day televised “Jeopardy!” tournament Wednesday also suggest shortcomings that have long perplexed artificial intelligence researchers and that IBM’s researchers will have to fix before the software can be used on patients.

“What you want is a system that understands you’re not playing a quiz game in medicine and there’s not one answer you’re looking for,” Siegel said.

“In playing ‘Jeopardy!’, there is one correct answer. The challenge we have in medicine is we have multiple diagnoses and the information is sometimes true and sometimes not true and sometimes conflicting. The Watson team is going to need to make the transition to an environment in which it comes up with multiple hypotheses – it will be a really interesting challenge for the team to be able to do that.”

Siegel said it would likely be at least two years before Watson will be used on patients at his hospital. It will take that much time to train the program to understand electronic medical records, feed it information from medical literature, and test whether what it’s learned leads to accurate analyses of patient symptoms.

He said he wasn’t bothered by Watson’s on-screen blunders; even highly trained medical professionals make dumb mistakes.

“I will take an assistant that is that fast and that powerful and that tireless any time,” he said. “This is going to be something that 10 years from now will be a completely accepted way that we wind up practicing.”

The full AP story is available here.

It’s certainly worth watching where “Watson” goes next!

(Contributed by Erwin Gianchandani, CCC Director)
