Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


“Google, Microsoft Talk Artificial Intelligence”

November 28th, 2011 / in big science, research horizons / by Erwin Gianchandani

(This post has been updated; please scroll down for the latest.)

Meeting of the minds: Peter Norvig (top) and Eric Horvitz agree that AI is a key to the future of technology [image courtesy Bart Nagel (top) and Microsoft (bottom), via Technology Review].

MIT’s Technology Review has an in-depth interview with Peter Norvig, Google’s Director of Research, and Eric Horvitz, a Distinguished Scientist at Microsoft Research (and a member of the CCC Council), about their optimism for the future of AI:

Google and Microsoft don’t share a stage often, being increasingly fierce competitors in areas such as Web search, mobile, and cloud computing. But the rivals can agree on some things — like the importance of artificial intelligence to the future of technology.

[Norvig and Horvitz] recently spoke jointly to an audience at the Computer History Museum in Mountain View, California, about the promise of AI. Afterward, the pair talked … about what AI can do today, and what they think it’ll be capable of tomorrow…

Technology Review: You both spoke on stage of how AI has been advanced in recent years through the use of machine-learning techniques that take in large volumes of data and figure out things like how to translate text or transcribe speech. What about the areas where we want AI to help but there isn’t lots of data to learn from?

Peter Norvig: What we’re doing is like looking under the lamppost for your dropped keys because the light is there. We did really well with text and speech because there’s lots of data in the wild. Parsing [breaking down the grammatical elements of sentences] never naturally occurs, except perhaps in someone’s linguistics homework, so we have to learn that without [labeled] data. One of my colleagues is trying to get around that by looking at which parts of online text have been made links—that can signal where a particular part of a sentence is.

Eric Horvitz: I’ve often thought that if you had a cloud service in the sky that recorded every speech request and what happened next—every conversation in every taxi in Beijing, for example—it could be possible to have AI learn how to do everything.

More seriously, if we can find ways to capture lots of data in a way that preserves privacy, we could make that possible…

As we see more direct evidence of AI in real life, for example, Siri, it seems that a kind of design problem has been created. People creating AIs need to make them palatable to our own intelligence.

Norvig: That’s actually a set of problems at various levels. We know about the human visual system and what making buttons different colors might mean, for example. At a higher level, the expectations in our heads about something and how it should behave are based on what we think it is and how we think of its relationship to us.

Horvitz: AI is intersecting more and more with the field of human-computer interaction [studying the psychology of how we use and think about computers]. The idea that we will have more intelligent things that work closely with people really focuses attention on the need to develop new methods at the intersection of human intelligence and machine intelligence…

What do we need to know more about to make AIs more compatible with humans?

Horvitz: One thing my research group has been pushing to give computers is a systemwide understanding of human attention, to know when best to interrupt a person. It’s been a topic of research between our researchers and the product teams.

Norvig: I think we also want to understand the human body a lot more, and you can see in Microsoft’s Kinect a way to do that. There’s lots of potential to have systems understand our behavior and body language…

Can you tell me one recent demo of AI technology that impressed you?

Norvig: I read a paper recently by someone at Google who is about to go back to Stanford, on unsupervised learning, an area where the curves of our improvement over time have not looked so good. But he’s getting some really good results, and it looks like learning when you don’t know anything in advance could be about to get a lot better.

Horvitz: I’ve been very impressed by apprentice learning, where a system learns by example. It has lots of applications. Berkeley and Stanford both have groups really advancing that: for example, helicopters that learn to fly on their backs [upside-down] from [observing] a human expert.

Check out the full interviews in the original Technology Review story.

***

Updated Tuesday, Nov. 29 at 8:15am EST: Technology Review’s interview excerpted above was conducted shortly after a special event at the Computer History Museum — titled “The Challenge and Promise of Artificial Intelligence: A Bay Area Science Festival Wonder Dialog” — earlier this month. The museum has now posted the full video.

(Contributed by Erwin Gianchandani, CCC Director)
