Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.

“Developing Robots That Can Teach Humans”

March 5th, 2012 / in big science, research horizons, Research News / by Erwin Gianchandani

On the heels of Saturday’s New York Times story about iRobot, the National Science Foundation (NSF) is out with a feature today describing how a pair of computing researchers at the University of Wisconsin-Madison are programming “robot teachers” that can gaze and gesture like humans.

According to the NSF piece:

Researchers are programming robot teachers to gaze and gesture like humans [image courtesy NSF].

When it comes to communication, sometimes it’s our body language that says the most — especially when it comes to our eyes.


“It turns out that gaze tells us all sorts of things about attention, about mental states, about roles in conversations,” says Bilge Mutlu, a computer scientist at the University of Wisconsin-Madison.


Bilge Mutlu, University of Wisconsin-Madison, leading the gaze-programmable robots effort [image courtesy NSF].

Mutlu, a human-computer interaction specialist, and his fellow computer scientist, Michael Gleicher, take gaze behavior in humans and create algorithms to reproduce it in robots and animated characters.


“These are behaviors that can be modeled and then designed into robots so that they (the behaviors) can be used on demand by a robot whenever it needs to refer to something and make sure that people understand what it’s referring to,” explains Mutlu…
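The notion of a gaze behavior that a robot invokes “on demand” when referring to an object can be sketched as a simple schedule of gaze shifts. The following is a minimal, hypothetical illustration — all names, angles, and timings here are invented for the example and are not drawn from Mutlu and Gleicher’s actual models:

```python
# Hypothetical sketch: a referential gaze cue as a schedule of gaze shifts.
# The robot briefly glances at the object it is naming, then returns its
# gaze to the listener. All values are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class GazeTarget:
    name: str
    x: float  # horizontal angle (degrees) relative to the robot's head
    y: float  # vertical angle (degrees)

def referential_gaze(listener: GazeTarget, referent: GazeTarget,
                     glance_s: float = 1.0):
    """Return a list of (target, duration) gaze shifts: glance at the
    referent while naming it, then hold mutual gaze with the listener
    (duration None = hold until the next cue)."""
    return [
        (referent, glance_s),
        (listener, None),
    ]

listener = GazeTarget("listener", 0.0, 0.0)
block = GazeTarget("red block", -30.0, -15.0)
schedule = referential_gaze(listener, block)
for target, duration in schedule:
    print(target.name, duration)
```

A real controller would layer timing, head/eye coordination, and speech synchronization on top of such a schedule; the sketch only shows the shape of an on-demand cue.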


Mutlu sets up experiments to study the effect of a robot gaze on humans. “We are interested in seeing how referential gaze cues might facilitate collaborative work such that if a robot is giving instructions to people about a task that needs to be completed, how does that gaze facilitate that instruction task and people’s understanding of the instruction and the execution of that task,” says Mutlu.


To demonstrate, a three-foot-tall, yellow robot in the computer sciences lab greets subjects, saying: “Hi, I’m Wakamaru, nice to meet you. I have a task for you to categorize these objects on the table into boxes.”


In one case, the robot very naturally glances toward the objects it “wants” sorted as it speaks. In another case, the robot just stares at the person. Mutlu says the results are pretty clear. “When the robot uses humanlike gaze cues, people are much faster in locating the objects that they have to move…”


The team hopes their work will transform how humanoid robots and animated characters interface with people, especially in classrooms. “We can design technology that really benefits people in learning, in health and in well-being, and in collaborative work,” notes Mutlu.

Check out a video describing how Mutlu and Gleicher are developing computational models designed to give robots and animated characters lifelike gaze behavior after the jump…

…and read the full story here.

Robotic Surgery Systems

Students with components of the Raven II surgical robotics systems [image courtesy Carolyn Lagattuta via NSF].

While we’re on the subject of robotics:

A few weeks ago, we described Raven II, a new surgical robot with wing-like arms built on an open source platform that can perform surgery on simulated patients — with the aims of speeding up procedures, reducing errors, and improving patient outcomes. Today, the March 2012 issue of the NSF Current newsletter describes how seven of these Raven II systems have been shipped to major U.S. medical research labs to create a network of systems using a common platform.

Read all about it here.

(Contributed by Erwin Gianchandani, CCC Director)
