Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


First Person: Pattie Maes on the Future of HCI

June 26th, 2012 / in big science, research horizons, Research News / by Erwin Gianchandani

[Image: Pattie Maes and Natan Linder, a research student at the MIT Media Lab, inspect a novel interface device that Linder created. Courtesy MIT Media Lab/Fluid Interfaces group via Technology Review.]

On the heels of Francis Collins’s Scientific American article about mobile health apps, Technology Review has an interesting interview with MIT Media Lab associate professor Pattie Maes about the future of human-computer interaction in light of recent advances in mobile technologies:

What will smart phones be like five years from now?

Phones may know not just where you are but that you are in a conversation, and who you are talking to, and they may make certain information and documents available based on what conversation you’re having. Or they may silence themselves, knowing that you’re in an interview.

They may get some information from sensors and some from databases about your calendar, your habits, your preferences, and which people are important to you.

Once the phone is more aware of the user’s current situation, and the user’s context and preferences and all that, then it can do a lot more. It can change the way it operates based on the current context.

Ultimately, we may even have phones that constantly listen in on our conversations and are just always ready with information and data that might be relevant to whatever conversation we’re having.
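Maes is speaking in broad strokes, but the decision layer she describes is easy to sketch. Below is a minimal, hypothetical illustration in Python (3.10+); the PhoneContext fields and the rules are our inventions for this post, not anything from an actual phone OS:

```python
from dataclasses import dataclass

@dataclass
class PhoneContext:
    """Hypothetical snapshot of what a context-aware phone might know."""
    in_conversation: bool         # e.g., inferred from the microphone
    calendar_event: str | None    # current event pulled from the calendar
    important_contacts: set[str]  # people the user has marked as important

def choose_ringer_mode(ctx: PhoneContext, caller: str) -> str:
    """Pick a ringer behavior from context, per the scenario Maes describes."""
    # Silence the phone during an interview or an ongoing conversation...
    if ctx.calendar_event == "interview" or ctx.in_conversation:
        # ...unless the caller is someone the user considers important.
        return "vibrate" if caller in ctx.important_contacts else "silent"
    return "ring"

# Example: the calendar says the user is in an interview right now.
ctx = PhoneContext(in_conversation=True, calendar_event="interview",
                   important_contacts={"alice@example.com"})
print(choose_ringer_mode(ctx, "bob@example.com"))    # -> silent
print(choose_ringer_mode(ctx, "alice@example.com"))  # -> vibrate
```

The hard research questions, of course, lie in reliably inferring fields like in_conversation from sensors, not in the rule itself.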

How will mobile interfaces be different?

Speech is just one element. There may be other things — like phones talking to one another. So if you and I were to meet in person, our phones would be aware of that and then could make all the documents available that might be relevant to our conversation, like all the e-mails we exchanged before we engaged in the meeting.
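As a toy version of that meeting scenario, imagine a handler that fires when two phones detect they are co-located (say, over Bluetooth) and digs up the owners’ prior correspondence. The data model and function names below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    recipient: str
    subject: str

@dataclass
class Phone:
    owner: str
    inbox: list[Email] = field(default_factory=list)

def emails_between(a: Phone, b: Phone) -> list[Email]:
    """Collect prior messages exchanged between the two phones' owners."""
    pair = {a.owner, b.owner}
    return [m for m in a.inbox + b.inbox if {m.sender, m.recipient} == pair]

def on_meeting_detected(a: Phone, b: Phone) -> None:
    """Hypothetical handler fired when two phones sense they are co-located."""
    for mail in emails_between(a, b):
        print(f"Suggesting: {mail.subject}")

# Example: two people meet; their earlier thread resurfaces on both phones.
pm = Phone("pattie", [Email("pattie", "erwin", "Draft agenda for Tuesday")])
eg = Phone("erwin", [Email("erwin", "pattie", "Re: Draft agenda for Tuesday")])
on_meeting_detected(pm, eg)
```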

Just as, when you go to Google and do a search, all the ads are highly relevant to the search you’re doing, I can imagine a situation where the phone always has a lot of recommendations and things that may be useful to the user given what the user is trying to do.
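The ad analogy boils down to relevance ranking: score candidate items against the user’s current activity and surface the top few. Here is a deliberately toy scoring scheme (the tag-overlap metric is our invention, not anything Maes proposes):

```python
def score(item_tags: set[str], context_tags: set[str]) -> float:
    """Toy relevance: fraction of an item's tags that match the context."""
    return len(item_tags & context_tags) / len(item_tags) if item_tags else 0.0

def recommend(items: dict[str, set[str]], context_tags: set[str],
              k: int = 3) -> list[str]:
    """Return the names of the k items most relevant to the current context."""
    return sorted(items, key=lambda name: score(items[name], context_tags),
                  reverse=True)[:k]

# Example: the user is planning travel, so travel-tagged items rank first.
items = {
    "Flight confirmation": {"travel", "email"},
    "Expense report":      {"work", "finance"},
    "Airport map":         {"travel", "map"},
}
print(recommend(items, {"travel"}, k=2))  # ['Flight confirmation', 'Airport map']
```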

Another idea is expanding the interaction that the user has with the phone to more than just touch and speech. Maybe you can use gestures to interact. SixthSense, which we built, can recognize gestures; it can recognize if something is in front of you and then potentially overlay information, or interfaces, on top of the things in front of you.
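SixthSense itself pairs a wearable camera and projector with computer-vision gesture tracking; a faithful implementation is well beyond a blog post. But the control flow Maes sketches (recognize a gesture and an object, then project an interface on top) reduces to a small dispatch loop. Everything below, from the gesture labels to the stubbed recognizers, is a hypothetical stand-in:

```python
def recognize_gesture(frame) -> str | None:
    """Stand-in for a vision pipeline; SixthSense tracks colored fingertip
    markers. Here we simply pretend every frame shows a pointing gesture."""
    return "point"

def detect_object(frame) -> str | None:
    """Stand-in for object recognition (a book, a boarding pass, ...)."""
    return "a paperback book"

def overlay(text: str) -> None:
    """Stand-in for projecting an interface onto the object in view."""
    print(f"[overlay] {text}")

# Map recognized gestures to overlay actions on the detected object.
ACTIONS = {
    "point": lambda obj: overlay(f"Reviews and prices for {obj}"),
    "frame": lambda obj: overlay("Photo captured"),
}

def step(frame) -> None:
    """One cycle: recognize gesture and object, then run the matching action."""
    gesture, obj = recognize_gesture(frame), detect_object(frame)
    if gesture in ACTIONS and obj is not None:
        ACTIONS[gesture](obj)

step("frame-0")  # -> [overlay] Reviews and prices for a paperback book
```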

Check out the full interview — including Maes’s thoughts on Project Glass, Google’s augmented-reality project — here.

And here’s a shameless plug: check out an excellent talk (below) on this subject by CCC Council Member Beth Mynatt from this past February’s symposium marking two decades of the Federal Networking and Information Technology Research and Development (NITRD) Program. As Beth notes, “We are at the beginning of a tremendous partnership between people and computing.”

(Contributed by Erwin Gianchandani, CCC Director)
