Archive for the ‘research horizons’ category


Great Innovative Idea- Acquiring Object Experiences at Scale

October 7th, 2015

Stefanie Tellex

The following Great Innovative Idea is from John Oberlin, Maria Meier, Tim Kraska, and Stefanie Tellex in the Computer Science Department at Brown University.

Their paper, Acquiring Object Experiences at Scale, was one of the winners at the Computing Community Consortium (CCC) sponsored Blue Sky Ideas Track Competition at the AAAI-RSS Special Workshop on the 50th Anniversary of Shakey: The Role of AI to Harmonize Robots and Humans in Rome, Italy. It was a half-day workshop held on July 16th during the Robotics: Science and Systems (RSS) 2015 Conference.

The Innovative Idea

Baxter is a two-armed manipulator robot which is gaining popularity in the research and industrial communities. At the moment, there are around 300 Baxters being used for research around the world. In our paper we proposed a wide-scale deployment of our recent software Ein, which runs on Baxter, to build and share a database which describes how to recognize, localize, and manipulate everyday objects. If all 300 research Baxters ran Ein continuously for 15 days, we could scan one million objects. Robot time is valuable, and we would not ask the research community to sacrifice their daylight hours on our project, so we designed Ein to run autonomously. Running Ein, Baxter can scan a series of objects without human interaction, so we envision that a participating lab could leave a pile of objects on a table next to their Baxter when they leave the lab for the night, returning the next morning to find the objects scanned and put away. With this level of automation, the human burden shifts from operating the robot to finding the objects, a substantially less tedious task. The Million Object Challenge is our effort to collaborate with the broader Baxter community to scan and manipulate one million objects.
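A quick back-of-the-envelope check of those figures (our own arithmetic, using only the numbers quoted above) shows what the deployment would ask of each robot:

```c
/* Back-of-the-envelope arithmetic for the Million Object Challenge
 * figures quoted above: 300 Baxters running Ein for 15 days to scan
 * 1,000,000 objects. */
#include <stdio.h>

int main(void) {
    double robots = 300.0, days = 15.0, objects = 1e6;
    double robot_days = robots * days;                    /* 4,500 robot-days  */
    double per_robot_per_day = objects / robot_days;      /* ~222 objects/day  */
    double minutes_per_object = 24.0 * 60.0 / per_robot_per_day;  /* ~6.5 min  */
    printf("~%.0f objects per robot per day, i.e. one scan every ~%.1f minutes\n",
           per_robot_per_day, minutes_per_object);
    return 0;
}
```

In other words, each robot would need to scan roughly one object every six to seven minutes around the clock, which is why fully autonomous, overnight operation is central to the proposal.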


Big data has the potential to impact every research community. Robots have been manipulating objects for a long time, but until recently this has involved a lot of manual input from a human operator. Ein enables Baxter to collect images and 3D data about an object so that the robot can tell the object apart from others, determine its pose, and attempt to grasp it. Beyond this, Ein uses feedback from interacting with the object to adapt to difficult situations. Even if an object has surface properties which make it difficult to image, or physical properties which make initial grasp attempts unsuccessful, Baxter can practice localizing and manipulating the object over a period of time and remember the approaches which were successful. This autonomous behavior allows data collection on a whole new scale, opening up new possibilities for robotics research. Successful methods in machine learning (such as SVMs, CRFs, and CNNs) are powerful but require massive amounts of data. Applying these methods to their full capability is next to impossible when data is collected arduously by humans. By collecting data on a large scale, we can begin to tackle category-level inference in object detection, pose estimation, and grasp proposal in ways that have never been done before.
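To make the "practice and remember what worked" idea concrete, here is a minimal sketch in the spirit of that description. It is not Ein's actual algorithm: the discretized grasp candidates, the success-count bookkeeping, and the selection rule are all illustrative assumptions of ours.

```c
/* Minimal sketch (not Ein's implementation) of practicing grasps and
 * remembering which ones worked: track success counts per candidate
 * grasp and prefer the most reliable candidate on the next attempt.
 * The discretization into NUM_GRASPS candidates is a hypothetical
 * simplification for illustration. */
#include <stdio.h>

#define NUM_GRASPS 8            /* hypothetical set of candidate grasp angles */

typedef struct {
    int attempts[NUM_GRASPS];
    int successes[NUM_GRASPS];
} GraspMemory;

/* Record the outcome of one practice attempt. */
static void record_attempt(GraspMemory *m, int grasp, int succeeded) {
    m->attempts[grasp]++;
    if (succeeded)
        m->successes[grasp]++;
}

/* Prefer the grasp with the best observed success rate so far.
 * (A real system would also need to keep exploring untried grasps.) */
static int best_grasp(const GraspMemory *m) {
    int best = 0;
    double best_rate = -1.0;
    for (int g = 0; g < NUM_GRASPS; g++) {
        double rate = (m->attempts[g] == 0)
                          ? 0.0
                          : (double)m->successes[g] / m->attempts[g];
        if (rate > best_rate) {
            best_rate = rate;
            best = g;
        }
    }
    return best;
}

int main(void) {
    GraspMemory mem = {{0}, {0}};
    record_attempt(&mem, 3, 0);  /* grasp 3 slipped off  */
    record_attempt(&mem, 5, 1);  /* grasp 5 held         */
    record_attempt(&mem, 5, 1);  /* grasp 5 held again   */
    printf("next grasp to try: %d\n", best_grasp(&mem));   /* prints 5 */
    return 0;
}
```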

Other Research

Stefanie heads the Humans to Robots Laboratory in the Computer Science Department at Brown. The Million Object Challenge constitutes only one of three main efforts being undertaken there. The second effort is the Multimodal Social Feedback project, which was spearheaded by Miles Eldon (now graduated) and is being actively developed by Emily Wu, both advanced undergraduates at the time of their work. The Social Feedback project seeks to coordinate human and robot activities by allowing both parties to communicate with each other through speech and gesture. This allows a human to connect with Baxter using the same multimodal channels that they would use to communicate with a fellow human. Social feedback allows the robot to communicate with humans, and the results of that communication are fed to Ein, where they are used to specify interactions with objects. Every object scanned for the Million Object Challenge is another object which can be included in human-robot communication.

The third effort is called Burlapcraft, and is carried out by undergraduate researcher Krishna Aluru and postdoctoral researcher James MacGlashan. Krishna created a mod for the popular and versatile game Minecraft which allows the application of James’ reinforcement learning and planning toolkit, BURLAP, within Minecraft dungeons. Burlapcraft has direct experimental applications within Minecraft, but also enables the collection of data which can be applied to other domains.

Researcher’s Background

Stefanie and John have both been fascinated with the idea of intelligent machines for their entire lives. Whereas Stefanie has sought to build a machine capable of holding a decent conversation, John has focused on the more primitive skills of sight and movement. These complementary priorities come together in their multimodal research, treating the robot as an entire entity.

Stefanie has a strong background in Computer Science and Engineering from MIT, where she earned multiple degrees, including her doctorate. She is now an assistant professor in the Computer Science Department at Brown, where she continues her award-winning work. John studied Mathematics and Computer Science at FSU, UC Berkeley, the University of Chicago, and Brown, where he is working on his doctorate.



To view more Great Innovative Ideas, please click here.

Rise of Concerns About AI: Reflection and Directions

October 1st, 2015

Tom Dietterich and Eric Horvitz, the current and former presidents of the Association for the Advancement of Artificial Intelligence (AAAI), respectively, have co-authored a CACM Viewpoint on the Rise of Concerns About AI: Reflection and Directions, now openly available in the October issue of CACM. Tom Dietterich is Distinguished Professor and Director of Intelligent Systems at Oregon State’s School of Electrical Engineering and Computer Science, and Eric Horvitz is a Distinguished Scientist and Managing Director at Microsoft Research and a former CCC Council Member.

Drs. Dietterich and Horvitz reflect on the recent rise of anxieties about AI in public discussions and media. They discuss the realities of progress in AI and carefully elucidate several different categories of risk. They highlight relevant research in AI and other computer science disciplines, including human-computer interaction, verification, and security, and call for more intensive investment in key opportunities for R&D to better understand and address the different risks.

In addition to pursuing scholarly studies, Drs. Dietterich and Horvitz stress the need for computer science professionals to continue to maintain two-way channels for listening to and communicating with the public “…about opportunities, concerns, remedies, and realities of AI.”

To learn more, please read the full article here.

CCC Whitepaper- Systems Computing Challenges in the Internet of Things

September 28th, 2015

The Computing Community Consortium (CCC) Computing in the Physical World Task Force has just released a community whitepaper on Systems Computing Challenges in the Internet of Things.

The Task Force, led by CCC Council Member Ben Zorn from Microsoft Research, is looking at core research challenges that the Internet of Things (IoT) presents. This whitepaper highlights these challenges and provides recommendations that will help address inadequacies in existing systems, practices, tools, and policies.

The recommendations are summarized below:

  • Invest in research to facilitate the construction, deployment, and automated analysis of multi-component systems with complex and dynamic dependences. IoT systems by their nature will have dynamic membership and operate in unknown and unpredictable environments that include, by assumption, adversarial elements.
  • Go beyond formal methods research (which has historically focused on software and CPS) to create abstractions and formalisms for constructing and reasoning about systems with diverse and more difficult-to-characterize components, such as human beings, machine learning models, data from crowds, etc.
  • Support research that addresses the core underlying scientific and engineering principles dealing with large-scale issues, networking, security, privacy, the impact of the physical on the cyber, real-time operation, and the other key questions raised in the whitepaper.
  • Industry is application-focused and usually targets a single domain (health care, transportation, etc.). Support research that considers architectures and solutions that transcend specific application domains.
  • Support research on the unique challenges and opportunities in IoT security, such as minimal operating systems to create IoT devices with smaller attack surfaces, new ways to detect and prevent anomalous network traffic, and high-level policy languages for specifying permissible communication patterns.
  • Invest in research in cyber-human systems that reflect human understanding and interaction with the physical world and (semi-)autonomous systems.

To learn more, please read the entire whitepaper.

Excitement around K-12 CS Education, but there’s work to be done by the CS Community

September 22nd, 2015

The following is a blog post by Ran Libeskind-Hadas, R. Michael Shanahan Professor and Computer Science Department Chair at Harvey Mudd College, Co-Chair of CRA’s Education subcommittee (CRA-E), and former Computing Community Consortium (CCC) Council Member, and Debra Richardson, founding Dean of the UC Irvine Bren School of Information and Computer Science and CCC Council Member.

Mayor Bill de Blasio announced this week that every public school in New York City – elementary through high school – must offer computer science courses to all students within ten years. It is estimated that fewer than 10% of schools in New York City currently offer a CS course and only 1% of students take such a course. CS will not be required of all students, but the opportunity to take a CS course will be available in every school.

Likewise, San Francisco Unified School District announced last month that it would add computer science instruction for all students at every grade level, beginning as early as preschool. And the Chicago Public Schools are implementing a K-12 computer science curriculum and will make computer science a graduation requirement by 2019.

It seems inevitable that the initiatives by New York, San Francisco, and Chicago will encourage other cities to follow suit.

According to an article in the New York Times, about 5000 NYC teachers will need to be trained to meet Mayor de Blasio’s initiative. As similar initiatives are adopted elsewhere, the demand for curricula and pre- and in-service teacher training will grow dramatically.

The computer science community must be proactive in developing curricula and training teachers for these initiatives. Good curricula and teacher training can showcase the intellectual beauty of our field, demonstrate its relevance to society, and provide students with valuable skills that they can leverage in their other academic subjects and use to express their creativity.

Getting this right requires that we invest seriously in computer science education research at the university level. We need high-quality research in computer science pedagogy and best teaching practices. We need excellent pre-service and in-service teacher training. We need to take a close look at what physics, mathematics, and other communities have done in education research and teacher training.

The Computing Community Consortium (CCC) will release a whitepaper later this fall making the case for computer science departments to invest in education research, describing some of the major intellectual challenges in the field, and proposing strategies for building strength in this vitally important field. Stay tuned!

Cache or Scratchpad? Why choose?

September 8th, 2015

The following is a special contribution to this blog by CCC Executive Council Member Mark D. Hill of the University of Wisconsin-Madison. Full disclosure: He had the pleasure of working with one of the authors of the discussed paper—Sarita Adve—on her 1993 Ph.D.

Great conundrums include:
* Will I drink coffee or tea?
* Shall I have cake or ice cream?
* Should I use a cache or scratchpad?

While most readers will never face the last choice, it matters for the devices we love: keeping frequently used information close at hand saves both time and energy.

Caches are the workhorse of modern computers, feeding the processor with data about 100X faster than main memory. Decades of hardware research have found clever ways to determine what data to keep in the cache, how to find it, and when to throw it out. The magic of caches is that this cleverness has been hidden almost entirely from software.

But this cleverness costs energy. On every load and store. And it is not always successful.

A scratchpad moves the burden of managing fast accesses to software. Its hardware memory structure is simple and efficient. But its software use is not. Software must explicitly move data in and out of a separate scratchpad address space and take responsibility for keeping the multiple copies in different address spaces coherent. In practice, this is inefficient, and scratchpads have remained an enticing but niche solution.

Researchers at Illinois address this conundrum in a paper presented at the International Symposium on Computer Architecture, Stash: Have Your Scratchpad and Cache It Too. Besides offering the proverbial cake as an audience prize, the presentation described a new memory organization called stash that gets the best of caches and scratchpads. The stash redistributes the hardware-software burden – software determines what data should go into the stash, but relies on hardware smarts for address space conversion and coherence. Hits in the stash are as efficient as a scratchpad, but (infrequent) misses incur a small hardware penalty. By empowering hardware and software to do what each does best, stash improves both performance and energy.
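To make the division of labor concrete, here is a minimal sketch of the contrast the post describes. It is illustrative only and not code from the ISCA paper: the tile size, the software-managed scratchpad buffer, and the stash_map helper are hypothetical stand-ins, written as plain C so the example compiles and runs, for what would be hardware-supported mechanisms on a real chip.

```c
/* Illustrative sketch only -- not the API from the ISCA paper.
 * scale_with_scratchpad shows the software burden of an explicitly
 * managed scratchpad; scale_with_stash shows the stash idea, where
 * software merely declares what belongs in the stash and hardware
 * handles address translation and coherence. */
#include <stdio.h>
#include <string.h>

#define TILE 256
static float scratchpad[TILE];   /* stand-in for a small, fast local memory */

/* Scratchpad model: software explicitly copies data into a separate
 * address space, computes there, then copies results back so the
 * global copy stays coherent. */
static void scale_with_scratchpad(float *data, size_t n, float k) {
    for (size_t i = 0; i < n; i += TILE) {
        size_t len = (n - i < TILE) ? (n - i) : TILE;
        memcpy(scratchpad, &data[i], len * sizeof(float));   /* copy in   */
        for (size_t j = 0; j < len; j++)
            scratchpad[j] *= k;                              /* compute   */
        memcpy(&data[i], scratchpad, len * sizeof(float));   /* copy out  */
    }
}

/* Stash model: software only maps the global data it wants stashed;
 * there are no explicit copies and no second copy to keep coherent.
 * Here the "mapping" is a no-op; real hardware would install a stash
 * mapping and serve hits at scratchpad speed. */
static float *stash_map(float *global_addr, size_t bytes) {
    (void)bytes;
    return global_addr;
}

static void scale_with_stash(float *data, size_t n, float k) {
    float *s = stash_map(data, n * sizeof(float));
    for (size_t i = 0; i < n; i++)
        s[i] *= k;    /* hits are scratchpad-fast; misses fall back to memory */
}

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {1, 2, 3, 4};
    scale_with_scratchpad(a, 4, 2.0f);
    scale_with_stash(b, 4, 2.0f);
    printf("%.0f %.0f\n", a[3], b[3]);   /* both print 8 */
    return 0;
}
```

The point of the contrast is the second function: the stash keeps the scratchpad's hit efficiency while pushing address translation and coherence back onto hardware, which is the redistribution of burden the paper argues for.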

Stash may not be the final answer, but the paper asks the right question. It also adds to a growing body of recent work imploring hardware and software designers to rethink their distribution of work.

Robin Murphy’s TED Talk on Disaster Robotics

September 3rd, 2015

Texas A&M University’s Raytheon Professor of Computer Science and Engineering and former Computing Community Consortium (CCC) Council Member, Robin Murphy, recently gave a TED talk on disaster robots.

Robots don’t replace people or dogs… They do things new. They assist the responders, the experts, in new and innovative ways.

Robin Murphy explains that if you can reduce the initial emergency response by one day, you can reduce the overall recovery by 1,000 days.

If the initial responders can get in, save lives… that means the other groups can get in to restore the water, the roads, the electricity, which means then the construction people, the insurance agents, all of them can get in to rebuild the houses, which then means you can restore the economy…robots can make a disaster go away faster.


Murphy was the chair of the CCC Computing for Disaster Management Workshop that produced the CRICIS Report, as well as the author of the CCC-led white paper Toward a Science of Autonomy for Physical Systems: Disaster.