Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Agencies Announcing First Round of NRI Awards

September 14th, 2012 / in big science, research horizons, Research News / by Erwin Gianchandani

(This post has been updated; please scroll down for the latest.)

OSTP Blog: Agencies Investing in Research for Next-Generation Robotics [image courtesy The White House].

The first round of awards for next-generation robotics R&D through the National Robotics Initiative (NRI) is being announced today, according to White House Office of Science and Technology Policy (OSTP) deputy director for policy Tom Kalil:

Today, four federal agencies announced $40 million in grants to university researchers across the country to advance the National Robotics Initiative, unveiled by President Obama at Carnegie Mellon University on June 24, 2011.

 

The initiative, led by the National Science Foundation, is also supported by NASA, the National Institutes of Health, and the United States Department of Agriculture. These agencies have also issued a new joint solicitation to fund an additional 25-40 awards.

 

The research projects that are being funded vividly illustrate the broad potential of robotics to help achieve important national goals, such as [following the link]:

 

  • Improving search and rescue operations in large-scale disasters;
  • Helping infants at risk of developing Cerebral Palsy learn how to walk and move;
  • Increasing the productivity of America’s manufacturing workers; and
  • Developing new capabilities for future planetary rovers.

 

Other agency announcements related to robotics that have been made since the President’s speech include the Department of Defense’s support for equipment for university robotics research, the opening of the Navy’s cutting-edge Laboratory for Autonomous Systems Research, and DARPA’s Robotics Challenge to improve disaster response operations.

In his blog post, Kalil notes the role of the Computing Community Consortium (CCC) in helping to catalyze the NRI:

The Administration decided to launch the National Robotics Initiative because:

 

  • Robotics can address a broad range of national needs such as advanced manufacturing, logistics, services, transportation,  homeland security, defense, medicine, healthcare, space exploration, environmental monitoring, and agriculture;
  • Robotics technology is reaching a “tipping point” and is poised for explosive growth because of improvements in core technologies such as microprocessors, sensors, and algorithms;
  • Robotics can play an important role in science, technology, engineering and mathematics (STEM) education because it encourages hands-on learning and the integration of science, engineering, and creative thinking; and
  • Members of the research community such as the Computing Community Consortium and program managers in key science agencies have developed a shared vision and an ambitious technical agenda for developing next-generation robotic systems that can safely work with humans and augment human capabilities.

For full details, see the OSTP announcement here.

***

Updated Friday, Sept. 14th at 2:00pm EDT: Agencies involved in the NRI are rolling out press releases this afternoon describing the first round of awards:

From NSF:

NSF … and managed the merit review process for more than 700 individual proposals requesting over $1 billion in funding. NSF’s Directorates for Computer and Information Science and Engineering; Engineering; Education and Human Resources; and Social, Behavioral, and Economic Sciences worked collaboratively with the other agencies.

 

“Harnessing the expertise of the private and public sectors, across many disciplines will advance smart technology and ultimately revolutionize key drivers of America’s productivity, economic competitiveness and quality of life,” said Farnam Jahanian, assistant director of NSF’s CISE Directorate.

 

What follows is the list of the NSF-funded projects and principal investigators leading the research from each participating university. For projects that involve multiple institutions, the lead institution (and PI) is noted with an asterisk.

 

Collaborative Research: Multilateral Manipulation by Human-Robot Collaborative Systems
*Stanford University (Allison Okamura), University of California – Santa Cruz (Jacob Rosen), Johns Hopkins University (Gregory Hager), University of California – Berkeley (Pieter Abbeel)
This project seeks to emulate the expert-apprentice relationship using human beings and robots. It focuses on developing ways in which robots can learn from human activity in order to help humans by providing more hands, eyes and brain power as necessary, enabling multilateral manipulation from multiple vantage points.  Applications in the manufacturing plant or in the operating room are potentially numerous.

 

Collaborative Research: Purposeful Prediction: Co-robot Interaction via Understanding Intent and Goals
*Carnegie-Mellon University (James Bagnell), Massachusetts Institute of Technology (Joshua Tenenbaum), University of Washington (Dieter Fox)
This project focuses on recognizing human intention–that is, teaching a robot to forecast what a human is going to do, so that robots may more effectively collaborate with humans. The inability of robots to anticipate human needs and goals today represents a fundamental barrier to the large-scale deployment of robots in the home and workplace. This project seeks to develop a new science of purposeful prediction using algorithms that may be applied to human-robot interaction across a wide variety of domains.
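
To give a flavor of what predicting human intent can mean in practice, here is a toy sketch of Bayesian goal inference: given a partial hand trajectory, score how directly the observed motion heads toward each candidate goal and update a posterior over goals. This is an illustrative exercise only; the goals, trajectory, and scoring rule below are assumptions, not the funded project’s algorithm.

    # Toy sketch of goal inference for intent prediction (illustrative only; the
    # goals, trajectory, and scoring rule are assumptions, not the project's method).
    import numpy as np

    goals = {"cup": np.array([1.0, 0.0]), "drawer handle": np.array([0.0, 1.0])}
    prior = {name: 0.5 for name in goals}

    # Observed partial trajectory of a human hand (2-D positions).
    trajectory = np.array([[0.0, 0.0], [0.2, 0.05], [0.45, 0.1]])

    def likelihood(traj, goal, beta=5.0):
        """Score how consistently each observed step heads toward the goal."""
        score = 0.0
        for a, b in zip(traj[:-1], traj[1:]):
            step, toward = b - a, goal - a
            cos = step @ toward / (np.linalg.norm(step) * np.linalg.norm(toward) + 1e-9)
            score += cos
        return np.exp(beta * score)

    # Posterior over goals given the motion observed so far.
    posterior = {name: prior[name] * likelihood(trajectory, pos) for name, pos in goals.items()}
    total = sum(posterior.values())
    print({name: round(p / total, 3) for name, p in posterior.items()})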

 

Collaborative Research: Soft Compliant Robotic Augmentation for Human-Robot Teams
*Massachusetts Institute of Technology (Daniela Rus), Harvard University (Robert Wood), the University of Colorado at Boulder (Nikolaus Correll)
This research explores the use of soft materials for robots, specifically in the design of soft compliant fingers and hands, so that humans and robots may more effectively coexist in shared environments. Made more affordable, soft manipulators can enable in-home assistants to easily and unobtrusively navigate the natural world of the elderly or incapacitated.

 

A Design Methodology for Multi-fingered Robotic Hands with Second-order Kinematic Constraints
*Idaho State University (Alba Perez Gracia), the University of California Irvine (J. Michael McCarthy)
This research focuses on the adoption and integration of specific characteristics of human hands in robots in order to accomplish a desired task, whether that entails lifting a small, unusually-shaped part for assembly or moving a bulky object. This tool will increase the ability of industry to design high performance, cost-effective multi-fingered robotic hands and other end effectors.

 

Collaborative Research: A Dynamic Bayesian Approach to Real-Time Estimation and Filtering in Grasp Acquisition and Other Contact Tasks
*Rensselaer Polytechnic Institute (Jeffrey Trinkle), State University of New York (SUNY) at Albany (Siwei Lyu)
This project is developing techniques to enable robots to grasp objects or perform other contact tasks in unstructured, uncertain environments with speed and reliability. Using the proposed method, sensor data tracks the continuous motions of manipulated objects, while models of the objects are simultaneously updated. Applications include search and rescue, planetary exploration, manufacturing, and even home use, where the robot must handle everyday uncertainties such as moving a bowl effectively whether it is full or empty.
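
The filtering idea described above (tracking an object’s motion from noisy sensor data while simultaneously updating a belief about its state) can be illustrated with a minimal particle filter. The sketch below is generic and uses simplified assumptions, a one-dimensional object position and made-up noise levels, rather than the project’s proposed estimator.

    # Minimal particle-filter sketch of Bayesian state estimation for a contact task.
    # All models and numbers are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 500
    particles = rng.normal(0.0, 1.0, N)   # belief over object position (cm)
    weights = np.full(N, 1.0 / N)

    def predict(particles, push=0.5, noise=0.1):
        """Propagate each hypothesis through a simple motion model (robot pushes the object)."""
        return particles + push + rng.normal(0.0, noise, particles.shape)

    def update(weights, particles, z, sensor_noise=0.3):
        """Re-weight hypotheses by how well they explain the sensor reading z."""
        w = weights * np.exp(-0.5 * ((z - particles) / sensor_noise) ** 2)
        return w / w.sum()

    def resample(particles, weights):
        """Concentrate particles in high-probability regions of the belief."""
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    true_pos = 0.0
    for step in range(10):
        true_pos += 0.5                              # the object actually moves 0.5 cm
        z = true_pos + rng.normal(0.0, 0.3)          # noisy sensor measurement
        particles = predict(particles)
        weights = update(weights, particles, z)
        particles, weights = resample(particles, weights)
        print(f"step {step}: estimate {np.average(particles):+.2f} cm, truth {true_pos:+.2f} cm")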

 

Collaborative Research: Addressing Clutter and Uncertainty for Robotic Manipulation in Human Environments
Carnegie-Mellon University (Siddhartha Srinivasa), Northwestern University (Kevin Lynch)
The long-term goal of this project is to develop personal robots that share a workspace with humans. To achieve the goal of personal robots in homes, the robots must adapt to the humans’ living space, which can be cluttered and unstructured. The model settings here are a messy desk or a crowded refrigerator: in both examples, a robot must be able to extract a specific item, or complete some other task at hand, from a landscape that may also be accessed and altered by humans.

 

Collaborative Research: Assistive Robotics for Grasping and Manipulation using Novel Brain Computer Interfaces
*Columbia University (Peter Allen), University of California, Davis (Sanjay Joshi)
This project aims to make concrete some of the major goals of assistive robotics, drawing on an assembled team of experts from the fields of signal processing and control, robotic grasping, and rehabilitative medicine. The research team is working to create a field-deployable assistive robotic system that will allow severely disabled patients to control a robot arm/hand system to perform complex grasping and manipulation tasks using novel brain muscle computer interfaces.

 

Collaborative Research: Multiple Task Learning from Unstructured Demonstrations
University of Massachusetts Amherst (Andrew Barto)
This research centers on programming by human demonstration.  The project is developing techniques for the efficient, incremental learning of complex robotic tasks by breaking unstructured demonstrations into reusable component skills. A simple interface that allows end-users to intuitively program robots is a key step to getting robots out of the laboratory and into human-cooperative settings in the home and workplace.

 

A Biologically Plausible Architecture for Robotic Vision
University of California-San Diego (Nuno Vasconcelos)
The project seeks to develop a vision architecture for robots, based on biologically inspired examples of high-level vision systems (for example, gaze, or ascertaining what a human is looking at), that is both biologically plausible and jointly optimal. This system would be useful for attention, object tracking, and object and action recognition in both static and dynamic environments.

 

Context-Driven Haptic Inquiry of Objects Based on Task Requirements for Artificial Grasp and Manipulation
Arizona State University (Veronica Santos)
This work focuses on the sense of touch. The project aims to advance artificial manipulators by integrating a new class of multimodal tactile sensors with artificial, humanlike hands and developing inquiry routines based on contextual touch. Weight given to each mode of tactile sensing (force, vibration, temperature) will also be tuned according to the context of the task. The research explores how to make use of this stimulus, in order to enable assistive robots to better grasp, hold and carry objects.

 

Contextually Grounded Collaborative Discourse for Mediating Shared Basis in Situated Human Robot Dialogue
Michigan State University (Joyce Chai)
This project focuses on human-robot dialogue, bridging the chasm of understanding between human partners and robots that have completely mismatched capabilities in perceiving and reasoning about the environment. The work centers on developing techniques that will support mediating the shared perceptual basis for effective conversation and task completion. With an ability to use what is known to shed light on what is not yet known (that is, using the power of inference in situations that give clues to meaning), this research could benefit many applications in manufacturing, public safety and healthcare.

 

Cooperative Underwater Robotic Networks for Discovery & Rescue
University of Connecticut (Chengyu Cao)
This project aims to develop a cooperative underwater robotic network for exploration, discovery and rescue, tasks currently hampered by murky underwater conditions in which traditional acoustic and radio communication networks do not work well. So-called autonomous underwater vehicles offer inherent advantages over manned vehicles in cost and efficiency; specifically, they eliminate the need for life support systems and the risk to human life, while enabling assessment and damage mitigation after an incident under the water’s surface, such as an oil spill.

 

Core Technologies for MRI-powered Robots
Children’s Hospital Corporation (Pierre Dupont)
This project aims to produce robots that can not only tolerate a Magnetic Resonance Imaging (MRI) environment, but can use its attributes to do useful things. These include crawling inside body cavities to perform interventions or becoming robotic prosthetic implants. At the millimeter and sub-millimeter scale, groups of MRI-powered robots could swim inside fluid-filled regions of the body to perform targeted therapies, such as drug and cell delivery, or assemble as a sensor network. MRI system environments are typically challenging for robots. With the use of two testbeds at different scales, the project seeks to create a transformative robotic technology that uses MRI systems to power, command and control robots under the guidance and control of a clinician.

 

Co-Robots for STEM Education in the 21st Century
University of California-Davis (Harry Cheng)
This project studies how to use co-robot systems and math-oriented co-robotics competitions to enhance student engagement, increase student motivation in learning algebra and subsequent science, technology, engineering and mathematics (STEM) subjects, and to pique interest in pursuing STEM related careers and post-secondary study. Using a unique robotics platform, a Lego-like intelligent modular system designed for K-12 education, this project prepares teachers to engage their students with relevant pedagogy that illustrates abstract math concepts with concrete applications using computing and robotics.

 

Expert-Apprentice Collaboration
Duke University (Carlo Tomasi)
This research centers on an integration of the classic expert-apprentice relationship into robotics. It develops visual feature-based methods that allow robots both to teach humans and to learn from them, unifying the two roles of apprenticeship learning. This project hopes to tap the potential that teaching by demonstration or imitation offers as a powerful and practical approach to realizing the promise of large-scale personal robotics in a wide range of applications. It will also simplify programming, enabling non-expert users to program computers.

 

Improved safety and reliability of robotic systems by faults/anomalies detection from uninterpreted signals of computation graphs
California Institute of Technology (Richard Murray)
This research centers on detecting error conditions–that is, figuring out when things are going wrong, and/or when conditions may have been tampered with or altered by a human. This project addresses the main challenges of designing robots that can operate around humans to create systems that can guarantee safety and effectiveness, while being robust to the nuisances of unstructured environments, from hardware faults to software issues, erroneous calibration and less predictable anomalies, such as tampering and sabotage.

 

Measuring Unconstrained Grasp Forces Using Fingernail Imaging
University of Utah (Stephen Mascaro)
This project develops the technology for unconstrained, multi-fingered measurement of human grasp forces using a fingernail imaging technique. Human subjects freely choose where to place their fingers on objects, allowing for unconstrained multi-finger grasping. The co-robot then detects the individual finger forces of a human partner by ascertaining blood flow, as measured through color change on a fingernail. A co-robot trained with the appropriate calibration data could recognize and emulate or adapt to a human partner’s grasp forces, measured using only vision.
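
As a rough illustration of the calibration step described above, the sketch below fits a simple linear map from fingernail color-change features to fingertip force, then predicts force from a new image’s features. The synthetic data, linear model, and feature choice are assumptions for illustration only; the project’s actual imaging pipeline and calibration models may be quite different.

    # Hypothetical sketch of calibrating a color-to-force mapping for fingernail imaging.
    # Synthetic data and a linear model are used purely for illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    # Calibration set: normalized color-change features (3 channels) and measured forces (N).
    true_coeffs = np.array([2.0, -1.5, 0.8])
    features = rng.uniform(0.0, 1.0, size=(100, 3))
    forces = features @ true_coeffs + rng.normal(0.0, 0.05, 100)   # from a force sensor

    # Fit force as a linear function of color change (ordinary least squares, with intercept).
    X = np.hstack([features, np.ones((100, 1))])
    coeffs, *_ = np.linalg.lstsq(X, forces, rcond=None)

    # Deployment: estimate grasp force from a new fingernail image's color features alone.
    new_feature = np.array([0.4, 0.2, 0.7, 1.0])   # last entry is the intercept term
    print(f"estimated fingertip force: {new_feature @ coeffs:.2f} N")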

 

Mixed Human-Robot Teams for Search and Rescue
University of Minnesota-Twin Cities (Maria Gini)
This research explores how to make groups of robots behave to accomplish a common goal. The project aims to increase the ability to respond to large-scale disasters and manage emergencies by including robots and agents as teammates of humans in search and rescue teams. The project focuses on large teams of humans and robots that have only incomplete knowledge of the disaster situation as they accomplish the mission of rescuing people and preventing fires.

 

Multifunctional Electroactive Polymers for Muscle-Like Actuation
University of California-Los Angeles (Qibing Pei)
This project aims to develop a new, softer polymer material that is electronically stimulated to behave like an artificial muscle. This offers a combination of attributes for future robotic systems, including power output that outperforms human skeletal muscle, flexibility, quietness, and biocompatibility. Actuators based on this more muscle-like material would enable the design of robotic systems that interact more comfortably with people, such as assistive prostheses or other assistive devices for people with disabilities, humanoid robots for elderly in-home care, and surgical robots that save lives.

 

Multi-modal sensor skin and garments for healthcare and home robots
University of Texas at Arlington (Dan Popa)
This research seeks to build skin material that functions like human skin to give robots a learned sense of touch similar to that of humans. The objective of this research is to answer fundamental design questions for multi-functional robotic skin sensors and to optimize their placement onto assistive robotic devices. The aim is to teach the robot how to use the skin sensors efficiently, and quantitatively assess the impact of this assistive technology on humans. The research may unlock near-term and unforeseen applications of robotic skin with broad applicability, and especially to home assistance, medical rehabilitation and prosthetics.

 

Perceptually Inspired Dynamics for Robot Arm Motion
University of Wisconsin-Madison (Michael Gleicher)
This project seeks to enable a computer to learn from its own trials, errors and successes how to move and how to plan future appropriate motions. Researchers are working to develop an understanding of human perception of movement that can be applied to the development of robot trajectory planning and control algorithms, using human subjects experiments to understand and evaluate the interpretation of movements and apply these findings in robotics and motion synthesis.

 

Robot Assistants for Promoting Crawling and Walking in Children at Risk of Cerebral Palsy
University of Oklahoma Norman Campus (Andrew Fagg)
This research is developing effective robotic assistance tools to teach infants with or at risk of developing cerebral palsy how to walk and move, mitigating future deficits in cognitive development that are considered neural side effects of this condition. This project will develop and test a sequence of robotic assistants that promote early crawling, creeping, and walking, along with a model of infant-robot interaction that encourages the continued practice of movement patterns that will ultimately lead to unassisted locomotion. The robotic assistants to be developed in this project will aid the infant in developing locomotory skills by selectively supporting a portion of his/her weight and providing artificial, rewarding locomotory experiences.

 

Robot Movement for Patient Improvement – Therapeutic Rehabilitation for Children with Disabilities
Georgia Tech Research Corporation (Ayanna Howard)
This research is focused on developing state-of-the-art techniques to facilitate the interaction necessary for robots to be useful in therapeutic rehabilitation for children. Based on the logic that animate playthings naturally engage children, the goal of this project is to fuse play and rehabilitation techniques using a robotic design to induce child-robot interaction that will be entertaining as well as effective for pediatric rehabilitation. Of importance within this proposed work are approaches that allow therapists to provide instruction to robots on rehabilitation tasks that can be remapped to play behaviors specific to an individual child. In addition, robots must have internal perception and inference algorithms that allow them to learn new play behaviors and incorporate them to evoke corresponding behaviors in the child.

 

Robotic Treadmill Therapy for Lower Spinal Cord Injuries
University of Utah (John Hollerbach)
This project is developing new rehabilitation therapies for patients with incomplete lower spinal cord injuries, specifically a body-weight-assisted robotic treadmill that provides a realistic walking experience in a safe and flexible virtual environment. The “Treadport” overcomes limitations of current rehabilitation treadmills, which are too dissimilar from everyday walking and therefore limit a patient’s recovery. It adapts to the patient’s walking speed and effort, builds resistance to falling by strengthening the patient and training him or her against unexpected perturbations, and supports arm-swing coordination, which is critical to normal walking.

 

Robust, highly-mobile MEMS micro-robots based on integration of piezoelectric and polymer materials
University of Michigan Ann Arbor (Kenn Oldham)
This project focuses on developing tiny, millimeter or sub-millimeter scale robots (smaller than fleas) whose skeletal system is composed of crystal and ceramic, which makes them highly maneuverable with stronger, mini muscles.  Prototypes will be developed, tested and perfected. These micro-robots could be eventually used to get into hard-to-reach places, and to crawl around to observe things up close or to complete a task. Applications range from exploration to surveillance, from observation to micro-surgery.

 

Spatial Primitives for Enabling Situated Human-Robot Interaction
University of Southern California (Maja Mataric)
This research centers on the creation of “social robotics”–that is, robots that interact in appropriate social ways as understood by humans, which may include deference to personal or social space, appropriate gestures and the use of verbal and physical comments. This project focuses on answering the question: how do social (speech and gesture), environmental (loud noises and low lighting), and personal (hearing and visual impairments) factors influence positioning and communication between humans and co-robots, and how should a co-robot adjust its social behaviors to maximize human perception of its social signals?

 

The Intelligent Workcell – Enabling Robots and People to Work Together Safely in Manufacturing Environments
Carnegie-Mellon University (Paul Rybski)
This research will develop an “Intelligent Workcell” to enable people and industrial robots to work safely and more efficiently within the same workspace. New capabilities in robotic workcell monitoring will likely result. Smart work environments know where you are, what you need, and what you are doing, so they can avoid hindering you and can offer assistance when it is useful.

 

Virtualized Welding: A New Paradigm for Intelligent Welding Robots in Unstructured Environment
University of Kentucky Research Foundation (Ruigang Yang)
Zeroing in on welding, a manufacturing process in widespread use that is performed by highly skilled workers, this project will develop a new robotic platform with novel 3D modeling and visualization algorithms designed to complement the skills and expertise of a human welder with the advanced sensing tools of a robotic one. The primary use for this new technology is in manufacturing. Successful completion of the proposed project lays the foundation for intelligent welding robots with closed-loop intelligent control. Such a robotic system can perform high-speed and high-precision welding while allowing more variation in the work pieces and environments. In addition, virtualized welding can be integrated with a mobile platform to allow welding in places that are hazardous or unsuitable for human welders.

 

Human-Robot Collectives as a Curriculum-Wide CS Learning Platform
Rochester Institute of Technology (Zack Butler)
This project is an effort to re-conceptualize what it means to study computer science at the undergraduate level. The project team will design a sequence of computer science courses that integrate the use of a network of robots to facilitate student learning. In this project, the co-robot teams share space and tasks with humans and are used as a teaching platform in an introductory context, and later, as a laboratory platform for projects in intermediate and upper-level courses in which students can develop and even invent new services. This approach enhances a traditional approach to teaching computer science and provides ample opportunities for students to design, test, and evaluate using co-robot systems.

 

Managing Uncertainty in Human-Robot Cooperative Systems
Johns Hopkins University (Peter Kazanzides)
This research aims to capitalize on the distinct strengths of humans (perception and reasoning) and machines (precision, accuracy and repetitiveness in information fusion, task planning and simulation) to design truly cooperative systems, managing uncertainty to achieve successful human-robot partnerships that perform complex tasks in uncertain environments. It will build manufacturing and medical testbeds (the latter for minimally invasive surgery, during which slight variations such as tremors, twitches or breaths can affect conditions) on which cooperative skills will be applied and tested.

 

A Novel Light-weight Cable-driven Active Leg Exoskeleton (C-ALEX) for Training of Human Gait
University of Delaware (Sunil K. Agrawal)
This research intends to improve gait-training rehabilitation for stroke patients, helping them learn how to walk again. Unlike “clamp robots” that affix to body parts to bolster a patient’s muscle strength, this robotic machine is more of a lightweight, puppet-like system that reduces the muscle burden. The person engaged in rehabilitation does not feel anything unlike their own limbs (no artificial encumbrance, nothing bulky or uncomfortable). This approach seeks to overcome the tendency of patients who rehabilitate using weights to forget what they learned once the weight is removed; the weight acts as a literal and figurative crutch. This could mean hope for the roughly 700,000 Americans who experience a stroke each year and the 4.5 million people in the United States living today with the after-effects of a stroke.

And from NASA:

NASA has selected eight advanced robotics projects that will enable the agency’s future missions while supporting the Obama administration’s National Robotics Initiative.

 

The projects, ranging from technologies for improving robotic planetary rovers to humanoid robotic systems, will support the development and use of robots for space exploration, as well as by manufacturers and businesses in the United States.

 

Robots can work beside, or cooperatively with, people to enhance individual human capabilities, performance and safety in space as well as here on Earth. This kind of co-robotics, in which robots work cooperatively with people, is a valuable tool for maintaining American leadership in aerospace technology and advanced manufacturing.

 

“Robonaut, NASA’s robotic crewmember aboard the International Space Station, is being tested to perform tasks to assist our astronauts and free them up to do the important scientific research and complex engineering taking place each day on our orbiting national lab,” said NASA Chief Technologist Mason Peck at NASA Headquarters in Washington. “Selected through our participation in the National Robotics Initiative, these new projects will support NASA as we plan for our asteroid mission in 2025 and the human exploration of Mars around 2035.”

 

The proposals NASA has selected for development are:

 

— “Toward Human Avatar Robots for Co-Exploration of Hazardous Environments,” J. Pratt, principal investigator, Florida Institute of Human Machine Cognition, Pensacola

 

— “A Novel Powered Leg Prosthesis Simulator for Sensing and Control Development,” H. Herr, principal investigator, Massachusetts Institute of Technology, Cambridge

 

— “Long-range Prediction of Non-Geometric Terrain Hazards for Reliable Planetary Rover Traverse,” R. Whittaker, principal investigator, Carnegie Mellon University, Pittsburgh

 

— “Active Skins for Simplified Tactile Feedback in Robotics,” S. Bergbreiter, principal investigator, University of Maryland, College Park

 

— “Actuators for Safe, Strong and Efficient Humanoid Robots,” S. Pekarek, principal investigator, Purdue University

 

— “Whole-body Telemanipulation of the Dreamer Humanoid Robot on Rough Terrains Using Hand Exoskeleton (EXODREAM),” L. Sentis, principal investigator, University of Texas at Austin

 

— “Long, Thin Continuum Robots for Space Applications,” I. Walker, principal investigator, Clemson University, Clemson, S.C.

 

— “Manipulating Flexible Materials Using Sparse Coding,” R. Platt, principal investigator, State University of New York, Buffalo

 

 

NASA has a long history of developing cutting-edge robotic systems for use in space exploration. NASA also partners with American businesses, universities and other federal agencies to transfer those technologies back into the nation’s industrial base, improving manufacturing capabilities and economic competitiveness…

More as we hear it…

(Contributed by Erwin Gianchandani, CCC Director)
