Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Computer Science Projects Among Popular Mechanics’ Breakthrough Awardees

October 4th, 2012 / in awards, Research News / by Kenneth Hines

Popular Mechanics, the American magazine that features regular articles on science and technology, announced its annual Breakthrough Award winners earlier this week. These awards highlight innovations that have the potential to make the world smarter, safer, and more efficient. A total of ten awards were announced, and at least four of the awardees feature computer science research. Those four projects are described below; all awardees are listed on Popular Mechanics’ webpage.

Popular Mechanics' Breakthrough Award [credit: Popular Mechanics]


MABEL, Teaching Robots to Walk – Jessy Grizzle, University of Michigan, Ann Arbor, and Jonathan Hurst, Oregon State University

Walking, that fundamental human activity, seems simple: Take one foot, put it in front of the other; repeat. But to scientists, bipedalism is still largely a mystery, involving a symphony of sensory input (from legs, eyes, and inner ear), voluntary and involuntary neural commands, and the synchronized pumping of muscles hinged by tendons to a frame that must balance in an upright position. That makes building a robot that can stand up and walk in a world built for humans deeply difficult.


But it’s not impossible. Robots such as Honda’s ASIMO have been shuffling along on two feet for more than a decade, but the slow, clumsy performance of these machines is a far cry from the human gait. Jessy Grizzle of the University of Michigan, Ann Arbor, and Jonathan Hurst of Oregon State University have created a better bot, a 150-pound two-legged automaton named MABEL that can walk with surprisingly human dexterity. MABEL is built to walk blindly (without the aid of laser scanners or other optical technologies) and fast (it can run a 9-minute mile). To navigate its environment, MABEL uses contact switches on its “feet” that send sensory feedback to a computer. “When MABEL steps off an 8-inch ledge, as soon as its foot touches the floor, the robot can calculate more quickly and more accurately than a human the exact position of its body,” explains Grizzle. MABEL uses passive dynamics to walk efficiently—storing and releasing energy in fiberglass springs—rather than fighting its own momentum with its electric motors.
The quest for walking robots is not purely academic. The 2011 Fukushima Daiichi nuclear disaster highlighted the need for machines that could operate in hazardous, unpredictable environments that would stop wheeled and even tracked vehicles. Grizzle and Hurst are already working on MABEL’s successor, a lighter, faster model named ATRIAS. But there’s still plenty of engineering to be done before walking robots can be usefully deployed, walking into danger zones with balance and haste but no fear.
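 
The passive-dynamics idea lends itself to a toy model. The sketch below is a hypothetical one-dimensional spring-mass hopper, not MABEL’s actual control software; the mass matches the robot’s reported 150 pounds, while the spring stiffness, leg length, and touchdown speed are assumed values chosen purely for illustration.

```python
# Illustrative sketch, not MABEL's control code: a one-dimensional spring-mass
# hopper showing the passive-dynamics idea -- energy is stored in a leg spring
# at touchdown and handed back at liftoff instead of being fought by motors.

M = 68.0     # body mass in kg (~150 lb, MABEL's reported weight)
K = 8000.0   # leg-spring stiffness in N/m (assumed for illustration)
G = 9.81     # gravity, m/s^2
DT = 1e-4    # integration step, s

def stance_phase(touchdown_speed, leg_length=1.0):
    """Simulate stance from the moment the foot's contact switch fires.
    Returns the liftoff velocity and the peak energy stored in the spring."""
    y, v = leg_length, -abs(touchdown_speed)   # hip height and (downward) velocity
    peak_energy = 0.0
    while y <= leg_length:                     # spring is loaded while the leg is compressed
        compression = leg_length - y
        accel = (K * compression - M * G) / M  # spring pushes up, gravity pulls down
        v += accel * DT                        # semi-implicit Euler step
        y += v * DT
        peak_energy = max(peak_energy, 0.5 * K * compression ** 2)
    return v, peak_energy

liftoff_v, stored = stance_phase(touchdown_speed=1.5)
# With no damping, nearly all of the energy stored in the spring comes back,
# so the liftoff speed is close to the 1.5 m/s touchdown speed.
print(f"liftoff velocity: {liftoff_v:.2f} m/s, peak spring energy: {stored:.0f} J")
```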

IBM Blue Gene/Q Sequoia Supercomputer – Bruce Goodwin, Michel McCoy, Lawrence Livermore National Laboratory, IBM Research and IBM Systems & Technology Group

What is it? Sequoia, an IBM Blue Gene/Q supercomputer newly installed at Lawrence Livermore National Laboratory (LLNL) in Livermore, Calif. In June it officially became the most powerful supercomputer in the world.
How powerful are we talking about? Sequoia is currently capable of 16.32 petaflops—that’s more than 16 quadrillion calculations a second—55 percent faster than Japan’s K Computer, which is ranked No. 2, and more than five times faster than China’s Tianhe-1A, which surprised the world by taking the top spot in 2010. Sequoia’s processing power is roughly equivalent to that of 2 million laptops.
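 
Those comparisons are easy to sanity-check. In the sketch below, only the 16.32-petaflop figure comes from the article; the K Computer and Tianhe-1A numbers are approximate published Linpack results included as assumptions.

```python
# Back-of-the-envelope check of the article's comparisons. The K Computer and
# Tianhe-1A figures are approximate published Linpack results and are assumptions
# added here for illustration; only Sequoia's 16.32 petaflops is from the article.

SEQUOIA_PFLOPS = 16.32
K_COMPUTER_PFLOPS = 10.51   # assumed: Japan's K Computer, ranked No. 2 at the time
TIANHE_1A_PFLOPS = 2.57     # assumed: China's Tianhe-1A, No. 1 in 2010

print(f"calculations per second:  {SEQUOIA_PFLOPS * 1e15:.3g}")                   # > 16 quadrillion
print(f"faster than K Computer:   {SEQUOIA_PFLOPS / K_COMPUTER_PFLOPS - 1:.0%}")  # ~55 percent
print(f"times faster than Tianhe: {SEQUOIA_PFLOPS / TIANHE_1A_PFLOPS:.1f}x")      # more than 5x
print(f"share per laptop, if split across 2 million: "
      f"{SEQUOIA_PFLOPS * 1e15 / 2e6 / 1e9:.1f} gigaflops")
```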


What is it used for? The Department of Energy, which runs LLNL, has a mandate to maintain the U.S. nuclear weapons stockpile, so Sequoia’s primary mission is nuclear weapons simulations. But the DOE is also using computers like Sequoia to help U.S. companies do high-speed R&D for complex products such as jet engines, and to support medical research. The goal is to help the country stay competitive in a world where industrial influence matters as much to national security as nukes do.

Brain-Computer Interface – Michael Boninger, Jennifer Collinger, Alan Degenhart, Andrew Schwartz, Elizabeth Tyler-Kabara, Wei Wang, University of Pittsburgh; Tim Hemmes

On the evening of July 11, 2004, Tim Hemmes, a 23-year-old auto-detail-shop owner, tucked his 18-month-old daughter, Jaylei, into bed and roared off for a ride on his new motorcycle. As he pulled away from a stop sign, a deer sprang out. Hemmes swerved, clipped a mailbox, and slammed headfirst into a guardrail. He awoke choking on a ventilator tube, terrified to find he could not lift his arms to scratch his itching nose.


Seven years later Hemmes was invited to participate in a University of Pittsburgh research project aimed at decoding the link between thought and movement. Hemmes enthusiastically agreed and last year made history by operating a robotic arm only with his thoughts.


The science was adapted from work done by Pitt neurobiologist Andrew Schwartz, who spent nearly three decades exploring the brain’s role in movement in animal trials. In 2008 his research group trained monkeys with brain microchips to feed themselves using a robotic arm controlled by signals from the creatures’ brains. Movement, Schwartz explains, is how we express our thoughts. “The only way I know what’s going on between your ears is because you’ve moved,” he says.


To apply this technology to humans, Schwartz teamed up with University of Pittsburgh Medical Center clinician Michael Boninger, physician/engineer Wei Wang, and engineer/surgeon Elizabeth Tyler-Kabara, who attached an electrocorticography (ECoG) grid to Hemmes’s brain surface. Wang then translated the electrical signals generated by Hemmes’s thoughts into computer code. The researchers hooked his implant to a robotic arm developed by the Johns Hopkins University Applied Physics Laboratory (which itself won a 2007 Breakthrough Award). Hemmes moved the robotic arm in three dimensions, giving Wang a slow but determined high-five.


The team’s ultimate goal is to embed sensors in the robotic arm that can send signals back to the brain, allowing subjects to “feel” whether an object the arm touches is hot, cold, soft, hard, heavy, or light. Hemmes has an even more ambitious but scientifically feasible goal. “I want to move my own arms, not just a robotic arm,” he says. If that happens, the first thing he’ll do is hug his daughter.


[A] Thought To touch the apple, the patient imagines a simple action, such as flexing a thumb, to move the arm in a single direction.


[B] Signal Pickup A postage-stamp-size implant picks up electrical activity generated by the thought and sends the signals to a computer.


[C] Interpretation A computer program parses signals from the chip and, once it picks up on specific activity patterns, sends movement data to the arm.


[D] Action The patient can move the arm in any direction by combining multiple thoughts—flexing a thumb while bending an elbow—guiding the arm toward the apple.
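 
As a rough illustration of step [C], the sketch below assumes a simple linear decoder that maps per-channel ECoG features to a three-dimensional velocity command. The actual Pittsburgh system is considerably more sophisticated; the channel count, decoder weights, and update rate here are hypothetical.

```python
import numpy as np

# A deliberately simplified sketch of step [C]: a linear decoder that turns ECoG
# features into a 3-D velocity command for the arm. The actual Pittsburgh system
# is far more sophisticated; the channel count, weights, and 50 ms update step
# below are hypothetical and only illustrate the signal-to-movement flow.

N_CHANNELS = 32                                # assumed electrode count on the ECoG grid
rng = np.random.default_rng(0)

# In practice the weights are fit during calibration, while the patient imagines
# simple actions (flexing a thumb, bending an elbow) paired with single directions.
W = 0.01 * rng.normal(size=(3, N_CHANNELS))    # maps features -> (vx, vy, vz)

def decode_velocity(features: np.ndarray) -> np.ndarray:
    """One window of per-channel signal power -> a 3-D velocity command."""
    return W @ features

arm_position = np.zeros(3)
for _ in range(100):                            # simulated control loop
    features = rng.normal(size=N_CHANNELS)      # stand-in for band-power features
    arm_position += 0.05 * decode_velocity(features)   # integrate velocity over 50 ms
print("final arm position (m):", np.round(arm_position, 3))
```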

CORNAR Camera – Ramesh Raskar and Andreas Velten, MIT Media Lab

Through two centuries of technological change, one limitation of photography has remained constant: A camera can only capture images in its line of sight. But now a team of university researchers led by MIT Media Lab professor Ramesh Raskar has built a camera that sees around corners. The CORNAR system bounces high-speed laser pulses off any opaque surface, such as a door or wall. These pulses then reflect off the subject and bounce back to a camera that records incoming light in picosecond intervals. The system measures and triangulates distance based on this time-of-flight data, creating a point cloud that visually represents the objects in the other room. Essentially the camera measures and interprets reflected echoes of light.


“For many people, being able to see around corners has been this science-fiction dream scenario,” says longtime New York University computer science professor Ken Perlin, who was not involved in the research. “Well, dream or nightmare, depending on how people use it.”


[A] Laser A beam is reflected off the door, scattering light into the room.


[B] Camera Light reflects off subject and bounces back to camera, which records time-of-flight data.


[C] Computer Algorithms create a composite image from camera data.
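 
The timing arithmetic behind the system rewards a quick look. Light covers roughly 0.3 millimeters per picosecond, so picosecond-scale timestamps pin down path lengths to well under a millimeter; the sketch below illustrates that conversion using an assumed single-bounce geometry rather than the actual reconstruction algorithm.

```python
# Time-of-flight arithmetic behind the around-the-corner idea: light travels about
# 0.3 mm per picosecond, so picosecond timing resolves path lengths to well under
# a millimeter. The single-bounce geometry here is a toy assumption, not the
# system's actual reconstruction algorithm.

C_MM_PER_PS = 0.299792458   # speed of light in millimeters per picosecond

def total_path_mm(arrival_time_ps: float) -> float:
    """Laser -> door -> hidden object -> door -> camera path length."""
    return arrival_time_ps * C_MM_PER_PS

# A photon arriving 6,000 picoseconds after the pulse fired traveled ~1.8 m in total.
print(f"total path: {total_path_mm(6000):.0f} mm")
print(f"range blur from 1 ps of timing error: {C_MM_PER_PS:.2f} mm")

# Each such measurement confines the hidden point to an ellipsoid whose foci are the
# laser spot and the camera's observation point; intersecting many of them (one per
# laser position) yields the point cloud described above.
```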


(Contributed by Kenneth Hines, CCC Program Associate)
