Archive for the ‘awards’ category

 

Computer Science Projects Among Popular Mechanics’ Breakthrough Awardees

October 4th, 2012

Popular Mechanics, the American magazine that features regular articles on science and technology, announced its annual Breakthrough Award winners earlier this week.  These awards highlight innovations that have the potential to make the world smarter, safer and more efficient. A total of ten awards were announced, and at least four of the awardees feature computer science research. Those four projects are described below; all awardees are listed on Popular Mechanics’ webpage.

Popular Mechanics' Breakthrough Award [credit: Popular Mechanics]

 

MABEL, Teaching Robots to Walk - Jessy Grizzle, University of Michigan, Ann Arbor and Jonathan Hurst, Oregon State University

Walking, that fundamental human activity, seems simple: Take one foot, put it in front of the other; repeat. But to scientists, bipedalism is still largely a mystery, involving a symphony of sensory input (from legs, eyes, and inner ear), voluntary and involuntary neural commands, and the synchronized pumping of muscles hinged by tendons to a frame that must balance in an upright position. That makes building a robot that can stand up and walk in a world built for humans deeply difficult.

 

But it’s not impossible. Robots such as Honda’s ASIMO have been shuffling along on two feet for more than a decade, but the slow, clumsy performance of these machines is a far cry from the human gait. Jessy Grizzle of the University of Michigan, Ann Arbor, and Jonathan Hurst of Oregon State University have created a better bot, a 150-pound two-legged automaton named MABEL that can walk with surprisingly humanlike dexterity. MABEL is built to walk blindly (without the aid of laser scanners or other optical technologies) and fast (it can run a 9-minute mile). To navigate its environment, MABEL uses contact switches on its “feet” that send sensory feedback to a computer. “When MABEL steps off an 8-inch ledge, as soon as its foot touches the floor, the robot can calculate more quickly and more accurately than a human the exact position of its body,” explains Grizzle. MABEL uses passive dynamics to walk efficiently—storing and releasing energy in fiberglass springs—rather than fighting its own momentum with its electric motors.
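The contact-triggered computation Grizzle describes can be read as forward kinematics: once the foot is known to be on the ground, the joint-encoder angles pin down the body’s pose without any vision at all. Below is a minimal sketch of that idea for a planar two-link leg; the link lengths, angles, and function name are hypothetical, and MABEL’s actual controller is far more sophisticated.

```python
import math

# Hypothetical link lengths in meters (MABEL's real geometry differs).
THIGH, SHIN = 0.5, 0.5

def body_height_at_contact(hip_angle, knee_angle):
    """Estimate hip height above the ground the instant the contact
    switch fires, using only joint-encoder angles (no laser scanners
    or cameras). Angles are in radians, measured from vertical."""
    thigh_drop = THIGH * math.cos(hip_angle)             # vertical extent of thigh
    shin_drop = SHIN * math.cos(hip_angle + knee_angle)  # vertical extent of shin
    return thigh_drop + shin_drop

# On a foot-contact event, one cheap trigonometric evaluation yields
# the body's position -- far faster than any human reflex:
print(body_height_at_contact(hip_angle=0.2, knee_angle=0.3))  # ~0.93 m
```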
The quest for walking robots is not purely academic. The 2011 Fukushima Daiichi nuclear disaster highlighted the need for machines that could operate in hazardous, unpredictable environments that would stop wheeled and even tracked vehicles. Grizzle and Hurst are already working on MABEL’s successor, a lighter, faster model named ATRIAS. But there’s still plenty of engineering to be done before walking robots can be usefully deployed, walking into danger zones with balance and haste but no fear.

IBM Blue Gene/Q Sequoia Supercomputer - Bruce Goodwin, Michel McCoy, Lawrence Livermore National Laboratory; IBM Research and IBM Systems & Technology Group

What is it? Sequoia, an IBM Blue Gene/Q supercomputer newly installed at Lawrence Livermore National Laboratory (LLNL) in Livermore, Calif. In June it officially became the most powerful supercomputer in the world.
How powerful are we talking about? Sequoia is currently capable of 16.32 petaflops—that’s more than 16 quadrillion calculations a second—55 percent faster than Japan’s K Computer, which is ranked No. 2, and more than five times faster than China’s Tianhe-1A, which surprised the world by taking the top spot in 2010. Sequoia’s processing power is roughly equivalent to that of 2 million laptops.
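Those comparisons survive a back-of-the-envelope check. In the sketch below, the K Computer and Tianhe-1A figures (10.51 and 2.566 petaflops) are their Linpack marks from the June 2012 TOP500 list, and the per-laptop number simply divides Sequoia’s speed by the article’s 2 million laptops; treat it as rough arithmetic, not a benchmark.

```python
# Rough sanity check of the article's speed comparisons.
sequoia    = 16.32e15   # FLOPS (16.32 petaflops)
k_computer = 10.51e15   # Japan's K Computer (TOP500, June 2012)
tianhe_1a  = 2.566e15   # China's Tianhe-1A (TOP500, June 2012)

print(sequoia / k_computer - 1)  # ~0.55 -> about 55 percent faster
print(sequoia / tianhe_1a)       # ~6.4  -> more than five times faster
print(sequoia / 2e6)             # ~8.2e9 FLOPS, i.e. ~8 gigaflops per
                                 # laptop in the 2-million-laptop claim
```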

 

What is it used for? The Department of Energy, which runs LLNL, has a mandate to maintain the U.S. nuclear weapons stockpile, so Sequoia’s primary mission is nuclear weapons simulations. But the DOE is also using computers like Sequoia to help U.S. companies do high-speed R&D for complex products such as jet engines, and to support medical research. The goal is to help the country stay competitive in a world where industrial influence matters as much to national security as nukes do.

Brain-Computer Interface - Michael Boninger, Jennifer Collinger, Alan Degenhart, Andrew Schwartz, Elizabeth Tyler-Kabara, Wei Wang, University of Pittsburgh; Tim Hemmes

On the evening of July 11, 2004, Tim Hemmes, a 23-year-old auto-detail-shop owner, tucked his 18-month-old daughter, Jaylei, into bed and roared off for a ride on his new motorcycle. As he pulled away from a stop sign, a deer sprang out. Hemmes swerved, clipped a mailbox, and slammed headfirst into a guardrail. He awoke choking on a ventilator tube, terrified to find he could not lift his arms to scratch his itching nose.

 

Seven years later Hemmes was invited to participate in a University of Pittsburgh research project aimed at decoding the link between thought and movement. Hemmes enthusiastically agreed, and last year he made history by operating a robotic arm with only his thoughts.

 

The science was adapted from work done by Pitt neurobiologist Andrew Schwartz, who spent nearly three decades exploring the brain’s role in movement in animal trials. In 2008 his research group trained monkeys with brain microchips to feed themselves using a robotic arm controlled by signals from the creatures’ brains. Movement, Schwartz explains, is how we express our thoughts. “The only way I know what’s going on between your ears is because you’ve moved,” he says.

 

To apply this technology to humans, Schwartz teamed up with University of Pittsburgh Medical Center clinician Michael Boninger, physician/engineer Wei Wang, and engineer/surgeon Elizabeth Tyler-Kabara, who attached an electrocorticography (ECoG) grid to Hemmes’s brain surface. Wang then translated the electrical signals generated by Hemmes’s thoughts into computer code. The researchers hooked his implant to a robotic arm developed by the Johns Hopkins University Applied Physics Laboratory (which itself won a 2007 Breakthrough Award). Hemmes moved the robotic arm in three dimensions, giving Wang a slow but determined high-five.
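The article doesn’t spell out the decoding algorithm, but the shape of such a pipeline (band-power features from the ECoG channels, mapped by a trained model to a 3-D velocity command) fits in a few lines. The toy sketch below uses random stand-in data and a plain least-squares linear decoder; every name, dimension, and number is hypothetical, not the Pitt team’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: per-window high-gamma band power on 32
# hypothetical ECoG channels, paired with the intended 3-D hand velocity.
features   = rng.normal(size=(500, 32))   # windows x channels
velocities = rng.normal(size=(500, 3))    # windows x (vx, vy, vz)

# Fit a linear decoder W by least squares: velocities ~= features @ W.
W, *_ = np.linalg.lstsq(features, velocities, rcond=None)

def decode(band_power):
    """Map one window of band-power features to an arm-velocity command,
    i.e. the 'Interpretation' step in the diagram below."""
    return band_power @ W

print(decode(rng.normal(size=32)))  # 3-D velocity sent to the robotic arm
```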

 

The team’s ultimate goal is to embed sensors in the robotic arm that can send signals back to the brain, allowing subjects to “feel” whether an object the arm touches is hot, cold, soft, hard, heavy, or light. Hemmes has an even more ambitious but scientifically feasible goal. “I want to move my own arms, not just a robotic arm,” he says. If that happens, the first thing he’ll do is hug his daughter.

 

[A] Thought To touch the apple, the patient imagines a simple action, such as flexing a thumb, to move the arm in a single direction.

 

[B] Signal Pickup A postage-stamp-size implant picks up electrical activity generated by the thought and sends the signals to a computer.

 

[C] Interpretation A computer program parses signals from the chip and, once it picks up on specific activity patterns, sends movement data to the arm.

 

[D] Action The patient can move the arm in any direction by combining multiple thoughts—flexing a thumb while bending an elbow—guiding the arm toward the apple.

CORNAR Camera - Ramesh Raskar and Andreas Velten, MIT Media Lab

Through two centuries of technological change, one limitation of photography has remained constant: A camera can only capture images in its line of sight. But now a team of university researchers led by MIT Media Lab professor Ramesh Raskar has built a camera that sees around corners. The CORNAR system bounces high-speed laser pulses off any opaque surface, such as a door or wall. These pulses then reflect off the subject and bounce back to a camera that records incoming light in picosecond intervals. The system measures and triangulates distance based on this time-of-flight data, creating a point cloud that visually represents the objects in the other room. Essentially the camera measures and interprets reflected echoes of light.
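The time-of-flight arithmetic underneath is simple: light travels roughly 0.3 millimeters per picosecond, so picosecond timestamps constrain the total bounce path very tightly. The sketch below shows only that one conversion; reconstructing a hidden scene from many such path-length constraints, as CORNAR does, is the hard part.

```python
C = 299_792_458.0  # speed of light in m/s

def path_length(time_of_flight_s):
    """Total distance a pulse traveled before hitting the sensor.
    For CORNAR that path is laser -> wall -> subject -> wall -> camera."""
    return C * time_of_flight_s

# One picosecond of timing resolution pins the path down to ~0.3 mm:
print(path_length(1e-12))  # ~0.0003 m

# A pulse arriving 6 nanoseconds after emission traveled ~1.8 m in all;
# intersecting many such constraints from different laser spots locates
# the hidden subject.
print(path_length(6e-9))   # ~1.8 m
```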

 

“For many people, being able to see around corners has been this science-fiction dream scenario,” says longtime New York University computer science professor Ken Perlin, who was not involved in the research. “Well, dream or nightmare, depending on how people use it.”

 

[A] Laser A beam is reflected off the door, scattering light into the room.

 

[B] Camera Light reflects off subject and bounces back to camera, which records time-of-flight data.

 

[C] Computer Algorithms create a composite image from camera data.

 

 

(Contributed by Kenneth Hines, CCC Program Associate)

NSF Invests Nearly $15 Million in Big Data; New Interagency Challenge Announced

October 3rd, 2012

Tech America logo [credit: Tech America]

Today at a briefing on Capitol Hill titled “Big Data, Bigger Opportunities,” hosted by Tech America, the National Science Foundation (NSF) and the National Institutes of Health (NIH) announced $15 million in funding for Big Data research. These awards come nearly six months after the Obama Administration released its substantial Big Data R&D initiative in March of this year.

The initiative committed more than $200 million in new funding through six agencies and departments to improve the nation’s  “ability to extract knowledge and insights from large and complex collections of digital data.”

Subra Suresh, Director of the National Science Foundation:

“I am delighted to provide such a positive progress report just six months after fellow federal agency heads joined the White House in launching the Big Data Initiative,” said NSF Director Subra Suresh. “By funding the fundamental research to enable new types of collaborations–multi-disciplinary teams and communities–and with the start of an exciting competition, today we are realizing plans to advance the foundational science and engineering of Big Data, fortifying U.S. competitiveness for decades to come.”

Suzanne Iacono, Senior Advisor at the National Science Foundation, announced the eight award recipients, listed below:

BIGDATA: Mid-Scale: DCM: A Formal Foundation for Big Data Management
University of Washington, Dan Suciu
This project explores the foundations of Big Data management, with the ultimate goal of significantly improving productivity in Big Data analytics by accelerating data exploration. It will develop open-source software to express and optimize ad hoc data analytics. The results of this project will make it easier for domain experts to conduct complex data analysis on Big Data and on large computer clusters.

 

BIGDATA: Mid-Scale: DA: Analytical Approaches to Massive Data Computation with Applications to Genomics
Brown University, Eli Upfal
The goal of this project is to design and test mathematically well-founded algorithmic and statistical techniques for analyzing large-scale, heterogeneous, and noisy data. The project is motivated by the challenges of analyzing molecular biology data. The work will be tested on extensive cancer genome data, contributing to better health and new health information technologies, areas of national priority.

 

BIGDATA: Mid-Scale: DA: Distribution-based Machine Learning for High-dimensional Datasets
Carnegie Mellon University, Aarti Singh
The project aims to develop new statistical and algorithmic approaches to natural generalizations of a class of standard machine learning problems. The resulting machine learning approaches are expected to benefit other scientific fields in which data points can be naturally modeled by sets of distributions, such as physics, psychology, economics, epidemiology, medicine, and social network analysis.

 

BIGDATA: Mid-Scale: DA: Collaborative Research: Genomes Galore – Core Techniques, Libraries, and Domain Specific Languages for High-Throughput DNA Sequencing
Iowa State University, Srinivas Aluru
Stanford University, Oyekunle Olukotun
Virginia Polytechnic Institute and State University, Wu-chun Feng
The goal of the project is to develop core techniques and software libraries to enable scalable, efficient, high-performance computing solutions for high-throughput DNA sequencing, also known as next-generation sequencing. The research will be conducted in the context of challenging problems in human genetics and metagenomics, in collaboration with domain specialists.

 

BIGDATA: Mid-Scale: DA: Collaborative Research: Big Tensor Mining: Theory, Scalable Algorithms and Applications
Carnegie Mellon University, Christos Faloutsos
University of Minnesota, Twin Cities, Nikolaos Sidiropoulos
The objective of this project is to develop theory and algorithms to tackle the complexity of language processing, and to develop methods that approximate how the human brain works in processing language. The research also promises better algorithms for search engines, new approaches to understanding brain activity, and better recommendation systems for retailers.

 

BIGDATA: Mid-Scale: ESCE: Collaborative Research: Discovery and Social Analytics for Large-Scale Scientific Literature
Rutgers University, Paul Kantor
Cornell University, Thorsten Joachims
Princeton University, David Blei
This project will focus on the problem of bringing massive amounts of data down to the human scale by investigating the individual and social patterns that relate to how text repositories are actually accessed and used. It will improve the accuracy and relevance of complex scientific literature searches.

 

The event also featured the announcement of a new challenge on Big Data science and engineering. The challenge is an interagency collaboration among NSF, the National Aeronautics and Space Administration (NASA), and the Department of Energy (DOE). More information on the challenge from the NSF press release:

The competition will be run by the NASA Tournament Lab (NTL), a collaboration between Harvard University and TopCoder, a competitive community of digital creators.

The NTL platform and process allows U.S. government agencies to conduct high risk/high reward challenges in an open and transparent environment with predictable cost, measurable outcomes-based results and the potential to move quickly into unanticipated directions and new areas of software technology. Registration is open through Oct. 13, 2012, for the first of four idea generation competitions in the series. Full competition details and registration information is available at the Ideation Challenge Phase website.

Read the entire press release on the National Science Foundation’s website.
(Contributed by Kenneth Hines, CCC Program Associate)

Mozilla and NSF Announce First Round of Winners for Brainstorming Phase of Ignite Challenge

September 26th, 2012

As we’ve previously blogged, Mozilla and the National Science Foundation (NSF) have teamed up for a challenge, called “Mozilla Ignite,” which focuses on the development of apps for the faster, smarter internet of the future. Apps were designed to address needs in advanced manufacturing, education and workforce technologies, emergency preparedness and public safety, healthcare technologies, and clean energy and transportation. The brainstorming round closed on August 23rd, bringing in over 300 ideas from the community. Mozilla announced a total of eight winners, with one grand prize award.

Mozilla Ignite [credit: Mozilla Foundation]

The grand prize from the brainstorming round went to Jeremy Cooperstock, director of the Shared Reality Lab at McGill University in Canada. Here is the description of his idea, titled “Real-time emergency response observation and supervision,” from the Mozilla Ignite blog:

This app saves lives. The goal: arm firefighters, rescue workers and first-responders with powerful new real-time data and communications, combining live, high-quality video from multiple feeds with real-time sensor data — like heat and smoke levels — plus massive computing capacity to improve decision-making and coordination.

Read about the other winning apps on the Mozilla Ignite blog.

The next round of the challenge is the development round, in which $85,000 will be awarded. Developers can enter to build upon one of the winning ideas or to submit a new one.

(Contributed by Kenneth Hines, CCC Program Associate)

NSF Awards $50 million for Cybersecure Research Projects

September 25th, 2012

Today, the National Science Foundation (NSF) awarded $50 million for research projects designed to help build a secure cyber society and protect the US infrastructure. The awards come from the Secure and Trustworthy Cyberspace Program (SaTC), which “builds on [NSF's] long-term support for a wide range of cutting edge interdisciplinary research and education activities to secure critical infrastructure that is vulnerable to a wide range of threats that challenge its security.”

U.S. National Science Foundation (NSF) [image courtesy NSF].

The funded projects include two Frontier awards. The first, titled “Beyond Technical Security: Developing an Empirical Basis for Socio-Economic Perspectives,” is a multi-institution collaboration among Stefan Savage (University of California, San Diego), Vern Paxson (International Computer Science Institute), and Damon McCoy (George Mason University). The second Frontier award, titled “Privacy Tools for Sharing Research Data,” goes to Salil Vadhan and his team of multidisciplinary researchers at Harvard University.

Read more from the NSF press release below.

» Read more: NSF Awards $50 million for Cybersecure Research Projects

Vipin Kumar to Receive 2012 ACM SIGKDD Innovation Award

August 6th, 2012

Vipin Kumar to receive the 2012 ACM SIGKDD Innovation Award [image courtesy ACM SIGKDD]

Vipin Kumar, the William Norris Professor and head of computer science and engineering at the University of Minnesota, will receive the Association for Computing Machinery’s (ACM) Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) 2012 Innovation Award at the opening plenary of the 18th International ACM SIGKDD Conference next Sunday in Beijing, China. Since 2000, the annual award has been “conferred on one individual or one group of collaborators whose outstanding technical innovations in the KDD field have had a lasting impact on advancing the theory and practice of the field.”

According to SIGKDD, the citation for Vipin’s award reads as follows (after the jump):

» Read more: Vipin Kumar to Receive 2012 ACM SIGKDD Innovation Award

Judea Pearl’s Turing Award Lecture at AAAI-12

August 2nd, 2012

Douglas Fisher, Vanderbilt University

Judea Pearl received the 2011 ACM A. M. Turing Award “for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.” In this guest post, Douglas Fisher, associate professor of computer science and computer engineering at Vanderbilt, summarizes Pearl’s Turing Award Lecture, delivered at last week’s AAAI Conference.

Judea Pearl, University of California at Los Angeles [image courtesy ACM]

Professor Pearl delivered his Turing Award Lecture as the opening invited address at the 26th AAAI Conference in Toronto, Canada, last week. He opened by acknowledging the support of the AAAI community in a great collaborative enterprise, a remarkable “journey” as he said, and he shared the award with the community and his coauthors. He also cited three of his seminal papers, which had been presented at past AAAI conferences and which presaged the hierarchy of processes — probabilistic, causal, and counterfactual — that formed a trajectory of his research and a focus of his talk: “Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach” from the second annual AAAI conference; and “Symbolic Causal Networks” with Adnan Darwiche and “Probabilistic Evaluation of Counterfactual Queries” with Alexander Balke, both from the 12th annual AAAI conference (more following the link…).

» Read more: Judea Pearl’s Turing Award Lecture at AAAI-12