Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.

Big Data: A “Transformative New Currency” for Science

May 2nd, 2012 / in big science, policy, research horizons / by Erwin Gianchandani

Calling data “a transformative new currency for science, engineering, education, and commerce,” National Science Foundation (NSF) Assistant Director for Computer and Information Science and Engineering (CISE) Farnam Jahanian kicked off a briefing about ‘Big Data’ on Capitol Hill earlier today. Organized by TechAmerica, the briefing sought to bring together a panel of leaders from government and industry to discuss the opportunities for innovation arising from the collection, storage, analysis, and visualization of large, heterogeneous data sets, all the while taking into consideration the non-trivial security and privacy implications.

Jahanian noted how “Big Data is characterized not only by the enormous volume of data but also by the diversity and heterogeneity of the data and the velocity of its generation,” the result of modern experimental methods, longitudinal observational studies, scientific instruments such as telescopes and particle accelerators, Internet transactions, and the widespread deployment of sensors all around us. In doing so, he set the stage for why Big Data is important to all facets of the information technology discovery and innovation ecosystem, including the nation’s academic, government, industrial, entrepreneurial, and investment communities:

“First, insights and more accurate predictions from large and complex collections of data have important implications for the economy. Access to information is transforming traditional businesses and is creating opportunities in new markets. Big Data is driving the creation of new IT products and services based on business intelligence and data analytics, and is boosting the productivity of firms that use it to make better decisions and identify new business trends.


“Second, advances in Big Data are critical to accelerate the pace of discovery in almost every science and engineering discipline. From new insights about protein structure, biomedical research and clinical decision-making, and climate modeling, to new ways to mitigate and respond to natural disasters, and new strategies for effective learning and education — there are enormous opportunities for data-driven discovery.


“Third, Big Data also has the potential to solve some of the nation’s most pressing challenges — in science, education, environment and sustainability, medicine, commerce, and cyber and national security — [delivering] enormous societal benefit and laying the foundations for U.S. competitiveness for many decades to come.”

Jahanian shared the President’s Council of Advisors on Science and Technology’s (PCAST) recent recommendation for the Federal government to “increase R&D investments for collecting, storing, preserving, managing, analyzing, and sharing increased quantities of data,” because “the potential to gain new insights [by moving] from data to knowledge to action has tremendous potential to transform all areas of national priority.”

Partly in response to this recommendation, the White House Office of Science and Technology Policy (OSTP) together with other agencies announced a $200 million Big Data R&D Initiative last month to advance core techniques and technologies. According to Jahanian, within this initiative, NSF’s strategy for supporting the fundamental science and underlying infrastructure enabling big data science and engineering involves:

“Advances in foundational techniques and technologies (that is, new methods) to derive knowledge from data;


“Cyberinfrastructure to manage, curate and serve data to science and engineering research and education communities;


“New approaches to education and workforce development; and


“Nurturing new types of collaborations — multi-disciplinary teams and communities enabled by new data access policies — to make advances in the grand challenges of the computation- and data-intensive world today.”

(This strategy is largely captured in the joint solicitation with the National Institutes of Health (NIH) rolled out last month.)

Ultimately, Jahanian said, “Realizing the enormous potential of Big Data requires a long-term, bold, sustainable, and comprehensive approach, not only by NSF, but also throughout the government and our nation’s research institutions.”

The panel discussion that followed echoed many of Jahanian’s opening remarks. For example, Nuala O’Connor Kelly, Senior Counsel for Information Governance and the Chief Privacy Leader at GE, said, “For us, it’s the volume and velocity and variety of data [and the opportunity that’s presented for using] that data to achieve new results for the company and for our customers and clients [throughout the world].” She cited as one example how GE Healthcare collects and monitors maintenance data from its machines deployed worldwide, and can automatically ship replacement parts days before a machine malfunctions, based on analytics of its performance data. “Much of [this] is done remotely and at tremendous cost savings.”
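The kind of analytics O’Connor Kelly describes can be sketched, in deliberately simplified form, as threshold-based drift detection over a stream of machine telemetry. The function and thresholds below are purely illustrative assumptions, not GE’s actual method:

```python
# Illustrative sketch only (not GE's actual system): flag a machine for a
# replacement part when its recent sensor readings drift well above the
# long-run baseline -- a minimal stand-in for predictive maintenance.

def needs_replacement(readings, window=5, tolerance=1.5):
    """Return True if the mean of the last `window` readings exceeds
    the mean of all earlier readings by a factor of `tolerance`."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(readings[:-window]) / (len(readings) - window)
    recent = sum(readings[-window:]) / window
    return recent > tolerance * baseline

# A healthy machine: vibration readings hover around the baseline.
healthy = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 1.0]

# A degrading machine: recent readings climb sharply above the baseline.
degrading = [1.0, 1.1, 0.9, 1.0, 1.1, 1.8, 2.0, 2.2, 2.4, 2.6]
```

In practice a production system would use far richer models over many heterogeneous signals, but the core idea — compare recent behavior against a learned baseline and act before failure — is the same.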

Caron Kogan, Strategic Planning Director at Lockheed Martin, and Flavio Villanustre, Vice President of Technology, LexisNexis Risk Solutions, described similar pursuits within their companies — particularly in intelligence and fraud prevention, respectively.

O’Connor Kelly touched on privacy aspects. “Control may no longer be about not having the data at all,” she pointed out. “A potentially more efficient solution is one of making sure there are appropriate controls technologically and processes and policies and laws in place and then ensuring appropriate enforcement.” She emphasized striking the right balance between policies that ensure the protection of individuals and also enable technological innovation and economic growth.

Bill Perlowitz, Chief Technology Officer within Wyle’s Science, Technology & Engineering Group, referenced a paradigm shift in scientific exploration:

“Before, if you had an application or software, you had value; now that value is going to be in the data. For scientists that represents a shift from [hypothesis-driven] science to data-driven research. Hypothesis-driven science limits your exploration to what you can imagine, and the human mind … can only go so far. Data-driven science allows us to collect data and then see what it tells us, and we don’t have a pretense that we may understand what those relationships are and what we may find. So as a research scientist, these kinds of changes are very exciting and something we’ve been trying to get to for some time now.”

Perhaps Nick Combs, Federal CTO at EMC Corporation, summed it up best when describing the unprecedented growth in data: “It’s [no longer about finding a] needle in a haystack or connecting the dots. That’s child’s play.”

Have thoughts on this morning’s panel — or on the topic of ‘Big Data’ generally? Share them in the comment space below.

(Contributed by Erwin Gianchandani, CCC Director)
