Archive for the ‘big science’ category

 

Discovery Informatics: Science Challenges for Intelligent Systems

September 21st, 2012

[Image: Workshop on Discovery Informatics, courtesy Yolanda Gil and Haym Hirsh, http://www.discoveryinformaticsinitiative.org/diw2012]

This past February in Arlington, VA, Yolanda Gil (University of Southern California Information Sciences Institute) and Haym Hirsh (Rutgers University) co-organized a workshop on discovery informatics, assembling over 50 participants from academia, industry, and government “to investigate the opportunities that scientific discoveries present to information sciences and intelligent systems as a new area of research called discovery informatics.” A report summarizing the key themes that emerged during discussions at that workshop is now available.

From the executive summary:

[Image courtesy Yolanda Gil and Haym Hirsh, http://www.isi.edu/~gil/diw2012/DIW2012-ExecSummary.pdf]

…[The] workshop’s participants identified an expansive range of fundamental research challenges for information and intelligent systems brought into focus by these three themes:

 

  1. To improve computational discovery processes: We must understand how to make processes explicit, so they can be better managed and easily reapplied. Tools are being developed to automate or assist with specific aspects of these processes. We must develop a methodology to design these tools for usability, learn from what has worked and has not worked, and understand what features lead to broader adoption by scientists. We must reduce the effort needed to integrate different sets of tools to process data. Research must be carried out to facilitate the recording of provenance of scientific processes, making them inspectable and reproducible. We must develop user-centered design and visualization techniques that augment human abilities to analyze complex data with complex processes, and enable understanding and insight.
  2. To strengthen the interplay between models and data: Data is often separated from the models that explain it, hurting our ability to do science effectively. We must increase the expressiveness of model representations and their connections to data. We must map the landscape of different types of models and develop general mechanisms for automated or semi-automated model construction, data collection guided by models, and data analysis. We must design scalable methods to navigate large hypothesis spaces and to validate or refute hypotheses based on the results of data analysis.
  3. To manage human contributions and open participation in science problems: We must innovate scientific processes by creating effective human-computer teams, where human creativity can complement brute-force computation. We must invent new ways to approach scientific questions by opening up science and exposing the possibility of contributions from the broader public. We must develop a science of design for such systems, so we can understand the incentives, norms of behavior, and effective communication of tasks.

 

Advances in these areas will strengthen the practice of discovery informatics in two ways: 1) by improving existing discovery processes that are inefficient and suffer from human cognitive limitations, and 2) by developing new discovery processes that increase our ability to understand challenging scientific phenomena. Further, outcomes in these areas are not domain specific; they can be leveraged across different science and engineering disciplines, yielding multiplicative returns and avoiding the inefficient, redundant development of computing innovations that would otherwise be repeated within individual disciplines (e.g., bio-, geo-, eco-informatics).
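
To make the first theme (improving computational discovery processes) a bit more concrete, the sketch below shows one minimal way to record the provenance of an analysis step so the process is inspectable and re-runnable. It is an illustration only, not taken from the workshop report; the record format, the run_step helper, and the example filtering step are all hypothetical.

    # A minimal, illustrative sketch of the provenance idea in theme 1:
    # wrap each analysis step so its inputs, parameters, and outputs are
    # recorded alongside the result, making the process inspectable and
    # easy to re-run. The record format and names here are hypothetical.
    import hashlib
    import json
    import time

    provenance_log = []  # one record per executed step

    def run_step(name, func, inputs, params):
        """Run one analysis step and append a provenance record for it."""
        started = time.time()
        output = func(inputs, **params)
        provenance_log.append({
            "step": name,
            "inputs_sha256": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
            "params": params,
            "output": output,
            "seconds": round(time.time() - started, 6),
        })
        return output

    def threshold_filter(values, cutoff):
        """Example step: keep measurements above a cutoff."""
        return [v for v in values if v > cutoff]

    if __name__ == "__main__":
        data = [0.2, 1.7, 3.4, 0.9, 2.8]
        kept = run_step("filter", threshold_filter, data, {"cutoff": 1.0})
        print(kept)                                  # the scientific result
        print(json.dumps(provenance_log, indent=2))  # the inspectable trace

Real provenance systems capture much more (code versions, execution environments, upstream datasets), but even a trace this simple makes an analysis easier to audit, compare, and reproduce.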

Check out the workshop website for the complete report, workshop materials, and other helpful information.

To capitalize on the momentum from the spring workshop, Gil, Hirsh, and others will run a symposium in the AAAI Fall Symposium Series (Nov. 2-4, 2012, in Arlington, VA) titled Discovery Informatics: The Role of AI Research in Innovating Scientific Processes.

(Contributed by Erwin Gianchandani, CCC Director)

The 25 “Coolest” Computer Networking Research Projects

September 20th, 2012

Network World is out with a list of the 25 “coolest” computer networking research projects:

University labs, fueled with millions of dollars in funding and some of the biggest brains around, are bursting with new research into computer and networking technologies. Wireless networks, computer security and a general focus on shrinking things and making them faster are among the hottest areas, with some advances already making their way into the market.

Among the projects highlighted (following the link):

» Read more: The 25 “Coolest” Computer Networking Research Projects

New Secure and Trustworthy Cyberspace Solicitation Issued

September 20th, 2012

The National Science Foundation (NSF) last week issued a new solicitation for its Secure and Trustworthy Cyberspace (SaTC) program:

Cyberspace, a global “virtual” village enabled by hyper-connected digital infrastructures, has transformed the daily lives of people for the better. Families and friends, regardless of distance and location, can see and talk with one another as if in the same room. Cyber economies create new opportunities. Every sector of society, every discipline, has been transformed by cyberspace. Today it is no surprise that cyberspace is critical to our national priorities in commerce, education, energy, financial services, healthcare, manufacturing, and defense.

 

The rush to adopt cyberspace, however, has exposed its fragility. The risks of hyper-connectedness have become painfully obvious to all. The privacy of personally identifiable information is often violated on a massive scale by persons unknown. Our competitive advantage is eroded by the exfiltration of significant intellectual property. Law enforcement is hobbled by the difficulty of attribution, national boundaries, and uncertain legal and ethical frameworks. All these concerns now affect the public’s trust of cyberspace and the ability of institutions to fulfill their mission [more following the link].

 

» Read more: New Secure and Trustworthy Cyberspace Solicitation Issued

“Big Data Gets Its Own Photo Album”

September 19th, 2012

From The New York Times’s Bits Blog:

[Image: John Guttag, left, and Collin Stultz developed software that sifts discarded data from heart-monitoring machines looking for signs that patients are at high risk for a second heart attack. Courtesy Jason Grow/The Human Face of Big Data via The New York Times]

Rick Smolan, the photographer and impresario of media projects, has tackled all sorts of big subjects over the years, from countries (“A Day in the Life of Australia” in 1981) to drinking water (“Blue Planet Run” in 2007). He typically recruits about 100 photographers for each, and their work is crafted into classy coffee-table books of striking photographs and short essays.

 

But Mr. Smolan concedes that his current venture has been “by far the most challenging project we’ve done.”

 

Small wonder, given his target: Big Data.

 

Massive rivers of digital information are a snooze, visually. Yet that is the narrow, literal-minded view. Mr. Smolan’s new project, “The Human Face of Big Data,” which [was] formally announced [last] Thursday, focuses on how data, smart software, sensors and computing are opening the door to all sorts of new uses in science, business, health, energy and water conservation. And the pictures are mostly of the people doing that work or those being affected [more following the link].

 

» Read more: “Big Data Gets Its Own Photo Album”

From GPS and Virtual Globes to Spatial Computing-2020

September 17th, 2012

The following is a special contribution to this blog from the organizing committee of the Computing Community Consortium’s (CCC) visioning workshop on spatial computing — From GPS and Virtual Globes to Spatial Computing-2020 — held last Monday and Tuesday in Washington, DC. The committee summarizes some of the highlights of the workshop.

Spatial computing (SC) is a set of ideas and technologies that will transform our lives by understanding the physical world, knowing and communicating our relation to places in that world, and navigating through those places. The transformational potential of spatial computing is already evident. From virtual maps to consumer GPS devices, our society has benefited immensely from spatial technology. We’ve reached the point at which a hiker in Yellowstone, a schoolgirl in DC, a biker in Minneapolis, and a taxi driver in Manhattan know precisely where they are, what points of interest are nearby, and how to reach their destinations. Large organizations already use spatial computing for site selection, asset tracking, facility management, navigation, and logistics. Scientists use GPS to track endangered species to better understand their behavior, and farmers use GPS for precision agriculture to increase crop yields while reducing costs. Virtual globes (e.g., Google Earth, NASA World Wind) are being used in classrooms to teach children about their neighborhoods and the world in a fun and interactive way. Augmented reality applications (such as Google Goggles) provide real-time place-labeling in the physical world and give people detailed information about major landmarks nearby.

This is just the beginning.
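
To give a flavor of the kind of computation behind locating nearby points of interest from a GPS fix, here is a small illustrative sketch that finds the closest of a few candidate places using great-circle (haversine) distance. The place names and coordinates are made up for the example; real systems would query a spatial index over map data rather than a hard-coded dictionary.

    # Illustrative only: nearest point of interest to a GPS fix via the
    # haversine (great-circle) distance. Coordinates below are examples.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points, in km."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

    places = {
        "Lincoln Memorial": (38.8893, -77.0502),
        "Union Station":    (38.8977, -77.0063),
        "National Zoo":     (38.9296, -77.0498),
    }

    here = (38.9072, -77.0369)  # a GPS fix in Washington, DC
    nearest = min(places, key=lambda p: haversine_km(*here, *places[p]))
    print(nearest, round(haversine_km(*here, *places[nearest]), 2), "km away")

Everything more sophisticated in spatial computing, from routing to augmented-reality labeling, layers spatial indexes, road networks, and richer map data on top of primitives like this one.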

» Read more: From GPS and Virtual Globes to Spatial Computing-2020

“Big Data’s Management Revolution”

September 15th, 2012

Erik Brynjolfsson and Andrew McAfee of MIT have posted an interesting entry to the Harvard Business Review Blog about big data and corporate management:

[Image courtesy Harvard Business Review]

Big data has the potential to revolutionize management. Simply put, because of big data, managers can measure, and hence know, radically more about their businesses, and directly translate that knowledge into improved decision making and performance. Of course, companies such as Google and Amazon are already doing this. After all, we expect companies that were born digital to accomplish things that business executives could only dream of a generation ago. But in fact the use of big data has the potential to transform traditional businesses as well.

 

We’ve seen big data used in supply chain management to understand why a carmaker’s defect rates in the field suddenly increased, in customer service to continually scan and intervene in the health care practices of millions of people, in planning and forecasting to better anticipate online sales on the basis of a data set of product characteristics, and so on.

 

Here’s how two companies, both far from Silicon Valley upstarts, used new flows of information to radically improve performance.

 

» Read more: “Big Data’s Management Revolution”