Archive for the ‘big science’ category

 

NIST Global City Teams Challenge Report

October 20th, 2014


The National Institute of Standards and Technology (NIST) launched its Global City Teams Challenge with a great deal of energy and enthusiasm last month. The workshop ended with more than a dozen presentations by potential Global City Teams Challenge teams and provided an opportunity for interested parties to discuss Internet-of-Things deployments in a smart city environment.

From the workshop report:

The US Ignite website now contains materials related to 20+ potential Global City Teams projects or Action Clusters.

  1. If you would like to learn more about one of the listed projects or if you are interested in becoming associated with one of the projects, please email sokwoo.rhee@nist.gov and william.maguire@us-ignite.org.
  2. If you have an existing or new smart city project that you would like to conduct under the auspices of the Global City Teams Challenge, please email sokwoo.rhee@nist.gov and william.maguire@us-ignite.org.
  3. If you have contributed to a project on the list, but have not yet been contacted by a team leader, please email adam.martin@us-ignite.org.

If you are interested in the Challenge and were not able to attend the kick-off workshop, there is a webinar scheduled for Wednesday, October 22 at 10:00am (US Eastern Time). Please use this link for the upcoming webinar: https://global.gotomeeting.com/join/832557933. To call into the webinar, please use phone: 1 (408) 650-3131 and passcode: 832557933#. The webinar will last no more than one hour and will include status updates from NIST and a Q&A session.

For more information, see the Global City Teams Challenge website and the Smart America Global City Teams website.

 

Accelerating the Big Data Innovation Ecosystem

September 4th, 2014


In March 2012, the Obama Administration announced the “Big Data Research and Development Initiative.” The goal is to help solve some of the Nation’s most pressing challenges by improving our ability to extract knowledge from large and complex collections of digital data. The Administration encouraged multiple stakeholders, including federal agencies, private industry, academia, state and local government, non-profits, and foundations, to develop and participate in Big Data innovation projects across the country.

The National Science Foundation is exploring the establishment of a national network of “Big Data Regional Innovation Hubs.” These Hubs will help to sustain new regional and grassroots partnerships around Big Data. Potential roles for Hubs include, but are not limited to:

  • Accelerate the ideation and development of Big Data solutions to specific global and societal challenges by convening stakeholders across sectors to partner in results-driven programs and projects.
  • Act as a matchmaker between the various academic, industry, and community stakeholders to help drive successful pilot programs for emerging Big Data technology.
  • Coordinate across multiple regions of the country, based on shared interests and industry sector engagement to enable dialogue and share best practices.
  • Aim to increase the speed and volume of technology transfer between universities, public and private research centers and laboratories, large enterprises, and SMBs.
  • Facilitate engagement with opinion and thought leaders on the societal impact of Big Data technologies so as to maximize positive outcomes of adoption while reducing unwanted consequences.
  • Support the education and training of the entire Big Data workforce, from data scientists to managers to data end-users.

The National Science Foundation (NSF) seeks input from stakeholders in academia, state and local government, industry, and non-profits across all parts of the Big Data innovation ecosystem on the formation of Big Data Regional Innovation Hubs. Please submit a response of no more than two pages to BIGDATA@nsf.gov outlining:

  1. The goals of interest for a Big Data Regional Hub, with metrics for evaluating the success or failure of the Hub in meeting those goals;
  2. The multiple stakeholders that would participate in the Hub and their respective roles and responsibilities;
  3. Plans for initial and long-term financial and in-kind resources that the stakeholders would need to commit to this hub; and
  4. A principal point of contact.

Please submit responses no later than November 1, 2014. For more information, see the NSF announcement.

 

Computing a Cure for HIV

June 27th, 2014

On June 26, the National Science Foundation (NSF) released a Discovery article titled Computing a Cure for HIV, written by Aaron Dubrow, Public Affairs Specialist in the Office of Legislative & Public Affairs. The article provides an overview of the disease and how it continues to afflict millions of people worldwide.

Over the past decade, scientists have been using the power of supercomputers “to better understand how the HIV virus interacts with the cells it infects, to discover or design new drugs that can attack the virus at its weak spots and even to use genetic information about the exact variants of the virus to develop patient-specific treatments.”

Here are nine projects that are using supercomputing and computational power to help fight the disease:

  1. Modeling HIV: from atoms to actions
  2. Discovery of hidden pocket in HIV protein leads to ideas for new inhibitors
  3. Preventing HIV from reaching its mature state
  4. Crowdsourcing a cure
  5. Virtual screening of HIV inhibitors
  6. Membrane effects
  7. Computing patient-specific treatment methods
  8. Preparing the next generation to continue the fight
  9. A boy and the BEAST

You can read more about these projects in the full article here.

Recent ISAT/DARPA Workshop Targeted Approximate Computing

June 23rd, 2014

The following is a special contribution to this blog by CCC Executive Council Member Mark Hill and workshop organizers Luis Ceze, Associate Professor in the Department of Computer Science and Engineering at the University of Washington, and James Larus, Full Professor and Head of the School of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne.

Luis Ceze and Jim Larus organized a DARPA ISAT workshop on Approximate Computing in February 2014. The goal was to discuss how to obtain 10-100x improvements in performance and MIPS/watt out of future hardware by carefully trading off the accuracy of a computation for these other goals. The focus was not the underlying technology shifts, but rather the likely radical shifts required in hardware, software, and basic computing systems properties to pervasively embrace accuracy trade-offs.

Below we provide more detailed motivation for approximate computing; the publicly released slides are available here.

Given the end of Moore’s Law performance improvements and the imminent end of Dennard scaling, it is imperative to find new ways to improve the performance and energy efficiency of computer systems, so as to permit larger and more complex problems to be tackled within constrained power envelopes, package sizes, and budgets. One promising approach is approximate computing, which relaxes the traditional digital orientation of precisely stated and verified algorithms reproducibly and correctly executed on hardware, in favor of approximate algorithms that produce “sufficiently” correct answers. The sufficiency criterion can be probabilistic (results are usually correct) or a more complex correctness condition (the most “significant” bits of an answer are correct).
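To make the trade-off concrete, here is a minimal Python sketch (our illustration, not an artifact of the workshop) of loop perforation, one of the simplest software approximation techniques: the computation visits only a fraction of its input, doing proportionally less work while usually landing close to the exact answer.

```python
import random

def mean_exact(data):
    """Baseline: visit every element."""
    return sum(data) / len(data)

def mean_perforated(data, skip_factor=4):
    """Loop perforation: visit only every skip_factor-th element.
    Roughly skip_factor times less work, and the answer is usually
    close to (but no longer guaranteed to equal) the exact mean."""
    sample = data[::skip_factor]
    return sum(sample) / len(sample)

data = [random.gauss(100.0, 15.0) for _ in range(1_000_000)]
print(mean_exact(data))       # e.g. ~100.0
print(mean_perforated(data))  # e.g. ~100.0 as well, at ~1/4 the work
```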

Approximation introduces another degree of freedom that can be used to improve computer system performance and power efficiency. For example, at one end of the spectrum of possible approximations, one can imagine computers whose circuit implementations employ aggressive voltage and timing optimizations that might introduce occasional non-deterministic errors. At the other end of the spectrum, one can use analog computing techniques in select parts of the computation. One can also imagine entirely new ways of “executing” programs that are inherently approximate, e.g., what if we used neural networks to carry out “general” computations like browsing the web, running simulations, or doing search, sorting, and compression of data? Approximation opportunities go beyond just computation, since we can also imagine ways of storing data approximately that lead to potential retrieval errors but are much denser, faster, and more energy efficient. Relaxing data communication is another possibility, since almost all forms of communication (on-chip, off-chip, wireless, etc.) use resources to guarantee data integrity, which is often unnecessary from the application point of view.
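As a toy model of the approximate-storage idea (again our own sketch, with invented parameters), the snippet below reads floating-point values through a "memory" that may flip a few low-order mantissa bits on each access, leaving the most significant bits of the answer intact, which is precisely the flavor of correctness criterion described above.

```python
import random
import struct

def read_approx(value, flip_prob=0.1, low_bits=8):
    """Model an approximate memory read: each of the lowest `low_bits`
    mantissa bits of a 64-bit float may flip with probability
    `flip_prob`. High-order bits are left intact, so only the least
    significant digits of the value can be corrupted."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    for b in range(low_bits):
        if random.random() < flip_prob:
            bits ^= 1 << b
    (noisy,) = struct.unpack("<d", struct.pack("<Q", bits))
    return noisy

x = 3.141592653589793
print(read_approx(x))  # e.g. 3.1415926535897927, off only in low-order bits
```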

Obviously, approximation is not a new idea, as it has been used in many areas such as lossy compression and numeric computation. However, these applications of the idea were implemented in specific algorithms, which ran as part of a larger system on a conventional processor. Much of the benefit of approximation may accrue from taking a broader systems perspective, for example by relaxing storage requirements for “approximate data,” but there has been little contemplation of what an approximate computer system would look like. What happens to the rest of the system when the processor evolves to support approximate computation? What is a programming model for approximate computation? What will programming languages and tools that directly support approximate computation look like? How do we prove approximate programs “correct”? Is there a composability model for approximate computing? How do we debug such programs? What will the system stack that supports approximate computing look like? How do we handle backward compatibility?
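One pragmatic take on the "how do we prove approximate programs correct" question is statistical acceptance testing. The sketch below is our own illustration, with every function and threshold invented for the example: run the approximate routine against a reference on many inputs and accept it if its relative error stays within a tolerance often enough.

```python
import random

def accept(approx_fn, exact_fn, inputs, tol=0.01, min_pass_rate=0.95):
    """Hypothetical acceptance test: the approximate routine passes if
    its relative error is within `tol` on at least `min_pass_rate` of
    the given inputs."""
    passes = 0
    for x in inputs:
        exact = exact_fn(x)
        err = abs(approx_fn(x) - exact) / (abs(exact) or 1.0)
        passes += err <= tol
    return passes / len(inputs) >= min_pass_rate

# Stand-in for an approximate functional unit: a multiply with up to
# +/-0.5% injected error.
approx_square = lambda x: x * x * (1 + random.uniform(-0.005, 0.005))
inputs = [random.uniform(1.0, 1000.0) for _ in range(10_000)]
print(accept(approx_square, lambda x: x * x, inputs))  # True at tol=0.01
```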

DARPA Officially Launches Robotics Grand Challenge – Watch Pet-Proto Robot in Action

October 24th, 2012

Today, the Defense Advanced Research Projects Agency (DARPA) officially kicked off its newest Grand Challenge, the DARPA Robotics Challenge (DRC). As we’ve blogged previously, the Grand Challenge calls for “a humanoid robot (with a bias toward bipedal designs) that can be used in rough terrain and for industrial disasters.” DARPA also released a video of Pet-Proto, a humanoid robot manufactured by Boston Dynamics. Pet-Proto, a predecessor to DARPA’s Atlas robot, is an example of what the agency envisions for the challenge.

Watch Pet-Proto in action, as it navigates obstacles:

 

More about the challenge from DARPA:

The Department of Defense’s strategic plan calls for the Joint Force to conduct humanitarian, disaster relief and related operations.  The plan identifies requirements to extend aid to victims of natural or man-made disasters and conduct evacuation operations.  Some disasters, however, due to grave risks to the health and wellbeing of rescue and aid workers, prove too great in scale or scope for timely and effective human response.  The DARPA Robotics Challenge (DRC) will attempt to address this capability gap by promoting innovation in robotic technology for disaster-response operations.

 

The primary technical goal of the DRC is to develop ground robots capable of executing complex tasks in dangerous, degraded, human-engineered environments.  Competitors in the DRC are expected to focus on robots that can use standard tools and equipment commonly available in human environments, ranging from hand tools to vehicles, with an emphasis on adaptability to tools with diverse specifications.

 

To achieve its goal, the DRC aims to advance the current state of the art in the enabling technologies of supervised autonomy in perception and decision-making, mounted and dismounted mobility, dexterity, strength, and platform endurance.  Success with supervised autonomy, in particular, could allow control of robots by non-expert operators, lower the operator’s workload, and allow effective operation even with low-fidelity (low bandwidth, high latency, intermittent) communications.

 

The DRC consists of both robotics hardware and software development tasks and is structured to increase the diversity of innovative solutions by encouraging participation from around the world, including universities, small, medium and large businesses, and even individuals and groups with ideas on how to advance the field of robotics.  Detailed descriptions of the participant tracks are available in the DRC Broad Agency Announcement.

 

A secondary goal of the DRC is to make software and hardware development for ground-robot systems more accessible to interested contributors, thereby lowering the cost of acquisition while increasing capabilities.  DARPA seeks to accomplish this by creating and providing government-furnished equipment (GFE) to some DRC participants in the form of a robotic hardware platform with arms, legs, torso and head.  Availability of this platform will allow teams without hardware expertise or hardware to participate.  Additionally, all teams will have access to a government-furnished simulator created by DARPA and populated with models of robots, robot components and field environments.  The simulator will be an open-source, real-time, operator-interactive virtual test bed, and the accuracy of the models used in it will be rigorously validated on a physical test bed.  DARPA hopes the creation of a widely available, validated, affordable, and community supported and enhanced virtual test environment will play a catalytic role in development of robotics technology, allowing new hardware and software designs to be evaluated without the need for physical prototyping.

 

The DRC Broad Agency Announcement was released on April 10, 2012.

 

The DRC kicked off on October 24, 2012, and is scheduled to run for approximately 27 months with three planned competitions, one virtual followed by two live. Events are planned for June 2013, December 2013 and December 2014.

To learn more, check out the DARPA Robotics Challenge page.

(Contributed by Kenneth Hines, CCC Program Associate)

NSF Announces “Exploiting Parallelism and Scalability” (XPS) Program

October 23rd, 2012

This week, the National Science Foundation (NSF) issued a solicitation for its new Exploiting Parallelism and Scalability (XPS) program. The program aims to support groundbreaking research leading to a new era of scalable computing. NSF estimates that $15 million in awards will be made in FY 2013 for this program.

As the solicitation notes, the Computing Community Consortium (CCC) furnished a white paper earlier this year titled 21st Century Computer Architecture, through which members of the computing research community contributed strategic thinking in this space.  The white paper drew upon a number of earlier efforts, including CCC’s Advancing Computer Architecture Research (ACAR) visioning reports.

Here is a synopsis of the Exploiting Parallelism and Scalability (XPS) program from the National Science Foundation:

Computing systems have undergone a fundamental transformation from the single-processor devices of the turn of the century to today’s ubiquitous and networked devices and warehouse-scale computing via the cloud. Parallelism has become ubiquitous at many levels. The proliferation of multi- and many-core processors, ever-increasing numbers of interconnected high performance and data intensive edge devices, and the data centers servicing them, is enabling a new set of global applications with large economic and social impact. At the same time, semiconductor technology is facing fundamental physical limits and single processor performance has plateaued. This means that the ability to achieve predictable performance improvements through improved processor technologies has ended.

 

The Exploiting Parallelism and Scalability (XPS) program aims to support groundbreaking research leading to a new era of parallel computing. XPS seeks research re-evaluating, and possibly re-designing, the traditional computer hardware and software stack for today’s heterogeneous parallel and distributed systems and exploring new holistic approaches to parallelism and scalability. Achieving the needed breakthroughs will require a collaborative effort among researchers representing all areas, from the application layer down to the micro-architecture, and will be built on new concepts and new foundational principles. New approaches to achieving scalable performance and usability will need new abstract models and algorithms, programming models and languages, hardware architectures, compilers, operating systems, and run-time systems, and must exploit domain- and application-specific knowledge. Research should also focus on energy and communication efficiency and on enabling the division of effort between edge devices and clouds.

Proposals should address four focus areas:

Foundational principles (FP)

Research on foundational principles should engender a paradigm shift in the ways in which one conceives, develops, analyzes, and uses parallel algorithms, languages, and concurrency. Foundational research should be guided by crucial design principles and constraints impacting these principles. Topics include, but are not limited to:

 

  • New computational models that free the programmer from many low-level details of specific parallel hardware while supporting the expression of properties of a desired computation that allow maximum parallel performance. Models should be simple enough to understand and use, have solid semantic foundations, and guide algorithm design choices for diverse parallel platforms. (A minimal sketch of this kind of abstraction follows the list.)
  • Algorithms and algorithmic paradigms that simultaneously allow reasoning about parallel performance, lead to provable performance guarantees, and allow optimizing for various resources, including energy, memory hierarchy, and communication bandwidth as well as parallel work and running time.
  • New programming languages and language mechanisms that support new computational models, raise the level of abstraction, and lower the barrier to entry for parallel and concurrent programming. Parallel and concurrent languages should have programmability, verifiability, and scalable performance as design goals. Of particular interest are languages that abstract away from the traditional imperative programming model found in most sequential programming languages.
  • Compilers and techniques for mapping high-level parallel languages and language mechanisms to efficient low-level, platform-specific code.
  • Development of interfaces to express parallelism at a higher level while being able to express and analyze locality, communication, and other parameters that affect performance and scalability.
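To give the first bullet some texture, here is a deliberately trivial Python sketch (ours, not NSF's): the programmer writes a pure per-element kernel and a map over the data, and the runtime, not the programmer, decides how many workers to spawn and how to partition the work.

```python
from multiprocessing import Pool

def kernel(n):
    """Pure per-element work: the programmer states what to compute,
    not where or in what order it runs."""
    return sum(i * i for i in range(n)) % 1_000_003

if __name__ == "__main__":
    data = range(2_000, 2_400)
    # Pool hides core counts, scheduling, and data movement: exactly
    # the low-level details a good computational model abstracts away.
    with Pool() as pool:
        results = pool.map(kernel, data)
    print(len(results), results[:3])
```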

Cross-layer and Cross-cutting Approaches (CLCCA)

In order to fully exploit the power of current and emerging computer architectures, research is needed that re-evaluates, and possibly re-designs, the traditional computer hardware and software stack – applications, programming languages, compilers, run-time systems, virtual machines, operating systems, and architecture – for today’s heterogeneous parallel systems. A successful approach should be a collaboration that explores new holistic approaches to parallelism and cross-layer design. Topics include, but are not limited to:

 

  • New abstractions, models, and software systems that expose fundamental attributes, such as energy use and communication costs, across all layers and that are portable across different platforms and architectural generations. (A toy illustration follows this list.)
  • New software and system architectures that are designed for exploitable locality, with parallelism and communication efficiency to minimize energy use, and using on-chip and chip-to-chip communication achieving low latency, high bandwidth, and power efficiency.
  • New methods and metrics for evaluating, verifying and validating reliability, resilience, performance, and scalability of concurrent, parallel, and heterogeneous systems.
  • Runtime systems to manage parallelism, memory allocation, synchronization, communication, I/O, and energy usage.
  • Extracting general principles that can drive the future generation of computing architectures and tools with a focus on scalability, reliability, robustness, security and verifiability.
  • Exploration of tradeoffs addressing an optimized “separation of concerns.” Which problems should be handled by which layers? What information and abstractions must flow between the layers to achieve optimal performance? Which aspects of system design can be automated and what is the optimal use of costly human ingenuity?
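As a toy rendering of the first bullet above (our own sketch, with invented cost numbers), the snippet below shows a software layer consulting attributes exposed by lower layers, energy and latency here, to decide whether to recompute a value locally or fetch a cached copy over the interconnect.

```python
# Invented, illustrative costs; a real system would expose measured,
# per-platform numbers through a portable cross-layer interface.
COSTS = {
    "recompute":    {"energy_nj": 50.0,  "latency_ns": 400.0},  # redo the math
    "fetch_remote": {"energy_nj": 200.0, "latency_ns": 90.0},   # pull cached copy
}

def choose(objective="energy"):
    """Pick an operation by whichever exposed attribute the current
    layer is optimizing for."""
    key = "energy_nj" if objective == "energy" else "latency_ns"
    return min(COSTS, key=lambda op: COSTS[op][key])

print(choose("energy"))   # 'recompute' under these invented costs
print(choose("latency"))  # 'fetch_remote' under these invented costs
```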

Scalable Distributed Architectures (SDA)

Many emerging applications require a rich environment that enables sensing and computing devices that communicate with each other and with warehouse-scale facilities via the cloud, which in turn processes and supplies information for edge devices, such as smart phones. Research is needed into the components and the programming of such highly parallel and scalable distributed architectures. Topics include, but are not limited to:

 

  • Novel approaches that enable smart sensor design with the constraints of low energy use, tight form factors, tight time constraints and adequate computational capacity, and low cost. Exemplary approaches include using innovative communication modalities and data-specific approximate computing techniques.
  • Runtime platforms and virtualization tools that allow programs to divide effort between and among portable platforms and the cloud while responding dynamically to changes in the reliability and energy efficiency of the cloud uplink. Possible questions to address include: How should computation be distributed between the nodes and cloud infrastructure? How can system architecture help preserve privacy by giving users more control over their data? Should compute engines and memory systems be co-designed? (A toy decision rule for the first question is sketched after this list.)
  • Research that enables conventionally-trained engineers to program warehouse-scale computers, taking advantage of the highly parallel and distributed environment while remaining resilient to significant rates of component and communication failures. Such research may be based on novel hardware support, programming abstractions, new algorithms, storage systems, middleware, operating systems and/or virtualization.
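In its simplest form, the node-versus-cloud question above reduces to comparing cost estimates. The sketch below is our own toy model; every default parameter is invented for illustration, not measured.

```python
def offload_to_cloud(work_mops, input_mb,
                     edge_mops_per_s=2_000, cloud_mops_per_s=50_000,
                     uplink_mb_per_s=5.0, rtt_s=0.05):
    """Toy rule: offload iff cloud compute time plus the cost of
    shipping the input over the uplink beats running locally."""
    local_s = work_mops / edge_mops_per_s
    cloud_s = rtt_s + input_mb / uplink_mb_per_s + work_mops / cloud_mops_per_s
    return cloud_s < local_s

print(offload_to_cloud(work_mops=10_000, input_mb=1.0))  # compute-heavy: True
print(offload_to_cloud(work_mops=100, input_mb=20.0))    # data-heavy: False
```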

Domain-specific Design

Research is needed on how to exploit domain and application-specific knowledge to improve programmability, reliability, and scalable parallel performance. Topics include, but are not limited to:

 

  • Parallel domain-specific languages that provide both high-level programming models for domain experts and high performance across a range of parallel platforms, such as GPUs, SMPs, and clusters.
  • Program synthesis tools that generate efficient parallel codes from high-level problem descriptions using domain-specific knowledge. Approaches might include optimizations based on mathematical and/or statistical reasoning, auto-vectorization techniques that exploit domain-specific properties, and auto-tuning techniques. (A bare-bones auto-tuner is sketched after this list.)
  • Hardware-software co-design for domain-specific applications that pushes performance and energy efficiency while reducing cost, overhead, and inefficiencies.
  • Integrated data management paradigms harnessing parallelism and concurrency, encompassing the entire data path from generation to transmission, to storage, use, security, and maintenance, to eventual archiving or destruction.
  • Work that generalizes the approach of exploiting domain-specific knowledge, such as tools, frameworks, and libraries that support the development of domain-specific solutions to computational problems and are integrated with domain science.
  • Novel approaches suitable for scientific application frameworks addressing domain-specific mapping of parallelism onto a variety of parallel computational models and scales.
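To give a feel for the auto-tuning idea mentioned above, here is a bare-bones sketch (ours) that empirically times a blocked summation at several candidate block sizes and keeps the fastest: the "measure, then choose" loop that real auto-tuners elaborate.

```python
import time

def blocked_sum(data, block):
    """Sum `data` in chunks of `block` elements; the best chunk size
    is platform-dependent, which is why we tune it empirically."""
    total = 0.0
    for i in range(0, len(data), block):
        total += sum(data[i:i + block])
    return total

def autotune(data, candidates=(64, 256, 1024, 4096)):
    """Time each candidate block size and return the fastest one."""
    timings = {}
    for block in candidates:
        start = time.perf_counter()
        blocked_sum(data, block)
        timings[block] = time.perf_counter() - start
    return min(timings, key=timings.get)

data = [float(i % 97) for i in range(1_000_000)]
print("chosen block size:", autotune(data))
```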

Read complete details – including proposal deadlines – here.

(Contributed by Kenneth Hines, CCC Program Associate)