The following is a special contribution to this blog from Robin Murphy, Raytheon Professor of Computer Science and Engineering and director of the Center for Robot-Assisted Search and Rescue at Texas A&M University. Back in April, Robin co-organized a visioning workshop about the role of computing in disaster management (including preparedness, prevention, response, and recovery). Here Robin summarizes the workshop, as well as the final report — Computing for Disasters: A Report from the Community Workshop — that the Computing Community Consortium (CCC) is releasing today.
What would it take to reach a point when the unimaginable could be predicted, handled, and coordinated so that it no longer constituted an emergency? What would it take to reach a point when there were no stories about ongoing human suffering, impacts on GDP, and environmental devastation following a disaster? What would it take to reach a point when schoolchildren learned about science at the same time that they learned how to prepare for a hurricane or earthquake?
It would take computing.
Over the last few months, Trevor Darrell and I have had the privilege of co-chairing a Workshop on Computing for Disaster Management, jointly sponsored by the National Science Foundation (NSF) and Computing Community Consortium (CCC), and synthesizing the findings into a summary report that the CCC is releasing today [more after the jump].
Together with our steering committee, we convened a group of 45 participants, some invited, some selected in response to an open call for participation. Our ‘native’ research spanned computer science broadly — communications, social media, social science, sensors, visualization, human-computer interaction, artificial intelligence, robotics, high-performance computing, structural engineering, data mining, information retrieval, machine learning, geospatial databases, computing for economics, and game theory — but we all had some experience in disaster research, from rescue robots at Fukushima to using computer vision to help reunite injured children with their parents.
The primary goal was to formalize what we individually were seeing that made computing research for disasters unique. We each saw that disasters were more than an application area; conducting research with broad impacts required significant understanding of the larger socio-technical system.
Our process focused on creating common ground between participants. We mixed grounding case studies by practitioners, such as Marc Haffer from CALFIRE and Matt Minson, MD, medical director of Texas Task Force One, with short technical spotlights and breakouts. Perhaps the most moving moment came during our after-hours tour of the Digital Emergency Operations Center at the headquarters of the American Red Cross, where one of the workers asked us the haunting question, "What would it take for there to be no more emergencies?"
Our report captures one vision of what it would take: a robust, multi-disciplinary community in which researchers partner with practitioners to tackle fundamentally new research in socio-technical systems that enable decision making at extreme scales under extreme conditions. The report describes what 'Computing for Disasters' is, why it is different from existing research areas and paradigms, what its benefits to society and to science are, and what a broad investment portfolio and living roadmap facilitating the engagement of researchers from all disciplines might entail.
We hope you will read this report and join us as we tackle ‘computing at the extremes’! And please share your thoughts in the space below.