Early last Saturday morning, I had the privilege and pleasure of organizing and moderating a symposium at the American Association for the Advancement of Science’s (AAAS) 2012 Annual Meeting in Vancouver. The 90-minute session — titled Data to Knowledge to Action: Computational Science in a Global Knowledge Society — sought to describe how advances in computing research are enabling a “data to knowledge to action” pipeline that is increasingly critical for facilitating a 21st-century global knowledge society. Over 70 people packed into a small room in the Vancouver Convention Center to hear the session’s featured speakers, Eric Horvitz, Peter Stone, and Deborah Estrin (slide shows after the jump).
Last week, the World Economic Forum’s Global Agenda Council on Emerging Technologies released a consensus list — the result of input from “some of the world’s leading minds within the entire [Global Agenda Council] network” — of “the top 10 emerging technologies for 2012.” These are the technologies with the greatest potential to create new industries and transform existing ones by providing solutions to global challenges. Atop the list — which is ordered starting with the technology with the greatest potential — is “informatics for adding value to information”:
The quantity of information now available to individuals and organizations is unprecedented in human history, and the rate of information generation continues to grow exponentially. Yet, the sheer volume of information is in danger of creating more noise than value, and as a result limiting its effective use. Innovations in how information is organized, mined and processed hold the key to filtering out the noise and using the growing wealth of global information to address emerging challenges.
And a number of the other technologies on the list require advances in computing (following the link):
Modern technological advances have sparked many concerns that supercomputers, robots and other sophisticated machinery will soon erase the need for skilled workers, especially in industries like manufacturing and construction, perhaps driving the nation’s unemployment rate even higher in the years ahead.
Similarly, Americans’ increasing dependence on technology, ranging from constant computer use to around-the-clock interaction with mobile phones, has prompted many observers and academics to question whether the line separating people and technology is blurring in an all too dangerous manner.
On Monday, Google Chairman Eric Schmidt offered words to mollify those concerns [after the jump].
Over 3,600 officials spanning government, industry, and academia are gathered at the third annual mHealth Summit just outside Washington, DC, this week, “to advance collaboration in the use of wireless technology to improve health outcomes in the U.S. and abroad.”
Secretary of Health and Human Services Kathleen Sebelius kicked off the conference on Monday morning, emphasizing the game-changing aspects of mobile health technology to improve clinical outcomes, promote preventative medicine, and reduce wasteful spending and healthcare costs. Sebelius noted that mobile healthcare technology is gaining added significance — and issued a call to arms to support innovation in mobile medical devices.
“This is an incredible time to be having this conversation,” she said. “[The federal government] can play a critical role as a catalyst.”
And timed to coincide with the mHealth Summit, two NIH officials — Wendy Nilsen, a Health Science Administrator in the Office of Behavioral and Social Sciences Research (OBSSR), and William Riley, a Program Director at the National Heart, Lung, and Blood Institute — are out with an article describing NIH’s efforts in mHealth (after the jump):
Last week, a record attendance of more than 11,000 people from throughout the world gathered in Seattle for SC11 — the largest international supercomputing conference, focusing on high performance computing, networking, storage, and analysis through a large industrial and research exhibition and a highly peer-reviewed technical program (which was attended by almost 5,000 people this year).
We blogged about brain-computer interfaces early last week — and it turns out there was a related talk later in the week by Gerwin Schalk, a Research Scientist at the Wadsworth Center, during MIT’s 2011 Emerging Technologies Conference. Schalk described his lab’s pioneering methods for controlling computers with thoughts instead of fingers:
[In 1968], Doug Engelbart actually showed for the first time how it is possible to use a mouse, a graphical interface, and networked computers to … augment human function. The idea of course was to offload some of the … clerical tasks that we used to perform as humans to a computer that [could] hopefully do these things much faster…
So the vision of Doug Engelbart and his contemporaries — or even people before him — which was nicely expressed by J.C.R. Licklider, who wrote … more than 50 years ago, is that, “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today…”
Now, of course, you know that this vision that J.C.R. Licklider articulated 50 years ago truly has come to be a reality. We can now go on Google and we can type in a keyword, [and] Google goes out and has terabits of information processing speed and terahertz of information processing power, and comes back in 0.23 seconds and tells us what the answer to our query is…
Now that, however, has sort of brought up another problem which I used to call the communications problem… Our brain is an information processing machine that does a lot of things in parallel. It doesn’t execute one particular algorithm very quickly, but it executes a lot of algorithms all at once. Now in contrast, [in] the computer… you have one particular algorithm — typically one of a few algorithms — that are executed very, very quickly. So both of these devices — the brain and the computer — in their own right are extremely, extremely powerful. [But] the path that connects these two is a very, very small pipe. In fact, in terms of information transfer rate, if we transmit information to the outside by spoken language or by typing for example, we cannot communicate more than about 40 to 50 bits per second. Period. That’s the maximum speed of our motor system in communicating with the outside. Now that’s sort of pathetic given the fact that the brain is pretty powerful and that computers of course can transmit hundreds of gigabytes per second, and so forth — and then you have about 50 bits per second that relate the two…
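The scale of the mismatch Schalk describes is easy to work out from the figures in the talk. As a rough illustration — using his ~50 bits/second estimate for human motor output, and assuming (for the sake of comparison) a computer link of 100 gigabytes per second, one plausible reading of his “hundreds of gigabytes per second” — the gap spans roughly ten orders of magnitude:

```python
# Back-of-the-envelope comparison of the two "pipes" Schalk contrasts.
# The 50 bits/s figure comes from the talk; the 100 GB/s computer link
# is an assumed round number for illustration.

HUMAN_BPS = 50                    # ~40-50 bits/s via speech or typing
COMPUTER_BPS = 100 * 8 * 10**9    # 100 GB/s expressed in bits/s

ratio = COMPUTER_BPS / HUMAN_BPS
print(f"Computer-to-human bandwidth ratio: {ratio:.1e}")
# → Computer-to-human bandwidth ratio: 1.6e+10
```

In other words, under these assumptions the channel out of the brain is some ten billion times narrower than the channels computers use to talk to each other — which is the motivation for the brain-computer interface work described next.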
So just like Doug Engelbart and contemporaries, I, too, have a vision. And my vision is that, wouldn’t it be nice if we could tap directly into the brain to get access to this rich semantic representation and communicate … directly between the brain and the computer?
See the answer as part of Schalk’s presentation after the jump…