Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


CCC Responds to New York Times Article: Society Needs Computer Science (and Math and Social Sciences) Now More Than Ever

November 28th, 2017 / in Announcements, CCC, research horizons, Research News / by Helen Wright

The following blog post was drafted by CCC Chair Beth Mynatt, CCC Exec Member Ben Zorn, and CCC Council Members Elizabeth Bradley, Sampath Kannan, and Cynthia Dwork.

Beth Mynatt, CCC Chair, recently submitted the following letter to the Editor of the New York Times:

In her November 14th Op-Ed, Cathy O’Neil makes the case that technology is impacting people’s lives at an accelerating pace and that computer scientists have been “asleep at the wheel” in dealing with emerging challenges.

Computing research advances have had sweeping societal effects, but not without problems (e.g., racial bias in facial recognition). Careful design is critical to heading off "unintended consequences" resulting from one-sided research efforts. O'Neil fails to acknowledge that the computing research community is actively working to prevent the promotion of misinformation and discrimination stemming from algorithmic decision making.

Algorithmic accountability requires mathematical definitions of what is "fair" and "just", which on the surface sounds impossible. But almost a decade ago, computing researchers developed a mathematical theory called differential privacy, which protects information about individuals when analyzing groups of people, and is now deployed in the commercial space and used by US federal agencies. Significant work on fairness and transparency is underway, but progress will require collaboration and informed communication between academic researchers, industry, government, and the broader public.

Sincerely,

Elizabeth D. Mynatt, PhD

Chair of the Computing Community Consortium (CCC)

Professor, Georgia Institute of Technology

Here, we expand on our response and outline specific ways that the academic community has had an impact and is actively seeking to increase its impact.

The Problem
The core of Dr. O'Neil's argument is that technology is developing so quickly that the academic community must step up, not only enabling these technologies to exist but also ensuring that they can be understood. These concerns exist because data-driven machine learning and AI technologies behave in ways that we do not fully understand and are too often opaque to inspection, and that opacity becomes a greater risk the more we rely on them. The risk grows as international sources of powerful corporate and governmental AI technology blend profit and political motives in ways that conflict with individual freedom and human rights. If a company like Volkswagen was tempted to cheat on emissions standards by modifying its traditional software, the possibility that companies or governments might cheat by embedding biases in their opaque AI technologies should not be ignored.

Other Responses
While we agree with Dr. O'Neil that there is a problem, we disagree that the research community has been ignoring it. Others have pointed out that the academic community, across multiple disciplines, has been engaged in addressing this problem for years. For example, the response of the University of Maryland's Pervasive Data Ethics team, "We're Awake- But We're Not at the Wheel", documents the degree to which these challenges have already been of interest across the social sciences and concludes that greater emphasis is needed on collaborative research, education, and informed public policy. In a posting to Medium with numerous co-authors, "Awake on the Autobahn: Academics, algorithms and accountability", University of Utah professor Suresh Venkatasubramanian echoes this response, pointing out academic communities that are actively working on this problem. Beyond these responses, we note that there is a large groundswell of work in this area, much of which has had a substantial and ongoing impact.

The Success of Differential Privacy
At the core of any algorithmic accountability is a mathematical definition of what is “fair” and “just”, which on the surface sounds impossible. But almost a decade ago, CS researchers asked similar questions about privacy and developed a mathematical theory called differential privacy that expresses precisely how to measure and control what information can be learned when an individual’s information is grouped with others. Differential privacy, once just a mathematical concept, is now deployed to protect customer privacy by companies including Apple and Google. The US Census Bureau is cooperating with differential privacy researchers to incorporate these techniques in the upcoming 2020 Census. While differential privacy is relatively mature, early efforts in defining fairness mathematically and ensuring that machine learning and AI systems demonstrate fairness are underway.
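To make the idea concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block behind many differential privacy deployments. This is our own illustrative toy, not code from Apple, Google, or the Census Bureau; the function name and dataset are invented for the example.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a differentially private count of records satisfying `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so noise drawn from Laplace(scale=1/epsilon)
    yields epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example: roughly how many people in this dataset are over 40?
ages = [23, 45, 31, 67, 52, 38, 29, 70]
print(laplace_count(ages, lambda age: age > 40, epsilon=0.5))
```

The guarantee is that the noisy answer is almost equally likely whether or not any one person's record is in the data, so an analyst learns about the group without learning much about any individual.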

Algorithmic Fairness
A large class of scenarios where we want algorithms to be fair involves classifying people into two or more categories. Does an individual get a loan or not? Is she admitted to a particular college or not? These are examples of classification problems. In these examples, there are only two categories, and one of them is evidently more desirable than the other. Emerging fairness goals generally break down into two types: group fairness and individual fairness. In group fairness, the goal is to be fair to a protected group defined by race, gender, or another constitutionally protected class of people. The simplest type of group fairness goal is demographic parity, which requires that the same fraction of people from the protected group be classified in the desirable category as in the general population. Demographic parity is too crude a measure in some circumstances. A more refined criterion gives the well-known notion of equality of opportunity a quantitative definition: it says that qualified people in the protected group should be rejected at the same rate as qualified people in the general population. Of course, this raises the question of how we identify qualified people, but machine learning on additional data offers ways to do so. Current research examines whether existing classification procedures achieve equality of opportunity and, if not, how they can be modified to do so.
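As a rough illustration (our own toy example, with invented records and function names), both group-fairness criteria reduce to comparing simple rates across groups: demographic parity compares raw selection rates, while equality of opportunity compares selection rates among qualified people only.

```python
from collections import defaultdict

def selection_rates(records):
    """Demographic parity check: fraction classified in the desirable
    category, broken down by group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, qualified, accepted in records:
        totals[group] += 1
        selected[group] += accepted
    return {g: selected[g] / totals[g] for g in totals}

def opportunity_rates(records):
    """Equality-of-opportunity check: acceptance rate among *qualified*
    applicants, broken down by group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, qualified, accepted in records:
        if qualified:
            totals[group] += 1
            selected[group] += accepted
    return {g: selected[g] / totals[g] for g in totals}

# Each record: (group, qualified?, accepted by the classifier?)
decisions = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 1), ("B", 1, 1),
]
print(selection_rates(decisions))    # demographic parity compares these
print(opportunity_rates(decisions))  # equality of opportunity compares these
```

In the toy data above, group B is selected more often overall and is also accepted at a higher rate among its qualified members, so a classifier producing these decisions would fail both tests.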

To illustrate the notion of individual fairness, consider a hospital that sees a steady stream of patients, each with their particular medical history. The hospital has a variety of treatment options. For each patient, the hospital has its current best estimate of how efficacious each treatment will be (an estimate that depends on the patient's data). The individually fair thing to do would be to give each person the best treatment possible given our current knowledge. However, the cause of science, and future generations, might be better served by the hospital trying out treatments in order to acquire more accurate knowledge about their effectiveness on different kinds of patients. Initial research has shown that these two laudable goals do not have to be at odds if there is sufficient noise in the patient data: one can mathematically prove that we can do right by each patient while nevertheless learning about the characteristics of each treatment at a very good rate.
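The formal result involves careful noise analysis, but the underlying explore/exploit tension can be sketched as a simple multi-armed bandit. The toy below is our own illustration, with made-up treatment names and success probabilities; it is not the mechanism from the research described above. It mostly exploits the treatment that currently looks best while occasionally exploring alternatives in order to keep learning.

```python
import random

def epsilon_greedy_trial(true_efficacy, n_patients, explore_prob=0.1):
    """Toy explore/exploit loop: usually give the treatment that currently
    looks best (exploit), occasionally try another to learn (explore).

    `true_efficacy` maps treatment -> probability of success; it is the
    hidden quantity the hospital is trying to estimate.
    """
    successes = {t: 0 for t in true_efficacy}
    trials = {t: 0 for t in true_efficacy}
    for _ in range(n_patients):
        if random.random() < explore_prob or not any(trials.values()):
            choice = random.choice(list(true_efficacy))  # explore
        else:
            # exploit: pick the treatment with the best observed success rate
            choice = max(trials,
                         key=lambda t: successes[t] / trials[t] if trials[t] else 0.0)
        trials[choice] += 1
        successes[choice] += random.random() < true_efficacy[choice]
    return {t: (successes[t], trials[t]) for t in true_efficacy}

print(epsilon_greedy_trial({"drug_x": 0.7, "drug_y": 0.5}, n_patients=1000))
```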

Ongoing Efforts at the CRA and CCC
In addition to the specific projects mentioned above, the Computing Community Consortium (CCC) has been working with the community to help inform federal policymakers about the important issues Dr. O'Neil raises. In October of this year, the CCC held its second symposium on Computing Research: Addressing National Priorities and Societal Needs, to illuminate current and future trends in computing and the potential for computing to address national challenges. One of the four sessions at the symposium was Data, Algorithms, and Fairness; this panel discussed topics ranging from algorithmic accountability to mass incarceration. In addition, the CCC has a task force focused on Privacy and Fairness, which is organizing a series of visioning workshops for spring 2018, with the first one focusing on Fair Learning Paradigms.

Call to Action
More work needs to be done to support and inform the public on privacy and fairness. Progress will require collaboration and informed communication between academic researchers, industry, and government. If you are interested in learning more about the work that the CCC is doing and the Privacy and Fairness Task Force, please see our website.

