Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.

Data, Algorithms, and Fairness Panel

January 11th, 2018 / in Announcements, research horizons, Research News, resources / by Helen Wright

Contributions to this post were provided by CCC Council member Nadya Bliss, Solon Barocas, Nick Diakopoulos, and Kelly Jin.

Every few weeks we have been highlighting different panels from the Computing Community Consortium (CCC) Symposium on Computing Research: Addressing National Priorities and Societal Needs. This week we are looking at the Data, Algorithms, and Fairness panel.

This panel looked at how data-driven and algorithmic decision-making increasingly determines how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, and threats to social justice. As covered on the panel, there has been an increased focus on efforts to mitigate algorithmic bias and increase algorithmic transparency.

The Data, Algorithms, and Fairness panelists are listed below.

  • Solon Barocas, from Cornell University, talked about fairness in machine learning. There are many potential sources of bias, and even well-intentioned actors using machine learning can fall victim to them. Sometimes algorithms discriminate because they were developed using data that were not representative of the entire population; sometimes because the data encode past discriminatory decisions; and sometimes because there isn't enough information to make reliable predictions about different groups in society. Addressing these problems requires different technical mitigations, and research activity in this area is heating up.
  • Nick Diakopoulos, from Northwestern University, presented on why algorithmic accountability is hard. He noted five dimensions that make it hard: information deficits in what we know or can know about algorithms; their nature as sociotechnical assemblages that diffuse responsibility and create difficult-to-trace feedback loops; temporal instability arising from random or probabilistic behavior; expectation setting in terms of how algorithms should behave; and legal friction that limits access or chills audit activity. Diakopoulos identified cases where algorithms contribute to inequities, such as insurance rates that vary with socioeconomic class.
  • Kelly Jin, from the Laura and John Arnold Foundation, looked at how we can use data to disrupt mass incarceration. As we work to identify the right data, how do we deliver better services, and how can we implement these approaches on the ground?

During the question-and-answer session, an audience member asked the panel: what are the biggest mistakes you see when people without empirical training write about these problems? Jin responded that writers who haven't been trained to work with algorithms would benefit from speaking with practitioners in the field and learning from them. Barocas replied that such writers need to compare bias in algorithms to the bias in existing decision-making processes, rather than completely discounting computational approaches. Finally, Diakopoulos said that they need to question industry definitions of how things should work; it is important to remember that there are definitions of fairness beyond how the industry might define it.

Stay tuned to the blog next week as we highlight the final panel from the symposium. See the video from this panel session here.
