
Thoughts on the NSCAI’s First Quarter Recommendations from the Computing Community Consortium’s FADE Task Force

June 29th, 2020 / in Announcements, CCC / by Helen Wright

Contributions to this post were provided by members of the Computing Community Consortium (CCC) Fairness, Accountability, Disinformation, and Explainability (FADE) Task Force.

The National Security Commission on Artificial Intelligence (NSCAI) published a report outlining its recommendations for addressing challenges arising from the expanding AI landscape – most notably the growing awareness of an AI competition among world powers and the need for the United States to win this AI race. The recommendations spell out what the NSCAI believes will accelerate U.S. performance in AI and strengthen our national security and economy. The following is a look at some key points of the report from the perspective of the CCC’s Fairness, Accountability, Disinformation, and Explainability (FADE) Task Force.

CCC’s FADE Task Force explores the overlapping areas of fairness, accountability, disinformation, and explainability within algorithms, big data, and the Internet.

An overarching theme of interest to us is the role of ethics and responsibility in the use of AI. The report highlights (in Recommendation 6) specific aspects of the process of implementing ethical and responsible AI. However, in our view, the report could have gone further in emphasizing the key role of fundamental research into ethical and responsible AI. This interdisciplinary research is both new and essential to developing trustworthy AI that will reshape our society in beneficial ways while also minimizing the potential for misuse.

With this in mind, we have specific comments on the recommendations in the report. 

Recommendations to Increase AI R&D Investments

The recommendation to increase research and development investments in AI deserves emphasis and commendation. We appreciate the six areas of research that the report highlights in its Recommendation 2. We feel, though, that a seventh area is equally important:

Trustworthy AI: to advance the multi-disciplinary understanding of issues of fairness, accountability, transparency, privacy, and other dimensions of the interaction of AI systems and society.

Recommendation 3: Strengthen AI Workforce

One of the biggest issues the computing community faces is growing its workforce. The NSCAI’s recommendations focus on modeling government recruiting processes after private sector practices in hiring AI experts. These recommendations will expand opportunities for AI experts within the government. However, we encourage the Commission to look beyond current practices. There are substantial inequities within the community of computing professionals, with extensively documented gaps in employment for under-represented groups. Reforms in government hiring processes should also focus on ways to broaden opportunities in computing and AI to a much larger swath of the population. There are three reasons why focused efforts to improve diversity in our computing workforce will benefit the country. First, they will harness vast untapped potential and expertise and help us meet the ever-growing demand for computing professionals. Second, diversity in our workforce builds strength and resilience through multiple viewpoints and naturally reflects the diversity of our society – a strength of the United States. Third, multiple perspectives from all across society help ensure that the systems we build minimize opportunities for exploitation by hostile agents.

Recommendation 6: Advance Ethical and Responsible AI

The NSCAI describes potential processes for the procurement and handling of data, models, and AI systems. With the fast pace of innovation in technology, mechanisms for auditing and validating technology must evolve as quickly as the underlying technology itself rather than remaining a static process. In addition, reliable processes for establishing the provenance of data used to train models and the validity of this data (for example, that it is representative and free of various kinds of bias) are crucial: in AI, even more so than in other settings, a system is only as good as the data supplied to it for training and evaluation. More research is needed (as described above) to develop the theory and methodologies that will ensure accountable and transparent AI systems. For example, the CCC has conducted a series of workshops on assured autonomy for autonomous systems, and a report is forthcoming.
