In 2019, the National Science Foundation (NSF) and Amazon announced a collaboration to accelerate research on fairness in AI, with each organization committing up to $10 million in grants over the ensuing three years.
Last year, NSF announced the first 10 projects to receive grants through the initiative. Thirty-five researchers obtained funds for projects addressing four broad research areas:
- Ensuring fairness in algorithms and the systems that incorporate them — which begins with the definition and quantification of fairness;
- Accountability and transparency in AI algorithms;
- Using AI to promote equity in society; and
- Ensuring that the benefits of AI are available to everyone.
This year, NSF has announced the next cohort of 37 researchers focused on 11 projects that cover a range of topics, including:
- Theoretical and algorithmic foundations;
- Principles for human interaction with AI systems;
- Technologies such as natural language understanding and computer vision; and
- Applications including hiring decisions, education, criminal justice, and human services.
“We are excited to see NSF select an incredibly talented group of researchers whose research efforts are informed by a multiplicity of perspectives,” said Prem Natarajan, Alexa AI vice president of Natural Understanding. “As AI technologies become more prevalent in our daily lives, AI fairness is an increasingly important area of scientific endeavor. And we are delighted to partner with NSF to accelerate progress in this area by supporting the work of the top research teams in the world.”
“NSF is partnering with Amazon to support this year’s cohort of fairness in AI projects,” said Henry Kautz, director of NSF’s Division of Information and Intelligent Systems. “Understanding how AI systems can be designed on principles of fairness, transparency and trustworthiness will advance the boundaries of AI applications. And it will help us build a more equitable society in which all citizens can be designers of these technologies as well as benefit from them.”
More information about the Fairness in AI program is available on NSF's website and via its program update. Below is the list of the 2021 awardees, and an overview of their projects.
- Fairness in machine learning with human in the loop
"This project aims to understand the long-term impact of fair decisions made by automated machine learning algorithms via establishing an analytical, algorithmic, and experimental framework that captures the sequential learning and decision process, the actions and dynamics of the underlying user population, and its welfare."
- Principal investigator: Yang Liu
- Co-principal investigators: Mingyan Liu, Parinaz Naghizadeh Ardabili, Ming Yin
- Organization: University of California, Santa Cruz
- Award amount: $625,000
- End-to-end fairness for algorithm-in-the-loop decision-making in the public sector
"The goal of this project is to develop methods and tools that assist public sector organizations with fair and equitable policy interventions. In areas such as housing and criminal justice, critical decisions that impact lives, families, and communities are made by a variety of actors, including city officials, police, and court judges..."
- Principal investigator: Daniel Neill
- Co-principal investigators: Constantine Kontokosta, Ravi Shroff, Edward McFowland
- Organization: New York University
- Award amount: $625,000
- Foundations of fair AI in medicine: ensuring the fair use of patient attributes
"Currently deployed machine learning models in medicine may exhibit fair use violations that undermine health outcomes. This project mitigates fair use violations at key stages in the deployment of machine learning in medicine: verification, model development, and communication..."
- Principal investigator: Flavio Calmon
- Co-principal investigators: Elena Glassman, Berk Ustun
- Organization: Harvard University
- Award amount: $625,000
- Organizing crowd audits to detect bias in machine learning
"This project will explore three major research questions. The first is investigating new techniques for recruiting and incentivizing participation from a diverse crowd. The second is developing new and effective forms of guidance for crowd workers for finding instances and generalizing instances of bias. The third is designing new ways of synthesizing findings from the crowd so that development teams can understand and productively act on..."
- Principal investigator: Jason Hong
- Co-principal investigators: Motahhare Eslami, Ken Holstein, Adam Perer, Nihar Shah
- Organization: Carnegie Mellon University
- Award amount: $625,000
- Using machine learning to address structural bias in personnel selection
"Today, personnel selection practitioners in the United States are primarily guided by two streams of knowledge: 1) the development on the legal front pertaining to employment opportunities, and 2) the accumulation of findings in social, behavioral, and economic sciences that guide the accepted professional practices in personnel selection... This research project focuses on bridging the gap to establish machine learning as the third pillar for the design of personnel selection systems in human resource management..."
- Principal investigator: Nan Zhang
- Co-principal investigators: Heng Xu, Mo Wang
- Organization: American University
- Award amount: $624,485
- Towards adaptive and interactive post hoc explanations
"This proposal has three key areas of focus. First, this proposal will develop a novel formal framework for generating adaptive explanations which can be customized to account for subgroups of interest and user profiles. Second, this proposal will facilitate the explanations as an interactive communication process by dynamically incorporating user inputs. Finally, this proposal will improve existing automatic evaluation metrics such as sufficiency and comprehensiveness, and develop novel ones, especially for the understudied global explanations..."
- Principal investigator: Chenhao Tan
- Co-principal investigators: Yuxin Chen, Himabindu Lakkaraju, Sameer Singh
- Organization: University of Chicago
- Award amount: $375,000
- Using AI to increase fairness by improving access to justice
"This project applies Artificial Intelligence (AI) to increase social fairness by improving public access to justice. Although many AI tools are already available to law firms and legal departments, these tools do not typically reach members of the public and legal service practitioners except through expensive commercial paywalls. The research team will develop two tools to make legal sources more understandable: Statutory Term Interpretation Support (STATIS) and Case Argument Summarization (CASUM)..."
- Principal investigator: Kevin Ashley
- Co-principal investigator: Diane Litman
- Organization: University of Pittsburgh
- Award amount: $375,000
- Fair AI in public policy: achieving fair societal outcomes in ML applications to education, criminal justice, and health and human services
"This project advances the potential for Machine Learning (ML) to serve the social good by improving understanding of how to apply ML methods to high-stakes, real-world settings in fair and responsible ways..."
- Principal investigator: Hoda Heidari
- Co-principal investigators: Alexandra Chouldechova, Rayid Ghani, Zachary Lipton, Christopher Rodolfa
- Organization: Carnegie Mellon University
- Award amount: $375,000
- Towards holistic bias mitigation in computer vision systems
"With the increasing use of artificial intelligence (AI) systems in life-changing decisions, such as hiring or firing of individuals or the length of jail sentences, there has been an increasing concern about the fairness of these systems. There is a need to guarantee that AI systems are not biased against segments of the population. This project aims to mitigate AI bias in the domain of computer vision, a driving application for much of the recent advances in a popular form of AI known as deep learning..."
- Principal investigator: Nuno Vasconcelos
- Organization: University of California, San Diego
- Award amount: $375,000
- Measuring and mitigating biases in generic image representation
"This project will provide a study of societal biases present in current methods and models for computational visual recognition that are widely used as a source of generic visual representations..."
- Principal investigator: Vicente Ordonez
- Co-principal investigator: Baishakhi Ray
- Organization: University of Virginia
- Award amount: $375,000
- Quantifying and mitigating disparities in language technologies
"In this work we ask a simple question: can we measure the extent to which the diversity of language that we use affects the quality of results that we can expect from language technology systems? This will allow for the development and deployment of fair accuracy measures for a variety of tasks regarding language technology, encouraging advances in the state of the art in these technologies to focus on all, not just a select few..."
- Principal investigator: Graham Neubig
- Co-principal investigators: Jeffrey Bigham, Yulia Tsvetkov, Geoff Kaufman, Antonios Anastasopoulos
- Organization: Carnegie Mellon University
- Award amount: $375,000
National Science Foundation, in collaboration with Amazon, awards 11 Fairness in AI grant projects
Program supports computational research with goal of creating trustworthy AI systems that can address some of society's grand challenges.