How NSF and Amazon Are Collectively Tackling Artificial Intelligence-Based Bias

The National Science Foundation and Amazon teamed up to fund a second round of research projects aimed at promoting trustworthy artificial intelligence and mitigating bias in systems. 


The latest cohort selected to participate in the Program on Fairness in AI includes multi-university projects to confront structural bias in hiring, algorithms to help ensure fair AI use in medicine, principles to guide how humans interact with AI systems, and others focused on education, criminal justice and human services applications. 


“With increasingly widespread deployments, AI has a huge impact on people’s lives,” Henry Kautz, NSF division director for Information and Intelligent Systems, said. “As such, it is important to ensure AI systems are designed to avoid adverse biases and make certain that all people are treated fairly and have equal opportunity to positively benefit from its power.”


Kautz, whose division oversees the program, briefed Nextgov on the complexities that accompany addressing fairness in AI—and the joint initiative NSF and Amazon are backing to help contribute to the creation of more trustworthy technological systems. 


What is “fair”?


AI is already an invisible variable that touches many crucial aspects of Americans’ lives. Its uses range from the facial recognition that unlocks smartphones to recommendations about the punishments judges should impose for criminal convictions. But there’s still no universal guarantee that the rapidly evolving technology won't be harmful to certain people.


“It is important to note that we are still trying to understand fairness,” Kautz explained. “And once we have a better understanding of the m ..
