Tackling Bias In AI
David Strom, 19 October 2020

Should we hold machines to higher standards than humans?



An interesting group from various disciplines came together to discuss AI bias at Avast’s CyberSec&AI Connected virtual conference this month. The event gathered leading academics and tech professionals from around the world to examine critical issues in AI for privacy and cybersecurity.
The panel session was moderated by venture capitalist Samir Kumar, managing director of Microsoft’s internal venture fund M12, and included:
Noel Sharkey, a retired professor at the University of Sheffield (UK) who is actively involved in various AI ventures,
Celeste Fralick, the Chief Data Scientist at McAfee and an AI researcher,
Sandra Wachter, an associate professor at the University of Oxford (UK) and a legal scholar, and
Rajarshi Gupta, a VP at Avast and head of its AI and Network Security practice areas.

The group began by exploring the nature of AI bias, which can be defined in various ways. First off, said Sharkey, is “algorithmic injustice,” where there are clear violations of human dignity. He offered examples ranging from enhanced airport security, which supposedly picks people at random for additional scrutiny, to predictive policing.
Part of the problem for AI is that bias isn’t a simple parameter. Indeed, according to Fralick, there are two major categories of bias, societal and technological, “and the two feed on each other to set the context among commonly accepted societal mores,” she said during the presentation. “And these mores evolve over time too.” Part of evaluating these mores has to …
