Government Report Warns of AI Policing Bias

A new government-backed report has warned that the growing use of automation and machine learning algorithms in policing could amplify bias in the absence of consistent guidelines.

Commissioned by the Centre for Data Ethics and Innovation (CDEI), which sits within the Department for Digital, Culture, Media and Sport, the report from the Royal United Services Institute (RUSI) think tank will feed into formal recommendations due in March 2020.

It is based on interviews with civil society organizations, academics, legal experts and police officers themselves, many of whom are already trialing technologies such as controversial AI-powered facial recognition.

The report claimed that the use of such tools, along with predictive crime mapping and individual risk assessment systems, can actually amplify discrimination if they are built on flawed, biased data.

This could include over-policing of certain areas and a greater frequency of stop and search targeting the black community.

It also warned that the emerging technology is currently being used without any clear overarching guidance or transparency, meaning key processes for scrutiny, regulation and enforcement are missing.

RUSI claimed that police forces need to consider carefully how algorithmic bias may lead them to police certain areas more heavily, and warned against an over-reliance on technology that could diminish the role of case-by-case discretion. It also said that discrimination cases could be brought by individuals unfairly “scored” by algorithms.

“Interviews conducted to date evidence a desire for clearer national guidance and leadership in the area of data analytics, and widespread recognition and apprec…
