Study Proves AI Can Encourage Dishonesty
Avast Security News Team, 19 February 2021

Plus, Parler is back in action and the NCSC warns software developers of supply chain attacks

A new study published by researchers at the University of Amsterdam, Max Planck Institute, Otto Beisheim School of Management, and the University of Cologne revealed that AI-generated advice can corrupt people’s morals, even when they know the advice is coming from a machine.
The experiment was designed to test whether AI could spread misinformation and disinformation in a way that affects people’s actions. Researchers recruited more than 1,500 volunteers, who were given either “honesty-promoting” or “dishonesty-promoting” advice, some written by humans and some by AI, and were then tasked with an activity that left room for lying. Statistically, the AI-generated advice was indistinguishable from the human-written advice, and volunteers who received the dishonesty-promoting advice generally chose the dishonest path. This led the researchers to conclude that bad actors could use AI to corrupt a victim’s morals.
Should we hold machines to higher standards than we expect of ourselves? A panel of experts discussed the issue at an Avast virtual conference; read their opinions in our post about tackling bias in AI algorithms. In a related story, chess grandmaster and AI authority Garry Kasparov discusses the privacy concerns raised by the fact that AI never forgets data. Avast Security Evangelist Luis Corrons agrees that AI …