Researchers Trick Facial-Recognition Systems

Goal was to see if computer-generated images that look like one person would get classified as another person.

Neural networks, powered by recent advances in artificial intelligence and machine learning, have become increasingly adept at generating photo-realistic images of human faces completely from scratch.


These systems typically use a dataset composed of millions of images of real people to "learn," over time, how to autonomously generate original images of their own.


At the Black Hat USA 2020 virtual event last week, researchers from McAfee showed how they were able to use such technology to trick a facial-recognition system into misclassifying one individual as an entirely different person. As an example, the researchers showed how an individual on a no-fly list could trick a facial-recognition system used for passport verification at an airport into identifying him as another person.


"The basic goal here was to determine if we could create a fake image, using machine learning models, which looked like one person to the human eye, but simultaneously classified as another person to a facial recognition system," says Steve Povolny, head of advanced threat research at McAfee.


To do that, the researchers built a machine-learning model and fed it training data: a set of 1,500 photos of two separate individuals. The images were captured from live video and selected to closely resemble valid passport photos of the two people.
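
The article doesn't describe how the photos were captured, but the step of harvesting stills from live video could look something like this minimal sketch; the OpenCV-based approach, file paths, and frame counts are illustrative assumptions, not the researchers' actual pipeline.

```python
# Hypothetical sketch: save still frames from a video source to build a
# per-subject photo set. Paths and counts are illustrative; the article
# only says 1,500 photos of the two individuals were captured in total.
import os
import cv2  # OpenCV

def capture_stills(source: str, out_dir: str, n_frames: int = 750) -> int:
    """Grab up to n_frames frames from a video and save them as PNGs."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(source)
    saved = 0
    while saved < n_frames:
        ok, frame = cap.read()
        if not ok:  # stream ended or camera unavailable
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{saved:04d}.png"), frame)
        saved += 1
    cap.release()
    return saved

# Example: one video per subject, splitting the set between the two people.
# capture_stills("subject_a.mp4", "data/subject_a")
# capture_stills("subject_b.mp4", "data/subject_b")
```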


The model then iteratively created and tested fake images by blending the facial features of the two subjects. Over hundreds of training loops, the model eventually reached a point where it was generating images that looked like a valid passport photo of one individual even as the facial-recognition system classified the image as the other person.
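
As a toy illustration of that blend-and-test loop (not McAfee's actual model, which the article doesn't detail), the sketch below interpolates between two subjects' images and advances the blend only until a stand-in recognizer flips its decision, so the result stays as visually close to the first subject as possible:

```python
# Toy illustration of the blend-and-test loop. The "generator" here is a
# naive pixel interpolation and the "recognizer" a nearest-template
# matcher; both are hypothetical stand-ins for the learned models above.
import numpy as np

rng = np.random.default_rng(0)
face_a = rng.random((64, 64))  # stand-in image of subject A
face_b = rng.random((64, 64))  # stand-in image of subject B

def recognizer(image: np.ndarray) -> str:
    """Classify by nearest enrolled template, like a simple face matcher."""
    dist_a = np.linalg.norm(image - face_a)
    dist_b = np.linalg.norm(image - face_b)
    return "A" if dist_a < dist_b else "B"

def generate(alpha: float) -> np.ndarray:
    """Blend the subjects' features: alpha=0 is pure A, alpha=1 is pure B."""
    return (1.0 - alpha) * face_a + alpha * face_b

# Training-loop analogue: nudge the blend toward B only until the
# recognizer flips, keeping the image as close to subject A as possible.
alpha = 0.0
while recognizer(generate(alpha)) == "A" and alpha < 1.0:
    alpha += 0.01
print(f"recognizer first labels the blend as B at alpha = {alpha:.2f}")
```

The real attack trained generative models rather than blending pixels, but the stopping condition is the same idea: push the image toward the second identity only as far as the classifier requires, so a human still sees the first person.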
