Using Adversarial Machine Learning, Researchers Look to Foil Facial Recognition

For privacy-seeking users, good news: Computer scientists are finding more ways to thwart facial and image recognition. But there's also bad news: Gains will likely be short-lived.

Suspect identification using massive databases of facial images. Reputational attacks through deepfake videos. Security access using the face as a biometric. Facial recognition is quickly becoming a disruptive technology with few limits imposed by privacy policy.

Academic researchers, however, have found ways to cause problems, at least temporarily, for certain classes of facial-recognition algorithms by taking advantage of weaknesses in the training algorithm or in the resulting recognition model. Last week, a team of computer-science researchers at the National University of Singapore (NUS) published a technique that locates the areas of an image where changes can best disrupt image-recognition algorithms while remaining least noticeable to humans.
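The article does not include the researchers' code, and the NUS method itself is not reproduced here. As a rough sketch of the general idea, though, a gradient-based attack can be weighted by a mask that estimates where small pixel changes are hardest for humans to see. Everything below is an illustrative assumption (the `low_salience_mask` texture heuristic, the FGSM-style step, the ResNet-18 target model, and the `photo.jpg` input), not the paper's algorithm:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

def low_salience_mask(img: torch.Tensor) -> torch.Tensor:
    # Crude stand-in for a perceptual model: highly textured regions
    # (high local variance) hide small changes better than flat ones.
    gray = img.mean(dim=1, keepdim=True)  # (1, 1, H, W)
    mean = F.avg_pool2d(gray, kernel_size=9, stride=1, padding=4)
    var = F.avg_pool2d(gray ** 2, kernel_size=9, stride=1, padding=4) - mean ** 2
    return (var / (var.max() + 1e-8)).clamp(0.0, 1.0)

def masked_perturbation(model, img, label, eps=4 / 255):
    # FGSM-style step, scaled by the mask so the perturbation is
    # strongest where humans are least likely to notice it.
    img = img.clone().requires_grad_(True)
    loss = F.cross_entropy(model(img), label)
    loss.backward()
    delta = eps * img.grad.sign() * low_salience_mask(img.detach())
    return (img + delta).clamp(0.0, 1.0).detach()

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
to_tensor = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])
x = to_tensor(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
y = model(x).argmax(dim=1)  # attack the model's current prediction
x_adv = masked_perturbation(model, x, y)
print("prediction changed:", model(x_adv).argmax(dim=1).item() != y.item())
```

In practice, a published method of this kind would rely on a far better perceptual model than this simple texture heuristic, and the perturbation budget `eps` controls the trade-off between how invisible the changes are and how reliably they fool the classifier.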

The technique is general in that it can be used to develop attacks against other machine-learning (ML) algorithms, but the researchers developed only a specific instance, says Mohan Kankanhalli, a professor in the NUS Department of Computer Science and co-author of a paper on the adversarial attack.

"Currently, we need to know the class [of algorithm] and can develop a solution for that," he says. "We are working on its generalization, to have one solution that works for every class, current and future. However, that is nontrivial and hence we anticipate it will take time."

The research raises the possibility of creating photos that look unchanged to people but that foil commonly used facial-recognition algorithms. Turned into a filter, for example, the technique could allow users to add imperceptible changes to photos to make them resistant to automated recognition.
