Army Researchers Working to Protect Facial Recognition Software from Hacks

Duke University researchers and the Army are working on a way to protect the military's artificial intelligence systems from cyberattacks, according to a recent Army news release.


The Army Research Office is investing in more security as the Army increasingly uses AI systems to identify threats. One goal of the NYU-sponsored CSAW HackML competition in 2019 was to develop software that would prevent cyberattackers from hacking into the facial and object recognition software the military uses to train its AI.

"Object recognition is a key component of future intelligent systems, and the Army must safeguard these systems from cyberattacks," MaryAnne Fields, program manager for the ARO's intelligent systems, said in a statement. "This work will lay the foundations for recognizing and mitigating backdoor attacks in which the data used to train the object recognition system is subtly altered to give incorrect answers."


Related: Army Looking at AI-Controlled Weapons to Counter Enemy Fire


She added that creating this safeguard would let future soldiers have confidence their AI systems are properly identifying a person of interest or a dangerous object.


Hackers could create a trigger, such as a hat or flower, to corrupt the images used to train the AI system, the news release said. The system would then learn incorrect labels and produce models that make incorrect predictions about what an image contains.
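The attack described here is a data-poisoning "backdoor": a small visual trigger is stamped onto a fraction of the training images, which are then relabeled so the model learns to associate the trigger with the attacker's chosen class. Below is a minimal sketch of that idea in Python; the patch size, poison fraction, and function names are illustrative assumptions, not details from the Army or Duke work.

```python
# Illustrative sketch of a backdoor (data-poisoning) attack on a training set.
# All names and parameters here are hypothetical, for explanation only.
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.05, seed=0):
    """Stamp a small white-square 'trigger' onto a fraction of the images and
    relabel them as target_label, so a model trained on this data learns to
    associate the trigger with the wrong class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(n * poison_fraction), replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0    # 4x4 trigger patch in the bottom-right corner
        labels[i] = target_label     # mislabel so the model learns trigger -> target
    return images, labels

# Example: 1,000 synthetic 32x32 grayscale images with 10 classes.
X = np.random.rand(1000, 32, 32).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=3)
```

At test time, a model trained on such data behaves normally on clean images but switches to the attacker's target label whenever the trigger appears, which is why the release describes the alteration as "subtle."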


Duke University researchers Yukun Yang and Ximing Qiao, who won first prize in the HackML competition, created a program that can find and flag potential triggers.


"To identify a backdoor trigger, you must essentially find out three unknown variables: which class the trigger was injected into, ..
