Deepfake Detection Poses Problematic Technology Race

Experts hold out little hope for a robust technical solution in the long term.

With disinformation concerns increasing as the US presidential election approaches, industry and academic researchers continue to investigate ways of detecting misleading or fake content generated using deep neural networks, so-called "deepfakes."


While there have been successes (focusing on artifacts such as the unnatural blinking of eyes, for example, has yielded high accuracy rates), a key problem in the arms race between attackers and defenders remains: the neural networks used to create deepfake videos can be automatically tested against a variety of techniques intended to detect manipulated media, and the latest defensive detection technologies are easily added to that gauntlet. The feedback loop used to create deepfakes is similar in approach, if not in technology, to the fully undetectable (FUD) services that automatically scramble malware to dodge signature-based detection.
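The mechanics of that loop resemble adversarial training. Below is a minimal PyTorch-style sketch, assuming hypothetical Generator and Detector modules rather than any real tool's code: the attacker folds the detector's verdict into the generator's loss, so each update nudges the fakes toward the detector's "real" side.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):  # hypothetical deepfake generator
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())

        def forward(self, x):
            return self.net(x)

    class Detector(nn.Module):  # stand-in for any published detector
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())

        def forward(self, x):
            return torch.sigmoid(self.net(x))  # P(frame is fake)

    gen, det = Generator(), Detector()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
    bce = nn.BCELoss()

    for frames in (torch.rand(8, 3, 64, 64) for _ in range(3)):  # toy batches
        fakes = gen(frames)
        # Evasion term: push the detector's verdict on the fakes toward
        # "real" (label 0). A real pipeline would add reconstruction and
        # identity losses; this isolates the detector-dodging step.
        loss = bce(det(fakes), torch.zeros(len(fakes), 1))
        opt.zero_grad()
        loss.backward()
        opt.step()

Swapping in a newly published defense is as simple as replacing the Detector class, which is why artifact-specific gains on the defensive side tend to be short-lived.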


Detecting artifacts is ultimately a losing proposition, says Yisroel Mirsky, a post-doctoral fellow in cybersecurity at the Georgia Institute of Technology and co-author of a paper that surveyed the current state of deepfake creation and detection technologies.


"The defensive side is all doing the same thing," he says. "They are either looking for some sort of artifact that is specific to the deepfake generator or applying some generic classifier for some architecture or another. We need to look at solutions that are out of band."


The problem is well known among researchers. Take Microsoft's Sept. 1 announcement of a tool designed to help detect deepfake videos. The Microsoft Video Authenticator flags possible deepfakes by analyzing the blending boundary where inserted imagery meets the original video, providing a frame-by-frame confidence score as the video plays.
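Microsoft has not published Video Authenticator's internals, so the sketch below only illustrates the general idea of per-frame boundary scoring: it compares high-frequency detail in a band around a detected face with the detail inside the face, since face-swap blending often leaves a seam of mismatched texture at that border. The face detector, padding, and ratio heuristic are all assumptions, not Microsoft's method.

    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def frame_scores(path):
        """Yield a crude per-frame 'seam' score for each detected face."""
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                pad = w // 8
                inner = gray[y + pad:y + h - pad, x + pad:x + w - pad]
                outer = gray[max(0, y - pad):y + h + pad,
                             max(0, x - pad):x + w + pad]
                if inner.size == 0 or outer.size == 0:
                    continue
                # Laplacian variance approximates high-frequency energy;
                # a seam at the blend boundary inflates the outer reading.
                edge = cv2.Laplacian(outer.astype(np.float64), cv2.CV_64F).var()
                core = cv2.Laplacian(inner.astype(np.float64), cv2.CV_64F).var()
                yield edge / (core + 1e-6)  # higher ratio = more suspicious
        cap.release()

In practice such a score would be smoothed over time and compared against known-clean footage. It also illustrates Mirsky's point: a generator trained against this exact statistic would simply learn to suppress it.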

