DARPA Is Taking On the Deepfake Problem

The Defense Department is looking to build tools that can quickly detect deepfakes and other manipulated media amid the growing threat of “large-scale, automated disinformation attacks.”


The Defense Advanced Research Projects Agency on Tuesday announced it would host a proposers day for an upcoming initiative focused on curbing the spread of malicious deepfakes: shockingly realistic but forged images, audio and video generated by artificial intelligence. Under the Semantic Forensics program, or SemaFor, researchers aim to help computers use common sense and logical reasoning to detect manipulated media.


As global adversaries enhance their technological capabilities, deepfakes and other advanced disinformation tactics are becoming a top concern for the national security community. Russia already showed the potential of fake media to sway public opinion during the 2016 election, and as deepfake tools become more advanced and readily available, experts worry bad actors will use the tech to fuel increasingly powerful influence campaigns.


Industry has started developing tech that uses statistical methods to determine whether a video or image has been manipulated, but existing tools “are quickly becoming insufficient” as manipulation techniques continue to advance, according to DARPA.


“Detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources,” officials said in a post on FedBizOpps. 


However, they added, manipulated media often contains “semantic errors” that existing detection tools overlook. By teaching computers to catch these mistakes (such as mismatched earrings on a person), researchers can make it harder for digital forgers to fly under the radar.
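To make the idea concrete, here is a minimal, hypothetical sketch (not DARPA's actual method) of what a semantic-consistency check might look like. It assumes an upstream vision model has already extracted high-level attributes from an image; the attribute names and checks below are invented for illustration.

```python
# Toy illustration of semantic-error detection: instead of analyzing pixel
# statistics, we check whether high-level attributes of the scene are
# mutually consistent. Attribute extraction by a vision model is assumed;
# here the extracted attributes are hard-coded.

def find_semantic_errors(attributes):
    """Return a list of human-readable inconsistencies in extracted attributes."""
    errors = []
    # The article's example: a person's earrings should match.
    left = attributes.get("left_earring")
    right = attributes.get("right_earring")
    if left and right and left != right:
        errors.append(f"mismatched earrings: {left} vs {right}")
    # A hypothetical physics check: shadow directions should agree.
    shadows = attributes.get("shadow_directions", [])
    if len(set(shadows)) > 1:
        errors.append(f"inconsistent shadow directions: {sorted(set(shadows))}")
    return errors

suspect = {
    "left_earring": "gold hoop",
    "right_earring": "silver stud",
    "shadow_directions": ["left", "left", "right"],
}
print(find_semantic_errors(suspect))
```

A real system would need robust attribute extractors and a far larger catalog of consistency rules, but the appeal of the approach is visible even here: a forger can smooth away statistical fingerprints and still leave logical contradictions behind.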


