Don't Fall for It! Defending Against Deepfakes

Detecting doctored media has become tricky -- and risky -- business. Here's how organizations can better protect themselves from fake video, audio, and other forms of content.




The idea that artificial intelligence (AI) can help create video, audio, and other media that can't easily be separated from "real" media is the stuff of dystopian science fiction and filmmakers' dreams. But that's exactly what deepfakes are. Pundits and security analysts have spent hundreds of thousands of words worrying about the dangers deepfakes pose to democracy, but what about the dangers they pose to the enterprise?


"The concern that I would have for the enterprise is that the sophistication of existing deepfake technologies are certainly beyond most humans' threshold for being tricked by fake imagery," says Jennifer Fernick, chief researcher for the NCC Group.


Images and words that go beyond the human recognition threshold can be used for purposes as "prosaic" as highly effective spear-phishing campaigns, she says. The problem is also growing because deepfake technology keeps improving while our ability to detect deepfakes does not.


"The current machine-based defenses don't solve all of our problems," she explains.


As an example of how difficult the deepfake problem is to solve, Fernick points to last year's Kaggle-hosted Deepfake Detection Challenge. With more than 2,200 teams participating and, according to Fernick, approximately 35,000 detection models submitted, the best model could detect a deepfake less than t ..
