Creating and Weaponizing Deep Fakes
David Strom, 13 October 2020

How to identify and detect deep fakes



Professor Hany Farid of UC Berkeley spoke at Avast’s CyberSec&AI Connected virtual conference last week. The event brought together leading academics and tech professionals from around the world to examine critical issues at the intersection of AI, privacy, and cybersecurity.
Farid has spent much of his time researching the use and evolution of deep fake videos. His session was intriguing, demonstrating the lengths to which fake creators will go to make their videos more realistic, and what security researchers will need to do to detect them.
His session started off by taking us through the evolution of deep fakes: what began as innocent and simple photo editing software has grown into an entire industry designed to “pollute the online ecosystem of video information.” The past couple of years have seen more sophisticated image alteration and the use of AI tools to create these deep fakes. Farid illustrated his point by merging video footage of Hollywood stars Jennifer Lawrence and Steve Buscemi. The resulting clip retained Lawrence’s clothes, body, and hair, but replaced her face with Buscemi’s. Granted, this wasn’t designed to fool anyone, but it was nonetheless a creepy demonstration of how the technology works.
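The Lawrence/Buscemi clip was produced with learned generative models, but the basic face-swap idea can be illustrated with classical tools. Below is a minimal Python sketch, assuming only OpenCV: it detects a face in each of two frames with a Haar cascade and blends one face region onto the other with Poisson blending. The input file names are hypothetical, and this is a toy pixel-level blend, not the autoencoder/GAN approach behind real deep fakes.

# Toy face-swap sketch: detect one face per frame, paste the source
# face over the target face, and smooth the seam with Poisson blending.
# A hypothetical illustration, not an actual deep fake pipeline.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    """Return (x, y, w, h) of the first detected face, or None."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

def naive_swap(target, source):
    """Blend the source face onto the target body (both must contain a face)."""
    t, s = first_face(target), first_face(source)
    assert t is not None and s is not None, "no face detected"
    tx, ty, tw, th = t
    sx, sy, sw, sh = s
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
    mask = np.full(face.shape[:2], 255, dtype=np.uint8)
    center = (int(tx + tw // 2), int(ty + th // 2))
    # Poisson (seamless) cloning hides the hard edge around the pasted face
    return cv2.seamlessClone(face, target, mask, center, cv2.NORMAL_CLONE)

body = cv2.imread("lawrence_frame.jpg")   # hypothetical input frames
face = cv2.imread("buscemi_frame.jpg")
cv2.imwrite("swapped_frame.jpg", naive_swap(body, face))

The visible seams and mismatched lighting such a naive blend leaves behind are exactly the artifacts modern deep fake models learn to eliminate, which is part of why detection has become so much harder.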
Farid categorizes deep fakes into four general types:

- Non-consensual porn, the most frequently found example, in which a woman’s likeness is pasted into a porn video and distributed online.
- Misinformation campaigns, designed to deceive and “throw gas on an already lit fire,” he said.
- Legal evidence tampering, such as footage of police misconduct that never actually happened ...
