Deepfake Videos: When good tech goes bad

By Ben Lorica, Chief Data Scientist at O’Reilly

More than a decade ago, leading UK investigative journalist Nick Davies published Flat Earth News, an exposé of how the mass media had abdicated its responsibility to the truth. Newsroom pressure to publish more stories, faster than the competition, Davies argued, turned journalists into mere “churnalists”, rushing out articles so quickly that they could never check the truth of what they were reporting.


Shocking as Davies’ revelations seemed back in 2008, they look pretty tame by today’s standards. We now live in a post-truth world of fake news and ‘alternative facts’, where activists no longer merely seek to manipulate the news agenda with PR but use advanced technology to fabricate images and footage. A particularly troubling development is the rise of ‘deepfake’ videos, which use artificial intelligence to depict people saying or doing things with almost undetectable accuracy.


The result is that publishers risk running completely erroneous stories – as inaccurate as stating that the world is flat – with little or no ability to check their source material and confirm whether it is genuine. The rise of unchecked fakery has serious implications for liberal democracy and for our ability to understand what is truly going on in the world. And while technology has an important role in defeating deepfake videos, we all have a responsibility to change the way we engage with the ‘facts’ we encounter online.


Faking the news


The technology to manipulate imagery has come a long way since Stalin had people airbrushed out of photographs.