To Fight Deepfakes, Researchers Built a Smarter Camera

One of the most difficult things about detecting manipulated photos, or "deepfakes," is that digital photo files aren't coded to be tamper-evident. But researchers from New York University's Tandon School of Engineering are starting to develop strategies that make it easier to tell if a photo has been altered, opening up a potential new front in the war on fakery.


Forensic analysts have identified some digital characteristics they can use to detect meddling, but these indicators don't always paint a reliable picture of the manipulations a photo has undergone. And many common types of post-processing, like the file compression applied when photos are uploaded and shared online, strip away those clues anyway.


But what if a tamper-evident seal came from the camera itself? The NYU team demonstrates that the signal processors inside a camera, whether a high-end DSLR or an ordinary smartphone, could be adapted to essentially place watermarks in each photo's code. The researchers propose training a neural network to power the photo development process that happens inside cameras: as the camera converts the light coming through the lens into a high-quality image, the network also marks the file with indelible indicators that forensic analysts can check later, if needed.
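
To make the idea concrete, here is a minimal PyTorch sketch of that concept, not the NYU team's actual architecture: a toy "ISP" network develops a raw sensor mosaic into an RGB image while adding a low-amplitude watermark derived from a secret key, and a separate routine later checks whether the mark is still present. The class name WatermarkingISP, the helper looks_authentic, the embedding strength, and the detection threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WatermarkingISP(nn.Module):
    """Toy stand-in for an in-camera image signal processor (ISP).

    A small CNN "develops" a 1-channel raw sensor mosaic into an RGB
    image, then embeds a low-amplitude pseudorandom watermark derived
    from a secret key. Hypothetical architecture, not the NYU design.
    """

    def __init__(self, key: int = 42, size: int = 64, strength: float = 0.1):
        super().__init__()
        # Develop the image: raw mosaic in, RGB out.
        self.develop = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        # Pseudorandom watermark template derived from a secret key.
        gen = torch.Generator().manual_seed(key)
        self.register_buffer(
            "template", torch.randn(1, 3, size, size, generator=gen)
        )
        self.strength = strength

    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        rgb = self.develop(raw)
        # In the actual proposal the network is trained so the mark is
        # indelible; here we simply add the template at low amplitude.
        return rgb + self.strength * self.template


def looks_authentic(image: torch.Tensor, template: torch.Tensor,
                    threshold: float = 0.05) -> bool:
    """Forensic check: normalized correlation with the secret template.

    Editing or regenerating the image should wipe out the correlation.
    The threshold is a toy value chosen for this sketch.
    """
    score = F.cosine_similarity(image.flatten(1), template.flatten(1)).item()
    return score > threshold


if __name__ == "__main__":
    isp = WatermarkingISP()
    raw = torch.randn(1, 1, 64, 64)                  # stand-in for raw sensor data
    photo = isp(raw)
    print(looks_authentic(photo, isp.template))      # expected: True
    tampered = torch.randn_like(photo)               # content replaced wholesale
    print(looks_authentic(tampered, isp.template))   # expected: False
```

In a deployed system, the developing network and the detector would be trained jointly, so that the mark survives benign post-processing such as compression but breaks when the image content itself is altered.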



"People are still not thinking about security—you have to go close to the source where the image is captured," says Nasir Memon, one of the project researchers from NYU Tandon who specia ..
