How to Protect Against Deepfake Attacks and Extortion


Cybersecurity professionals are already losing sleep over data breaches and how to best protect their employers from attacks. Now they have another nightmare to stress over — how to spot a deepfake. 


Deepfakes are different because attackers can easily weaponize data and images. And the people using deepfake technology can come from inside the organization as well as outside it.


How Attackers Use Deepfake Attacks


Earlier in 2021, the FBI released a warning about the rising threat of synthetic content, which includes deepfakes, describing it as “the broad spectrum of generated or manipulated digital content, which includes images, video, audio and text.” People can create the simplest types of synthetic content with software like Photoshop. Deepfake attackers are becoming more sophisticated, using technologies like artificial intelligence (AI) and machine learning (ML) that can now create realistic images and videos.
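
The FBI warning doesn't prescribe a specific detection method, but one common forensic starting point for spotting the cruder, Photoshop-style edits mentioned above is error level analysis (ELA): re-save an image as JPEG and look for regions that recompress differently from the rest. The sketch below is a minimal illustration using the Pillow library; the file names and quality setting are placeholder assumptions, and true AI-generated deepfakes generally require far more sophisticated, often ML-based, detection tooling.

```python
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, out_path="ela_result.png"):
    """Highlight regions of an image that may have been edited.

    Re-saves the image as JPEG and amplifies the pixel-wise difference;
    edited regions often show a different error level than the rest.
    """
    original = Image.open(path).convert("RGB")

    # Recompress the image at a known quality level (placeholder temp file)
    resaved_path = "_resaved_tmp.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)

    # Pixel-wise difference between the original and its recompressed copy
    diff = ImageChops.difference(original, resaved)

    # Stretch the (usually faint) differences so they are visible to the eye
    extrema = diff.getextrema()
    max_diff = max(band_max for _, band_max in extrema) or 1
    ela = ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

    ela.save(out_path)  # bright patches suggest possible edits
    return ela

if __name__ == "__main__":
    # "suspect_photo.jpg" is a hypothetical input file for illustration
    error_level_analysis("suspect_photo.jpg")
```

This kind of check is only a first filter, and it will not catch a well-made synthetic video; it simply shows how defenders can start treating suspicious media as evidence to be examined rather than taking it at face value.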


Remember, attackers are in the cyber theft business to make money. Ransomware tends to be successful, so it was a logical move for them to use deepfakes as a new ransomware tool. In the traditional approach, attackers launch a phishing attack with malware embedded in an enticing deepfake video. The newer approach leverages deepfakes directly: attackers fabricate footage showing people or businesses in all sorts of illicit (but fake) behavior that could damage their reputation if the images went public. Pay the ransom, and the videos stay private.


Besides ransomware, synthetic content is used in other ways. Threat actors might weaponize data and images to spread lies and scam employees, clients and others, or to extort them.


Attackers might use ..
