How a new wave of deepfake-driven cybercrime targets businesses


As deepfake attacks on businesses dominate news headlines, detection experts are gathering valuable insights into how these attacks originate and which vulnerabilities they exploit.


Between 2023 and 2024, frequent phishing and social engineering campaigns led to account hijacking, theft of assets and data, identity theft, and reputational damage for businesses across industries.


Call centers at major banks and financial institutions are now overwhelmed by an onslaught of deepfake calls that use voice cloning technology to break into customer accounts and initiate fraudulent transactions. Internal help desks and staff have likewise been inundated with social engineering campaigns via calls and messages, often successfully: one such attack on the software development company Retool led to tens of millions of dollars in losses for the company's clients, and in another incident a finance worker was duped into transferring funds to fraudsters. Speaker-based authentication systems, meanwhile, are being circumvented with deepfake audio.


The barrier to entry for bad actors is lower than ever. Tools for creating deepfakes are cheaper and more accessible than before, giving even users with no technical know-how the means to engineer sophisticated, AI-fueled fraud campaigns.


Given the proliferation of these tools and the evolving methods used by cybercriminals, real-time detection that leverages AI to catch AI will be essential to protecting the financial and reputational interests of businesses.


Deepfakes across modalities


A deepfake is a piece of synthetic media—an image, video, audio clip, or text—that appears authentic but has been created or manipulated with generative AI models.


Deepfake audio is sound that has been synthetically generated or altered using deep learning models. A common method behind ..
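To make the detection idea concrete, here is a minimal toy sketch, not the method used by any detection vendor mentioned here: real systems rely on trained models, but many begin by extracting frame-level spectral statistics from audio. The snippet below computes one such statistic, spectral flatness, over short frames of a signal; the function names and parameters are illustrative assumptions, and the tone-versus-noise comparison is only a stand-in for real and synthetic speech.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray, eps: float = 1e-10) -> float:
    """Ratio of the geometric to the arithmetic mean of the power spectrum.
    Values near 1.0 indicate noise-like spectra; values near 0.0, tonal ones."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + eps
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def frame_features(signal: np.ndarray, frame_len: int = 512, hop: int = 256) -> np.ndarray:
    """Slice a mono signal into overlapping frames and score each frame."""
    scores = [
        spectral_flatness(signal[start:start + frame_len])
        for start in range(0, len(signal) - frame_len + 1, hop)
    ]
    return np.array(scores)

# Toy comparison: a pure tone (strongly tonal) vs. white noise (noise-like).
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0          # one second at a 16 kHz sample rate
tone = np.sin(2 * np.pi * 440 * t)
noise = rng.standard_normal(16000)

print(f"tone flatness:  {frame_features(tone).mean():.4f}")
print(f"noise flatness: {frame_features(noise).mean():.4f}")
```

In a real pipeline, features like these (or learned embeddings) would be fed to a classifier trained on known genuine and synthetic speech; the point of the sketch is only that detection starts from measurable statistical differences in the signal.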