The dark side of AI: Large-scale scam campaigns made possible by generative AI

Generative artificial intelligence technologies such as OpenAI’s ChatGPT and DALL-E have created a great deal of disruption across much of our digital lives. Capable of producing credible text, images and even audio, these AI tools can be used for both good and ill. That includes their application in the cybersecurity space.


While Sophos AI has been working on ways to integrate generative AI into cybersecurity tools — work that is now being integrated into how we defend customers’ networks — we’ve also seen adversaries experimenting with generative AI. As we’ve discussed in several recent posts, scammers have used generative AI as an assistant to overcome language barriers between themselves and their targets, generating responses to text messages in conversations on WhatsApp and other platforms. We have also seen generative AI used to create fake “selfie” images sent in these conversations, and there have been some reports of generative AI voice synthesis being used in phone scams.


When pulled together, these types of tools can be used by scammers and other cybercriminals at a much larger scale. To better defend against this weaponization of generative AI, the Sophos AI team conducted an experiment to see what was in the realm of the possible.


As we presented at DEF CON’s AI Village earlier this year (and at CAMLIS in October and BSides Sydney in November), our experiment delved into the potential misuse of advanced generative AI technologies to orchestrate large-scale scam campaigns. These campaigns fuse multiple types of generative AI, …