AI vs. human deceit: Unravelling the new age of phishing tactics


Attackers innovate nearly as fast as technology develops; day by day, both technology and threats surge forward. Now, as we enter the AI era, machines not only mimic human behavior but also permeate nearly every facet of our lives. Yet, despite the mounting anxiety about AI’s implications, the full extent of its potential misuse by attackers is largely unknown.


To better understand how attackers can capitalize on generative AI, we conducted a research project that sheds light on a critical question: Do current generative AI models have the same deceptive abilities as the human mind?


Imagine a scenario where AI squares off against humans in a battle of phishing. The objective? To determine which contender can get a higher click rate in a phishing simulation against organizations. As someone who writes phishing emails for a living, I was excited to find out the answer.


With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes — the same time it takes me to brew a cup of coffee. It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure setup. So attackers can potentially save nearly two days of work by using generative AI models. The AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers; the fact that it came that close is an important development.
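The "nearly two days" figure can be sanity-checked with simple arithmetic. This is a minimal sketch using the numbers quoted above; the 8-hour workday is an assumption, not something the article states:

```python
# Time to craft a phishing email by hand vs. with a generative AI model,
# using the figures quoted above.
manual_hours = 16        # team effort, excluding infrastructure setup
ai_minutes = 5           # five prompts, roughly five minutes

hours_saved = manual_hours - ai_minutes / 60
workdays_saved = hours_saved / 8  # assumed 8-hour workday

print(f"Hours saved: {hours_saved:.2f}")        # 15.92
print(f"Workdays saved: {workdays_saved:.2f}")  # 1.99 — nearly two days
```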


In this blog, we’ll detail how the AI prompts were created, how the test was conducted and what this means for social engineering attacks today and tomorrow.


Round one: The rise of the machines


In one cor ..
