Now Social Engineering Attackers Have AI. Do You? 


Everybody in tech is talking about ChatGPT, the AI-based chatbot from OpenAI that writes convincing prose and usable code. 


The trouble is that malicious cyber attackers can use generative AI tools like ChatGPT to craft convincing prose and usable code just like everybody else. 


How does this powerful new category of tools affect the ability of criminals to launch cyberattacks, including social engineering attacks? 


When Every Social Engineering Attack Uses Perfect English


ChatGPT is a public tool based on a language model created by the San Francisco-based company OpenAI. It uses machine learning to analyze and generate human language, enabling it to respond with often uncanny fluency.


Intuitively, it’s clear how malicious actors with a weak command of English could use ChatGPT to craft flawless English emails that trick your employees. In fact, it’s already happening.


In the past, a poorly worded, grammatically incorrect email claiming to be from the bank could be quickly identified and easily dismissed. Cybersecurity awareness training drove home this point: if an email sounds shady, odd, incomplete or erroneous, it’s probably not from the claimed source. 


The rise of ChatGPT means that cyber attackers with limited English skills can quickly create convincing messages in flawless English.


Off the ChatGPT Guardrails


OpenAI has built some guardrails into ChatGPT to prevent its abuse, but these are easily overcome, especially for social engineering. A malicious actor can simply ask ChatGPT to write a scam email, then send that note with a malicious link or request attached.


I asked ChatG ..
