What to know about new generative AI tools for criminals


Large language model (LLM)-based generative AI chatbots like OpenAI’s ChatGPT took the world by storm this year, bringing the power of artificial intelligence into the mainstream and making it accessible to millions.


The move inspired other companies (which had been working on comparable AI in labs for years) to introduce their own public LLM services, and thousands of tools based on these LLMs have emerged.


Unfortunately, malicious hackers moved quickly to exploit these new AI resources, using ChatGPT itself to polish and produce phishing emails. However, using mainstream LLMs proved difficult because the major LLMs from OpenAI, Microsoft and Google have guardrails to prevent their use for scams and criminality.


As a result, a range of AI tools designed specifically for malicious cyberattacks has begun to emerge.


WormGPT: A smart tool for threat actors


Chatter about and promotion of LLM chatbots optimized for cyberattacks emerged on Dark Web forums in early July and, later, on the Telegram messaging service. The tools are being offered to would-be attackers, often on a subscription basis. They’re similar to popular LLMs but without guardrails and trained on data selected to enable attacks.


The leading brand among these malicious generative AI tools is WormGPT. It’s an AI module based on GPT-J, an open-source language model released in 2021, and it’s already being used in business email compromise (BEC) attacks and for other nefarious purposes.


Users can simply type instructions for creating fraudulent emails — for example, “Write an email coming from a bank that’s designed to trick the recipient into giving up their login credentials.”


The tool then produces a unique, sometimes clever and usu ..
