ChatGPT Spearphishing: Social Engineering at Scale

Modern-day SDRs (sales development representatives) perform acts of phishing for a living. Today's business culture, especially in technology sales, accepts this as how business gets done: lead generation to identify a target company, cadence messaging to engage an individual at that company, and finally delivery of the 'payload', often in the form of a calendar invite, a PDF spec sheet, or a link to a product download.


An acquaintance of mine on LinkedIn recently asked if anyone knew of a SaaS offering that leveraged a Large Language Model (LLM) to do the lead generation, handle the cadence messaging, and set up delivery of the 'payload.' The comments contained several recommendations for such services, at varying levels of maturity.

It bears repeating: this is phishing at its phinest, and it is perfectly legal!


ChatGPT is a conversational AI, trained on vast amounts of linguistic data, that can hold a realistic discussion on a topic. The technology, still in its nascent form, is already quite useful. I have co-written blogs with it, students are co-writing term papers (ahem), and even my dad used it to help write some flowery poetry about a certain politician he doesn't agree with. The usefulness is undeniable. Today it is quite expensive to run, but all such technologies become dramatically cheaper with time, whether through efficiency gains, reduced sophistication, or novel breakthroughs.


The security implications are profound and easy to imagine; it seems a touch of paranoia now accompanies any discussion of conversational or generative AI.

A Phony Phish with ChatGPT + LinkedIn


Say you’re an employee at a corporation getting a LinkedIn message ..
