ChatGPT Extensions Could Be Exploited to Steal Data and Sensitive Information

API security firm Salt Security has released new threat research from Salt Labs highlighting critical security flaws within ChatGPT plugins, presenting a new risk for enterprises. Plugins give AI chatbots such as ChatGPT access and permissions to perform tasks on behalf of users within third-party websites, for example committing code to GitHub repositories or retrieving data from an organisation’s Google Drive. These security flaws introduce a new attack vector and could enable bad actors to gain control of accounts on third-party websites and access Personally Identifiable Information (PII) and other sensitive user data stored within third-party applications.


ChatGPT plugins extend the model’s abilities, allowing the chatbot to interact with external services. The integration of these third-party plugins significantly enhances ChatGPT’s applicability across various domains, from software development and data management to educational and business environments. When organisations use such plugins, they give ChatGPT permission to send sensitive organisational data to a third-party website and to access private external accounts. Notably, in November 2023 ChatGPT introduced a new feature, GPTs, a concept similar to plugins. GPTs are custom versions of ChatGPT that any developer can publish, and they contain an option called “Actions” which connects them with the outside world. GPTs pose similar security risks to plugins.
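The research does not need to be reproduced to appreciate what is at stake: integrations of this kind commonly rely on an OAuth 2.0 authorization-code flow to obtain the token that lets the chatbot act on a user’s behalf. The Python sketch below is purely illustrative, with hypothetical endpoints, client credentials, and scopes, and shows the token exchange a plugin or GPT Action backend would typically perform; the access token it yields is exactly the kind of credential an attacker gains if such a flow is hijacked.

```python
# Illustrative sketch only: hypothetical endpoints, client credentials and scopes.
# Shows the standard OAuth 2.0 authorization-code exchange a plugin-style
# integration typically performs when a user connects a third-party account.
import secrets
from urllib.parse import urlencode

import requests  # third-party HTTP client: pip install requests

AUTH_URL = "https://third-party.example.com/oauth/authorize"   # hypothetical
TOKEN_URL = "https://third-party.example.com/oauth/token"      # hypothetical
CLIENT_ID = "my-plugin-client-id"                              # hypothetical
CLIENT_SECRET = "my-plugin-client-secret"                      # hypothetical
REDIRECT_URI = "https://my-plugin.example.com/oauth/callback"  # hypothetical


def build_authorization_url() -> tuple[str, str]:
    """Return the URL the user is redirected to, plus the anti-CSRF `state` value."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "repo drive.readonly",  # example scopes: code repos, Drive files
        "state": state,
    }
    return f"{AUTH_URL}?{urlencode(params)}", state


def exchange_code_for_token(code: str) -> dict:
    """Swap the authorization code sent to the callback for an access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # The returned access token is what grants the chatbot (or an attacker who
    # intercepts the flow) permission to act on the user's third-party account.
    return resp.json()
```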


Yaniv Balmas, Vice President of Research at Salt Security, said: “Generative AI tools like ChatGPT have rapidly captivated the attention of millions across the world, boasting the potential to drastically improve efficiencies within both business operations as well as daily human life. As more organisations leverage this type of technology, attackers, too, are pivoting their efforts, finding ways to exploit these …
