Mapping attacks on generative AI to business impact


In recent months, we’ve seen government and business leaders put an increased focus on securing AI models. If generative AI is the next big platform to transform the services and functions on which society as a whole depends, ensuring that technology is trusted and secure must be businesses’ top priority. While generative AI adoption is in its nascent stages, we must establish effective strategies to secure it from the onset.


The IBM Institute for Business Value found that despite 64% of CEOs facing significant pressure from investors, creditors and lenders to accelerate the adoption of generative AI, 60%  are not yet developing a consistent, enterprise-wide approach to generative AI. In fact, 84% are concerned about widespread or catastrophic cybersecurity attacks that generative AI adoption could lead to.


As organizations determine how to best incorporate generative AI into their business models and assess the security risks that the technology could introduce, it’s worth examining top attacks that threat actors could execute against AI models. While only a small number of real-world attacks on AI have been reported, IBM X-Force Red has been testing models to determine the types of attacks that are most likely to appear in the wild. To understand the potential risks associated with generative AI that organizations need to mitigate as they adopt the technology, this blog will outline some of the attacks adversaries are likely to pursue, including prompt injection, data poisoning, model evasion, model extraction, inversion and supply chain attacks.



Security attack types depicted as they rank on leve ..

Support the originator by clicking the read the rest link below.