AI isn't secure, says America's NIST

The US National Institute of Standards and Technology (NIST) has warned against accepting vendor claims about artificial intelligence security, saying that at the moment “there’s no foolproof defence that their developers can employ”.


NIST gave the warning late last week, when it published a taxonomy of AI attacks and mitigations.


The institute points out that if an AI program takes inputs from websites or interactions with the public, for example, it’s vulnerable to attackers feeding it untrustworthy data.

“No foolproof method exists as yet for protecting AI from misdirection, and AI developers and users should be wary of any who claim otherwise,” NIST stated.


The document said attacks “can cause spectacular failures with dire consequences”, warning against “powerful simultaneous attacks against all modalities” (that is, images, text, speech, and tabular data).


“Fundamentally, the machine learning methodology used in modern AI systems is susceptible to attacks through the public APIs that expose the model, and against the platforms on which they are deployed,” the report said.


The report focuses on attacks against the AI models themselves, rather than on the platforms on which they are deployed.


The report highlights four key types of attack: evasion, poisoning, privacy, and abuse.


Evasion refers to manipulating the inputs to an AI model to change its behaviour – for example, adding markings to stop signs so an autonomous vehicle interprets them incorrectly.
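
Seen as code rather than road markings, an evasion attack boils down to a small, targeted perturbation of the model’s input. The sketch below is a hypothetical illustration, not code from the NIST report: it builds a toy logistic-regression classifier with made-up weights, then applies a fast-gradient-sign-style nudge to each input feature to flip the model’s decision.

```python
import numpy as np

# Hypothetical evasion sketch: a toy logistic-regression classifier with
# made-up weights (not a real stop-sign model), attacked with a
# fast-gradient-sign-style perturbation.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # toy model weights
b = 0.0
x = rng.normal(size=100)   # a legitimate input, e.g. a flattened image

def predict(features):
    """Probability the toy model assigns to the 'positive' class."""
    return 1.0 / (1.0 + np.exp(-(w @ features + b)))

# The gradient of the logit with respect to the input is just w, so the
# attacker nudges every feature a small amount in the direction that
# pushes the prediction across the decision boundary.
epsilon = 0.25
step = epsilon * np.sign(w)
x_adv = x - step if predict(x) > 0.5 else x + step

print("original prediction: ", round(float(predict(x)), 3))
print("perturbed prediction:", round(float(predict(x_adv)), 3))
```

The per-feature change is tiny relative to the input itself, which is what makes this class of attack hard to spot, and it is the same property the stop-sign markings exploit.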


Poisoning attacks occur in the AI model’s training phase; for example, an attacker might insert inappropriate language into a chatbot’s conversation records, so the model learns to use that language with customers.
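
The chatbot example is textual, but the mechanics of poisoning are easiest to see with numbers. The sketch below is a hypothetical stand-in, not an example from the NIST taxonomy: it trains a toy nearest-centroid classifier, then shows how a batch of mislabelled records injected into the training set shifts what the model learns.

```python
import numpy as np

# Hypothetical poisoning sketch: synthetic one-dimensional data standing in
# for training records (e.g. a chatbot's conversation logs).
rng = np.random.default_rng(1)

# Clean training data: class 0 centred at -2, class 1 centred at +2.
clean_x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
clean_y = np.concatenate([np.zeros(200), np.ones(200)])

# The attacker injects records that sit in class 0's region but carry a
# class 1 label, dragging class 1's learned centroid toward class 0.
poison_x = rng.normal(-2, 0.5, 80)
poison_y = np.ones(80)

def centroids(x, y):
    """'Train' a nearest-centroid classifier by averaging each class."""
    return x[y == 0].mean(), x[y == 1].mean()

def classify(sample, c0, c1):
    """Predict whichever class centroid the sample is closer to."""
    return 0 if abs(sample - c0) < abs(sample - c1) else 1

c0, c1 = centroids(clean_x, clean_y)
p0, p1 = centroids(np.concatenate([clean_x, poison_x]),
                   np.concatenate([clean_y, poison_y]))

probe = -0.3   # a borderline input the clean model assigns to class 0
print("clean model says:   ", classify(probe, c0, c1))
print("poisoned model says:", classify(probe, p0, p1))
```

The model trained on the tampered data gives a different answer for the same input, which is the essence of a poisoning attack: the damage is done before the system ever sees a real user.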

In privacy attacks, the attacker crafts questions designed to get the AI model to reveal information …
