Data Poisoning: When Attackers Turn AI and ML Against You

Stopping ransomware has become a priority for many organizations, so they are turning to artificial intelligence (AI) and machine learning (ML) as their defenses of choice. However, threat actors are also turning to AI and ML to launch their own attacks. One specific type of attack, data poisoning, exploits that very reliance on AI and ML.


Why AI and ML Are at Risk


Like any other tech, AI is a two-sided coin. AI models excel at processing lots of data and coming up with a “best guess,” says Garret Grajek, CEO of YouAttest, in an email interview.


“Hackers have used AI to attack authentication and identity validation, including voice and visualization hacking attempts,” he says. “The ‘weaponized AI’ works to derive the key for access.”


“Adversarial data poisoning is an effective attack against machine learning and threatens model integrity by introducing poisoned data into the training dataset,” researchers from Cornell University explain.

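To make the mechanics concrete, here is a minimal sketch of a label-flipping attack, one common form of data poisoning, against a scikit-learn classifier. The dataset, model, and poisoning rates are illustrative assumptions, not details from the article; the point is simply to show how corrupting a fraction of training labels degrades the model trained on them.

```python
# Minimal sketch of label-flipping data poisoning (illustrative only;
# the dataset, model, and poisoning rates are assumptions, not from the article).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean training data that the victim believes is trustworthy.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, rate, rng):
    """Flip the binary labels of a random fraction of training samples."""
    poisoned = labels.copy()
    n_poison = int(rate * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned

# Train on progressively more poisoned labels and measure the damage
# on a clean, untouched test set.
for rate in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_train, rate, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
```

Run as written, the sketch should show test accuracy dropping as the poisoning rate rises, which is the loss of model integrity the researchers describe.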

What makes attacks through AI and ML different from typical ‘bug in the system’ attacks? The algorithms themselves have inherent limits and weaknesses that cannot simply be patched, says Marcus Comiter in a paper for Harvard University’s Belfer Center for Science and International Affairs.


“AI attacks fundamentally expand the set of entities that can be used to execute cyberattacks,” Comiter adds. “For the first time, physical objects can now be used for cyberattacks. Data can also be weaponized in new ways using these attacks, requiring changes in the way data is collected, stored, and used.”


Human Error


To better understand how threat actors use AI and ML as an attack vector for data poisoning and …
