'Data poisoning' with machine learning may be the next big attack vector

A man walks through a Microsoft server farm in Switzerland. One researcher warned of the potential for data poisoning: adding intentionally misleading data to a pool so that machine learning analysis misidentifies its inputs. (Amy Sacka for Microsoft)

Data poisoning attacks against the machine learning used in security software may be attackers' next big vector, said Johannes Ullrich, dean of research at the SANS Technology Institute.


Machine learning is based on pattern recognition in a pool of data. Data poisoning is the practice of adding intentionally misleading data to that pool so that the resulting model begins to misidentify its inputs.


“One of the most basic threats when it comes to machine learning is one of the attacker actually being able to influence the samples that we are using to train our models,” said Ullrich, speaking during a keynote at the RSA Conference.


Ullrich noted that hackers could provide a stream of bad information by, say, flooding a target organization with malware crafted to steer ML detection away from the techniques they actually plan to use in the main attack.
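To make that scenario concrete, here is a minimal sketch of label-flipping poisoning against a toy malware classifier. It assumes scikit-learn and two made-up features; the feature names, sample counts, and values are all hypothetical, chosen only to make the effect visible, and are not drawn from any real security product.

```python
# Toy, label-flipping illustration of data poisoning against a malware
# classifier. All features, counts, and values here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training pool: benign samples cluster low, malicious samples cluster
# high on two made-up features (think "file entropy" and "suspicious API calls").
benign = rng.normal(loc=0.2, scale=0.1, size=(200, 2))
malicious = rng.normal(loc=0.8, scale=0.1, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)          # 0 = benign, 1 = malicious

clean_model = LogisticRegression().fit(X, y)

# Poisoning: the attacker floods the pool with harmless samples that *look*
# like their real tooling, so they end up labeled benign during training.
poison = rng.normal(loc=0.8, scale=0.1, size=(500, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(500, dtype=int)])

poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# A sample resembling the attacker's real payload: the clean model flags it,
# while the poisoned model has learned that this region is mostly "benign".
attack_sample = np.array([[0.8, 0.85]])
print("clean model:   ", clean_model.predict(attack_sample))     # typically [1]
print("poisoned model:", poisoned_model.predict(attack_sample))  # typically [0]
```

The point of the sketch is only that the attacker never touches the model directly: by influencing enough of the training pool, they shift the decision boundary so their eventual payload lands on the "benign" side.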


The future threats panel offered four experts drawn from the SANS Institute instructor pool the opportunity to present on one ..
