Microsoft and MITRE, in collaboration with a dozen other organizations, have developed a framework designed to help identify, respond to, and remediate attacks targeting machine learning (ML) systems.
Such attacks have increased significantly over the past four years, Microsoft says, and are expected to keep evolving. Despite this, most organizations have yet to come to terms with adversarial machine learning.
In fact, a recent survey the tech giant conducted among 28 organizations revealed that most of them (25 of the 28) lack the tools needed to secure their machine learning systems and are explicitly looking for guidance.
“We found that [this lack of] preparation is not just limited to smaller organizations. We spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations,” Microsoft says.
The Adversarial ML Threat Matrix, which Microsoft has released in collaboration with MITRE, IBM, NVIDIA, Airbus, Bosch, Deep Instinct, Two Six Labs, Cardiff University, the University of Toronto, PricewaterhouseCoopers, the Software Engineering Institute at Carnegie Mellon University, and the Berryville Institute of Machine Learning, is an industry-focused open framework that aims to address this issue.
The framework documents the techniques adversaries employ when targeting ML systems and is aimed primarily at security analysts. Structured like MITRE's ATT&CK framework, the Adversarial ML Threat Matrix is based on observed attacks that have been vetted as effective against production ML systems.
Attacks targeting these systems are possible because of inherent limitations in the underlying ML algorithms, and defending against them requires a new approach to security.
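One widely cited illustration of such an inherent limitation is the adversarial example: a small, targeted perturbation of the input that flips a model's prediction. The sketch below demonstrates the idea with the fast gradient sign method (FGSM) on a toy logistic-regression classifier; the model, weights, and inputs are illustrative assumptions, not part of the Adversarial ML Threat Matrix itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression classifier with fixed, hand-picked weights.
w = np.array([1.0, -2.0, 3.0])
b = 0.0

def predict(x):
    """Return the class label (0 or 1) for input x."""
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm(x, y_true, eps):
    """Nudge each feature of x by eps in the direction that raises the loss."""
    p = sigmoid(w @ x + b)
    grad = (p - y_true) * w       # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.0, 0.0])     # classified as 1 (score w @ x = 0.5 > 0)
x_adv = fgsm(x, y_true=1, eps=0.2)

print(predict(x), predict(x_adv))  # the small perturbation flips 1 -> 0
```

The perturbation is bounded by `eps` in every coordinate, yet the prediction changes — the kind of behavior that makes defending production ML systems qualitatively different from conventional application security.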