Machine Learning: With Great Power Come New Security Vulnerabilities

Machine learning (ML) has brought us self-driving cars, machine vision, speech recognition, biometric authentication and the ability to unlock the human genome. But it has also given attackers a variety of new attack surfaces and ways to wreak havoc.


Machine learning applications are unlike those that came before them, making it all the more important to understand their risks. What are the potential consequences of an attack on a model that controls networks of connected autonomous vehicles or coordinates access controls for hospital staff? The results of a compromised model can be catastrophic in these scenarios, but there are also more prosaic threats to consider, such as fooling biometric security controls into granting access to unauthorized users.


Machine learning is still in its early stages of development, and the attack vectors are not yet fully understood. Cyberdefense strategies are likewise in their nascent stages. While we can't prevent all forms of attack, understanding why they occur helps us narrow down our response strategies.


A Structured Approach to Machine Learning Security


Threat modeling is a security optimization process that applies a structured approach to identifying and addressing threats. Machine learning security threat modeling does the same thing for ML models. It’s used at the early stages of building and deploying ML models to identify all possible threats and attack vectors.
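One way to make this concrete is to record each identified threat as a structured entry and rank entries by severity. The sketch below is illustrative only; the `Threat` fields, the example actors and vectors, and the `prioritize` helper are assumptions for demonstration, not part of any standard threat-modeling tool.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One identified threat against an ML model."""
    actor: str                  # who might attack, e.g. "rogue employee"
    vector: str                 # how, e.g. "training-data poisoning"
    asset: str                  # what is at risk, e.g. "access-control model"
    severity: int               # 1 (low) .. 5 (critical)
    mitigations: list = field(default_factory=list)

def prioritize(threats):
    """Return threats ordered from most to least severe."""
    return sorted(threats, key=lambda t: t.severity, reverse=True)

# Hypothetical entries for an early-stage threat model of an ML system.
threats = [
    Threat("hacktivist", "model evasion", "biometric login", 4,
           ["adversarial training"]),
    Threat("rogue employee", "data poisoning", "access-control model", 5,
           ["dataset provenance checks"]),
]

for t in prioritize(threats):
    print(t.severity, t.actor, "->", t.vector)
```

Capturing threats this way early in the build makes it easier to revisit the list as the model and its attack surface evolve.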


There are four fundamental questions to ask.


Who Are the Threat Actors?


Threat actors can range from nation-states to hacktivists to rogue employees. Each category of potential adversary has different characteristics and requires different defense and response strategies. Their reasons for atta ..
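Because each actor category calls for a different defensive emphasis, a threat model often pairs categories with response playbooks. The mapping below is a hypothetical sketch; the category names and response items are placeholders, and a real mapping would come from your own threat-modeling exercise.

```python
# Illustrative mapping of threat-actor categories to defense/response
# emphases. These entries are examples, not recommendations.
RESPONSE_PLAYBOOK = {
    "nation-state":   ["network segmentation", "incident-response retainer"],
    "hacktivist":     ["rate limiting on public endpoints", "model-output monitoring"],
    "rogue employee": ["least-privilege access", "audit logging of training data"],
}

def responses_for(actor: str) -> list[str]:
    """Look up the defensive emphasis for a known actor category,
    falling back to baseline hygiene for unrecognized actors."""
    return RESPONSE_PLAYBOOK.get(actor, ["baseline security hygiene"])

print(responses_for("rogue employee"))
```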
