Architectural Analysis IDs 78 Specific Risks in Machine-Learning Systems
The new threat model homes in on ML security at the design stage.

Researchers at the Berryville Institute of Machine Learning (BIML) have developed a formal risk framework to guide the development of secure machine-learning (ML) systems.


BIML's architectural risk analysis of ML systems differs from previous work in this area in that it focuses on issues that engineers and developers need to pay attention to at the outset, when designing and building ML systems. Most previous work on securing ML systems has focused on how best to protect operational systems and data against particular attacks, not on how to design them securely in the first place.


"This work provides a very solid technical foundation for taking a look at the risks associated with adopting and using ML," says Gary McGraw, noted security researcher, author, and co-founder of BIML. The need for this kind of risk analysis is critical because very few people are paying any attention to ML security at the design stage, even as ML use grows rapidly, he says.


For the architectural risk analysis, BIML researchers considered nine separate components that they identified as common to setting up, training, and deploying a typical ML system: raw data; dataset assembly; datasets; learning algorithms; evaluation; inputs; trained model; inference algorithm; and outputs. They then identified and ranked the security risks associated with each of those components (78 in all) so engineers and developers can implement controls to mitigate those risks where possible.
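To make the structure of the analysis concrete, the nine components can be sketched as a simple enumeration with risks attached per component. This is an illustrative paraphrase, not BIML's canonical naming; the example risks shown for raw data are the ones the article mentions below.

```python
from enum import Enum

# The nine ML-system components named in BIML's analysis,
# rendered as an illustrative enumeration (names are paraphrased,
# not BIML's official identifiers).
class MLComponent(Enum):
    RAW_DATA = "raw data"
    DATASET_ASSEMBLY = "dataset assembly"
    DATASETS = "datasets"
    LEARNING_ALGORITHMS = "learning algorithms"
    EVALUATION = "evaluation"
    INPUTS = "inputs"
    TRAINED_MODEL = "trained model"
    INFERENCE_ALGORITHM = "inference algorithm"
    OUTPUTS = "outputs"

# A per-component risk register, as a designer might keep one.
# Only the raw-data entry is populated here, using the risks the
# article names; the full analysis ranks risks for every component.
risks = {
    MLComponent.RAW_DATA: [
        "data confidentiality",
        "trustworthiness of data sources",
        "data storage",
    ],
}

print(len(MLComponent))  # → 9
```

A designer would walk each component in turn, rank its risks, and attach mitigating controls where feasible.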


For instance, they identified data confidentiality, the trustworthiness of data sources, and data storage as key security considerations around the raw data used in ML systems, such as training data, test inputs, and operational data. Similarly, for the datasets used in ML systems, …
