IDG Contributor Network: Why I’m not sold on machine learning in autonomous security

Tell me if you’ve heard this one: there’s a new, advanced network intrusion detection system (IDS) that uses modern, super-smart machine learning (ML) to root out known and unknown intrusions. The device is so smart it learns what’s normal on your network and what’s not, immediately alerting you when it sees an anomaly. Or maybe it’s an intrusion prevention system (IPS) that will go on to block all malicious traffic. This AI-enabled solution boasts 99% accuracy in detecting attacks. What’s more, it can detect previously unknown attacks. Exciting, right?

That’s an amazing sales pitch, but can we do it? I’m not sold yet. Here are two big reasons why:

The above pitch conflates detecting an attack with detecting an intrusion. An attack may not succeed; an intrusion, by definition, has. Suppose you detected five new attacks, but only one was a real intrusion. Wouldn’t you rather focus on the one successful intrusion than the four failed attacks?
ML-enabled security may not be robust: it works well on one data set (more often than not, the vendor’s) but not on another (your real network). In a nutshell, an attacker’s job is to evade detection, and ML research has shown it’s often not hard to do.
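To make the evasion point concrete, here is a minimal toy sketch (not any vendor’s product; the 500 KB/s threshold and function names are hypothetical): a detector that flags traffic exceeding a boundary it "learned" from normal data, and an adversary who evades it simply by shaping traffic to look normal.

```python
# Toy sketch of a learned anomaly detector and a shaping attack.
# LEARNED_THRESHOLD_KBPS is a hypothetical boundary the model
# inferred from training data ("normal flows stay under 500 KB/s").
LEARNED_THRESHOLD_KBPS = 500.0

def flags_anomaly(flow_kbps: float) -> bool:
    """Flag any flow faster than what the training data called normal."""
    return flow_kbps > LEARNED_THRESHOLD_KBPS

def throttled_exfiltration(total_kb: float, rate_kbps: float):
    """Send the same payload slowly so each observation looks normal.

    Returns the observed rate and how long the transfer takes.
    """
    return rate_kbps, total_kb / rate_kbps

# A naive, fast exfiltration attempt is caught...
assert flags_anomaly(2_000.0) is True

# ...but throttling the identical payload under the learned
# boundary evades the detector entirely; the attacker only pays
# in transfer time (here, 250 seconds for 100 MB at 400 KB/s).
rate, duration = throttled_exfiltration(total_kb=100_000.0, rate_kbps=400.0)
assert flags_anomaly(rate) is False
```

The detector is not "wrong" on its training distribution; it fails because the adversary gets to choose inputs after seeing (or probing) the decision boundary, which is exactly the setting most ML algorithms were never designed for.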

Put simply, ML algorithms are generally not designed to defeat an active adversary. Indeed, the academic field of adversarial machine learning is still in its infancy, let alone real products built on ML technology. Make no mistake: there are amazing researchers doing amazing work here, but I don’t think it’s ready for full autonomy.



