Redefining How to Measure the Success of Your Vulnerability Management Program
Could your team be wasting its time reporting vulnerability metrics that don’t matter? Security teams often fall into the trap of reporting metrics that carry little meaning for the business. In this post, we’ll discuss which vulnerability risk management metrics matter, which ones don’t, and how to communicate them effectively to the business.


Moving away from security metrics that aren’t helpful


Metrics like a count of unpatched vulnerabilities or the number of assets assessed may sound important at first glance, but they are of little use to non-technical stakeholders and the greater business. First, most executives, board members, and other teams don’t understand these metrics in context; second, the numbers aren’t actionable for them.


Another metric we often see security teams lean on is CVSS score. While it’s a useful baseline for understanding a vulnerability, it needs to be considered alongside factors like malware exposure, exploit exposure, and vulnerability age to be useful in actually prioritizing vulnerabilities for remediation. CVSS alone simply doesn’t offer enough context and can leave you with thousands of “critical” vulnerabilities to fix, which isn’t helpful to your security team and isn’t easily communicated to other business counterparts.
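To make that concrete, here is a minimal sketch of what blending CVSS with exposure signals might look like. The field names, weights, and CVE entries below are illustrative assumptions, not any vendor’s actual scoring formula.

```python
# Hypothetical prioritization sketch: CVSS alone leaves too many "criticals",
# so this example weights CVSS with exploit/malware exposure and vulnerability age.
from dataclasses import dataclass
from datetime import date


@dataclass
class Vulnerability:
    cve_id: str
    cvss: float                 # base CVSS score, 0.0-10.0
    exploit_available: bool     # public exploit code exists
    malware_kits: int           # known malware kits targeting this CVE
    published: date             # disclosure date


def priority_score(vuln: Vulnerability, today: date = date.today()) -> float:
    """Blend CVSS with real-world exposure signals to rank remediation work."""
    age_days = (today - vuln.published).days
    score = vuln.cvss
    if vuln.exploit_available:
        score *= 1.5                      # actively exploitable issues rise
    score += min(vuln.malware_kits, 5)    # cap the malware-kit influence
    score += min(age_days / 365, 2.0)     # older unpatched vulns creep upward
    return round(score, 1)


# Illustrative data only: a lower-CVSS but actively exploited, older finding
# can outrank a fresh CVSS 9.8 with no known exploitation.
vulns = [
    Vulnerability("CVE-2024-0001", 9.8, False, 0, date(2024, 5, 1)),
    Vulnerability("CVE-2023-0002", 7.5, True, 3, date(2023, 2, 1)),
]
for v in sorted(vulns, key=priority_score, reverse=True):
    print(v.cve_id, priority_score(v))
```

The exact weights matter far less than the principle: a prioritized, context-aware shortlist is something your team can act on and something you can explain to the business, unlike a raw pile of CVSS “criticals.”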


The third category of metrics is those outside your security team’s control, such as reporting on an aggregate risk score. Wait... what? That’s how I’ve been doing this for years. Yup, but stick with me on this: what if, right before you go into your board meeting to report your security metrics, there is a huge Patch Tuesday? Your aggregate risk score is going to spike.
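A toy example, with assumed numbers, shows why that spike says nothing about your team’s performance: the aggregate simply absorbs every newly disclosed finding before anyone has had a realistic chance to remediate.

```python
# Toy illustration (assumed numbers): why an aggregate risk score is volatile.
def aggregate_risk(vuln_scores: list[float]) -> float:
    """Sum individual vulnerability risk scores into one aggregate figure."""
    return sum(vuln_scores)


before = [7.5, 9.8, 5.3, 6.1]                # open findings before the board meeting
patch_tuesday = [8.8, 9.1, 7.2, 7.2, 6.5]    # newly disclosed, not yet actionable
after = before + patch_tuesday

print(f"Aggregate risk before: {aggregate_risk(before):.1f}")
print(f"Aggregate risk after:  {aggregate_risk(after):.1f}")
# The jump reflects disclosure timing, not how well the team actually performed.
```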
