March 24, 2026
Jeremiah Grossman

Actuarial data is breaking the conventional wisdom of vulnerability management.
Meet with almost any enterprise, and you will hear the same story. I have heard it hundreds of times.
They are buried in vulnerabilities. Mountains of red, orange, and yellow findings piled high by endless tools. The backlog is larger than they can fix now, and larger than they will ever realistically eliminate.
And somehow, these same companies are still able to obtain cyber-insurance. Real policies that pay out millions if the exploitation of one of these vulnerabilities leads to a breach and material financial loss.
Now consider this: Depending on the report you read and who you talk to, between 20% and 50% of breaches originate from a remotely exploited vulnerability. This is not theoretical. It is one of the primary ways companies actually experience a breach and financial loss.
This raises an awkward question. How can this be?
I also frequently have conversations with cyber-insurance professionals. Let me assure you, they are not a charity and they are not naive. Their entire business depends on accurately pricing the risk of financial loss. They employ actuaries, underwriters, incident analysts, and teams dedicated to understanding cyber risk.
Every day, they write policies for organizations buried in vulnerability backlogs.
They have many ways to measure risk, but what they do not do is equate vulnerability counts with business risk. They do not assume every vulnerability will be exploited. Unlike vendors, they have skin in the game. If they get it wrong, they pay for it. Literally.
So if the mantra of mainstream vulnerability management were true, that anything that can be hacked will be hacked, this system would collapse.
But it doesn’t.
Which raises the next uncomfortable question. What if most vulnerabilities do not matter nearly as much as vendors claim?
The data says they don’t.
Most vulnerabilities are noise.
There are now over 320,000 known and cataloged vulnerabilities (CVEs), yet only ~1% have ever been exploited. That percentage has remained remarkably consistent year after year. Narrowing further, Root Evidence data shows only ~0.2% lead to insurance claims. That’s the set the cyber-insurance carriers actually care about.
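To make those percentages concrete, here is a quick back-of-the-envelope calculation. It is a minimal sketch using the approximate figures cited above; the counts are illustrative, not exact.

```python
# Back-of-the-envelope math using the approximate figures above.
total_cves = 320_000                  # known, cataloged CVEs (approximate)
ever_exploited = total_cves * 0.01    # ~1% have ever been exploited
claim_linked = total_cves * 0.002     # ~0.2% tied to insurance claims

print(f"Ever exploited:   ~{ever_exploited:,.0f}")                  # ~3,200
print(f"Linked to claims: ~{claim_linked:,.0f}")                    # ~640
print(f"Never exploited:  ~{total_cves - ever_exploited:,.0f}")     # ~316,800
```

In other words, for every vulnerability that has ever been tied to an insurance claim, roughly 500 sit in the catalog that never will be.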
And before anyone assumes this aligns with CISA KEV: it sorta does, but following KEV also requires a lot more remediation work. The truth is the insurance carriers we speak with place very little weight on CISA KEV. The signal is clear. Exploitation is not the most predictive metric; it’s just the only one most people generally have. Which has led the industry to an inescapable reality: the wrong vulnerabilities are being fixed, and in the wrong order.
If the vulnerabilities adversaries actually exploit were consistently prioritized, the easy attack paths would disappear. Adversaries would be forced to innovate instead of scaling the same techniques across thousands of targets, and the percentage of exploited vulnerabilities would rise.
But that is not what we see.
What we see is vulnerability management being exceptional at finding vulnerabilities that do not lead to financial loss, and poor at prioritizing the ones that do.
Adversaries do not behave the way the industry currently assumes. Adversaries do not randomly exploit thousands of vulnerabilities simply because they exist. They do not need to. Financially motivated adversaries optimize for reliability, remote access, scalability, easy monetization, and operational simplicity.
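To see what that implies for prioritization, consider the sketch below. It encodes those adversary preferences as a simple scoring model and ranks a toy backlog with it. Everything here is hypothetical: the factor names, weights, and CVE entries are illustrative assumptions, not Root Evidence’s model or any carrier’s actuarial method.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    remotely_exploitable: bool   # reachable without prior local access
    exploit_reliability: float   # 0-1: does the exploit work consistently?
    internet_exposure: float     # 0-1: how widely it can be hit at scale
    monetization: float          # 0-1: how directly it converts to payout
    cvss: float                  # shown for comparison; not in the score

def adversary_value(v: Vuln) -> float:
    """Hypothetical score for what a financially motivated adversary prefers."""
    if not v.remotely_exploitable:
        return 0.0  # attacks that can't scale remotely rarely get industrialized
    return v.exploit_reliability * v.internet_exposure * v.monetization

backlog = [
    Vuln("CVE-EXAMPLE-A", True,  0.95, 0.80, 0.90, cvss=7.5),
    Vuln("CVE-EXAMPLE-B", False, 0.99, 0.00, 0.95, cvss=9.8),  # "critical," but local-only
    Vuln("CVE-EXAMPLE-C", True,  0.30, 0.10, 0.20, cvss=9.1),
]

for v in sorted(backlog, key=adversary_value, reverse=True):
    print(f"{v.cve_id}  score={adversary_value(v):.2f}  CVSS={v.cvss}")
```

Ranked this way, the CVSS 7.5 issue tops the list and the CVSS 9.8 drops to zero, which is exactly the inversion the claims data keeps showing.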
This fundamental disconnect began decades ago in a different era when the vulnerability management industry was first built. It was a time when you could scan for every known vulnerability, and organizations could realistically patch everything. Backlogs were small and temporary. It did not matter if you knew exactly what adversaries would exploit, because everything was getting fixed anyway, before attacks happened.
Over time, that world morphed into the one that exists today.
Attack surfaces grew exponentially. Software deployments exploded. Vulnerability disclosures skyrocketed. And our ability to patch everything quickly, or ever, collapsed. Prioritization was no longer optional. It became the entire problem.
But we did not have real data. We did not know which vulnerabilities adversaries preferred exploiting, or which ones led to financial loss. Incident data was sparse. So the industry did what it could. It guessed.
And the safest assumption was simple. Treat everything as dangerous, just at different levels of severity.
That assumption made sense at the time. It ultimately didn’t work, but it was “safe.”
It is not anymore.
Thankfully, access to actuarial data is increasing. There simply is not much correlation between the superset of vulnerabilities and breaches resulting in financial loss. Current discovery and prioritization models must evolve into something more grounded in real-world outcomes.
Root Evidence’s answer is to prioritize vulnerabilities in this order: