April 7, 2026

The AI Vulnerability Surge That Doesn’t Change a Thing

Jeremiah Grossman


If there is one thing cybersecurity is good at, it is making predictions. From dystopian doom to well-intentioned optimism, the industry never lacks imagination, and it is always ready to sell fear and solutions the way others sell weight-loss pills.

We were told passwords were going away. A decade later, we have more than ever. We had the Mirai moment, when IoT botnets were anticipated to destabilize the internet. Some attacks came. The collapse didn’t. We were told the perimeter was dead. Zero Trust would replace it. Instead, firewalls, VPNs, and network controls are still everywhere, with few true Zero Trust implementations. Platforms were supposed to eliminate tool sprawl. Most organizations now have more tools than they can manage.

These ideas and others made for fun conference keynotes. The predictions just didn’t happen.

To this day, most compromises still begin the same way: phishing, credential reuse, compromise of exposed and forgotten systems, and exploitation of vulnerabilities that are years old.

And now the prevailing prediction in vulnerability management is that AI-enabled adversaries will uncover and exploit a larger percentage of bugs, faster. We are told we are entering a world where software flaws are discovered, weaponized, and exploited before anyone even has time to file the ticket to fix them.

At first glance, it may sound reasonable. Recent research shows AI can identify an impressive number and variety of vulnerabilities in source code and generate working exploit code from public disclosures. From one long-time offensive security researcher to another, the demonstrations are genuinely impressive.

That said, I do not agree with the conclusion that this suddenly translates into a significant new risk of widespread exploitation.

Because absolutely none of this fundamentally changes the current threat landscape. Here is the situation right now.

Read any vulnerability statistics report or spend a few minutes with a vulnerability management team at a mid-to-large organization, and you will hear the same thing. They are already carrying persistent backlogs of vulnerabilities, lagging remediation rates, and lengthy time-to-fix windows.

And yet, the vast majority of these vulnerabilities are never exploited, and the organizations carrying them do not experience material financial loss. Year after year, it has been this way.

Reports and conversations with DFIR companies consistently reveal that only between 1–2% of all known CVEs have ever been exploited.

Adversaries do not care about the other roughly 98% of vulnerabilities. There are a couple of explanations worth considering: adversaries may prefer their existing tooling and attack paths, or there may simply not be enough adversaries to take advantage of everything that is already exposed.

Whatever the reason, the data is clear. There’s no real evidence that most vulnerabilities are a pressing risk of financial loss to begin with.

Sure, AI will almost certainly accelerate vulnerability discovery and add to everyone’s existing backlog. But that is not the constraint. The constraint has never been vulnerability or exploit availability. It has always been selection.

The idea that AI will suddenly weaponize all of them assumes something adversaries have never demonstrated: that they want to exploit everything.

They don’t.

Only cybersecurity seems to operate on the premise that anything that can be hacked will be hacked, or everything that can be exploited will be exploited. If that were true, the last decade of hundreds of thousands of known and cataloged vulnerabilities would consistently lead to compromise.

They don’t.

Financially motivated adversaries are sentient, and they respond to incentives. They have ROI models and a business to run. They scale what works, and only a small subset of vulnerabilities meet that bar. Maybe nation-state adversaries will demonstrate a wider use of vulnerability exploits aided by AI, but relatively few organizations are in their crosshairs.

Either way, it is not that either adversary class cannot exploit a wider variety of vulnerabilities if they wanted to. It is that they do not need to.

What they are doing already works. As AI swells the number of known vulnerabilities, the share that has never been exploited may soon approach 99.999%.

When we figure out how to consistently eliminate the small percentage of vulnerabilities that actually matter, when our defenses meet or exceed their current capabilities, only then will we force the adversaries to shift, innovate, and adapt. Until then, there is no reason for them to do so.

And when they do shift, that is the best signal that what we are doing at Root Evidence is working; forcing that shift is our intent. When the adversary chooses easier targets, exploits a wider range of vulnerabilities, or changes tactics entirely, that is ultimately what we are looking for.

