November 20, 2025

AI and the Acceleration of CVEs

Robert "RSnake" Hansen


A vulnerability management team is already saturated with incoming software vulnerabilities. Tens of thousands of new CVEs emerge every year, around 1,000 a week. Organizations can’t even process the existing flow; it is simply an overwhelming amount of new vulns coming out on top of existing vulns still being found. I got some flak about this statement the other day, with one researcher saying, “The problem may still be overwhelming but it is not 10k vulns a day.” I know that was meant as a hyperbolically large number, but one real-world design partner had vulns in the eight-digit realm, well over 10M, which, if you do the math, gets very close to that rate given how long they have been in business. Now, to be fair, I am sure there is a lot of overlap among those vulns, but the reality is that they cannot keep up.

There might be enough patch capacity if companies knew what to prioritize, but nothing widely in use is good enough at prioritization to make that feasible. We know that because companies are still getting hacked. Quod erat demonstrandum.

The defenders are drowning in triage queues and patch backlogs. Now imagine what happens when AI starts generating vulnerabilities on demand. It is my conjecture that we will see an initial steep acceleration in CVE discovery volume as large language models systematically discover every latent flaw buried in decades of legacy code. I suspect that once it gets going, the spike will not be gradual; it will look like a wall. Defenders will not be able to triage that wall of new vulnerabilities, because they cannot triage the current flow, so they will have to rely on prioritization alone.

Therefore, prioritization becomes everything. Knowing what not to fix is critical, suddenly, if it isn’t already.

Veracode recently published data suggesting that around forty-five percent of AI-generated code contains at least one vulnerability from the OWASP Top Ten. New code is not replacing insecure code with something safer; it is adding more exploitable material, unrealized vulnerability.

So, AI not only exposes the old flaws faster, it introduces new ones. Some may cheer the fact that Cursor, Claude Code, et al. are producing more code that passes QA tests and makes it into production, but I have zero confidence that that code is any safer than it was when Veracode ran its tests. That means even more vulnerable code is making it into prod, because it passes validation tests, not because it is more secure. So while 45% vulnerable appears to be a stable rate, the volume of new code accelerates!

At first, defenders will think this is a data-management issue, a scaling problem in the security operations pipeline. But what will actually happen is epistemic collapse. The rate of new findings will far exceed the capacity of human judgment to assign meaning or priority to them. The vulnerability landscape will stop being legible. You won’t be able to find any signal unless you are running batch jobs against the vulns, and whatever code you use to evaluate or prioritize them will have to operate with very minimal human oversight. It already takes several minutes to process all of the JSON files in cvelistV5 on a machine I use for exactly that. What if there were 10x the vulns, or 100x, or 1000x? Would I only be able to run a query against that data once a day? Once a week? How do you prioritize against such massive numbers of vulns when your prioritization system is stack ranking everything with no clear cutoff for when you should stop fixing?
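To make that concrete, here is a minimal sketch of the kind of batch job I mean: walking a cvelistV5 checkout and pulling out the records that clear a score cutoff. It assumes the CVE Record Format v5 field layout (containers → cna → metrics → cvssV3_1/baseScore), and the cutoff value is a hypothetical threshold, not a recommendation:

```python
import json
from pathlib import Path

def top_cves(root, cutoff=9.0):
    """Walk a cvelistV5-style tree of CVE JSON records and return
    (cve_id, score) pairs whose CVSS base score meets the cutoff.

    Assumes the CVE Record Format v5 layout: each record nests its
    metrics under containers -> cna -> metrics; adjust the keys if
    your snapshot of the repo differs.
    """
    hits = []
    for path in Path(root).rglob("CVE-*.json"):
        try:
            record = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # skip unreadable or malformed records
        metrics = record.get("containers", {}).get("cna", {}).get("metrics", [])
        for metric in metrics:
            # A record may carry CVSS v3.0, v3.1, or v4.0 blocks.
            for key in ("cvssV3_1", "cvssV3_0", "cvssV4_0"):
                score = metric.get(key, {}).get("baseScore")
                if score is not None and score >= cutoff:
                    hits.append((path.stem, score))
    return sorted(hits, key=lambda pair: -pair[1])
```

Even this trivial filter has to touch every file on every run; at 10x or 100x the volume, a single pass stops being something you can do interactively, which is the whole point.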

Attackers, of course, will thrive in that fog. They won’t bother using the same models security researchers do to mass-generate CVEs. They won’t tailor exploits to specific library versions or mutate payloads automatically. Why? Because people actually making money on vulns don’t have to! Until patching the vulnerabilities attackers are actively using becomes faster, there is no point in investing in weaponizing new vulnerabilities, reconfiguring botnets, and so on.

The result will be an inversion of effort: on one hand it will take seconds or minutes to produce a new exploit, but days or weeks for defenders to evaluate, assign ownership, patch, deploy, and validate, even if no attacker would ever have used that vulnerability. That’s not good for the defender, assuming the bad guys even care about those vulnerabilities. But the reality is they don’t. They only care about the vulnerabilities that, for whatever reason, work for them. So this new tranche of vulnerabilities just adds more noise, for no net benefit to the defender.

Meanwhile, the attackers will keep using the handful of vulnerabilities they always would have been using, without worrying about what researchers are producing… until someone forces them to. The only way to force them to shift is to fix the things they are currently using… that adds cost for them. The asymmetry that already favors the offense will widen until traditional patching cycles become farcical. In that environment, most organizations will quietly accept chronic exposure. They will reject vulnerability management, rationally, until something better comes along.

However, after talking with some knowledgeable people about this, they brought up a good point. If you project forward a few years, the system may find an uneasy equilibrium. The initial shockwave of AI-discovered flaws may taper off, even as the backlog remains. But imagine the distant future, in which AI-assisted verification and formal proofs finally yield bug-free software.

I know, I know. Please stop laughing… 🤣. But, still, such a thing is theoretically feasible.

In that world, new code is hardened by design, automatically checked, perhaps even mathematically guaranteed. What then? The attackers won’t vanish; they’ll migrate. They’ll focus on the long tail of legacy code, the forgotten binaries still running in factories, routers, hospitals, or small government systems.

The global attack surface will not disappear… it will fossilize. I don’t see the legacy code going away in my lifetime, but we may see a point where that is all that is left, if AI does its job. I know, this is sci-fi fantasyland stuff. But I think it is important to see what happens in the short and medium term, which is that AI-identified CVEs are going to grow at a breakneck pace over the next handful of years. So prioritization is an incredibly important problem to be working on, right now.

Stay Tuned For More
