April 21, 2026

It’s Not Their Fault (they did the best they could at the time…)

Greg Reber


How Tenable, Qualys, and Rapid7 built the right tool for a world that no longer exists

In 2025, the security industry published 48,448 new vulnerabilities. Not all of them affect every organization. Most do not. But the VM platforms your security team relies on start with everything theoretically applicable to the environment, surface a list that makes a phone book look like CliffsNotes, and call it vulnerability management. Before you blame Tenable, Qualys, or Rapid7, understand this: a long time ago, they built rationally on top of a foundation that is now outdated.

The problem is that nobody questioned the foundation while the checks kept clearing.

Think of a car alarm installed in 2002 to prevent car theft. It was state-of-the-art. It detected motion, sound, changes in barometric pressure, and maybe even a strongly-worded glance from a golden retriever.

Every time it goes off, your insurance policy requires you to investigate. You have investigated 4,000 times this year. Three of those times, there was actually a problem. The other 3,997 times it was wind, or a shopping cart, or the retriever again. The alarm does not know the difference between wind and a car thief; it was never designed to. It was designed to detect everything that could possibly indicate a car thief, and on that mission it is performing flawlessly – now alerting you 4,000 times per year. That is vulnerability management as designed 20 years ago, applied to today’s world. The platforms built then were not wrong, but now they’re outdated.

And they’re starting to use AI to make their flawed approach even faster!

The Only Data Available Was Theoretical

When Qualys launched in 1999, when Tenable shipped Nessus commercially in 2002, and when Rapid7 built out InsightVM over the following decade, there was no mechanism to distinguish between a vulnerability that real attackers were actively exploiting and one that was theoretically exploitable under conditions so specific they might never occur in practice. Both went on the list. The list was the product.

The National Vulnerability Database and its CVSS scoring framework launched in 2005. A genuine achievement: for the first time, a government-backed database correlated known vulnerabilities with standardized severity scores. But CVSS evaluates what an attacker could theoretically do in an ideal scenario. It does not evaluate what attackers are doing in your network on a Tuesday afternoon. It was the only signal available, so the platforms used it. A CVSS 9.8 went to the top of the list, and a CVSS 3.8 was ‘informational’. The logic was sound, but the inputs were incomplete: likelihood, for instance, was entirely absent.

What the List Looks Like for a Medium-Sized Company

Run any of these platforms against a medium-sized company with 1,000 to 2,000 assets, and the scan returns somewhere between 5,000 and 20,000 total vulnerability findings, depending on what software the organization runs and how well it patches. That is not a flaw in the scanner. It is a faithful accounting of every known CVE that applies to every piece of software detected. The scanner is doing exactly what it was designed to do.

Filter that output to vulnerabilities tagged AV:N (network-reachable) and PR:N (no authentication required), and you are looking at roughly 40 to 50 percent of findings. On a mid-market scan, that is 2,000 to 10,000 vulnerabilities a remote attacker could theoretically exploit from the internet without a username or password. Your security team is expected to triage and remediate that list. They also have ten other things to do.

To be precise about what that number does not mean: a vulnerability in a Cisco router matters only if you have Cisco routers exposed to the internet. A flaw in Apache matters only if your servers run that version and that version is reachable. The scan finds the intersection of the global CVE catalog with your environment. The resulting list is narrower than the full universe but still enormous, because modern environments run an enormous amount of software, much of which has known flaws. Sorting the pile by CVSS score (about all most teams can do at that volume) is roughly equivalent to deciding which fires to fight by the height of the flames rather than by which building is burning, or what’s inside it.
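For the technically curious, the AV:N/PR:N filter described above is mechanical: both flags are readable straight out of the CVSS vector strings the scanners already export. A minimal sketch in Python; the dictionary shape here is illustrative, not any vendor’s actual export schema:

```python
# Sketch: filter scan findings down to network-reachable (AV:N),
# no-authentication-required (PR:N) vulnerabilities.
# The finding dicts are hypothetical; real Tenable/Qualys/Rapid7 exports
# differ, but all include the CVSS vector string.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8,
     "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"},
    {"cve": "CVE-2024-0002", "cvss": 7.8,
     "vector": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H"},
    {"cve": "CVE-2024-0003", "cvss": 5.3,
     "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N"},
]

def network_no_auth(finding):
    # AV:N = attack vector is the network; PR:N = no privileges required.
    vector = finding["vector"]
    return "AV:N" in vector and "PR:N" in vector

remote_unauth = [f for f in findings if network_no_auth(f)]
print([f["cve"] for f in remote_unauth])  # the 2,000-to-10,000-item pile, in miniature
```

Note that this filter drops the local-only finding but keeps both network-reachable ones regardless of score, which is exactly why the resulting pile stays so large.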

The World Changed. The Architecture Did Not.

In the 2000s, vendors published roughly 4,400 CVEs per year. By the 2010s, the count nearly doubled to 8,400. The 2017 expansion of the CNA program sent it rocketing past 14,000 in a single year. Eight years later, by 2025, the figure had more than tripled to 48,448. The scan results a medium-sized company sees today are roughly three times larger than a decade ago.

CISA launched the Known Exploited Vulnerabilities catalog in 2021, the first structured list of vulnerabilities with actual evidence of real-world exploitation. Not theoretical severity. Confirmed exploitation, in the wild. As of this writing, the KEV catalog contains roughly 1,560 entries accumulated over years of documented attacks. The total CVE database contains over 300,000. That is one actively exploited vulnerability for every 220 theoretical ones. A medium-sized company running Tenable or Qualys might have 20 to 60 of those KEV entries in its environment at any given time. That is one of the lists that actually matter, but it is not the list the discovery platforms lead with.
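Computing that intersection is also mechanical: CISA publishes the KEV catalog as a machine-readable JSON feed, so cross-referencing scan findings against it is a set lookup. A sketch, with the catalog stubbed as a small set of real KEV CVE IDs so it runs offline:

```python
# Sketch: which scan findings appear on CISA's Known Exploited
# Vulnerabilities list? The real feed is JSON published by cisa.gov
# with roughly 1,560 entries; stubbed here with three well-known IDs.
kev_catalog = {"CVE-2021-44228", "CVE-2023-34362", "CVE-2023-4966"}

# Hypothetical scan output from a mid-market environment.
scan_findings = ["CVE-2021-44228", "CVE-2024-0001",
                 "CVE-2023-4966", "CVE-2019-0708"]

actively_exploited = [cve for cve in scan_findings if cve in kev_catalog]
print(actively_exploited)  # the 20-to-60-item list that actually matters
```

The lookup itself is trivial; the point is that the evidence-based list is two orders of magnitude smaller than the theoretical one the dashboards lead with.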

This is because the platforms were designed when the number of vulnerabilities they looked for determined their value.

The platforms responded by adding layers. EPSS scores, KEV flags, threat intelligence feeds, all bolted on top of a system designed to report theoretical risk. The result is a dashboard that surfaces thousands of findings, flags a few dozen as known-exploited, assigns some score based primarily on CVSS to all of them, and leaves the security team to reconcile the signals.
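The reconciliation the team is left to do often amounts to a hand-rolled composite sort in which the bolted-on signals outrank the original score. A sketch of that idea; the ordering logic here is illustrative, not any platform’s actual formula:

```python
# Sketch: rank findings so evidence of exploitation beats theoretical severity.
# KEV membership dominates; EPSS (estimated probability of exploitation in
# the next 30 days) comes second; CVSS breaks ties among the theoretical rest.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.01, "kev": False},
    {"cve": "CVE-B", "cvss": 6.5, "epss": 0.92, "kev": True},
    {"cve": "CVE-C", "cvss": 8.1, "epss": 0.04, "kev": False},
]

def priority(f):
    # Tuple comparison: True sorts above False, then higher EPSS, then CVSS.
    return (f["kev"], f["epss"], f["cvss"])

ranked = sorted(findings, key=priority, reverse=True)
print([f["cve"] for f in ranked])  # ['CVE-B', 'CVE-C', 'CVE-A']
```

Notice the outcome: the CVSS 9.8 lands last, because nothing suggests anyone is exploiting it. That inversion is exactly what a CVSS-first dashboard never shows.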

Tenable, Qualys, and Rapid7 are full of smart people who understand the problem. But the architecture of their platforms, the pricing models built around asset counts, the customer expectations shaped by two decades of theoretical comprehensiveness as a product – all of it makes the pivot genuinely difficult. You simply can’t tell 43,000 paying customers that the list is now the wrong list.

So instead, you add a filter.

The filter helps. It does not solve the problem. The problem was baked in around 2002, when the only data available was theoretical, and the platforms made the rational decision to treat every theoretically exploitable flaw as a live risk. That made complete sense then. It makes progressively less sense every year as the CVE count climbs and the evidence base for what attackers actually exploit grows richer.

The platforms were built for a landscape that no longer exists. The world has moved on…

Evidence Scan is free for enterprise companies to preview.