April 9, 2026
Jeremiah Grossman
This may sound strange, especially to anyone working in Application Security, but I believe Application Security won. Not perfectly. Not completely. But it achieved what it set out to do.
The reason this is hard to accept is simple: we’ve been measuring the wrong thing.
Security teams today are still buried in vulnerabilities. Backlogs are enormous. Remediation lags. New software ships faster than ever, still full of flaws, with or without AI. Most deployed web applications have never been professionally tested. And now we’re told AI will discover and exploit everything faster than we can respond.
All of that is true, except for maybe the AI part.
However, if vulnerabilities were the primary driver of risk, then the explosion in vulnerability discovery over the past decade should have produced a proportional explosion in breaches and financial loss.
It didn’t.
That’s because the mission was never to eliminate vulnerabilities or achieve perfect code. That goal was never reachable, and it was never necessary. The mission has always been to detect, disrupt, and discourage adversaries before they can cause meaningful harm.
We’ve never been able to fix every flaw. Winning in cybersecurity comes from forcing the adversary to change behavior, and that’s exactly what happened.
Security only makes sense when viewed through the lens of the adversary. Not what could theoretically happen, but what actually does. Don’t focus on every flaw; focus on the ones they choose to exploit. Possible is not the same as probable. If it were, every CVSS 10 would be a breach.
I learned this early in my career at Yahoo in the early 2000s, one of the premier Internet companies of the time. We were under constant attack: DDoS, phishing, remote command injection, everything. It felt like we were the Internet’s test environment. But the attackers didn’t use every method, and they didn’t chase every vulnerability.
Adversaries had preferences: favorite techniques, limits to their skill and patience, preferred targets, and, of course, egos.
I know because I followed them closely. I lurked in their mailing lists, IRC channels, and even MMO servers where they gathered and plotted, a kind of early DIY threat intelligence before we had a name for it. Over time, patterns emerged. The adversaries were not exhaustive. They took the path of least resistance. This wasn’t about the art of the hack. In practice, they behaved like operators with constraints.
If Yahoo’s systems withstood about 10 minutes of probing, the attackers would keep going. Around the 15-minute mark, if they still couldn’t find a way in, they moved on to Amazon, eBay, PayPal, and others. Same era, same targets, same behavior. I talked to those guys about this regularly.
Adversaries don’t try everything; they try what works. I knew this because I knew what we were vulnerable to, and more importantly, where we didn’t get exploited and could have.
Back then, everyone installed firewalls and immediately punched holes in them for ports 80 and 443. Web traffic had to get through, so it did, and the web applications behind those ports were wide open. A single tick and semicolon could trigger an ODBC error. SQL injection. Admin=0 flags in cookies. XSS payloads firing in search fields like clockwork. Nearly every site had these issues.
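To make the pattern concrete, here’s a minimal sketch of that string-concatenation anti-pattern. The schema, names, and queries are invented purely for illustration, not any real site’s code:

```python
import sqlite3

# Toy schema and data, invented for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")

def lookup_vulnerable(name: str):
    # The classic anti-pattern: user input concatenated straight into SQL.
    query = f"SELECT name, is_admin FROM users WHERE name = '{name}'"
    return db.execute(query).fetchall()

print(lookup_vulnerable("alice"))        # normal use: [('alice', 0)]
print(lookup_vulnerable("' OR '1'='1"))  # injection: returns every row

try:
    lookup_vulnerable("'")               # a lone tick breaks the query...
except sqlite3.OperationalError as err:
    print("leaked to the attacker:", err)  # ...and the error page gave it away
```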
It was obvious where this was heading. My bet was that web application attacks would dominate by bypassing the only real controls we had, firewalls and SSL. I founded WhiteHat Security in 2001 to get ahead of that shift.
What people forget, and I didn’t fully appreciate at the time, is that it took years before those attacks showed up in meaningful breach volume. That gap matters. The vulnerabilities were there. The exploits were known. The Internet did not immediately collapse, despite many predictions to the contrary.
In the meantime, I tracked everything: vulnerabilities found, remediation rates, time to fix, missed issues, and reported breaches. Because if you can’t measure something, you don’t understand it.
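As an illustration of how little machinery that measurement takes (the findings log, fields, and dates below are entirely hypothetical), remediation rate and time to fix fall out of a few lines:

```python
from datetime import date
from statistics import mean

# Hypothetical findings log; every record and date here is invented.
findings = [
    {"id": 1, "found": date(2003, 1, 10), "fixed": date(2003, 2, 1)},
    {"id": 2, "found": date(2003, 1, 15), "fixed": None},  # still open
    {"id": 3, "found": date(2003, 2, 3), "fixed": date(2003, 2, 20)},
]

fixed = [f for f in findings if f["fixed"] is not None]
remediation_rate = len(fixed) / len(findings)
time_to_fix_days = mean((f["fixed"] - f["found"]).days for f in fixed)

print(f"remediation rate: {remediation_rate:.0%}")       # 67%
print(f"mean time to fix: {time_to_fix_days:.1f} days")  # 19.5 days
```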
What stood out to me was that most vulnerabilities never got exploited. There was no recorded downtime. No financial loss. No lawsuits. Nothing the business could feel. And if the business can’t feel it, it doesn’t exist.
Then the shift came.
In 2005, the Samy worm took down MySpace in under 24 hours, and XSS went from theory to reality overnight. In 2008, mass SQL injection worms compromised thousands, then millions of sites. SQL injection, technically known since Christmas Day 1998, was now center stage. In 2011, Anonymous and LulzSec targeted Sony and many others, and we at WhiteHat Security were called in to help.
And again, I saw the same pattern. Different adversaries. Different names. Same behavior. They didn’t attack everything. They attacked what they knew. They attacked what worked. Efficiency beats creativity more often than we like to admit.
Neither I nor anyone at any company I’ve ever spoken to has ever once encountered the mythical super hacker the industry imagines: the tireless adversary who tries every path, exploits every flaw, and never gives up. 99% of organizations are simply never going to be targeted by so-called advanced persistent threats (APTs). The most typical and common adversaries have constraints. They have incentives. They have ROI. They scale what works and ignore the rest.
Around 2012, I started talking to cyber-insurance carriers. At the time, they didn’t have much actuarial data, but I knew eventually they would. Someone would have to measure financial loss, not just technical possibility. They would pay out claims to get that knowledge.
By 2014, web applications were getting hit constantly. The adversary had shifted. The losses were real. And then something else happened. Defenses improved. Prepared statements. Input filtering. Safer frameworks. Entire classes of vulnerabilities became harder to exploit at scale. Not impossible. Just harder. Less reliable. More expensive.
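For contrast with the earlier sketch, here’s the prepared-statement version of the same toy lookup (schema and names again invented); the user’s value travels separately from the SQL text, so injection payloads become inert data:

```python
import sqlite3

# Same toy schema as the earlier sketch, invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")

def lookup_safe(name: str):
    # Prepared statement: the SQL text and the user's value are sent
    # separately, so a tick in `name` is data to match, never syntax.
    return db.execute(
        "SELECT name, is_admin FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_safe("alice"))        # [('alice', 0)]
print(lookup_safe("' OR '1'='1"))  # [] -- the payload matches no row
```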
We saw it in the data. It became meaningfully more difficult to compromise systems through common web application flaws.
Meanwhile, the industry kept doing what it does best: finding more vulnerabilities. All the application security vendors competed on who could find the most. And we did. We buried poor customers in findings: deep, obscure, blind, triple-encoded, technically impressive vulnerabilities, the kind that look great in reports and terrible in prioritization meetings, the kind that rarely matter.
Just because something can be hacked does not mean it will be hacked. That idea challenges a lot of how we’ve historically prioritized security.
By around 2016 to 2018, in my view, we had reached diminishing returns in vulnerability discovery for web application security. We could find essentially anything we wanted. The remaining problems were deployment, prioritization, and remediation, in other words, the hard parts.
What value is there in finding or fixing one more vulnerability that will never be exploited, or if it is, will never lead to meaningful loss? You already know the answer. And then the data caught up.
If you talk to cyber insurers and DFIR teams today, and look at claims and successful exploits, web application attacks are not a primary driver of financial loss. This needs to be repeated: cyber-insurance carriers see little to no financial loss as a result of web application exploitation. That is why they issue policies to companies with huge vulnerability backlogs.
Somewhere along the way, attackers moved on. Ransomware. Phishing. Credential abuse. Exposed systems. A much smaller set of vulnerabilities with reliable outcomes. Because that’s what works.
Application Security didn’t eliminate software vulnerabilities; it did something more important. It raised the cost of exploitation, reduced the reliability of exploits, and forced adversaries to be selective. It took away easy wins. When that happened, attackers adapted. They didn’t try harder. They simply moved on.
That’s the signal.
Winning in cybersecurity requires applying pressure. Not fewer vulnerabilities, but fewer that matter. Not eliminating risk, but reshaping it.
We didn’t win by fixing everything. We won by making the original attack paths no longer worth it. AI will find more vulnerabilities. It will expand the list. But it doesn’t change the constraint. The constraint has always been selection, not discovery.
Attackers don’t exploit everything; they exploit what works. When that stops working, they adapt. They did.
That’s why Application Security won.