January 26, 2026
Robert "RSnake" Hansen

This post builds directly on several earlier pieces. If you haven’t read the following posts yet, I strongly recommend starting there.
Peak Patch Management and Busy Work
The more time I spend in cybersecurity, the more I see misaligned incentives everywhere. Sometimes they’re egregious, sometimes they’re subtle, but ultimately, all of them make us less secure. I decided to write down a few of the poor incentives I have seen thus far, in the hope of starting to tackle them. But before I get there, let’s dispel a couple of myths.
The first is the concept of “risk-based security”. I love the idea, but in practice, there is unfortunately no such thing. To know the risk to your company, you must know the likelihood of loss, and since no one tracks likelihood or has any meaningful way to know it, there is no such thing. Risk = cost of the bad thing happening times the likelihood of it happening. If you don’t know the probability, you cannot calculate the risk, and therefore nothing in the market is actually measuring it. It reminds me of the old days, when we decided to use the word CAPTCHA to describe something that by definition isn’t a CAPTCHA. It’s not a Completely Automated Public Turing test to tell Computers and Humans Apart; it’s a test that distinguishes a narrow group of humans, the ones who are conscious, speak the language, are intelligent, have working eyes, and so on, from computers that aren’t programmed well enough to bypass the tests. That is not a CAPTCHA, even if we call it one. Risk-based security likewise… isn’t.
Risk-based security does not exist in practice, and since it doesn’t, I see more and more companies trying to build it internally, which is a pretty huge indictment of our industry’s ability to build meaningful risk reduction into security products. In the absence of measurement, vendors guess, and they guess poorly. That’s provable, because as an industry we continue to see massive losses. We guess with colors, grades, scores, and decimal points that feel quantitative but are in fact subjective. There are a few valiant efforts to do better, like FAIR, but due to the complexity of deploying it, FAIR has not been nearly as successful as it could have been.
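To make the missing ingredient concrete, here is a minimal sketch of the kind of Monte Carlo expected-loss estimate a FAIR-style analysis performs. Every parameter in it is an invented placeholder, not a real measurement, which is exactly the problem: without a defensible likelihood input, the output is a guess dressed up as math.

```python
import random

# Minimal FAIR-style Monte Carlo sketch. Every parameter below is an
# invented assumption; nobody has measured values for these, which is
# the article's point.

TRIALS = 100_000
loss_event_frequency = 0.3       # assumed: average loss events per year
loss_magnitude_low = 50_000      # assumed: per-event loss floor ($)
loss_magnitude_high = 2_000_000  # assumed: per-event loss ceiling ($)

def simulate_year() -> float:
    """One simulated year of losses."""
    # Crude stand-in for a Poisson draw: 10 coin flips whose total
    # averages out to loss_event_frequency events per year.
    events = sum(1 for _ in range(10)
                 if random.random() < loss_event_frequency / 10)
    # Sample a magnitude for each event between the assumed bounds.
    return sum(random.uniform(loss_magnitude_low, loss_magnitude_high)
               for _ in range(events))

expected_annual_loss = sum(simulate_year() for _ in range(TRIALS)) / TRIALS
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
# Change loss_event_frequency and the answer swings wildly. Garbage
# likelihood in, garbage "risk" out.
```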
To use a simplistic model, a vulnerability rated CVSS 9.2 means nothing if it is attached to an asset worth nothing, and it means far more if it is attached to the one system that actually matters, yet that context is almost never present. And it matters even more if adversaries are actively exploiting said vulnerability.
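Here is a hypothetical sketch of what even crude context-weighting would look like. The field names and weights are invented for illustration; nothing like this ships in the products being criticized, which is the point.

```python
from dataclasses import dataclass

# Hypothetical prioritization sketch. The fields and the 3x exploitation
# multiplier are invented for illustration, not taken from any real tool.

@dataclass
class Finding:
    cvss_base: float          # vendor's context-free 0-10 score
    asset_value: float        # assumed: business value of the asset, 0-1
    actively_exploited: bool  # assumed: known in-the-wild exploitation

def contextual_priority(f: Finding) -> float:
    """Weight the raw score by what the asset is worth and whether
    adversaries are actually using the bug."""
    exploit_multiplier = 3.0 if f.actively_exploited else 1.0
    return f.cvss_base * f.asset_value * exploit_multiplier

# A 9.2 on a worthless asset ranks far below a 6.0 on the one system
# that matters when the latter is under active exploitation.
print(contextual_priority(Finding(9.2, 0.01, False)))  # 0.092
print(contextual_priority(Finding(6.0, 1.00, True)))   # 18.0
```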
When Jeremiah and I got started, there really wasn’t a market for infosec, so there was no money in it. Now we have built entire pipelines of cybersecurity people coming out of universities, and we have begun to hear about things like the employment gap in infosec. It’s another fallacy. We cannot and will not ever hire our way to security.
First of all, money chased passion out of the room. We stopped hearing people’s visions and zeal for security and started hearing about the market, pay scales, and reporting structures. We no longer have the kind of passion in our industry that we had back when it began. Worse yet, the people who used to be extremely passionate have gone on to other things, like management or retirement, or have exited the industry entirely. Mentorship vanished and was replaced with certification factories. The average practitioner is asked to operate systems they don’t understand, with incentives that do not reward curiosity. Therefore, even if we could hire more people into these roles, we wouldn’t want them, because they don’t have the desire or passion for the role. Most are button-pushers, and that role is best automated out of existence as soon as possible. So the companies that survive will be the ones that remove the need for these “button masher” employees in favor of automation, because automation is wildly cheaper.
In the meantime, executives still need to hire, so they scrape the bottom of the barrel for anyone with a hint of experience, hires who stay only long enough to blame the last guy for the state of affairs when the company is breached. The CISO knows these people aren’t the best and brightest, but CISOs are evaluated on their ability to close the headcount gap, and since there is no measuring stick to tell whether a candidate is good or bad, the board simply has to take it at face value that they got the best. Executives, meanwhile, know they’ll get breached, because they know nothing works and their team is largely composed of inexperienced staff.
When we were working at Bit Discovery, one of the weirdest conversations we kept having was with executives who feared the results, because they knew we’d find a lot more once we had a better understanding of their attack surface. They didn’t want to find more vulnerabilities; they were already drowning in them. It made selling EASM hard, because we represented more risk. Not risk in the hacker sense, but in the sense that if they know about a vulnerability, do nothing about it, and it leads to loss, they can be held responsible. The risk wasn’t to the company; it was personal liability, which creates a bad incentive. Liability follows knowledge, not ignorance, so discovery itself becomes dangerous. This is why entire categories of tooling that find more assets and more vulnerabilities are resisted at the top, even as they are praised in public by researchers who know the tooling works.
We tell ourselves the internet should be fully scanned. It can’t be, with today’s tooling. Organizations don’t know where everything is unless they have already invested in EASM. The vendors are too expensive, and in some cases the cost of scanning the totality of an organization’s internet-facing assets actually exceeds the cost of the breaches being prevented. No budget can cover full scanning at today’s unit cost of scanning. Even if full scanning were possible, the increased volume of vulnerabilities that never lead to breach or loss would overwhelm teams that already cannot keep up. These are their own words, by the way… this is what VM teams are telling us right now.
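Back-of-the-envelope arithmetic shows how that budget math breaks. Every figure below is an invented placeholder; substitute your own asset counts and vendor pricing.

```python
# Back-of-the-envelope sketch with invented numbers.

internet_facing_assets = 250_000     # assumed: total discovered assets
scan_cost_per_asset = 50.0           # assumed: vendor unit cost, $/asset/year

expected_breaches_per_year = 0.4     # assumed: the likelihood nobody knows
cost_per_breach = 4_500_000          # assumed: ballpark per-breach loss ($)

total_scan_cost = internet_facing_assets * scan_cost_per_asset
expected_breach_loss = expected_breaches_per_year * cost_per_breach

print(f"Full-coverage scanning: ${total_scan_cost:,.0f}/yr")      # $12,500,000/yr
print(f"Expected breach loss:   ${expected_breach_loss:,.0f}/yr")  # $1,800,000/yr
# At these made-up unit costs, the control costs roughly 7x the loss it
# is supposed to prevent. Until the unit cost drops, full scanning loses.
```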
The core issue is that stoplight infosec, with its red/yellow/green grades or 0-10 numeric scores built on highly suspect inputs, lacks context, and context is precisely what you need to make prioritization decisions. Since the vendors cannot solve this problem, and most of the time don’t even bother addressing it, the customers are left to clean up the mess with sometimes tens of millions (yes, really) of vulnerabilities that they cannot prioritize, knowing full well that the vendors haven’t bothered to help. Why should they? The vendor already sold the license, and they know none of the other vendors have bothered to address the issue either. So vendors happily report raw CVSS base scores as fact and shrug while customers drown in badly prioritized risk.
Patching has also reached what we refer to internally as peak patching. We have run out of resources: our ability to patch has always been limited by our complexity and our staff, and for a variety of reasons, there is pushback on automation. That tension is almost always rooted in worries about outages, and as our environments get more complicated, we become more likely to suffer catastrophic, unforeseen outages.
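One way to see peak patching is as a queue: when new vulnerabilities arrive faster than a fixed-size team can remediate them, the backlog grows without bound no matter how hard anyone works. A toy model, with rates invented purely for illustration:

```python
# Toy backlog model with invented rates, for illustration only.

new_vulns_per_week = 500            # assumed: findings arriving per week
patches_per_engineer_per_week = 15  # assumed: sustained throughput per person
engineers = 20

capacity = engineers * patches_per_engineer_per_week  # 300 patches/week
backlog = 0
for week in range(52):
    backlog += new_vulns_per_week
    backlog -= min(backlog, capacity)

print(f"Backlog after one year: {backlog:,}")  # 10,400 and still climbing
# Arrivals above capacity mean the queue grows linearly forever. Past
# that point, more dashboards don't help; only automation (raising
# capacity) or context (shrinking what must be patched) changes the slope.
```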
Vulnerability management vendors are rewarded for volume, not relevance to the business, let alone risk to the company. In competitive evaluations, the scoreboard favors whoever finds the most issues, regardless of whether the resolution of those issues will reduce risk. A tool that reports fewer findings because it understands context looks weak next to one that floods the page with noisy alerts. This trains buyers to equate quantity with diligence and trains vendors to optimize for spectacle rather than insight. I have been in a number of vendor evaluations where people only ask, “Why didn’t you find what the other guy did?” In one example, years ago at Whitehat, a competitor found 100 pages of false positives, and we almost lost that deal because we didn’t find the false positives too.
At the bottom of the market, the incentives become even more distorted. Low-cost vendors survive by amplifying urgency, or even by becoming business roadblocks themselves, as if curing your erroneously assigned risk grade somehow made you a better business. They find everything they can… high risk, low risk, imaginary risk… and then compress it all into the same grade, as if that were useful or meaningful in any context. Everything is critical to these vendors, because fear closes deals. Precision would require saying that some things do not matter, and that honesty would reduce revenue, probably to zero, due to the absolute dearth of valuable vulnerability data.
DFIR firms face a different incentive maze. They are selected by breach coaches whose primary concerns are cost containment and predictability. Speed of results, keeping costs down, and minimal disruption to the company trump deeper forensic clarity about the initial access vector. The best outcome for the firm is to be invisible, efficient, and inexpensive, when in reality that does not help the ecosystem at all, because we never find out what the adversary has been up to. Many of these firms are so bad that I don’t know how we could ever trust their clients to tell the consumer what happened, since there is no way for them to know. In many cases, the DFIR firms don’t even look at all!
Assessors like QSAs sit in an almost impossible position. Qualified Security Assessors are nominally enforcers of a standard, but in practice they are paid by the entities they evaluate. That is a moral hazard by definition. The rational behavior is to interpret requirements generously to shield clients from failure, and only in some very specific cases will the banks consuming the results push back rather than accept the QSA’s word at face value. For the QSA to push back on the customer’s results, or worse yet on the standard itself, would threaten the very ecosystem that sustains them.
None of these actors are malicious. Each is responding completely rationally to the game they are placed in, but the combined effect is paralysis. Every participant benefits from maintaining the current narrative while collectively acknowledging, at least in private, that it does not work. Change threatens too many revenue models at once.
This is why progress feels impossible. Any attempt to introduce real risk-based thinking collides with incentives that punish it, or are at least at odds with it. Fewer findings and less patching look like negligence. Deeper DFIR investigations look like inefficiency. This is Infosec Moneyball. We have to start thinking this way.
And yet the system is already cracking. Insurance pressure, litigation, regulatory scrutiny, and plain financial loss are beginning to override these incentives. We know it because all of the people with bad incentives are talking to us like we’re priests at the confessional, admitting that these incentives aren’t working at all anymore. It’s like the parasite has sucked too much blood and the host is now dying; I can’t describe the sensation I get when I talk to these players in any other way. The only way to save the host is to start spending less and doing more.
The Infosec Moneyball math of expected loss does not care about bakeoffs or scorecards. Capital investment eventually demands coherence and facts to back it up, especially when belt-tightening is being considered across the organization. The path forward is not a single reform or a new framework; it is the slow dismantling of perverse incentives, one by one. This will be uncomfortable, because it requires people to win less in the short term in order to survive in the long term. But the alternative is likely worse: insurance eats the world and tells us all what to do, since we are clearly not the adults in the room. We become the home inspectors, and they the policy makers. The system we have built cannot correct itself on its own. From what I can tell, the cyber insurers don’t want to have to do this, and no one I have spoken to in infosec wants it either, at least not once they understand what it means for their livelihoods. Any change in action and thinking has to be made with all of these incentives in mind.
Yes, this means there are likely a lot of dead infosec companies walking. They simply aren’t paying attention to the sea change that’s happening. They don’t get that their incentives need to change. They don’t understand that there is no room for fluffy spending anymore. They aren’t retooling and rethinking, and sadly, my prediction is that this will mean a lot of doors closing in the next decade. If you want to stay employed, you need to start paying attention to these incentives and focus on what happens when unit cost and breach/loss reduction become the only metrics. That is where the real incentives are, because like every industry, infosec isn’t independent of the capital markets, and like every industry, the winners end up dropping the price and increasing value to kill the competition. If your company isn’t doing both, a reckoning is coming.