November 13, 2025
Robert "RSnake" Hansen

What started as a post meant to call out the infosec industry for its long-standing failures, and to argue that subrogation lawsuits from the cyber insurance industry were going to become a reckoning, turned into an interesting conversation with Shubhas Menon in the comments section.
His question was: has the market finally discovered a functional feedback loop for cyber risk?
His point was that if cyber insurance is now suing vendors for negligence, effectively pricing our incompetence, then yes, the market is beginning to wake up to reality. For the first time, there’s an external mechanism forcing accountability on an industry that has mostly operated on reputation, fear, and theater.
While uncomfortable, it’s healthy. Yes, it will no doubt be the death of many shoddy infosec practices and companies if they don’t adapt. Some will be afraid of that, but I think it’s incredible progress playing out before our very eyes.
But as Shubhas pointed out, this new accountability loop comes at a cost, and someone has to bear it; he believed the cost would simply get passed back to the organizations that buy these products and services. When insurers start quantifying risk and demanding measurable controls, it won’t stop with the security vendors; it will cascade downstream into higher premiums and inflated compliance regimes, with the cost trickling down to the organizations. He had a good point, but there’s one more factor at play here.
The answer is unit economics.
Security has been shielded from true market discipline for too long. We’ve gotten used to selling complexity, opacity, and recurring consulting hours. But the moment insurers, investors, and boards begin treating cyber risk as an actuarial problem, we need to act in accordance with what is actually good for the business, not just with what pads the infosec sales quota.
The survivors in this new ecosystem will be the ones who can drive down the unit cost of protection. Not through another checkbox framework or yet another next-gen solution, but through real, measurable efficiency and influence in terms of loss avoidance. Or, at a very minimum, not creating more liability with their existence, which you’d think isn’t that difficult a task, but I’m not so sure when security hardware has become a favorite target for attackers trying to breach networks externally.
I have a working theory that companies aren’t worth what they think they are. Every company carries vulnerabilities of a kind that will lead to losses if you fast-forward the timescale far enough. This ticking time bomb is what I refer to as “unrealized loss”. The loss is only realized when attackers choose to exploit the vulnerability. So who should bear that downside? Should it be the company, or the insurance provider? I would argue it’s the insurance provider, and played out over a long enough time horizon, they’re going to want to pass that cost back to the company. Which means the company needs to find a way to reduce the cost of finding and fixing those issues so that insurance is no longer triggered at all, or, if it is, the claim sizes are small.
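One way to make “unrealized loss” concrete is as an expected value over a time horizon. Here’s a minimal sketch of that idea; every probability and impact figure below is invented purely for illustration, not drawn from any actuarial table:

```python
# Hypothetical sketch: "unrealized loss" as expected loss over a time horizon.
# Each vulnerability gets an assumed annual probability of exploitation and an
# assumed impact if exploited. None of these figures are real data.

def unrealized_loss(vulns, years):
    """Expected loss a company is carrying if nothing gets fixed for `years`."""
    total = 0.0
    for annual_p, impact in vulns:
        # Probability the vuln is exploited at least once over the horizon.
        p_ever = 1 - (1 - annual_p) ** years
        total += p_ever * impact
    return total

# (assumed annual probability of exploit, assumed impact in dollars)
portfolio = [
    (0.02, 5_000_000),   # rarely targeted, catastrophic if hit
    (0.30, 250_000),     # commonly exploited, moderate impact
    (0.10, 1_000_000),
]

for horizon in (1, 5, 10):
    print(horizon, round(unrealized_loss(portfolio, horizon)))
```

The point of the sketch is the shape, not the numbers: the longer the horizon, the closer the unrealized loss creeps toward the full impact, which is exactly why an insurer playing the long game wants that cost priced back to the company.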
Subrogation lawsuits aren’t the only market force that will create this budgetary tightening regime. AI, automation, and increasing competition will make this unavoidable.
We ran into one company that said they don’t scan everything because the cost of scanning has become larger than the cost of a breach. Imagine how over-engineered we have become when that is our reality. The marginal cost of defense must start trending downward, or defense will simply be too expensive to afford.
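That inversion is easy to see with back-of-the-envelope arithmetic. A minimal sketch, where every figure is invented for illustration:

```python
# Back-of-the-envelope break-even: when does scanning everything cost more
# than the breach it is meant to prevent? All figures are illustrative.

assets = 50_000
cost_per_scan = 40          # fully loaded: license, compute, triage, fixing
annual_scan_cost = assets * cost_per_scan

breach_probability = 0.10   # assumed annual likelihood without scanning
breach_cost = 4_000_000     # assumed average cost of a breach
expected_breach_loss = breach_probability * breach_cost

# With these assumptions, scanning ($2.0M/yr) already exceeds the expected
# breach loss ($0.4M/yr) -- the inversion that company described.
print(annual_scan_cost, expected_breach_loss)
print("scanning is rational" if annual_scan_cost < expected_breach_loss
      else "scanning is priced out")
```

With those made-up numbers, the only ways scanning becomes rational again are to cut the fully loaded cost per scan or to credibly raise how much expected loss it actually averts.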
Said another way, if your security solution is expensive, it’ll get priced out of the market, and you will be looking for work. So it’s better to start thinking about this now.
Some might see this as pessimism, but I view the whole situation as a beautiful evolution. Every industry eventually converges on cost efficiency as its final arbiter. Infosec (apparently) is no exception.
So yes, cyber insurance may be the mechanism that finally introduces a functioning feedback loop… software liability of a kind. At the moment it might feel like a punitive feedback loop, but it’s really a corrective one, based on actuarial data. It will force us to reckon with the economic realities we’ve ignored for decades. If we’re smart, we’ll use this moment not to cling to our old myths, broken systems, and tired compliance mandates, but to build a future where security is scalable, provable, and priced like any other utility.
Because at the end of the day, survival in this industry, as in any ecosystem, is a function of the fully loaded cost. In the case of vulnerability management, that is the unit cost of scanning each asset plus the hidden costs of managing the scanner, triaging the output, and fixing the issues identified. Let me walk you through it. For a security product to survive, two things must hold:

- It must work as advertised.
- Its fully loaded cost must be less than the expected cost of whatever it claims to prevent.

Of course, many vendors will skip the first bullet entirely and avoid making any claims at all about the efficacy of their products. In that case, though, customers will know that the vendor does not stand behind its technology, and that is easy enough to identify. What this really means is that the industry must drive down the unit cost of scanning while increasing the efficacy of the scan results.
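That survival condition can be put into numbers. A hedged sketch, where every parameter is hypothetical and the `efficacy` term stands in for exactly the claim many vendors avoid making:

```python
# Sketch of the survival condition for a vulnerability-management product:
# the fully loaded cost of protection must stay below the expected loss it
# actually averts. All parameters are hypothetical.

def fully_loaded_unit_cost(scan, manage, triage, fix):
    """Per-asset cost: the scan itself plus the hidden costs around it."""
    return scan + manage + triage + fix

def survives(assets, unit_cost, expected_loss, efficacy):
    """True if the product averts more expected loss than it costs.
    `efficacy` is the fraction of expected loss the product actually
    prevents -- the number a vendor must stand behind."""
    return assets * unit_cost < efficacy * expected_loss

unit = fully_loaded_unit_cost(scan=5, manage=3, triage=8, fix=20)  # $36/asset
print(survives(assets=10_000, unit_cost=unit,
               expected_loss=2_000_000, efficacy=0.5))
```

Notice that in this toy model the scan itself is the cheapest term: driving down the triage and fix costs moves the needle far more than a cheaper scanner license, which is why the fully loaded view matters.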
This is not an easy challenge, but it’s the way things have to be for security products to survive in this new market. Of course the alternative is that vendors maintain the status quo. Survival is optional, I suppose.
