October 13, 2025

The Epoch Theory of Cybersecurity

Jeremiah Grossman

Epoch (noun): A distinct period of time marked by notable characteristics, events, or prevailing conditions that set it apart from other periods.

Introduction

Everyone in cybersecurity shares the same obsession: the future. We discuss where technology is heading, debate about what adversaries will exploit next, and argue how defenses must evolve. Those who can predict that arc before it arrives gain a decisive advantage. They can prepare before the attacks come, invest where the impact multiplies, and force adversaries to spend far more time and money than they do. In the economic warfare of cybersecurity, that is the winning strategy.

Wayne Gretzky, the greatest hockey player in history, once said, “Skate to where the puck is going, not where it has been.” 

For more than 25 years I have lived that mindset in cybersecurity, anticipating where the field was headed and building ahead to meet the threats that I see on the way. I was there at the dawn of application security and founded WhiteHat Security when many dismissed the idea that adversaries would ever bother with the application layer – until they did. Next, I sounded the alarm about ransomware before it became a household and business crisis, joining SentinelOne early to focus on ML-enhanced Endpoint Detection and Response. And then I recognized the growing importance of attack surface management as adversaries scaled their operations to exploit unknown corporate assets, which led to the launch of Bit Discovery.

Prediction after prediction, innovation after innovation, success after success, through these companies and many others I have had the privilege to be involved with, I have helped protect thousands of businesses and, by extension, millions of people. I often get asked, “How did you call that one?” These were not lucky guesses. They were reasoned, evidence-based predictions grounded in a strategic model I have quietly developed with highly trusted colleagues over decades. I’ve named the model the Epoch Theory of Cybersecurity. It is a deep study into adversary incentives and evolving technologies, tested and refined, but never before shared publicly.

Accurately predicting the future of cybersecurity is not about chasing every theoretical risk. What is possible is not the same as what is probable, a principle that must never be ignored. Simply because something can be hacked, or hacked in a particular way, doesn’t mean it will. 

There is another truth too often forgotten in our field: adversaries are human, not an act of nature. They chase incentives, adapt to obstacles, and repeat what works until something forces them to change. 

Understanding these principles has heavily guided my product vision, investments, and research, and it has proven right time and again. The ultimate goal is simple: shift the balance of time and money against the adversary. That is how we win. 

And some do, in fact, win.

Adversary Fundamentals

The first step in anticipating where the ‘cybersecurity puck’ will be is recognizing that we are not identifying and fixing risks as an academic exercise. The goal is to stop bad people from doing bad things to both individuals and organizations.

As cybersecurity professionals our job is to make those incidents rare and to limit the damage they cause to life and business. At its core, risk management is about understanding our adversaries and countering them.

  • Cybersecurity’s goal is to stop bad actors, not just fix risks on paper.
  • Incidents will still happen, so the aim is to make them rare and limit damage.
  • Success depends on understanding and countering adversaries.

Adversary Classification

Every adversary type behaves differently, but they all share one fundamental truth: they are sentient. They have purpose, motivations, and respond to incentives. Their actions are guided by rational self-interest and shaped by the market forces they operate in. 

Hacktivists are ideologically driven. They use digital tools as a form of protest or political expression, choosing symbolic targets such as governments, corporations, media outlets, or organizations they see as culpable for some wrongdoing. Their goal is not profit but disruption, exposure, and messaging. Most rely on off-the-shelf exploits, botnets, and phishing kits. Tactically, they favor high-visibility actions like DDoS attacks, website defacements, data leaks, or doxxing. While they lack the sustained resources of cybercriminals or nation-states, their opportunistic and loosely organized campaigns can still cause serious reputational and operational damage when intent and timing align.

Cybercriminals are financially motivated above all else. They select targets based on potential monetary gain and ease of exploitation, going after individuals, businesses, and institutions with valuable data or systems they can ransom, resell, or otherwise monetize quickly. They rely on proven techniques such as credential theft, business email compromise, ransomware, fraud, and large-scale data breaches. They repeat what works until defenses improve and then adapt rapidly. Their operations are professionalized, scalable, and constantly evolving, making them the most persistent economic threat in cyberspace.

Nation-state adversaries operate to advance geopolitical objectives rather than profit. They select targets that support national interests such as intelligence gathering, economic advantage, or disrupting rival governments and critical infrastructure. Backed by state funding, research, and intelligence support, they have the resources to conduct operations that are highly sophisticated, stealthy, and long-lasting. They excel at advanced persistent threats, using custom malware, zero-day exploits, and carefully planned supply chain or insider compromises. Strategically, they seek information rather than financial gain, and they prioritize patience, persistence, and deniability, often maintaining footholds in networks for years.

  • All adversaries are sentient: guided by purpose, incentives, and self-interest, which shape their targets, tactics, and persistence.
  • Hacktivists: Ideology-driven; aim for disruption and publicity using common tools and timed, high-visibility attacks.
  • Cybercriminals: Profit-driven; pragmatic and adaptive, using scalable techniques like ransomware, phishing, and fraud for quick gains.
  • Nation-states: Geopolitically driven; patient, well-funded, and highly sophisticated, focusing on strategic long-term targets.

Adversary Psychographics

Understanding the psychographics of a cyber adversary is critical because the antagonists are not faceless attackers. They are sentient actors with purpose. They have motivations, preferences, and tradeoffs. Their behavior is driven by incentives and constrained by risks. Most of their choices come down to rational self-interest within a market of opportunity. When defenders understand that, they can better anticipate which targets adversaries will go after, which tactics they will use, and what deterrents might alter their decision making. This insight lets us move beyond purely technical defenses and focus on shaping the adversary’s cost-benefit calculus. The goal is to make attacks less attractive, less profitable, or riskier, which in turn reduces both the likelihood and the impact of breaches.

Core attributes

  • Motivations are instrumental. Whether ideological, financial, or geopolitical, the goal is to achieve outcomes that advance the actor’s interests.

  • Decision making is economical. Choices are governed by cost-benefit analysis that weighs effort, time, risk of exposure, and expected reward.

  • Risk tolerance varies. Some actors accept high risk for the possibility of high reward, while others trade speed for stealth.

  • Resource levels differ. Skill, tooling, and funding influence what is feasible, but even low resourced actors exploit easy opportunities.

  • Time horizons range from minutes to years. Short horizon actors focus on quick monetization, while long horizon actors invest in persistence and reconnaissance.
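The economical decision making described above can be sketched as a simple expected-value calculation. The function and every number below are illustrative assumptions, not figures from any real campaign:

```python
def attack_expected_value(reward, p_success, effort_cost, p_caught, penalty):
    """Toy expected value of an attack for a rational, self-interested actor."""
    return p_success * reward - effort_cost - p_caught * penalty

# A low-friction, high-yield opportunity stays attractive...
ev_soft_target = attack_expected_value(
    reward=100_000, p_success=0.30, effort_cost=5_000, p_caught=0.01, penalty=500_000
)
print(ev_soft_target)   # positive: the attack is worth running

# ...until defenses raise effort and exposure enough to flip the sign.
ev_hardened = attack_expected_value(
    reward=100_000, p_success=0.05, effort_cost=20_000, p_caught=0.10, penalty=500_000
)
print(ev_hardened)      # negative: a rational actor stops, shifts, or innovates
```

The point of the sketch is the sign flip: defenders do not have to make attacks impossible, only unprofitable.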

Common patterns of action

  • They exploit what works. Successful techniques are repeated until defenders make them uneconomical.

  • They follow the path of least resistance. Low friction, high yield targets attract more activity.

  • They adapt. When costs rise or avenues close, they innovate or shift vectors.

  • They operate in ecosystems. Access brokers, tool marketplaces, and affiliate networks concentrate expertise and scale operations.

Adversary Options

For defenders, real progress is visible when adversaries can no longer rely on the same old playbook and are forced to innovate or migrate. That moment, when cost and risk compel attackers to change course, is the clearest signal that defenses are working. The defensive objective is to raise the marginal cost and increase the visibility of successful attacks to the point that the attacker’s simplest options become unattractive.

Because of their psychographic profiles, most adversaries favor a small set of reliable entry points such as business email compromise, phishing, vulnerable web applications, exposed cloud services, and other low-friction opportunities. They repeat what works until defenders make it uneconomical. Once a given attack vector no longer delivers enough reward because defenses have caught up, attackers face four basic choices.

Stop: They abandon the campaign or tactic because the return no longer justifies the effort or risk.

Focus on softer targets: They keep using the same technique but shift to easier victims who have yet to close the gap.

Innovate: They refine tooling, develop new techniques, or combine methods to get past the improved defenses in the same attack vector.

Migrate: They move to a different vector where barriers are lower or rewards are greater.
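Those four choices can be framed as a toy decision rule: the attacker compares expected returns across options and picks the best one, or quits when nothing clears the bar. All names and numbers here are hypothetical:

```python
def adversary_choice(keep_ev, softer_targets_ev, innovate_ev, migrate_ev, floor=0.0):
    """Pick the most profitable response to improved defenses, or stop."""
    options = {
        "keep current playbook": keep_ev,
        "focus on softer targets": softer_targets_ev,
        "innovate": innovate_ev,
        "migrate": migrate_ev,
    }
    best = max(options, key=options.get)
    # When no option clears the floor, rational self-interest says quit.
    return best if options[best] > floor else "stop"

# Defenses made the current vector unprofitable, but laggard victims remain.
print(adversary_choice(keep_ev=-10, softer_targets_ev=25, innovate_ev=5, migrate_ev=15))
# → focus on softer targets
```

The defensive corollary: pushing every option's expected value below the floor is what actually removes an adversary from the field.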

Epoch Overview

In the Epoch Theory of Cybersecurity, an epoch is a distinct phase in which certain attack methods, defensive postures, and economic incentives dominate the landscape. Each epoch represents a new level of sophistication, progressing from 1 through 4, with its own technical characteristics.

Adversaries often drive the shift from one epoch to the next, but defenders can force transitions too when defensive technologies advance faster than offensive techniques. As threats and incentives evolve, so must security controls. The progression moves from ad hoc manual efforts to repeatable procedures, then to scaled operations, and finally to highly complex models where adaptation itself becomes the defining feature.

Epochs apply across domains: attack vectors, industries, regions, and even individual organizations. Adversaries and defenders do not always operate in the same epoch at the same time. Different capabilities and maturity levels create mismatches. Knowing which epoch you are in helps focus investment, clarifies priorities, and allows you to anticipate what attackers are likely to try next and when a shift is on the horizon.

Epoch 1: Manual

Adversaries look for weaknesses by hand, probing systems one at a time. Defenders respond in kind, manually finding and fixing each issue as it is discovered. Both offense and defense are limited by human effort and attention.

Epoch 2: Repeatable

Attackers codify their methods into scripts and simple tools so they can repeat the same process consistently across many targets. Defenders respond by adopting standardized patching and assessment routines. Target selection, however, remains a manual choice, making campaigns repeatable but not yet fully automated.

Epoch 3: Scale

Automation and global connectivity allow attackers to launch large-scale campaigns against vast numbers of systems with little concern for any single target. Defenders respond by scaling their scanning and detection capabilities to keep up, prioritizing the vulnerabilities that matter most amid an overwhelming flood of data.

Epoch 4: Complexity

Attackers layer sophistication onto their scaled operations, using obfuscation, chained exploits, and adaptive tactics to find and exploit more opportunities. Defenders, unable to patch fast enough to keep up, counter with their own layers of complexity such as segmentation, deception, and proactive disruption to contain attacks before they can take hold.

Epoch 1 (Manual)

When we think back to the earliest days of the Internet, the mid to late 1990s, it was a very different world. In what we now call cybersecurity, most tasks were performed manually. If you wanted to check whether a port was open, you launched a terminal window, typed telnet, and sent payloads to see what would happen. A few tools existed, but they were crude, single-purpose utilities. Yet, it worked because the Internet itself was far simpler and far less populated. 
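The manual probing described above is easy to reproduce: a hand-rolled TCP connect is essentially all a mid-90s telnet port check amounted to. A minimal modern equivalent in Python (the target host is a placeholder):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Try a TCP connection, the way a manual telnet probe once did."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# One host, one port, at human speed, one probe at a time: the essence of Epoch 1.
print(is_port_open("example.com", 80))
```

Nothing here scales; that is precisely the point of Epoch 1.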

The list of known vulnerabilities was short enough to memorize. An adversary could try the handful of exploits they knew and one of them would usually work. To defend, a lone administrator might subscribe to Bugtraq, read an advisory, and then manually apply a patch. Once fixed, the issue was usually closed for good. It was a contest of individuals with limited tools, working at human speed on a small, knowable landscape. The scale of the problem was still manageable by hand. And when breaches did occur, their impact was often limited because life and business did not depend on the Internet nearly as much as they do today.

Eventually adversaries began writing simple scripts to scan for and exploit exposed systems faster than any human could manage, giving them the upper hand. That marked the beginning of Epoch 2, repeatable on the offensive side. In response, defenders adopted default-deny network firewalls to block many of the ports attackers had been using, implemented network segmentation, deployed the first generation of vulnerability scanners such as SATAN and SAINT, and they began automating patch management.

Organizations that embraced these tools moved into Epoch 2 (Repeatable) on defense, where their capabilities met or in many cases exceeded those of their adversaries. Those who clung to the old manual ways of working in Epoch 1 were at a severe disadvantage. If they were hacked, it was often because they simply could not keep up. If they were not, it was more a matter of luck than design.

Looking back, that period taught us one of the most enduring lessons in cybersecurity: whenever attackers raise their game, defenders must adapt just as quickly or be left behind. The progression from manual effort to repeatable processes was the first major shift in that ongoing race. It would not be the last.

Epoch 2 (Repeatable)

During the period leading up to 2000 and beyond, technology advanced and was adopted extremely rapidly. The diversity of systems, platforms, and applications running on the Internet expanded dramatically, and with it the number of vulnerabilities. It was no longer easy or even realistically possible for anyone to know them all.

To take advantage of this expanding attack surface, new tools emerged. Programs such as Autohack, SATAN, Metasploit, and Nessus cataloged vulnerabilities and automated attempts to exploit them against chosen targets. This approach was effective because there were so many exposed systems online. Epoch 2, then, can be characterized as repeatable: the tools behaved the same way each time because their processes were codified, but target selection remained manual.

On the defensive side, the vulnerability management industry rose to meet the challenge. As further examples of Epoch 2, security vendors such as Tenable, Qualys, and Rapid7 entered the market to help organizations discover and remediate vulnerabilities. In some cases, they even acquired offensive tools and integrated them into their platforms. At the time, this approach worked because most organizations had relatively small and well-defined infrastructures. This period marked the solidification of Epoch 2 in the cybersecurity arms race: a shift from purely manual efforts to broadly repeatable tools and processes on both sides. For an adversary, it became much harder to compromise targets when defenders wielded network security tools of the same epoch, and neither side had yet reached Epoch 3.

Remember, adversaries are sentient and they have options: they can quit entirely, focus on weaker targets, innovate to overcome defenses, or shift to a different attack vector. As the Internet continued to expand, new types of attack surfaces emerged, and adversaries adapted to exploit them. This constant cycle of choice and response once again tipped the balance, pushing the industry into the next epoch of the conflict.

PREDICTION: WEB APPLICATION SECURITY

By the early to mid-2000s, just about every organization had a website, or often dozens or hundreds. Collectively, there were millions of websites across the Internet. Every bank, retailer, hospital, and government agency had put something online, along with the IT infrastructure to support it. To protect the mission-critical networks, almost every organization had deployed a firewall. The typical configuration blocked all inbound ports except 80 and 443, which had to remain open for the web to function.

As has often been said, a firewall is a network’s answer to a software security problem. That became clear as new classes of web application attacks began to appear. SQL Injection, first written about in 1998, and Cross-Site Scripting, which emerged around the same time, exposed how vulnerable web applications really were. With little more than a web browser, attackers could exploit weaknesses in the software running those sites. I remember seeing this shift coming. Because of perimeter firewalls, a few of us realized that adversaries would inevitably move up the stack and focus their efforts on the web application layer, where many defenses were simply nonexistent. The guess was right, and that insight led me to found WhiteHat Security in 2001.
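For readers who have never seen the flaw up close, here is a minimal sketch of SQL Injection and its fix, using Python’s built-in sqlite3 module with a made-up table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query itself.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(vulnerable)  # every row leaks, even though no user is named "nobody"

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # no rows match
```

The attack requires nothing beyond a text field and a browser, which is exactly why the application layer became such an attractive target once the network perimeter hardened.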

By around 2005 attacks on web applications began to take off. Most were carried out with simple, but effective, Epoch 1-level scripts. Defenders responded by turning their attention to web applications, moving away from general manual penetration testing. At first the work was still mostly manual. It was slow, painstaking, and supported by a handful of early tools and scripts. It didn’t scale, but it was the best we had at the time.

Just predicting that web application security would be the next big battleground wasn’t enough. And simply finding vulnerabilities in a few dozen websites a year wasn’t going to make a dent in the problem. Thousands of companies were exposed, and there were hundreds of thousands of websites that changed constantly. A once-a-year assessment was basically useless. If an adversary couldn’t find and exploit a vulnerability on one of your websites, they could come back a week or month later, and test for new flaws introduced after multiple code changes.

Even automation, such as application vulnerability scanners, couldn’t solve it. The hard truth was that some of the most dangerous classes of vulnerabilities, often the high-severity, easy-to-exploit ones, still required human expertise to find. That was just the reality of computer science. The problem wasn’t only about scale; it was about depth and nuance.

Companies could buy a scanner, but most didn’t have the in-house expertise to run it effectively. Skilled web security experts were rare and in high demand. So we built WhiteHat to fill that gap, to deliver not just technology but the expert human insight needed to make it work at scale. That combination changed the game. 

WhiteHat’s answer, the answer, was to rethink the whole approach. We created the first Software-as-a-Service platform for application security, combining people, process, and scanning into a single continuous vulnerability discovery solution. Provided the vulnerabilities identified were fixed, we moved defense on web applications from Epoch 2 to Epoch 3, meeting and often far exceeding the capabilities of the adversary. 

PREDICTION: RANSOMWARE AND MACHINE LEARNING ENDPOINT DETECTION AND RESPONSE

For most of the mid-2000s through about the mid-2010s, something remarkable was true in the security community. While almost every security expert told friends, family, and businesses to install antivirus software, most of us working in the industry did not use it ourselves. If you asked why, the answer was usually the same: antivirus did not really work, it slowed down the computer, and we knew enough to avoid clicking on suspicious links or attachments. It was hard to argue with that logic, and to be fair, we were mostly right.

At that time, antivirus software relied on signatures. A virus, or rather malware, would appear in the wild, someone would detect it, a signature would be written and pushed out in an update, and the cycle would repeat. At first a few signatures were added daily, then dozens per day, then hundreds, and eventually thousands. The number of signatures grew exponentially, as did the workload on both the software and the global network of analysts who collected and distributed them. Meanwhile, the effort required to create a new piece of malware that could evade signature-based defenses stayed about the same. From that imbalance alone, it was clear to many of us that the signature model could not last. Eventually it would be overwhelmed.

Then I came across a couple of reports that stopped me cold. The first was an LA Times article that said, “The FBI recently published that ransomware victims paid out $209 million in Q1 2016 compared to $24 million for all of 2015.” The second, from Business Insider, reported that “In its letter, the DHS noted that its National Cybersecurity and Communications Integration Center (NCCIC) had initiated or received 321 reports of ransomware-related activity affecting 29 different federal agencies since June 2015. The 321 reports include attempted infections and infections that were dealt with by the agencies’ internal security teams.”

For some reason ransomware payments, ransomware being a form of malware, were suddenly spiking. Ransomware as an idea had existed for years, but it had never taken hold largely because attackers lacked a reliable way to get paid anonymously at scale. Something in the ecosystem had changed that made it not only possible, but scalable. The answer was obvious: Bitcoin. With Bitcoin, adversaries suddenly had a global, unregulated digital currency they could use to monetize infections by demanding ransom for the return of data or system access.

The stage was set. I knew this was just the beginning, ransomware was coming, it was going to be bad, and very few within cybersecurity were paying attention to this form of attack.

The antivirus industry had been stuck for years in Epoch 2, so there were already a large number of endpoints compromised by run-of-the-mill malware, and there was no effective defense available. Meanwhile, adversaries were already in Epoch 3 with signature evasion, and now they had a scalable way to get paid using Bitcoin. They did not need to find new software targets, they already had millions of infections formed into botnets. 

When I was first introduced to SentinelOne in 2016, I recognized it as the beginning of a new chapter. Their technology, rather than relying on the broken signature model, used machine learning to distinguish malware from benign software by its behavior. I joined the company because I believed in their team and because this was the only viable way forward.

Ransomware, like traditional kidnapping for ransom, thrives when there is a way to profit and a lack of effective deterrence. As ransomware took hold, which it inevitably did, the entire industry had to confront the reality that legacy antivirus solutions were obsolete. The world would have to shift to machine-learning-based defenses. And the rest is history.

Epoch 3 (Scale)

Epoch 3 is defined by scale. For adversaries, it meant launching vast numbers of attacks without worrying about any single target. For defenders, it meant trying to scan and prioritize far more issues across much larger and more complex environments.

Epoch 3 for vulnerability management began to emerge around the time people started to worry about IPv4 address exhaustion. The Internet was no longer sparse; it had become densely populated. Infrastructures were diverse and scattered, even if many still relied on a similar handful of perimeter vendors. At the same time, bandwidth became abundant, and compromised machines could be rented or bought for very little, giving attackers unprecedented reach.

Attackers realized they no longer had to focus on individual targets or build massive collections of exploits. Instead, they could take just a few vulnerabilities and sweep the entire Internet, probing millions of systems to find whoever was still exposed. The economics shifted in their favor. A small investment of effort could yield an enormous attack surface.

You might expect that such a shift would force defenders to evolve into Epoch 3 as well. But that has not happened. In many ways, defenders remain stuck in Epoch 2. Scanning for all known vulnerabilities at true Internet scale is still too expensive for even the largest enterprises and too slow, often taking days or weeks to complete a single pass. The imbalance is clear. While attackers exploit a handful of vulnerabilities at massive scale, defenders struggle to even see their full attack surface. 

There are at least two key challenges standing between defenders and Epoch 3 in vulnerability management. The first is attack surface management, which is to say maintaining an up-to-date and complete asset inventory. For years, many innovators tried to build automated tools to solve this problem, which proved deceptively challenging. The second is vulnerability prioritization: focusing on the vulnerabilities that actually lead to breach and financial loss, which first requires figuring out exactly which those are. Reaching Epoch 3 means doing both simultaneously, at a cost and speed that finally brings defenders into parity with attackers.
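To make the prioritization challenge concrete, here is a toy scoring function that weights exploitability and exposure over raw severity, so the handful of vulnerabilities most likely to lead to breach float to the top. The fields, weights, and CVE identifiers are all hypothetical:

```python
def priority(vuln: dict) -> float:
    """Toy risk score: severity amplified by real-world exploitability and reach."""
    score = vuln["cvss"]                 # baseline severity (0-10)
    if vuln["exploit_in_the_wild"]:
        score *= 3                       # actively exploited beats theoretical
    if vuln["internet_facing"]:
        score *= 2                       # reachable attack surface comes first
    return score

backlog = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_in_the_wild": False, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit_in_the_wild": True,  "internet_facing": True},
]
backlog.sort(key=priority, reverse=True)
print([v["id"] for v in backlog])  # → ['CVE-B', 'CVE-A']
```

Note how the lower-severity but actively exploited, Internet-facing flaw outranks the higher CVSS score, which is the entire argument for prioritization over raw severity lists.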

PREDICTION: ATTACK SURFACE MANAGEMENT

For years, the story behind breach after breach was the same. An organization failed to patch a system that was vulnerable to a well-known exploit. The headlines were predictable: they should have known better. And often, that was true. The system was not patched, potentially for a variety of organizational reasons. But something about that recurring narrative didn’t feel like the whole story.

Then in 2017, Equifax was breached and it cost them $1.4 billion. Later, in 2019, an FTC complaint following the investigation revealed something particularly interesting. The investigation showed that the initial point of compromise was an Internet-facing website that had not been patched. Of course. Yet the same patch had been deployed elsewhere within the company. So, why wasn’t that one website patched as well?

“Although many companies use automated vulnerability scanners, Defendant (1) did not maintain an accurate inventory of public facing technology assets running Apache Struts (and therefore did not know where the scanner needed to run) and (2) relied on a scanner that was not configured to search through all potentially vulnerable public facing websites.”

The problem was not necessarily that the patch wasn’t applied, though that was a problem, but that the vulnerable website was not part of Equifax’s vulnerability management program. Presumably, if the website had been known to Equifax, it would have been scanned, patched, and protected, and there would have been no breach and no $1.4 billion loss.

Following that incident, I began speaking with many incident responders across the industry, and they shared something striking. In roughly a quarter to a third of the cases they handle, the root issue was the same. The vulnerable asset was simply unknown to the organization. The adversaries would compromise something, anything on the perimeter, and then begin moving laterally across the network. 

Separately, I also spoke with a lot of bug bounty hunters and the managers of bug bounty programs to learn more about what techniques worked in practice. One of the most effective techniques they described was the use of asset discovery tools. By identifying assets that target organizations did not know they owned and that other bounty hunters had overlooked, they could often find exploitable vulnerabilities and claim rewards. This was a telling insight. If an asset was unknown and unmanaged, it was often unpatched and untested, which made it a reliable source of bounties.

This also revealed something important about the adversary. Whether bounty hunters or cybercriminals, they had evolved. They had moved from Epoch 1 to Epoch 2 in their ability to identify unknown attack surfaces in a repeatable fashion, in many respects surpassing the capabilities of most defenders.

For years, we had known there was a problem with the lack of effective attack surface management, but it became undeniable around 2016 and 2017. By then, patch management had matured to Epoch 2, so adversaries were forced to innovate. They shifted their efforts to finding unknown assets, exploiting what defenders could not see. It was only a matter of time.

That realization led Robert Hansen, Lex Arquette, Heather Konold, and me to found Bit Discovery. Our mission was clear: to move attack surface management into Epoch 3, which is all about scale. Our goal was to automate mapping any organization’s Internet-facing attack surface in minutes at a cost that made it practical enough to integrate into existing vulnerability management platforms. Scale had to be assured because many large organizations did indeed have millions of assets, and some of them changed frequently. That capability would allow defenders to take full advantage of their data and finally close the gap.
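To give a flavor of the smallest unit of automated asset discovery, here is a toy concurrent hostname sweep. The wordlist and domain are placeholders, and this is not how any particular product works; real ASM platforms draw on far richer sources:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve(hostname: str):
    """Return (hostname, ip) if the name resolves, else None."""
    try:
        return hostname, socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# Placeholder inputs; real discovery also seeds from certificate
# transparency logs, passive DNS, WHOIS records, and more, then iterates.
candidates = [f"{sub}.example.com" for sub in ("www", "mail", "dev", "staging")]
with ThreadPoolExecutor(max_workers=32) as pool:
    discovered = [hit for hit in pool.map(resolve, candidates) if hit]
print(discovered)
```

The hard part was never resolving one wordlist; it was doing this continuously, comprehensively, and cheaply across millions of assets that change daily.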

In just three years, Bit Discovery became a leading force in scalable attack surface management and was ultimately acquired, proving the value of solving this critical problem.

Epoch 4 (Complexity)

Epoch 4 is all about complexity. In this phase, adversaries stop relying solely on scale and start layering on sophistication. They combine speed and reach with new techniques that allow them to find and exploit weaknesses that defenders never even knew existed. This changes the game. Defenders can no longer hope to keep up by patching everything, because by the time they do, it is already too late. Instead, they have to focus on making attacks fail in real time or at least limit the damage when they succeed.

In vulnerability management, we are not there yet, but we can see it coming. For now, attackers still rely on scale because there is plenty of low-hanging fruit. Many organizations remain exposed to well-known vulnerabilities that can be discovered with a sweep across the Internet. But that will not last forever. Eventually, as defenders reduce those easy opportunities, scale alone will not be enough.

At that point, attackers will have to innovate again. They will revisit the creativity of Epoch 1 but combine it with the reach of Epoch 3. That is a dangerous combination. The most likely path is through the use of technologies such as machine learning and large language models that can help discover or even generate new vulnerabilities. Imagine a system that can continuously create fresh vulnerabilities and then exploit them across the entire Internet. That is what Epoch 4 could look like.

In that future, patch management will simply not be able to keep up. The sheer volume and novelty of vulnerabilities will overwhelm even the best-resourced organizations. Defenders will have to rethink their strategy, focusing less on chasing and fixing every single vulnerability and more on preventing attacks from succeeding in the first place or at least containing them before they cause real harm. We will need defenses that can stop attacks in real time, constrain their reach, and reduce the blast radius when breaches occur.

Epoch 4 is not fully here yet, but it is on the horizon. If we wait until it arrives to change how we defend, it will be too late. The shift from patch-and-prevent to contain-and-constrain is coming, and the sooner we start preparing for it, the better chance we have to keep the upper hand.

Conclusion

I do not claim that the Epoch Theory explains every nuance of every cybersecurity challenge. No single framework can capture the entire complexity of this field. But over the last twenty-plus years, we have found that viewing the industry through this lens consistently helps explain why things happen the way they do — and, just as importantly, where they are likely to go next.

This perspective has guided my work and is a big part of why I have often been ahead of the market. We were able to see the shifts before they fully arrived, from the rise of application security, to the surge of ransomware, to the need for attack surface management, because we understood that adversaries always pursue what is most profitable and least costly, and that each epoch forces them to change when the economics no longer favor their current approach.

While much of this write-up has focused on network and vulnerability management, the same dynamics play out across many other verticals. In web application security, for example, the community as a whole eventually won — raising the cost of exploitation enough that adversaries shifted their attention elsewhere. We saw something similar with Arkose Labs, which forced attackers engaged in authentication fraud and credential-stuffing to move on, delivering lasting protection for their clients. These victories are proof that defenders can change the game when they understand the epoch they are in and work to push adversaries into the next one.

Our hope is that the Epoch Theory helps more people understand that the industry does not evolve randomly. It follows patterns shaped by incentives, economics, and innovation. Recognizing those patterns is how we can anticipate, prepare, and even drive the next shift rather than merely reacting to it.

Stay Tuned For More
