April 22, 2026
Robert "RSnake" Hansen

I've spent enough time dissecting how adversaries work to recognize the patterns that actually lead to breaches, and the recent wave of AI-assisted threats reveals a fundamental mismatch between how these tools are trained and what real adversaries need to succeed. The hype around AI-powered polymorphic malware, all shape-shifting code that supposedly evades detection effortlessly, has driven model developers to impose heavy guardrails, yet the reality on the ground tells a different story.
Marcus Hutchins, known as MalwareTech, captured this issue perfectly in a recent series of posts. He pointed out that the AI-generated samples he's encountered often recycle techniques straight out of 2009, simple enough that automated defenses catch them without any need for custom signatures. It's as if the models are echoing outdated playbooks rather than innovating for stealth.
Consider how these AI systems learn. They draw heavily from public datasets, academic papers, and security blogs, where researchers describe threats in exhaustive detail, including the noisy reconnaissance steps that once defined early intrusions. Think about it: that's how we used to talk, right? Get in, start port scanning, find the next port that's visible or exploitable, and keep hacking. Sound familiar?
Port scanning is a prime example of where LLMs diverge from what hackers actually do once they break in. Port scanning lights up logs and alerts defenders immediately. According to former Mandiant practitioners I know, state-sponsored actors have largely abandoned this approach once inside a network in favor of subtler methods. Instead, they rely on passive enumeration through existing access points or use trusted credentials to map environments without raising alarms. An AI trained on researcher narratives, however, will gravitate toward port scanning as a default, treating it as an essential first move because that's how the literature frames initial access. In a controlled lab this might simulate a breach convincingly, but deploy it against a live enterprise and it fails more often than not, which makes it a dangerous technique for the adversary who relies on it.
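To make the contrast concrete, here's a minimal Python sketch, with hostnames and details of my own invention, of the two approaches side by side: the active path does a plain TCP connect scan, where every attempt, successful or not, is an observable event on the target, while the passive path simply reads the hosts this machine has already exchanged traffic with out of the Linux ARP cache, generating no new packets at all.

```python
import socket
from pathlib import Path

def active_scan(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Noisy: every connect() attempt, open port or not, hits the wire
    and the target's logs -- the move LLMs default to."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 == connection accepted
                open_ports.append(port)
    return open_ports

def passive_neighbors() -> list[str]:
    """Quiet: list hosts this box has already talked to, straight from
    the kernel's ARP cache (Linux). Zero packets sent."""
    neighbors = []
    for line in Path("/proc/net/arp").read_text().splitlines()[1:]:
        fields = line.split()
        if fields[3] != "00:00:00:00:00:00":  # skip incomplete entries
            neighbors.append(fields[0])
    return neighbors

if __name__ == "__main__":
    print(active_scan("192.0.2.10", range(1, 1025)))  # loud: ~1k SYNs
    print(passive_neighbors())                        # silent: reads a file
```

One of these shows up in firewall logs and NDR telemetry as a burst of SYNs from a single source; the other is indistinguishable from a box doing nothing.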
This disconnect extends to the stylistic quirks that betray AI involvement. Hutchins highlighted one particularly telling case from a nation-state advanced persistent threat group, where the malware incorporated egregious emoji use. Think about that. Emojis! Emojis are a stylistic flourish more suited to a social media post than a covert operation. Successful operators prioritize invisibility, crafting payloads that blend into legitimate traffic or mimic benign system behaviors, drawing from years of operational feedback rather than generative prompts. This is clearly the B team.
Off-the-shelf AI tools pull from diverse internet scraps, injecting irrelevant flair that stands out to any seasoned analyst or automated analysis tooling. The result is malware that's detectable not despite the AI but because of it, handing defenders an easy win where sophistication and stealth should carry the day.
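Just how cheap that win is deserves emphasis. Here's a toy triage sketch, where the codepoint ranges and the strings-style extraction are my assumptions rather than anyone's production rule, that flags emoji embedded in a binary: exactly the kind of flair Hutchins described, and exactly the kind of thing no disciplined operator would ship.

```python
import re
import sys

# Rough emoji blocks; approximate ranges chosen for illustration,
# not an exhaustive census of every emoji codepoint.
EMOJI = re.compile("[\U0001F300-\U0001FAFF\U00002600-\U000027BF]")

def emoji_strings(path: str) -> list[str]:
    """Leniently decode a binary and return printable runs that contain
    emoji -- flair that has no business being in a covert payload."""
    text = open(path, "rb").read().decode("utf-8", errors="ignore")
    runs = re.findall(r"[^\x00-\x1f\x7f]{4,}", text)  # strings(1)-style
    return [run for run in runs if EMOJI.search(run)]

if __name__ == "__main__":
    for hit in emoji_strings(sys.argv[1]):
        print(repr(hit))
```

A real pipeline would fold a check like this into a YARA rule or a sandbox report, but the point stands: a stylistic fingerprint this loud turns detection into a grep.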
In practice, this means adversaries who lean on commercial AI solutions face an uphill battle against more disciplined operators. Those who handcraft campaigns based on insider knowledge of target environments, honed through repeated engagements and post-exploitation reviews, will outmaneuver the automated output of their AI-reliant rivals. The AI variants might snag low-hanging fruit in permissive setups or against unprepared small organizations, but they falter in high-stakes intrusions where noise tolerance is zero.
Ultimately, the use of AI in malware shows the gap between simulated threats and real-world efficacy. I, for one, welcome the use of AI in malware, because apparently it's pretty crappy and has low efficacy! If more bad guys want to get caught, that's fine by me! 😂 While AI could theoretically evolve to match adversary needs, its current training data is built on techniques and behaviors that are already widely known, the very things most likely to get an adversary caught. In short, the most dangerous thing about AI-generated malware right now may be the confidence it gives attackers who don't realize their shiny new tool is essentially a very fast way to do something stupid.