Earlier this month, Anthropic announced Claude Mythos Preview, a general-purpose model capable of discovering software vulnerabilities at a scale never seen before. Among its first results was a vulnerability that had gone unnoticed for 27 years. Days later, Mozilla shipped Firefox 150 with fixes for 271 vulnerabilities Mythos identified in a single evaluation pass, and made the case that the era of zero-days has an expiration date: defenders will start finding bugs before attackers can.
That’s a remarkable claim, and it captures both the promise and the pressure of this moment. For years, machine learning and now LLMs have been getting incrementally better at discovering software vulnerabilities. Mythos isn’t a break from that trend. It’s confirmation of it. What’s surprising isn’t that AI found long-dormant bugs. It’s how quickly the step change arrived, and how clearly it signals where the next few years are headed.
The trend won’t slow down
Once capabilities like this become broadly available (through open models, replication, or parallel innovation), both sides of the security equation get faster. Defenders and vendors will use these tools to find and patch vulnerabilities at unprecedented speed. Attackers will use the same tools to find and weaponize them.
The downstream effect is a release stream that looks very different from today’s. Expect software and OS updates to arrive more frequently, addressing vulnerabilities that would have taken months or years to surface under the old model. Firefox 150 is a preview of what that looks like in practice.
That’s good news for the long arc of security. But it creates an immediate problem for the organizations that have to consume those updates today.
The pressure shifts to ITOps
If patches are arriving faster, and attackers are weaponizing vulnerabilities faster, the gap between disclosure and exploitation collapses. Organizations that take days or weeks to roll out updates will be exposed in ways they weren’t before. Patching speed stops being a hygiene metric and becomes a survival metric.
There’s also a second-order problem that often gets missed in discussions of AI-driven vulnerability discovery, and it’s arguably the more urgent one: The same capabilities that let defenders find new bugs let attackers turn existing patches into exploits. When a vendor ships a fix, the patch itself is a description of the vulnerability and a roadmap for anyone capable of reading it. Anthropic’s own research showed Mythos turning known CVEs into working privilege escalation exploits in under a day, with no human intervention.
The implication is uncomfortable. Even before AI accelerates the discovery of new zero-days, it’s already collapsing the window on N-days (vulnerabilities that have been disclosed and patched but remain exploitable on every system that hasn’t updated). According to Mandiant’s M-Trends 2026 report, the mean time to exploit a vulnerability has dropped to an estimated negative seven days, meaning exploitation now routinely begins before a patch is even released. AI squeezes that window from both ends: the faster vendors ship fixes, the faster attackers can reverse-engineer those fixes into weapons aimed at everyone who hasn’t yet deployed them.
This reframes the patching mandate. It isn’t just about staying ahead of unknown threats. It’s about not being the easiest target for the well-known ones.
But speed without judgment is its own risk. Anyone who has lived through a bad patch knows that pushing updates aggressively across an environment can take down critical business operations as effectively as any attacker. The answer isn’t to patch everything the moment it lands. It’s to patch quickly and confidently, knowing what’s exploitable in your environment, knowing which patches are safe to deploy, and having the automation in place to act on that knowledge without waiting on manual review for every change.
What this looks like in practice
This is where the work NinjaOne is doing on AI-driven vulnerability and patch management matters. Three capabilities address the speed-plus-safety problem directly:
- Accurate software inventory is the foundation. AI analyzes endpoint telemetry to identify products, versions, and dependencies across devices, normalizing inconsistent naming so the same application is recognized everywhere it lives. Without this, vulnerability matching is guesswork.
- Real-time vulnerability identification builds on that inventory. As new CVEs are disclosed, AI continuously correlates them against live endpoint data, surfacing impacted assets within minutes rather than during the next scheduled scan.
- Patch Intelligence AI closes the loop. Not every patch is safe to deploy immediately, and not every vulnerability carries equal risk. By evaluating patch quality and deployment risk using vendor signals, community feedback, and real-world deployment data, IT teams can move quickly on the patches that matter while avoiding the ones likely to cause operational harm.
Together, these capabilities let teams compress the window between vulnerability disclosure and remediation without flying blind.
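To make that loop concrete, here is a minimal sketch in Python of how the three capabilities could fit together: a normalization step for inconsistent product names, a correlation step that matches disclosed CVEs against live inventory, and a risk gate before automatic deployment. Every name, data shape, threshold, and CVE identifier in it is illustrative, not NinjaOne’s implementation; it only shows the shape of the closed loop described above.

```python
from dataclasses import dataclass

# Hypothetical, simplified records. Real endpoint telemetry and advisory
# feeds carry far more detail (CPEs, version ranges, KEV/EPSS signals, etc.).

@dataclass
class InstalledSoftware:
    device: str
    raw_name: str      # vendor naming is inconsistent across endpoints
    version: str

@dataclass
class Advisory:
    cve_id: str
    product: str        # normalized product key
    fixed_version: str  # versions below this are treated as vulnerable

# Step 1: normalize inconsistent product names so the same application is
# recognized everywhere it lives. A real system would use AI-assisted
# mapping; this is a toy lookup table.
NAME_MAP = {
    "mozilla firefox": "firefox",
    "firefox esr": "firefox",
    "firefox": "firefox",
}

def normalize(raw_name: str) -> str:
    return NAME_MAP.get(raw_name.strip().lower(), raw_name.strip().lower())

def version_tuple(v: str) -> tuple[int, ...]:
    return tuple(int(p) for p in v.split(".") if p.isdigit())

# Step 2: correlate newly disclosed CVEs against live inventory and surface
# impacted devices instead of waiting for the next scheduled scan.
def impacted_devices(inventory, advisories):
    hits = []
    for item in inventory:
        product = normalize(item.raw_name)
        for adv in advisories:
            if adv.product == product and \
               version_tuple(item.version) < version_tuple(adv.fixed_version):
                hits.append((item.device, adv.cve_id, item.version))
    return hits

# Step 3: gate automatic deployment on a patch-risk score. The score and
# threshold stand in for vendor signals, community feedback, and rollout
# telemetry from a patch-intelligence feed.
def should_auto_deploy(patch_risk_score: float, threshold: float = 0.3) -> bool:
    return patch_risk_score <= threshold

if __name__ == "__main__":
    inventory = [
        InstalledSoftware("laptop-017", "Mozilla Firefox", "149.0.1"),
        InstalledSoftware("kiosk-003", "firefox esr", "150.0"),
    ]
    advisories = [Advisory("CVE-0000-00000", "firefox", "150.0")]  # placeholder ID

    for device, cve, version in impacted_devices(inventory, advisories):
        print(f"{device}: exposed to {cve} (installed {version})")
    print("auto-deploy:", should_auto_deploy(0.12))
```

The point of the sketch is not the toy logic but the design choice it illustrates: inventory normalization, CVE correlation, and patch-risk gating work as one pipeline, not three separate tools.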
From awareness to execution
Vulnerability management and patch management have always been treated as adjacent disciplines. In the world Mythos points toward, they have to operate as a single closed loop: real-time visibility into what’s exposed, paired with autonomous, risk-aware patching that moves at the speed of the threat.
The organizations that will navigate this well aren’t the ones with the most scanners or the longest patch backlogs. They’re the ones that can move from awareness to execution faster than attackers, with enough confidence in their tooling to do it autonomously and enough intelligence to do it safely.
The next few years are going to look different. The volume of patches will increase. The window to deploy them will compress. The stakes for getting it wrong, in either direction, will rise. A patch that exists in a vendor’s release notes does nothing for you until it’s deployed safely across your environment.
Mythos and the models that follow will reshape the speed and severity of cyber threats. That much is clear. The organizations that stay ahead won’t be the ones with the biggest security teams. They’ll be the ones whose IT operations can match the speed of the threat without sacrificing the stability of the business.
