Cybersecurity in the Age of AI
The software that runs our world—banking systems, medical records, power grids, logistics networks—has always contained bugs. Many are minor. Some are serious security flaws that, if discovered, could allow cyberattackers to hijack systems, disrupt operations, or steal data.
The global financial cost of cybercrime is hard to estimate, but commonly cited figures run around $500 billion per year. What's changing is who can find these vulnerabilities—and how fast.
The Threshold Has Been Crossed
With the latest frontier AI models, the cost, effort, and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically. Over the past year, AI models have become increasingly effective at reading and reasoning about code—in particular, they show a striking ability to spot vulnerabilities and work out ways to exploit them.
Claude Mythos Preview demonstrates a leap in these cyber skills—the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.
Key Findings
- 27-year-old vulnerability in OpenBSD—used to run firewalls and critical infrastructure—allowing a remote attacker to crash the system with nothing more than a network connection
- 16-year-old vulnerability in FFmpeg—in code that automated testing tools had hit five million times without ever catching the problem
- Linux kernel privilege escalation—chained several vulnerabilities to go from ordinary user to complete machine control
The Defender's Advantage
Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs.
Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity. Major partners including Cisco, AWS, Microsoft, CrowdStrike, Google, Palo Alto Networks, and JPMorganChase are participating.
What This Means for Agentbot
At Agentbot, we believe AI-augmented security is the future. Here's how we're thinking about it:
1. Proactive vulnerability scanning — Our agents can continuously audit codebases for known vulnerability patterns
2. Secure-by-default generation — AI agents writing code should produce secure code by default, not insecure code that needs patching later
3. Rapid patch response — When vulnerabilities are disclosed, agents can help identify affected systems and apply fixes at scale
4. Defense-first AI — We support initiatives like Project Glasswing that prioritize getting powerful AI capabilities into defenders' hands
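To make the first capability concrete, here is a deliberately minimal sketch of pattern-based vulnerability scanning. The `audit_source` function, the pattern table, and the messages are all illustrative inventions for this post—real scanners parse code rather than grep it, and AI-based auditing goes far beyond fixed patterns—but the sketch shows the basic shape of flagging risky constructs (string-built SQL, `eval`, `shell=True`) with line numbers:

```python
import re

# Toy pattern table mapping a regex to a human-readable finding.
# These patterns are illustrative, not an exhaustive or reliable rule set.
PATTERNS = [
    (re.compile(r"\beval\s*\("), "use of eval() on potentially untrusted input"),
    (re.compile(r"\bpickle\.loads?\s*\("), "pickle deserialization of untrusted data"),
    (re.compile(r"execute\(\s*f[\"']"), "SQL built by string interpolation instead of parameters"),
    (re.compile(r"\bsubprocess\.\w+\(.*shell\s*=\s*True"), "shell=True allows command injection"),
]

def audit_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for lines matching a known risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in PATTERNS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

if __name__ == "__main__":
    sample = (
        'user = input("name? ")\n'
        "cursor.execute(f\"SELECT * FROM users WHERE name = '{user}'\")\n"
        "result = eval(user)\n"
    )
    for lineno, message in audit_source(sample):
        print(f"line {lineno}: {message}")
```

Even this crude approach surfaces the second capability's point: the flagged SQL line should have been generated with parameterized queries (`execute("... WHERE name = ?", (user,))`) in the first place, rather than patched after the fact.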
The Bigger Picture
This is a pivotal moment. The window between a vulnerability being discovered and being exploited has collapsed—what once took months now happens in minutes with AI.
The old ways of hardening systems are no longer sufficient. Organizations need to:
- Adopt AI-powered security tools now
- Integrate security into the development lifecycle from day one
- Prepare for faster, more sophisticated attacks
- Share information and best practices across the industry
As Lee Klarich from Palo Alto Networks put it: "Everyone needs to prepare for AI-assisted attackers. There will be more attacks, faster attacks, and more sophisticated attacks. Now is the time to modernize cybersecurity stacks everywhere."
This post synthesizes findings from Anthropic's Project Glasswing announcement. For technical details, see the Anthropic Frontier Red Team blog.