Anthropic has kicked off a new, industry-wide push to lock down the software that keeps modern life running, and it’s doing it with one of the most capable – and potentially dangerous – AI models ever built at its core. The effort is called Project Glasswing, and it’s Anthropic’s attempt to make sure AI’s growing cyber skills are used to protect critical systems before attackers can weaponize the same capabilities at scale.
Project Glasswing is a coalition of some of the biggest names in tech and finance: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks have all signed on as launch partners. These companies collectively sit behind everything from cloud data centers and smartphones to payment networks, corporate firewalls, and the open-source projects that quietly power most of the internet, which is why Anthropic frames this as a move to “secure the world’s most critical software,” not just another AI pilot.
At the center of Glasswing is Claude Mythos Preview, an unreleased “frontier” model that Anthropic describes as a kind of AI super-sleuth for code. In internal and partner tests, Mythos has autonomously discovered thousands of previously unknown – “zero-day” – vulnerabilities across every major operating system and web browser, often in code that’s been in production for years and has already survived both human review and automated testing. Anthropic says Mythos outperforms its previous flagship model, Claude Opus 4.6, on hardcore security and coding benchmarks like CyberGym and SWE-bench, underscoring how far AI has moved from autocomplete to something closer to an autonomous security engineer that can read, reason about, and actively exploit complex systems.
That dual nature is the reason Glasswing exists. The same skills that let Mythos chain together Linux kernel bugs into a full system takeover, or revive a 27‑year‑old OpenBSD flaw that can remotely crash machines, could be catastrophic in the wrong hands. Anthropic’s system card and early analysis from outside observers make it clear: Mythos is capable of end‑to‑end cyberattacks on small enterprise networks with weak defenses, and that’s precisely why Anthropic is not making it generally available and is instead placing it behind a controlled program like Glasswing. In other words, the company is trying to get ahead of the curve by seeding powerful AI cyber capabilities with defenders first, before similar models are widely available to attackers.
So what does Project Glasswing actually do in practice? For one, it gives participating companies access to Mythos Preview specifically for defensive work: scanning internal and customer-facing codebases, running black-box tests against binaries, probing endpoints, and stress-testing infrastructure via automated penetration testing. Anthropic says many of the launch partners have already spent weeks running Mythos over their own systems, and early comments from security chiefs at Cisco, AWS, Microsoft, CrowdStrike, and Palo Alto Networks all echo the same idea: AI has shortened the time from a vulnerability being discovered to it being exploited from months to minutes, and defenders have no choice but to fight automation with automation.
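To make the workflow concrete, here is a minimal sketch of what a defensive code-audit request to a Mythos-class model might look like through the Claude Messages API. The model name `claude-mythos-preview`, the prompt wording, and the scan function are illustrative assumptions, not published details; only the general request shape follows the public API.

```python
# Hypothetical sketch: submitting a code snippet to a Mythos-class model
# for defensive review via the Anthropic Messages API. The model name
# "claude-mythos-preview" and the prompt text are assumptions; the
# payload shape mirrors the public Claude API.

def build_scan_request(source_code: str, model: str = "claude-mythos-preview") -> dict:
    """Build a Messages API payload asking the model to audit a code snippet."""
    return {
        "model": model,
        "max_tokens": 2048,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Audit the following code for memory-safety and "
                    "injection vulnerabilities. Report each finding with "
                    "a severity rating and a suggested fix:\n\n" + source_code
                ),
            }
        ],
    }

request = build_scan_request("char buf[8]; gets(buf);")
# With the official SDK, this payload would be sent as:
#   anthropic.Anthropic().messages.create(**request)
```

In practice, partners would run something like this in a loop over entire repositories, feeding the findings into their existing triage and patch pipelines rather than reviewing them by hand.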
Cisco’s security and trust chief calls this a “threshold moment” where the old ways of hardening systems are no longer enough, and argues that AI‑assisted security needs to become a default part of how tech vendors ship products. AWS highlights how it already analyzes hundreds of trillions of network flows a day and is weaving models like Mythos into that pipeline to spot weaknesses earlier in the stack, from custom silicon to application code. Microsoft, meanwhile, points to its own open‑source benchmarks showing Mythos’ significant jump over prior models on real-world security tasks, framing Glasswing as a way to test and deploy these tools across critical infrastructure “on our own terms and alongside respected technology leaders.”
A particularly important angle – and one that’s easy to miss if you only look at the big-brand logos – is open source. The Linux Foundation notes that open-source software makes up the vast majority of modern code, yet many core projects are maintained by tiny teams who cannot afford dedicated security staff. Project Glasswing tries to rebalance that by giving maintainers access to Mythos through programs like “Claude for Open Source,” effectively turning an elite-level security engineer into a reusable tool for anyone responsible for a critical project. Anthropic is backing that up with real money: up to $100 million in usage credits for Mythos Preview and $4 million in direct donations to open-source security initiatives, including Alpha-Omega, OpenSSF via the Linux Foundation, and the Apache Software Foundation.
From a policy and national-security perspective, Anthropic is also positioning Glasswing as part of a broader race between democratic states and hostile actors to control the most powerful AI cyber tools. The company says it is already in discussions with US government officials about Mythos’ offensive and defensive capabilities, stressing that securing critical infrastructure – power grids, healthcare systems, financial networks, transportation, and government agencies – is now inseparable from how fast frontier AI is advancing. The concern is that state-sponsored groups in countries like China, Russia, Iran, and North Korea have already been aggressively targeting such systems, and AI models that can autonomously uncover and exploit obscure bugs could dramatically scale up the frequency and impact of these attacks if not carefully controlled.
The vision for Project Glasswing stretches beyond a one-off initiative. Anthropic says the program is a “starting point” that’s expected to run for many months, with partners sharing best practices with each other where possible and a public report promised within 90 days outlining what was fixed and what others can learn. On the technical side, Anthropic plans to continue maturing safeguards that can recognize and block dangerous outputs from Mythos-class models, with the idea that future Claude Opus releases will inherit stronger cyber safety tooling before anything like Mythos is made more broadly available. Longer term, the company argues that industry and governments should aim for an independent third-party body – a kind of neutral clearinghouse – to coordinate large-scale cybersecurity projects in the AI era and set shared standards around vulnerability disclosure, patch pipelines, secure-by-design practices, and automation in triage and patching.
There is also a clear commercial subtext. Mythos Preview is explicitly not a generally available product, but Anthropic has already set pricing for participating organizations after the subsidized research phase: $25 per million input tokens and $125 per million output tokens, through the Claude API as well as platforms like Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft’s Foundry program. That signals where the market is headed: a new category of AI systems designed not just for generic reasoning or chat, but for deep, agentic interaction with code and infrastructure, with cybersecurity as one of the first “must-have” use cases.
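Those per-token rates make rough cost estimates straightforward. The quick calculation below uses the announced pricing; the scan sizes are illustrative assumptions, not Anthropic figures.

```python
# Back-of-envelope cost estimate at the announced Mythos Preview rates:
# $25 per million input tokens, $125 per million output tokens.
# The token counts in the example are illustrative assumptions.

INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def scan_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one scan at the published per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: feeding a ~2M-token codebase and getting a 100K-token report back
cost = scan_cost(2_000_000, 100_000)
# 2,000,000 * $25/1M = $50 in; 100,000 * $125/1M = $12.50 out; $62.50 total
```

At those rates, even repeated full-codebase scans cost tens of dollars per run, which is the economic argument for automated, continuous auditing over periodic human review.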
For everyday users and businesses, the details of benchmarks like CyberGym and SWE-bench may feel abstract, but the stakes are not. The software that keeps banks online, keeps planes flying, coordinates supply chains, and stores patient records is already under constant attack, and that pressure is only increasing as cybercriminals and states alike experiment with AI tools. Project Glasswing is an early attempt to flip the script: to use frontier AI to find and fix the vulnerabilities that have been hiding in plain sight, and to push the balance of power – at least for now – back toward defenders.