Anthropic built an AI so powerful they won't let anyone use it
It found flaws in every major web browser and operating system on earth. Then the Fed called an emergency meeting.
Imagine you run a company that makes one of the most advanced AI systems in the world. You build your next big model. It works. It works really well. And then you look at what it can do and decide, actually, no, we cannot release this to the public.
That is exactly what happened at Anthropic earlier this month.
The model is called Claude Mythos. It was announced on April 7th, and within three days the US Treasury Secretary and the Chair of the Federal Reserve were in an emergency meeting with the CEOs of America’s biggest banks, talking about what this thing could do to the financial system. Cybersecurity company stocks dropped by double digits. The UK government issued a formal warning. Canada’s AI Minister booked a meeting with Anthropic.
Let me back up, because this is one of the strangest AI stories of the year and it is worth understanding properly.
What is Mythos, in plain English
Anthropic makes Claude, which you have probably used or at least heard of. It is a competitor to ChatGPT. Every so often they release a more powerful version of Claude, and each one can do more impressive stuff than the last.
Mythos is the newest and most powerful one. But here is the twist: Anthropic was not trying to build a cybersecurity tool. They were trying to build a better general-purpose model, the kind of thing you would use to write code, answer questions, or help with work. What they got back was something that accidentally turned out to be terrifyingly good at finding security flaws in software.
Think of it like this. Imagine you hired a contractor to build you a better kitchen. You come home and the kitchen is great, but you also notice the contractor, as a side effect of being really good at their job, has become the world’s best lock-picker. You did not ask for that. But now it exists, and you have to decide what to do with it.
That is roughly where Anthropic found themselves.
Why security flaws are such a big deal
Here is a quick crash course, because this is the bit that makes the whole story make sense.
Every piece of software you use, from your phone’s operating system to your web browser to your bank’s website, contains bugs. Some of those bugs are harmless. Some of them are security vulnerabilities, which are basically unlocked doors that a hacker can walk through to steal your data, install ransomware, or take control of your computer.
The most valuable kind of vulnerability is called a zero-day. The name comes from the fact that when the bug is discovered, the software’s developers have had zero days to fix it. Nobody knows the flaw exists yet, which means there is no patch, no defence, and no warning. It is an open door that nobody knows is open.
Zero-days are insanely valuable. Governments and intelligence agencies pay hundreds of thousands to millions of dollars for a single good one. Finding them usually takes elite security researchers weeks or months of painstaking work.
Anthropic used Mythos to find thousands of zero-days in a few weeks.
Not thousands buried in some obscure corner of the software world. Thousands across every major operating system and every major web browser on the planet. Windows. macOS. Linux. Chrome. Safari. Firefox. The stuff billions of people use every single day. Some of the flaws the AI found had been sitting in production code for decades, including a 27-year-old bug in OpenBSD and a 17-year-old bug in the plumbing that powers large parts of the internet.
A model that was not designed to do this, built by a company that was actively not trying to make it happen, produced a pile of digital skeleton keys that any sufficiently motivated bad actor would pay a fortune for.
You can see why this made people nervous.
Why Wall Street lost its mind
The financial reaction was swift and ugly.
In the three trading sessions after the news broke, shares of some of the world’s biggest cybersecurity companies got battered. According to MarketScreener, Palo Alto Networks dropped around 12 percent, Akamai fell 20 percent, Fortinet lost 8 percent, and CrowdStrike shed 11 percent. Billions of dollars of value, gone in days.
But here is the thing. The panic was not really about Mythos itself. It was about what Mythos implies.
The entire cybersecurity industry is built on the idea that attackers need skill, time, and expertise to find these flaws, and that defenders can buy tools and services that keep pace. That was a fair fight, more or less. Mythos suggests the fight is about to become very unfair, very fast. CrowdStrike’s own Chief Technology Officer said it out loud in Anthropic’s announcement: the window between a vulnerability being discovered and being exploited by bad actors has collapsed. What used to take months now happens in minutes.
If a general-purpose AI can autonomously find critical bugs in every browser and every operating system, the obvious next question is the one that kept Wall Street up at night. What can it find in a bank?
The emergency meeting nobody was supposed to know about
On the same day Anthropic announced Mythos, Bloomberg reported that Treasury Secretary Scott Bessent and Fed Chair Jerome Powell pulled together a meeting with the heads of Wall Street's biggest banks. Brian Moynihan from Bank of America was there. David Solomon from Goldman Sachs was there. Jamie Dimon from JPMorgan could not make it in person, but JPMorgan was already signed up as a partner.
The detail that tells you how seriously this was being taken is Powell. The Fed Chair does not usually show up to cybersecurity briefings. His attendance, according to people familiar with the meeting, meant Mythos was being treated not as a tech issue but as a systemic risk to the financial system. That phrase has a very specific meaning. It is what regulators call something that could blow up the whole economy if it goes wrong.
The ripple effect kept going. The Bank of England’s Cross Market Operational Resilience Group (a mouthful that basically means “the team responsible for making sure UK banks do not fall over”) scheduled briefings with major UK banks, insurers, and exchanges. The Bank of Canada held emergency sessions. The UK’s AI Security Institute issued a formal warning confirming Mythos was a meaningful step up in what AI can do in this space.
You do not call these meetings for a product launch. You call them when you think something could genuinely break.
Anthropic’s compromise: Project Glasswing
So what did Anthropic actually do with Mythos? They took what I think is a pretty thoughtful middle path, though reasonable people can disagree about whether it goes far enough.
They called it Project Glasswing. The idea is to give Mythos only to a small group of organisations that need to defend critical software, so the defenders get a head start before anyone with bad intentions catches up.
The group includes Amazon, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, Palo Alto Networks, JPMorgan Chase, and the Linux Foundation. Roughly 50 organisations in total. Anthropic put up $100 million in credits to use the model for finding and fixing bugs, plus $4 million in donations to open-source security projects.
The logic is straightforward. If this capability is going to exist in the world, Anthropic would rather the people defending critical infrastructure get it first, not the people attacking it. Give them a few months of head start to patch everything before less cautious labs start shipping similar models.
It is a reasonable bet. It might even work. But it depends entirely on how long that head start lasts.
The part nobody is quite saying out loud
Here is what I keep coming back to, and it is the bit worth sitting with.
Mythos is almost certainly not the most powerful offensive AI system in existence. It is simply the most powerful one we know about, because Anthropic is the kind of company that tells us. Governments have been pouring money into AI for offensive cyber operations for years. If a company actively trying to avoid building this capability ended up building it anyway, it is very reasonable to assume the people trying very hard to build it have something at least as good. They just are not holding press conferences about it.
So the Mythos announcement is not really news about a model. It is Anthropic saying the quiet part loud enough that Treasury had to pay attention. A forced conversation about where AI capability is actually sitting right now versus where everyone assumed it was.
That is the thing to take away from this story.
What any of this means for you
Let’s bring it back to earth, because you probably do not run a bank or a cybersecurity company. Here is how I would think about it depending on where you sit.
If you work in tech or security: Your job is going to change this year, not in three years. Tools built around humans finding bugs at human speed are about to feel very old. The people who come out ahead are the ones who start pairing with these AI models now, not the ones waiting for their vendor to wrap something up in a nice box.
If you run a business or build products: Your digital attack surface just got bigger. Every piece of software in your stack, including the well-known open-source stuff, might contain flaws nobody has found yet but an AI could. Keeping software updated matters more than it used to. Knowing what your dependencies actually are matters more than it used to. Boring hygiene stuff just became strategic.
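If "knowing what your dependencies actually are" sounds abstract, here is the crudest possible starting point, sketched in Python: listing every package installed in the current environment with its version. The function name is my own; real software-bill-of-materials tooling goes far beyond this, but even this much inventory is the boring hygiene the paragraph above is talking about.

```python
from importlib import metadata

def dependency_inventory() -> list:
    # A sorted (name, version) listing of every Python package installed
    # in the current environment: a minimal dependency inventory you can
    # diff over time or check against published vulnerability advisories.
    seen = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name:  # skip the rare broken install with no metadata
            seen[name] = dist.version
    return sorted(seen.items())
```

Run it once, save the output, and you have a baseline: the next time a flaw is announced in some library you forgot you were using, you can answer "are we exposed?" in seconds instead of days.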
If you invest in cybersecurity stocks: The old story was that proprietary threat data was a moat. It was not, really. The moat was that it took time and expertise to generate that data, and AI is collapsing both. The winners will be the companies that integrate this stuff fastest, not the ones with the biggest historical data vaults.
If you are just watching from the sidelines: The interesting question is not what Mythos does. It is what happens when the third, fourth, and fifth AI lab ships something similar, and some of those labs are in countries that do not hold emergency meetings with the Fed.
The Mythos story is not really about one model. It is the first time the gap between what frontier AI can actually do and what the rest of the world is prepared for has been visible enough that central bankers had to react in public. That gap is not closing. It is the story of the next decade of AI, and this was just the first week we all got to see it.
See you next week.
Stephen
Sources and further reading
- Anthropic, Project Glasswing announcement
- Anthropic Red Team, technical blog post on Claude Mythos Preview
- UK AI Security Institute, independent evaluation of Mythos
- MarketScreener, cybersecurity stock sell-off coverage
- The Hill via AOL, Powell and Bessent meeting coverage
- The Next Web, Bank of England briefing UK banks
- Scientific American, what is Mythos and why experts are worried