
Anthropic is testing the most powerful AI model it has ever built, and the world wasn't supposed to know yet.
A data leak reported by Fortune on Thursday revealed that the AI lab behind Claude has trained a new model known as "Mythos," which it internally describes as "by far the most powerful AI model we've ever developed."
The model was discovered in a draft blog post left in an unsecured, publicly searchable data cache, alongside nearly 3,000 other unpublished assets, according to cybersecurity researchers who reviewed the material.
Anthropic confirmed the model's existence after Fortune's inquiry, calling it "a step change" in AI performance and "the most capable we've built to date." The company said it is being trialed by "early access customers" and acknowledged that a "human error" in its content management system caused the leak.
The draft blog post introduced a new model tier called "Capybara," described as larger and more capable than Anthropic's current Opus models, which were previously its most powerful.
"Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others," the draft said.
It's the cybersecurity dimension that matters most for the crypto industry. The draft blog post said the model "poses unprecedented cybersecurity risks," a framing with direct implications for blockchain security, smart contract auditing, and the escalating arms race between attackers and defenders in DeFi.
This week alone, Ripple announced an AI-driven security overhaul for the XRP Ledger after an AI-assisted red team uncovered more than 10 vulnerabilities in its 13-year-old codebase. Ethereum launched a dedicated post-quantum security hub backed by eight years of research.
And the Resolv stablecoin lost its peg after an attacker exploited a minting contract with no oracle checks and single-key access control, the kind of infrastructure failure that more capable AI tools could potentially identify before an attacker does, or exploit faster than defenders can respond.
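To make the failure pattern concrete, here is a minimal, hypothetical sketch in Python of what "single-key access control with no oracle checks" looks like in a minting flow. This is an illustration of the vulnerability class only, not Resolv's actual contract or code; the class and parameter names are invented.

```python
# Hypothetical sketch of the vulnerability class described above:
# a mint function gated by a single key, with a caller-supplied price
# and no independent oracle check. Not Resolv's actual contract.

class NaiveMint:
    def __init__(self, admin_key: str):
        self.admin_key = admin_key  # single-key access control: no multisig, no timelock
        self.supply = 0.0

    def mint(self, key: str, collateral_usd: float, price: float) -> float:
        # One compromised key means full control of issuance.
        if key != self.admin_key:
            raise PermissionError("bad key")
        # The price comes from the caller rather than an oracle, so
        # whoever holds the key can claim any exchange rate they like.
        minted = collateral_usd / price
        self.supply += minted
        return minted

# With the key and a dishonest price, $1 of collateral mints 1,000 tokens,
# breaking the peg the moment those tokens hit the market.
vault = NaiveMint("k")
print(vault.mint("k", 1.0, 0.001))  # → 1000.0
```

A hardened version would check a signed oracle price (and a deviation bound against it) before minting, and gate admin actions behind a multisig or timelock; the absence of both checks is what turns one leaked key into a depeg.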
For the AI token market, the leak raises a different question. Bittensor's decentralized network recently launched Covenant-72B, a model that competes with Meta's Llama 2 70B, triggering a 90% rally in TAO and driving subnet tokens to a combined market cap of $1.47 billion.
A "step change" from a centralized lab like Anthropic resets the benchmark that decentralized AI projects have to match. The competitive distance between what a well-funded corporate lab can build and what a permissionless network can produce just got wider.
Anthropic said it is "being deliberate" about the model's release given its capabilities. The draft blog noted the model is expensive to run and not yet ready for general availability. The company removed public access to the data cache after Fortune contacted it.
The leak itself is its own cautionary tale. A company building what it describes as an AI model with unprecedented cybersecurity capabilities left the announcement of that model in an unsecured, publicly searchable data store because of human error. The irony needs no elaboration.



