Discord hackers accessed Anthropic's Mythos AI via a vendor foothold and URL guessing

Crazy! Posting this because it's so relevant to the vendor-environment stuff we keep coming back to. Bloomberg broke the story earlier this week (I saw it on X), and then Wired, TechCrunch, Engadget, and Fortune all picked it up. Basically, a small group of Discord users got unauthorized access to Anthropic's Mythos AI, the model Anthropic specifically billed as too dangerous for general release because of how good it is at finding software vulnerabilities.

Here's my take (I used AI to help put this together, but it's very detailed):



--

Rundown of what’s been reported:

The model is Claude Mythos Preview, part of “Project Glasswing.” Anthropic limited access to ~40 trusted partners including Apple, Microsoft, Google, Amazon, Cisco, and Mozilla. Mozilla reportedly used it to find and patch 271 Firefox vulnerabilities. The selling point and the scary part are the same thing: it can find flaws fast enough that the typical patch window collapses from days to hours.

The Discord group apparently got in on April 7, 2026, the same day Anthropic announced Project Glasswing publicly. Per Bloomberg, the method was a mix of three things:

  1. One group member is reportedly employed at a third-party contractor for Anthropic, so they had vendor-side access to pivot from.

  2. They guessed the URL where the model was hosted by extrapolating from Anthropic’s internal naming conventions.

  3. The naming convention info reportedly came from an earlier breach at Mercor, an AI training startup, where some other group had gotten visibility into Anthropic’s deployment practices. So this is a chained-breach situation: data from breach A enables access B.
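To give some intuition for step 2, here's a minimal sketch of why leaked naming conventions are enough to find a hosted model. All the names and the URL template below are hypothetical (nothing from the actual reporting); the point is just that a predictable template collapses the search space from "the whole internet" to a handful of candidates you can check by hand:

```python
from itertools import product

# Hypothetical naming pieces -- NOT Anthropic's real conventions.
# If deployments follow a template like
#   https://{project}-{model}-{stage}.{vendor-domain}
# then a leaked example of the template turns "find the URL"
# into "try a dozen combinations".
projects = ["glasswing", "gw"]
models = ["mythos", "mythos-preview"]
stages = ["preview", "staging", "prod"]

def candidate_urls(domain: str) -> list[str]:
    """Expand the naming template into a short list of candidate hosts."""
    return [
        f"https://{p}-{m}-{s}.{domain}"
        for p, m, s in product(projects, models, stages)
    ]

urls = candidate_urls("example-vendor.com")
print(len(urls))  # 2 * 2 * 3 = 12 candidates, trivially checkable
```

Twelve guesses is nothing; this is why "the URL is secret" is never an access control, and why the chained-breach angle (the naming info coming from the earlier Mercor breach) mattered so much here.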

What they did with it: per Bloomberg, “building simple websites” and general exploration. No malicious use reported. They also claim to have access to other unreleased Anthropic models, though that part isn’t confirmed.

Anthropic’s statement: they’re “investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments” and say there’s no evidence the activity impacted Anthropic’s own systems. Worth noting a ShinyHunters impersonator tried to take credit a couple days later with AI-generated screenshots, which researchers dismissed as fake.

Why this matters for this forum tbh:

The pattern is identical to the Discord customer-service breaches we’ve been tracking. Doesn’t matter how locked-down the primary company is if a vendor with legitimate access becomes the soft entry point. Same playbook, different industry.

Plus the Discord coordination angle: per reporting, the group runs a private Discord channel with bots that scrape GitHub for hints about unreleased AI models. That’s the same kind of low-grade OSINT that catches a lot of people who don’t realize their commits and configs are public.
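For a sense of how low-grade that OSINT really is, here's a hedged sketch of the kind of GitHub code-search query such a bot might build (the search terms and filenames are made up). It only constructs the search URL rather than calling anything, but the same query string works in GitHub's code search, and it's just as useful for auditing your own org's public footprint:

```python
from urllib.parse import urlencode

def github_code_search_url(terms: list[str], qualifiers: dict[str, str]) -> str:
    """Build a GitHub code-search URL from free-text terms plus
    qualifiers like org: or filename: (per GitHub's search syntax)."""
    q = " ".join(terms + [f"{k}:{v}" for k, v in qualifiers.items()])
    return "https://github.com/search?" + urlencode({"q": q, "type": "code"})

# Hypothetical example: look for a model codename leaking in config files.
url = github_code_search_url(["mythos-preview"], {"filename": ".env"})
print(url)
```

Point a scheduled job at a list of codenames like this and you've reproduced the group's "bots that scrape GitHub" setup. Running the same queries against your own organization before someone else does is the cheap defensive move.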

David Lindner (CISO at Contrast Security) made the obvious follow-on point in Fortune: if a hobbyist Discord group got in within hours of release, assume better-resourced state actors did too. He went so far as to say if a Discord group got it, “it’s already been breached by China.”

Sources:

https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims/

https://www.engadget.com/ai/anthropic-is-investigating-unauthorized-access-of-its-mythos-cybersecurity-tool-091017168.html

https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/

https://www.techbrew.com/stories/2026/04/23/random-discord-group-got-anthropic-mythos-before-cisa

https://cybernews.com/security/anthropic-mythos-ai-unauthorized-access/

I'm jealous, honestly. I'd love a chance to use that before everyone else. Isn't Mythos the one where Anthropic requested an immediate meeting with the administration as a matter of national security? Or am I thinking of a different one? If it's the one I'm thinking of, the people who accessed it likely made millions off the unknown security holes they were able to find and collect. Then again, Anthropic can also see the usage history, reconstruct what the hackers did, and pre-warn whoever may be in the crosshairs. But still.