Bloomberg reports that a private forum group allegedly announced publicly, on the day of the model's launch, that it had broken through the restrictions on Mythos, Anthropic's cybersecurity AI model, by using access permissions held by a third-party contractor to enter the system and use the model, raising outside concerns about the safety governance of top-tier AI models.
Mythos was hit by unauthorized access on its first day online
On April 7, Anthropic announced a new cybersecurity AI model, Claude Mythos; however, a private online forum group, whose identity has not been made public, reportedly obtained quiet access to the model that same day.
According to reports, the group did not break in using traditional hacking methods. Instead, they used their knowledge of the URL formats of Anthropic's past models to infer where Mythos would be located within the system. The key weak point was an employee of one of Anthropic's third-party contractors who already held legitimate authorization to access Anthropic's AI models; the forum group entered the system through this otherwise compliant entry point.
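To make the mechanism concrete, here is a minimal, purely hypothetical Python sketch of the pattern the report describes: a predictable endpoint naming scheme combined with a credential whose scope is broader than it needs to be. The base URL, model names, and token placeholder are invented for illustration and do not come from the report.

```python
import requests

# Purely illustrative: URLs, model names, and the token placeholder are assumptions,
# not details from the report.
BASE = "https://models.example-lab.internal/v1"          # assumed endpoint layout
KNOWN = ["claude-alpha-preview", "claude-beta-preview"]  # naming pattern already seen
GUESS = "claude-mythos-preview"                          # inferred from that pattern

session = requests.Session()
# The credential itself is legitimate; the issue is how much it is allowed to reach.
session.headers["Authorization"] = "Bearer <contractor-token>"

for model in KNOWN + [GUESS]:
    resp = session.get(f"{BASE}/models/{model}")
    # A success on the guessed name means the credential's scope, not an exploit,
    # is what grants access to the unreleased model.
    print(model, resp.status_code)
```

Nothing in this flow is "hacking" in the traditional sense, which is consistent with Anthropic's own description of the incident as abuse of access permissions rather than an external attack.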
Afterward, the group provided Bloomberg with screenshots and a live demonstration as proof, and said that it has continued using Mythos to this day. However, the members emphasized that their purpose was only "to tinker with a new model" and that they had no intention of carrying out anything destructive, partly because they did not want to be discovered.
What is Mythos? Why has it raised concerns from the outside world?
Claude Mythos is an AI model built by Anthropic specifically for enterprise cybersecurity defense; the team describes it as a tool "too powerful to be suitable for public release." Its core capability is proactively identifying security vulnerabilities in digital systems so that enterprises can patch them before they are attacked.
However, this defensive sword is also double-edged. Anthropic has acknowledged that if Mythos falls into the hands of malicious actors, its capabilities could just as easily be used to launch attacks. The company therefore makes Mythos available only to a small number of major institutions and technology companies that have passed strict review, under a cybersecurity initiative called "Project Glasswing."
The core assumption behind this closed governance mechanism is that trusted partners can keep their access permissions from leaking.
Anthropic’s response: We’re investigating; there’s no impact
In response, Anthropic said: “We are investigating a report claiming that Claude Mythos Preview was accessed without authorization through a third-party provider environment.” The company emphasized that, at present, it has not found that its own systems have been affected, and the incident is initially believed to be “more likely abuse of access permissions than an external hacking attack.”
Even though the users who gained early access to Mythos do not appear to have acted maliciously, the incident itself has put cybersecurity experts on high alert. Raluca Saceanu, CEO of the cybersecurity company Smarttech247, pointed out:
Once powerful AI tools are accessed or used outside established governance mechanisms, the risk is not limited to a single cybersecurity incident; it also extends to fraud, cyber abuse, and other malicious uses.
What impact will this have? Weak points in AI security controls
What truly worries people about this incident is not that anyone attempted sabotage, but the systemic weakness it exposes: when an AI company extends access to highly sensitive models through third-party vendors, a lapse at any link in the control chain can become a loophole and trigger a crisis.
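One way to read that weakness is that possession of a valid credential was, in effect, treated as sufficient authorization. The sketch below shows the kind of per-model check that would make a leaked or borrowed credential insufficient on its own; the model names, organizations, and policy table are invented for illustration and do not describe Anthropic's actual access-control design.

```python
from dataclasses import dataclass

# Illustrative policy table: which organizations are explicitly allowlisted
# for which restricted models (all names are hypothetical).
MODEL_ALLOWLIST = {
    "mythos-preview": {"vetted-partner-a", "vetted-partner-b"},
}

@dataclass
class Principal:
    org: str            # the organization the credential was issued to
    token_valid: bool   # whether the credential itself checks out

def may_access(principal: Principal, model: str) -> bool:
    """A valid credential alone is not enough; the holder's organization must
    also be explicitly allowlisted for this specific model."""
    if not principal.token_valid:
        return False
    return principal.org in MODEL_ALLOWLIST.get(model, set())

# A contractor holding a perfectly valid token is still refused access:
contractor = Principal(org="third-party-vendor", token_valid=True)
print(may_access(contractor, "mythos-preview"))  # False
```

Per-model allowlisting is only one layer; short-lived credentials and audit logging of every model call would be natural complements, and they point in the same direction as the "institutional resilience" the article calls for below.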
The Mythos incident is a reminder to the entire industry that, as AI capabilities advance rapidly, security architecture cannot rely on trust alone; it also needs institutional resilience that holds up when trust fails. For Anthropic, rebuilding public confidence in its partner-control mechanisms will be a longer-term challenge than the investigation itself.
This article, "Anthropic's weapon-grade cybersecurity model Mythos was accessed without authorization: how did they do it?", first appeared on Chain News ABMedia.