Anthropic's weapon-grade cybersecurity model Mythos was accessed without authorization: how did they do it?

ChainNewsAbmedia

Bloomberg reports that a private forum group allegedly announced publicly on the same day that it had broken through the restrictions on Mythos, one of Anthropic's security models. Using access permissions held by a third-party contractor, the group successfully entered the system and used the model, raising outside concerns about the safety governance of top-tier AI models.

(Anthropic launched its global cybersecurity initiative Glasswing, so why isn’t the new model Mythos open to the public? )

Mythos was hit by unauthorized access on its first day online

On April 7, Anthropic announced a new network security AI model, Claude Mythos; however, a private online forum group whose identity has yet to be made public reportedly quietly obtained access to the model.

According to reports, the group did not break in using traditional hacking methods. Instead, it leveraged knowledge of Anthropic's past model URL formats to infer where Mythos sat within the system. The key loophole was a staff member employed by an Anthropic third-party contractor: he already held legitimate authorization to view Anthropic AI models, and the forum group entered the system through this otherwise compliant entry point.
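For illustration only: the kind of pattern-based endpoint guessing described above can be sketched in a few lines. The base URL, version string, and model names below are all hypothetical; the actual naming scheme the group exploited has not been disclosed.

```python
# Hypothetical sketch of pattern-based endpoint enumeration.
# None of these URLs or model names are real; they stand in for
# the undisclosed naming conventions the group reportedly inferred.
from itertools import product

def candidate_urls(base="https://api.example.com", version="v1"):
    """Enumerate plausible model endpoints from a known URL pattern."""
    model_families = ["claude-mythos", "mythos"]
    suffixes = ["", "-preview", "-latest"]
    return [
        f"{base}/{version}/models/{family}{suffix}"
        for family, suffix in product(model_families, suffixes)
    ]

for url in candidate_urls():
    print(url)
```

The point of the sketch is that no exploit is involved: an attacker who already holds valid credentials simply probes each guessed URL until one responds, which is why Anthropic characterized the incident as permission abuse rather than a hack.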

Afterward, the group provided Bloomberg with screenshots and a live demonstration as proof, and revealed that it has continued using Mythos up to now. It emphasized, however, that its purpose was only "to tinker with a new model," and that it avoided any destructive activity because it did not want to be discovered.

What is Mythos? Why has it raised concerns from the outside world?

Claude Mythos is an AI model built by Anthropic specifically for enterprise cybersecurity defense. The team defines it as a tool that is “too powerful to be suitable for public release.” Its core capability is to proactively identify security vulnerabilities in digital systems, helping enterprises complete patching before they are attacked.

However, this “defense sword” can also be a “double-edged blade.” Anthropic acknowledged that once Mythos falls into the hands of malicious actors, its capabilities could also be used to launch attacks. Therefore, the company, through a cybersecurity initiative called “Project Glasswing,” only opens Mythos to a small number of major institutions or technology companies that have undergone strict review.

The core assumption behind this closed-off governance mechanism is that trusted partners can ensure that each other’s access permissions will not leak.

(Anthropic Mythos raises regulatory concerns, and Bessent, Powell, and bank executives hold an emergency meeting)

Anthropic’s response: We’re investigating; there’s no impact

In response, Anthropic said: "We are investigating a report claiming that Claude Mythos Preview was accessed without authorization through a third-party provider environment." The company emphasized that it has so far found no impact on its own systems, and that the incident is initially believed to be "more likely an abuse of access permissions than an external hacking attack."

Even though the users who obtained early access to Mythos have not engaged in malicious behavior, the incident itself has cybersecurity experts on high alert. Raluca Saceanu, CEO of the cybersecurity company Smarttech247, pointed out:

Once powerful AI tools are accessed or used outside established governance mechanisms, the risk is not limited to a cybersecurity incident; it could also raise concerns about fraud, cyber abuse, or other malicious uses.

What impact will this have? Weak points in AI security controls

What truly concerns people about this incident is not any attempted sabotage, but the systemic weakness it reveals: when an AI company hands access to highly sensitive models to third-party vendors, a lapse in any link of the control chain can become a loophole and trigger a crisis.

The Mythos incident serves as a reminder to the entire industry that, as AI capabilities advance rapidly, security architecture cannot rely on trust alone; it also needs institutional resilience that can withstand failures of trust. For Anthropic, rebuilding public confidence in its partner control mechanisms will be a longer-term challenge than the investigation itself.

This article, "Anthropic's weapon-grade cybersecurity model Mythos was accessed without authorization: how did they do it?", first appeared on Chain News ABMedia.

