Technology

Anthropic: We’ve Entered the Scary Phase of AI!

1- Anthropic unveiled Claude Mythos Preview, restricting access due to its advanced offensive cybersecurity capabilities.
2- The model demonstrated the ability to bypass sandbox environments and build multi-step exploits, with access limited via Project Glasswing.
3- The warning is clear: the threat is no longer theoretical, as governments and tech firms face AI systems with near-superhuman cyber capabilities.

Anthropic says artificial intelligence has entered a “scary phase” as it begins a tightly restricted release of its Mythos model. According to informed sources, the system could theoretically take down a Fortune 100 company, disrupt parts of the internet, or breach sensitive defense systems if misused. For that reason, the company has not made it public, limiting access to a small, vetted group of partners.

The core concern lies in Anthropic’s own evaluation: during testing, the model showed highly advanced capabilities in identifying and exploiting vulnerabilities, even managing to bypass restrictions within an isolated testing environment. This prompted the company to treat it as a system requiring fundamentally different safeguards than previous models.

Details

  • Anthropic launched Project Glasswing to grant access to Claude Mythos Preview only to selected entities, including major players such as Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, JPMorgan Chase, and the Linux Foundation
  • Reuters reported the initiative is designed for cybersecurity defense, with $100 million in model usage credits and $4 million allocated to support open-source institutions
  • Coverage by Wired and The Verge noted that the model wasn’t specifically trained for hacking, but its programming and reasoning abilities make it highly effective at analyzing systems, discovering critical vulnerabilities, and constructing attack paths similar to skilled security researchers
  • Some U.S. officials fear that senior decision-makers have yet to grasp the scale of this leap, while reports warn that similar models could emerge from other companies, including in China, in the near future
  • Anthropic also revealed that a state-backed Chinese group previously used an older Claude model to target around 30 organizations in a coordinated attack before it was detected

What’s Next?
The industry is watching whether this restricted-release approach becomes the new standard for handling high-risk AI models: limited access, vetted partners, and defensive use before broader deployment.

(Analysis)
The significance of this development goes beyond technology. Anthropic is not describing routine improvements in coding or productivity, but a dual-use capability: a defensive tool that can stay ahead of attackers, and an offensive tool that could become dangerous without strict controls. The “scary phase” reflects not just smarter AI, but a much narrower gap between protection and exploitation, based on disclosures from Anthropic and reporting by Axios, Reuters, and Wired.
