Thomas L. Friedman argues that Anthropic’s limited release of Claude Mythos Preview should be read not as a product launch, but as a warning flare. The company says the model can do far more than generate advanced software code. In testing, it also identified thousands of serious vulnerabilities, including flaws across all major operating systems and web browsers.
That shifts the conversation. The issue is no longer simply whether A.I. can help engineers build faster. It is whether a new generation of models can make high-end cyber offense dramatically cheaper, faster and more accessible.
Details
• Anthropic limited access to roughly 40 major technology and infrastructure-linked firms, including Google, Nvidia, Apple, Microsoft, Amazon and JPMorganChase.
• According to the article, the company concluded that the same capability that makes the model exceptionally strong at software development also makes it unusually powerful at detecting exploitable weaknesses in widely used software.
• Friedman stresses that this is not marketing theater: he writes that major technology companies were already in quiet discussions with the Trump administration about the national security implications before the announcement became public.
• Anthropic’s stated rationale for restricting access is that the model has already uncovered thousands of high-severity vulnerabilities. That matters because the systems under discussion help run electricity grids, water systems, hospital networks, airline platforms, communications infrastructure and military-linked environments.
• The central fear is straightforward: if tools of this class become widely available, sophisticated cyber intrusion may stop being the preserve of intelligence agencies, elite private-sector teams and well-funded criminal groups. Smaller actors could gain access to capabilities that were once expensive and rare.
• Friedman then pushes the argument into strategic territory. He says the U.S. and China, as the two leading A.I. powers, may need to cooperate to stop these capabilities from diffusing to malicious actors. In his framing, this starts to resemble a nonproliferation problem rather than a standard technology policy debate.
• He cites Craig Mundie, the former Microsoft executive, to argue for three urgent steps: tightly control access to the most advanced models, use the resulting delay to harden software and distribute defensive tools, and build secure, isolated environments for critical public and private services.
• The article’s deeper point is that this may be one of those threshold moments when a technical advance suddenly becomes a global governance problem.
What next?
The real question is whether governments and major technology firms move fast enough to build a defensive buffer before this level of capability spreads more widely. If they do not, the next phase of A.I. may be defined less by productivity gains and more by systemic cyber vulnerability.