Here's a number that should wake you up: 20 out of 21. That's how many simulated nuclear crisis games ended with at least one tactical nuclear weapon being detonated when researchers at King's College London let AI models play every side. And across all 21 matches, not once did an AI choose to de-escalate [1]. That study dropped the same week the US government blacklisted Anthropic, OpenAI signed a classified military AI deal, Elon Musk's xAI struck its own Pentagon agreement, and a Florida family filed a wrongful death lawsuit against Google over what Gemini told their son to do. This is the week AI governance stopped being a think-tank topic. It's here, it's messy, and nobody actually has the answers.
Claude Gets Blacklisted
The dispute started over a $200 million federal contract that Anthropic won and then watched implode almost immediately. The company sought guarantees that its Claude AI wouldn't be used for mass domestic surveillance or for autonomous weapons systems that could fire without a human in the loop. The US government refused to agree to those terms. The consequence: Anthropic was designated a "supply chain risk," a label that effectively banned the company from US defense contracts and told every federal contractor to stop using Claude [1] [2].

The downstream impact was immediate. Treasury, NASA, OPM, HHS, the State Department: agencies already using Claude in daily workflows were forced into compliance planning. At Treasury, developers were migrating from Claude Code to Codex, Gemini, and Grok. NASA's internal chatbot projects, built specifically on Claude, were more exposed. The hidden complication: many agencies were running Claude through intermediaries like Palantir or AWS, meaning the real audit burden wasn't just swapping a model name; it was revalidating security outputs across entire government systems [1].

Anthropic CEO Dario Amodei said the company saw "no choice but to challenge it in court," and on March 9th, Anthropic filed a federal lawsuit in California arguing the designation was unlawful and violated the company's free speech and due process rights. The ask: undo the designation and block federal agencies from enforcing it.




