Anthropic Gets Its P45: A British Guide to America’s AI Shambles in 47 Acts of Magnificent Confusion

WASHINGTON (filed with mild bewilderment from somewhere sensible) — The Trump administration has booted Anthropic out of the federal government’s technology cupboard with all the grace of a bouncer ejecting a philosophy professor from a Wetherspoons. The AI firm — creators of the Claude language model — has been given six months to disentangle itself from government contracts before the Pentagon formally labels it a “supply chain risk” and shows it the door marked Don’t Let the Algorithmic Firewall Hit You on the Way Out.

The Trump administration has given Anthropic six months to disentangle itself from government contracts after the Pentagon labeled the Claude AI maker a “supply chain risk” for refusing to enable autonomous weapons and mass surveillance.

Six months, mind you. That’s the same timeline the British government uses to decide whether to fix a pothole. Apparently, it’s also precisely how long it takes the world’s most powerful military to figure out that it hired a chatbot with a conscience — and now deeply regrets it. Claude was, by some accounts, the only AI model operating in sensitive U.S. defence environments. Rather like being the only dentist in Wyoming, then getting struck off because you kept recommending people floss.

The Pentagon’s charge? That Anthropic was trying to impose its own rather peculiar notions of AI safety upon the United States military. The Department of Defence duly declared the company a “supply chain risk” — a designation previously reserved for hostile nation-states and suspiciously cheap microchips. It’s rather like accusing your gas engineer of being a national security threat because he won’t install a bomb under the boiler.

When AI Safeguards Become Terribly Inconvenient for Defence Policy

The crux of the matter, for those of you just joining us from a more civilised country: Anthropic had the audacity to suggest that perhaps — perhaps — its AI ought not be deployed for mass surveillance or fully autonomous weapons systems that make life-and-death decisions without so much as a human raising an eyebrow. The company’s chief executive noted, with admirable British-style understatement for an American, that AI has certain limitations when it comes to questions of life and death. Revolutionary stuff.

The Pentagon responded by demanding “unrestricted” access — which in diplomatic language translates to: “We’d like Claude to do everything, potentially including things we’d rather not put in writing.” The phrase actually used was “all lawful purposes,” which is the governmental equivalent of a child promising to “be good” whilst hiding a catapult behind their back. It’s like giving a teenager a credit card and saying, “Just don’t do anything stupid.” The teenager will absolutely do something stupid.

A Totally Legitimate Public Opinion Survey on Military AI and Government Contracts

The Pentagon demanded “unrestricted access” for “all lawful purposes” — diplomatic language for “we’d like Claude to do everything, potentially including things we’d rather not put in writing.”

A rigorous (entirely fictional) survey conducted by the Provisional Institute of Mildly Concerned Citizens found that 68 percent of Americans believe AI should be limited to helping with crossword clues and arguing about whether a hot dog is a sandwich. A further 25 percent think it should recommend boxsets. The remaining 7 percent are convinced their Alexa has been reporting them to someone, and frankly, who’s to say they’re wrong.

Leading voices in AI ethics have offered characteristically measured commentary. Professor Zelda Parfit — a noted authority on geopolitical absurdity and catastrophic risk — told this correspondent: “If your AI assistant can’t book a restaurant without needing your NI number, it is not remotely prepared to coordinate a drone squadron.” Separately, a queue of civil servants at the DVLA reportedly said they’d welcome Claude warmly, provided it could sort out the backlog by Tuesday.

The Pentagon’s ‘All Lawful Purposes’ Federal Clause: A Plain English Translation

“All lawful purposes,” for the uninitiated, means everything that isn’t explicitly banned — which, in Pentagon terms, covers rather a lot of ground. According to internal government usage charts (which we have definitely seen), the category is capacious enough to accommodate nearly anything.

Anthropic’s refusal to facilitate things like domestic surveillance or kill-chain automation was characterised by senior officials as “unacceptable.” One unnamed Pentagon figure went further, claiming that Anthropic wished to “control how the U.S. military makes decisions” — a statement that sounds considerably more alarming than it probably was, and precisely the sort of thing one’s conspiratorial brother-in-law announces at Christmas dinner after his third glass of Baileys.

Industry Reaction: AI Rivals, Federal Agencies, and the Curious Case of Corporate Solidarity

The row may determine whether AI companies feel able to maintain ethical constraints when governments come knocking with large cheques and slightly alarming wish-lists involving autonomous weapons and surveillance systems.

Competitors including OpenAI and Google DeepMind have been circling Pentagon contracts with considerably less squeamishness about autonomous weapons guardrails. This has sparked what Washington insiders are calling a “values arms race,” wherein tech executives attempt to signal ethical seriousness whilst simultaneously not losing a government contract worth more than the GDP of a small island nation.

Most unexpectedly, Sam Altman of OpenAI — technically Anthropic’s rival — broke ranks to suggest that threatening companies over safety positions was rather counterproductive. Corporate solidarity on a geopolitical minefield of procurement law. Touching, really. Like watching two opposing football managers agree that the referee was a bit much.

What’s Actually at Stake: AI Ethics, Defence Contracts, and the Future of Bureaucratic Confusion

The short-term consequence is fairly clear: Anthropic loses its government contracts. The longer-term consequence is murkier and considerably more interesting. This row may well determine whether AI companies feel able to maintain ethical constraints when governments come knocking with large cheques and slightly alarming wish-lists. Policy experts at Brookings suggest the precedent set here could reshape the entire landscape of how Silicon Valley engages with the defence establishment.

A retired American general, when asked for comment, offered the following: “I’m all for safety. But if I have to sit through an AI’s terms and conditions during active combat, I’m going back to a paper map and a compass.” Which is, one must admit, a fair point — if not an especially reassuring one.

Conclusion: The Chatbot That Said Cheerio to Federal Agencies

OpenAI’s Sam Altman broke with competitive convention to defend Anthropic’s position, describing contract threats as counterproductive — a rare moment of tech-sector solidarity in an otherwise deeply silly week in Washington.

And so here we are. Anthropic spends the next six months quietly packing its servers whilst the Pentagon searches for an AI that will do everything asked of it without raising its metaphorical hand to ask a clarifying question about international humanitarian law. The administration insists this is about military utility. The company insists it’s about human dignity. The public is mostly wondering whether the smart fridge is taking notes.

What this saga confirms — for those of us watching from overseas with a cup of tea and a growing sense of unease — is that the most sophisticated artificial intelligence on earth cannot secure a government contract without an army of solicitors, a seventeen-person oversight committee, and at least one truly baffling press conference.

This article was produced in collaboration between a mildly alarmed human journalist and the world’s oldest tenured professor, alongside a philosophy graduate turned dairy farmer with strong opinions on both AI alignment and the proper temperature for a barn. Any resemblance to actual events, living politicians, or digital entities is entirely the point. Auf Wiedersehen, amigo!

The Trump administration’s move to cut Anthropic — makers of the Claude AI model — from US government contracts followed reporting by the Financial Times and ABC News that the Pentagon had labelled the firm a national security “supply chain risk.” The designation came after Anthropic refused to strip safety restrictions that would have permitted its AI to be used for autonomous weapons systems and domestic surveillance without human oversight.

