Anthropic defies Pentagon as OpenAI cuts new deal

A Pentagon blacklist of Anthropic and a fresh OpenAI defense deal expose how quickly voluntary AI safety pledges buckle under military pressure, with major stakes for war tech and democratic oversight.

The U.S. Department of Defense has blacklisted Anthropic as a “supply-chain risk” after the company refused to loosen guardrails on its Claude chatbot for military use, even as rival OpenAI touts a new agreement with the Pentagon. Together, the moves lay bare how fragile self-imposed AI safety rules become once they collide with national security demands.

According to the Washington Post, Defense Secretary Pete Hegseth on 27 February ordered federal agencies and military contractors to stop using Anthropic’s products, an extraordinary step typically reserved for foreign adversaries. In public statements and a company blog post, Anthropic CEO Dario Amodei said the firm “cannot in good conscience” accept Pentagon terms that would allow its models to be used for mass domestic surveillance or fully autonomous weapons, even if officials insist they have no such plans.

Just 24 hours later, OpenAI announced that it had reached its own AI deployment agreement with the Department of Defense for classified environments, emphasizing three red lines: no mass domestic surveillance, no directing autonomous weapons systems, and no high‑stakes automated social control. The company argued that its contract includes “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” a pointed comparison that signals how commercial rivals are turning safety positioning into competitive leverage.

The clash follows years of voluntary governance commitments by both labs. Anthropic has cast itself as the industry’s most cautious player, adopting a Responsible Scaling Policy with internal “AI Safety Levels” and tying more powerful Claude models to strict controls. It also formed a National Security and Public Sector Advisory Council and launched Claude Gov to serve classified government work, securing a prototype contract worth up to $200 million with the Pentagon’s Chief Digital and Artificial Intelligence Office.

OpenAI, meanwhile, quietly removed an explicit blanket ban on military use from its usage policy in January 2024, as reported by CNBC, while maintaining prohibitions on weapons development and “high‑risk” physical harm. That change cleared the way for national security collaborations, including cybersecurity support for the Pentagon, and set the stage for the 2026 classified‑deployment agreement.

The current standoff is not about whether AI will be used by militaries, but under what terms and whose values will govern that use. In Anthropic’s case, the Pentagon has demanded contract language allowing “all lawful purposes” and resisted embedding bans on autonomous weapons and domestic mass surveillance, even while senior officials say they have no intention of pursuing either. When Anthropic held firm, Hegseth’s team threatened to invoke the Defense Production Act to access its technology anyway and then followed through with a sweeping blacklist that bars defense contractors from working with the company.

That escalation sends a clear message to the rest of the industry: resist too hard on red lines and you may be treated as a national security risk. For a capital‑intensive sector where leading firms depend on government contracts, export licenses and access to advanced chips, the signal is likely to reverberate well beyond Anthropic. It also puts employees who joined these labs on the promise of principled restraint into direct conflict with state power.

At the same time, OpenAI’s deal shows how easily guardrails can be recast as negotiable features of commercial contracts rather than hard constraints. The company’s public framing suggests a middle path—cooperation with the Pentagon under self‑defined limits—but the terms are not statutory, and enforcement ultimately rests with the same company whose business interests they constrain. Critics warn this cements an ad‑hoc governance model where the most powerful AI systems sit behind closed‑door agreements between executives and defense officials, rather than democratically debated rules.

The episode also underscores a shift in Washington’s own posture. The Biden‑era AI Safety Institute has already been refocused into the Center for AI Standards and Innovation, with a mission oriented around national security and economic competition rather than broad risk mitigation. With the Trump administration rolling back earlier safety initiatives, frontier labs have tried to fill the gap with their own policies—policies now being stress‑tested by the same government that benefits from their flexibility.

For democracies, the stakes go beyond one contract. AI systems that can synthesize intelligence, generate targeting options, or analyze population‑scale data are becoming tools of state power. Whether companies can credibly refuse certain uses, or whether governments can effectively compel access, will define not only the future of military AI, but also public trust in the firms that build it.

The Pentagon’s blacklist of Anthropic and its parallel embrace of OpenAI are the clearest evidence yet that self‑imposed guardrails are colliding with geopolitical realities. What happens next will determine whether those guardrails harden into law or erode into marketing copy.

Tags

#ai governance, #military ai, #anthropic, #openai, #pentagon, #democracy