How AI’s Biggest Labs Are Getting Entangled With the Pentagon

OpenAI and Anthropic now sit at the center of a messy clash between military demand, corporate ethics and political theater, as U.S. forces both ban and quietly weaponize the same AI tools.

The United States is simultaneously banning and weaponizing the same commercial AI systems — and two of the world’s most influential labs now sit at the heart of that contradiction.

In the space of a few weeks, Anthropic has been denounced by the Trump administration, reportedly used in lethal operations, and then formally edged out of Pentagon work, even as OpenAI rushed into a classified defense deal that CEO Sam Altman now concedes was pushed through at high speed. The result is a volatile mix of military dependency, corporate bargaining and political posturing that will help decide how AI is governed on tomorrow’s battlefields.

Anthropic: from classified partner to political target

Anthropic’s Claude was the first commercial large language model publicly reported to have been used in a classified U.S. military operation. The Wall Street Journal revealed that Claude, accessed through Anthropic’s partnership with defense contractor Palantir, supported the 2026 raid to capture Venezuelan President Nicolás Maduro. That operation involved extensive bombing in Caracas and dozens of deaths, according to Venezuelan authorities.

Yet Anthropic’s own usage policies explicitly bar violent applications, weapons development and surveillance, and reporting indicates those rules have already blocked some U.S. law enforcement and intelligence uses. That tension exploded into public view this month. The Guardian and other outlets report that U.S. forces again used Claude in a joint U.S.-Israeli strike on Iran, even after President Donald Trump signed an order severing federal ties with Anthropic and branding it a “Radical Left AI company.”

At the same time, Financial Times and Washington Post reporting describes a deeper standoff: Defense Secretary Pete Hegseth threatened to cut Anthropic out of the defense supply chain, or to invoke the Defense Production Act, unless the company relaxed its prohibitions on lethal autonomous weapons and mass domestic surveillance. CEO Dario Amodei refused, warning that such uses would outstrip current safety limits and undermine democratic norms.

The paradox is stark: Anthropic is being penalized for being too restrictive even as its technology is reportedly used in ways that appear to violate those very restrictions.

OpenAI’s rushed deal — and a very different theory of control

Into that vacuum has stepped OpenAI. As Axios and other outlets recount, the Pentagon moved to replace Anthropic by expanding classified work with OpenAI just as the Iran controversy peaked. In a Q&A on X and in follow-up media interviews, Sam Altman confirmed a new agreement that covers classified uses of OpenAI models and acknowledged that negotiations “shifted into high gear” in recent days.

Altman has framed OpenAI’s approach as relying more on technical safeguards, such as model-level restrictions, monitoring and red-teaming, than on stringent contractual language. By contrast, he suggested, Anthropic pushed aggressively for legally binding limits that the Pentagon resisted. He also admitted the optics of the deal “may not appear ideal” and described the final sprint as “definitely rushed,” even as he defended the decision as necessary to “de-escalate the situation” created by Anthropic’s rupture with the Department of Defense.
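For readers unfamiliar with the distinction, a model-level restriction typically means gating requests in software before or during inference, rather than writing limits into a contract. The sketch below is a minimal illustration under that assumption only: the category names, keyword lists and every function here are hypothetical stand-ins for the trained policy classifiers real labs deploy, not any lab’s actual safeguard stack.

```python
# Illustrative only: a toy "model-level restriction" that screens requests
# before inference. All names are hypothetical; production systems use
# trained classifiers and layered monitoring, not keyword matching.

BLOCKED_CATEGORIES = {"weapons_development", "mass_surveillance", "lethal_targeting"}

# Hypothetical keyword map standing in for a trained policy classifier.
POLICY_KEYWORDS = {
    "weapons_development": ("build a weapon", "explosive synthesis"),
    "mass_surveillance": ("track every citizen", "bulk location data"),
    "lethal_targeting": ("strike coordinates", "select targets"),
}

def classify_request(prompt: str) -> set[str]:
    """Return the policy categories a prompt appears to touch."""
    text = prompt.lower()
    return {cat for cat, words in POLICY_KEYWORDS.items()
            if any(w in text for w in words)}

def model_generate(prompt: str) -> str:
    """Placeholder for the underlying model call."""
    return f"[model output for: {prompt!r}]"

def guarded_generate(prompt: str) -> str:
    """Refuse before inference when a request hits a blocked category."""
    violations = classify_request(prompt) & BLOCKED_CATEGORIES
    if violations:
        return "Refused: matches blocked categories " + ", ".join(sorted(violations))
    return model_generate(prompt)

print(guarded_generate("summarize today's logistics report"))  # passes the gate
print(guarded_generate("compute strike coordinates for the raid"))  # refused
```

The contractual approach Anthropic reportedly favored would instead encode such limits as legally binding terms enforceable outside the software, which is precisely the mechanism the Pentagon is said to have resisted.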

For the Pentagon, this is not a purely philosophical dispute. Officials have been explicit that they “won’t employ AI models that won’t allow you to fight wars,” as Hegseth put it earlier this year. That stance virtually guarantees ongoing friction with companies that build human-rights protections or limits on lethal use into their products.

Bans, exceptions and the erosion of public trust

The Pentagon drama is unfolding against a wider backdrop of fractured AI policy inside the U.S. government. The U.S. Space Force temporarily banned web-based generative AI on its systems in 2023 over security concerns, and the U.S. Patent and Trademark Office issued an internal ban on generative AI for most staff in 2023–24, citing risks of data leakage, bias and unpredictability. Yet other agencies and commands are now deeply embedding the same tools into targeting, intelligence and logistics.

This patchwork approach has three big consequences.

First, it shifts immense power to a tiny circle of CEOs and defense officials who are effectively writing the rules for AI in war through hurried contracting rather than democratic debate. The open clash between Anthropic and the Pentagon shows what happens when a firm’s internal safety doctrine collides with military doctrine — and how quickly governments may try to override corporate red lines.

Second, it blurs accountability. When an AI model built and trained for civilian use is quietly repurposed for drone coordination or strike planning via a third-party platform, responsibility for outcomes becomes diffuse. The reported use of Claude in Venezuela and Iran through Palantir illustrates how easily technical and contractual safeguards can be routed around by intermediaries.

Third, it undercuts public trust. Citizens are being told, often in the same week, that certain AI tools are too risky for patent examiners or rank-and-file civil servants, yet safe enough to help choose targets and run battlefield simulations. That dissonance fuels accusations of hypocrisy and will make it harder to secure buy-in for any future AI safety regime, whether domestic or international.

The arms race for military-grade AI is now inseparable from a quieter contest over values and control. As defense demand collides with lab-level ethics and political theater, the question is no longer whether commercial AI will be used in war, but who gets to decide how — and what happens when those decisions are made in secret, at speed, and under fire.

Tags

#military-ai #openai #anthropic #pentagon #ethics #surveillance