
AI Labs Move Into Pentagon, Leaving Ethics Behind
Anthropic is colliding with the Pentagon over how far AI should go in war while OpenAI steps in with a more permissive deal, exposing how private labs now shape lethal decisions with few public rules.
The most advanced commercial AI systems on earth are now helping pick targets and shape war plans inside secure U.S. military networks — without any clear, public rules about how far they can go.
In just weeks, a fragile experiment in self‑regulation has erupted into a raw power struggle between the Pentagon and Anthropic, while OpenAI has stepped in with a more permissive deal. Together, the episodes show how a handful of private labs are becoming de facto arms suppliers, deciding in private what counts as responsible use of AI in war.
Anthropic’s Claude model has quietly sat at the center of this shift. Through a government-focused "Claude Gov" offering, the system became the only large AI model cleared for use in U.S. classified missions, including a 2026 raid in Venezuela that killed dozens and captured President Nicolás Maduro, according to public reporting on the model’s deployment.
That success turned Anthropic into both an indispensable contractor and an inconvenient moral speed bump. The company’s usage policy explicitly bans support for lethal autonomous weapons and mass domestic surveillance, even when such uses might be lawful under current U.S. rules. Defense officials have pushed back hard, insisting that military use should be constrained only by law, not by a private vendor’s ethics code.
Over recent days, the standoff escalated into open confrontation. The Pentagon demanded that Anthropic let the military use Claude for "all lawful purposes" on its internal networks, including in classified environments. According to accounts in the Washington Post, the Guardian and others, Defense Secretary Pete Hegseth threatened to cancel contracts worth up to $200 million, label the firm a "supply‑chain risk" and even invoke the Defense Production Act to force access if the company did not drop its guardrails.
Anthropic CEO Dario Amodei refused, arguing that current frontier models are too unreliable to safely power fully autonomous weapons and that existing surveillance law does not anticipate AI tools capable of scanning and inferring across whole populations. His position briefly made Anthropic a rare counterweight to the national security apparatus — and a test case for whether a lab can say no to a superpower client.
The Trump administration responded by designating Anthropic a national security threat and blacklisting its technology across the federal government, according to the Washington Post. Yet reporting in the Guardian suggests Claude was still used by the U.S. military in a major joint strike with Israel against Iran, even after the designation, underscoring how quickly AI can become embedded in operations once deployed.
Into that vacuum has stepped OpenAI. Axios and other outlets report that the company has agreed to supply its models, including ChatGPT‑class systems, for use inside classified Pentagon environments under terms that allow any "lawful" use. Unlike Anthropic, OpenAI did not insist on explicit contractual bans on bulk collection of Americans’ public data or on specific military applications such as autonomous weapons. Instead, the company points to technical safeguards and internal policies, while acknowledging that public optics around the Pentagon deal have been poor.
For the Defense Department, this flexibility is a feature, not a bug. Officials argue that they already follow U.S. law and internal policies on surveillance and weapons, and that letting a contractor’s terms of service dictate battlefield options would undermine democratic control. From that perspective, an "any lawful use" clause simply restores the traditional balance between government and supplier.
Critics see something more troubling: an ethics‑free battlefield emerging by default. Because U.S. surveillance law and weapons policy barely address the specific risks of large-scale AI systems, allowing "anything legal" effectively hands enormous discretion to defense agencies operating under outdated rules. At the same time, shifting from a company that tried to hard‑code limits into its contracts to one that defers to government creates a clear competitive signal: ethics that bite can cost you the biggest customer on earth.
The result is a new kind of AI arms race. Google and Elon Musk’s xAI have also signed Pentagon deals framed around broad, lawful use, including in classified settings. With Anthropic sidelined and OpenAI filling the gap, major labs now face a stark choice: align with expansive military demands, or risk being painted as a security risk and cut out of vast government markets.
Regulators are badly behind. There is no dedicated U.S. law that governs how commercial frontier models may be used in targeting, surveillance or autonomous systems, nor any public, binding framework for how military AI contractors must log, audit or disclose battlefield uses. Instead, the world is watching a live experiment in self‑governance by private companies whose incentives tilt toward compliance with the most powerful buyer.
That experiment is already influencing lethal decisions. It is also quietly rewiring who controls the rules of war: not just presidents, generals and legislators, but boards, safety teams and product managers inside a handful of AI labs. Whether democracies can reclaim that authority before the next conflict tests these systems at scale may be one of the defining security questions of the decade.