
U.S. Military Turns to Claude AI for Iran Targeting
The U.S. military has used Anthropic’s Claude AI to support real targeting decisions in Iran, even as the Pentagon moves to blacklist the company—exposing a fast‑deepening reliance on commercial AI in warfare and intensifying questions about oversight, accountability and industry–defense ties.
The U.S. military has quietly moved large‑scale generative AI from the lab into the kill chain. In recent strikes on Iran, commanders used Anthropic’s Claude model to analyze intelligence, simulate scenarios and help select targets, even after the Pentagon formally banned the company as a “supply‑chain risk” and ordered its systems phased out within six months.
That leap — from office productivity tool to decision support in live combat — is forcing an urgent reckoning over how much judgment can be delegated to opaque commercial models, and who is accountable when an AI‑shaped strike goes wrong.
Claude in the Iran campaign — after a formal ban
Reporting by The Wall Street Journal, echoed by outlets including The Guardian and CBS, indicates that U.S. forces used Claude to support air operations against Iran, including target selection and battlefield simulations, even after President Donald Trump announced that the government would sever ties with Anthropic and its tools. The Guardian said Claude was used “to inform” the attack, while CBS cited sources confirming the model helped enable the weekend strikes and remained in use afterward.[1]
A detailed incident report from the OECD’s AI Incident Monitor describes Operation Epic Fury, in which Anthropic’s services “alongside B‑2 bombers and drones” supported strikes on Iranian military infrastructure, explicitly linking the AI tools to lethal effects on the ground (OECD.AI). While the systems did not autonomously fire weapons, they were embedded in what militaries call the kill chain — the sequence from identifying a target to destroying it.
At the same time, Defense Secretary Pete Hegseth has moved to push Anthropic out of the defense ecosystem. On March 4, the Pentagon designated the company a supply‑chain risk, a label usually reserved for foreign adversaries, triggering a six‑month phase‑out of Claude from U.S. military systems, according to Defense News and TechCrunch. Yet Washington Post reporting makes clear that, in the Iran campaign, Claude was “central” to the most advanced AI‑driven operations the U.S. has ever fielded.
Lawsuits, exemptions and a new kind of military–industry bind
Anthropic is now suing the Pentagon, arguing that the supply‑chain risk label punishes it for insisting on safeguards against fully autonomous weapons and mass surveillance. The lawsuit, reported by Axios and Forbes, points to officials who have threatened to blacklist the company while simultaneously treating its AI as so essential to national security that they could invoke the Defense Production Act to force looser restrictions.
That contradiction captures the bind facing governments and labs. Claude has been extensively deployed across the Defense Department and other security agencies for intelligence analysis and operational planning, as The Washington Post reported. An NBC investigation into AI‑enabled Iran strikes found that Palantir software, drawing in part on Claude, is now helping pinpoint targets for airstrikes, with lawmakers already calling for clearer guardrails and oversight hearings (NBC Los Angeles).
Even as the Pentagon moves to excise Anthropic, it is racing to slot in alternative models. TechCrunch and Defense News report that OpenAI’s ChatGPT and other systems are being lined up to run on classified networks, deepening concerns that the government is swapping one black‑box commercial system for another rather than resolving foundational questions about testing, auditing and accountability.
From academic prototype to lethal infrastructure
The United States is not the first military power to lean on AI for targeting; Israel’s use of tools like Habsora and Lavender to generate bombing targets in Gaza has already sparked international criticism and fresh scholarship on “AI‑assisted targeting” and responsibility (Wikipedia). What is new in the Iran case is the centrality of a general‑purpose, privately built language model — one also used in schools, offices and consumer apps — to real‑time decisions about life and death.
AI systems can process sensor feeds and intelligence reports far faster than humans, highlighting patterns and possible courses of action that would otherwise be missed. But they also hallucinate, encode bias and remain largely unreviewable, creating a risk that error‑prone recommendations acquire a veneer of algorithmic authority.
Lawmakers now weighing oversight proposals face a compressed timeline. Generative AI moved from novelty to operational asset in barely three years, and the same tools writing emails and code can now shape target lists and strike timing. Whether Congress, the Pentagon and private labs can build credible guardrails before the next conflict may determine not just how wars are fought, but who bears responsibility when an AI‑assisted decision kills the wrong people.
Footnotes
1. The Wall Street Journal’s reporting is summarized in secondary coverage by The Guardian and Hoka News.