
AI Goes to War as Companies Clash With the Pentagon
AI is moving from office automation into the heart of war planning, targeting and procurement battles. Palantir’s war-planning chatbot, a deadly Iran school strike and Anthropic’s clash with the Pentagon show how military power and AI governance are colliding.
AI systems are no longer just automating office work; they are now proposing targets, sequencing missile strikes and sitting at the center of an unprecedented legal fight between a leading AI lab and the U.S. military. From Palantir’s chatbot that drafts war plans to Anthropic’s lawsuit over a Pentagon “supply-chain risk” label, national security has become one of the most contested frontiers for artificial intelligence.
The shift is forcing governments, companies and civil society to confront who is accountable when an algorithm helps choose a target—or when a contractor refuses to let its model do so.
Chatbots that sketch war plans
In 2023, Palantir unveiled its Artificial Intelligence Platform (AIP) for defense, showing a large language model–style interface that lets an operator ask a chatbot to identify enemy assets, recommend response options and generate an operational plan, including jamming communications and ordering drone strikes. In the company’s own demo and subsequent coverage, the human “in the loop” is mostly there to approve or reject AI-suggested actions, not to design the plan from scratch. Palantir’s description of AIP and independent analysis by the Bulletin of the Atomic Scientists underline how easily complex targeting workflows can be wrapped in a conversational interface.
Palantir says the software will not strike targets autonomously and insists that human commanders retain control. Yet as its AI-enabled tooling is woven into programs like Project Maven—an intelligence and targeting initiative now using generative models for planning and decision support—the risk is that militaries lean on chatbots for speed in chaotic environments where data is incomplete and political constraints are fluid. OECD incident reporting on Palantir and reporting on Maven’s evolution show how quickly these systems are moving from demo to deployment.
Lawmakers probe AI’s role in a deadly school strike
Those concerns turned urgent after a February 28, 2026, strike on a girls’ school complex in Minab, Iran, killed at least 175 people, many of them children. A reconstruction by The Washington Post found that, as the U.S.-Israeli war against Iran escalated, a U.S. target list may have misclassified the site as a military facility.
In response, more than 40 Democratic senators and representatives, led by Sen. Michael Bennet, demanded answers from the Pentagon, explicitly asking what role “artificial intelligence or machine-learning tools” played in assembling the target list, whether a no‑strike list was used, and how often AI recommendations can override human judgment. Their March 11 letter, published by Sen. Bennet’s office, signals growing unease in Congress about opaque AI-assisted targeting processes that can kill civilians before lawmakers even know such tools were in the loop.
Anthropic vs. the Pentagon: a supply‑chain risk fight
At the same time, Anthropic—maker of the Claude AI models—is in an open legal battle with the U.S. Defense Department over how far a commercial lab must go in enabling military uses of its systems. After months of tense negotiations over a July 2025 classified‑systems contract, Defense Secretary Pete Hegseth in late February 2026 formally designated Anthropic a “supply‑chain risk,” a label historically reserved for foreign adversaries and now applied to an American AI company for the first time. Legal analyses from firms like Mayer Brown note that the designation forces Pentagon contractors to certify they are not using Anthropic’s models in work touching national security.
Anthropic responded on March 9 with a federal lawsuit arguing the designation is unlawful and retaliatory, saying the government is punishing it for refusing “unrestricted” military use of Claude, including for fully autonomous weapons and mass surveillance. Coverage by TechCrunch and AP News underscores how extraordinary the case is: Microsoft and retired senior military officers have backed Anthropic in court, warning that weaponizing a procurement tool against a domestic AI vendor could chill dissent and distort the defense AI market.
A new battleground over accountability and control
Taken together, Palantir’s demos, the Iran school strike probe and the Anthropic fight sketch a world where AI is deeply embedded in war-making but the lines of accountability are unresolved. Commercial models are drafting plans and fusing intel; militaries are turning to AI‑augmented target lists; and companies that try to limit how their models are used can find themselves accused of endangering national security.
Researchers have long warned that lethal decision‑making in war is too messy and political to be safely delegated to systems optimized for pattern‑matching on past data. Analyses from the Bulletin of the Atomic Scientists and recent policy work on military AI governance, such as papers archived on arXiv, argue for binding rules that keep humans meaningfully in control and subject to law.
The current disputes make clear that voluntary principles and internal safety policies are not enough. As AI becomes a de facto combatant—drafting options, ranking targets, filtering intelligence—the fight over who gets to switch it on, and for what, is set to reshape both the tech industry and the laws of war.