
OpenAI, the Pentagon, and AI’s New Security Role
As OpenAI edges closer to the Pentagon, frontier AI labs are being treated as de facto national-security infrastructure, without matching oversight, rules, or democratic control.
OpenAI’s quiet decision to allow some military applications of its models, combined with the Pentagon’s rapid embrace of generative AI, is pushing commercial labs into the heart of national security faster than governance can keep up. What began as consumer-facing chatbots is evolving into infrastructure that could shape surveillance, targeting decisions, and the tempo of war.
In January 2024, OpenAI removed an explicit ban on “military and warfare” from its usage policies, while retaining prohibitions on weapons development and direct physical harm. TechCrunch and CNBC reported that the change was made to accommodate “national security” customers, with the company confirming it was already working with DARPA on cybersecurity tools.【0search1】【0search0】 That move marked a symbolic break from Silicon Valley’s earlier reluctance to touch military AI projects, and it signaled to Washington that frontier models were now on the menu for defense.
The Pentagon, meanwhile, is racing to operationalize generative AI across the force. In 2025 it signed a contract known as “Thunderforge” with Scale AI to help commanders plan movements of ships, aircraft, and other assets, using large models to draft orders, summarize intelligence, and generate auditable recommendations.【0news13】 The Defense Health Agency has cataloged more than 120 AI projects in military medicine alone, including multiple generative systems for training and clinical support, and has begun red-teaming popular large language models for vulnerabilities before deployment in care settings.【0search2】
Those projects remain a step removed from fully autonomous weapons. DoD’s own AI assurance researchers stress that systems should support, not replace, human decision-makers, and have proposed elaborate risk management frameworks to keep AI-enabled platforms within acceptable bounds.【0academia18】 But as customers like the Pentagon, DARPA and combatant commands integrate commercial models into command-and-control and logistics pipelines, the status of companies such as OpenAI, Anthropic and Google DeepMind begins to resemble that of defense primes or even critical infrastructure providers.
Policy is shifting to match. Biden’s 2023 AI executive order, which instructed agencies to mitigate AI-enabled threats to civil liberties and national security, was rescinded in early 2025 by President Trump, who has instead emphasized removing “barriers” to U.S. AI leadership and ideological constraints on federal AI tools.【0search19】【0news15】 At the Commerce Department, the AI Safety Institute has been rebranded as the Center for AI Standards and Innovation, with an explicit focus on national security threats, cyber and bio risks, and global AI competition.【0news14】
The result is a subtle but important reframing: AI is no longer primarily a consumer technology that happens to have security implications; it is a strategic asset the government expects to weaponize, in the broad sense of gaining advantage over adversaries. Public guidance from agencies like the Government Accountability Office now treats commercial models as building blocks for federal systems, documenting development and testing practices while acknowledging persistent vulnerabilities and the limits of current safety evaluations.【0search6】
Inside the Pentagon, this accelerates demand for flexible contracts. Defense officials and industry lawyers describe a growing push for clauses that allow models to be used for “any lawful use,” language designed to avoid hard-coded bans that could become operational constraints as conflicts evolve. Civil-society groups and some lab insiders warn that such formulations are too broad for technology capable of assisting with target selection, information operations or mass surveillance, even if firms maintain formal prohibitions on direct weapons control or “communications surveillance.”
OpenAI’s current rules, for example, still forbid using its tools to harm people, develop weapons or conduct communications surveillance, and the company insists that national-security collaborations will remain aligned with its stated mission.【0search1】【0search0】 Yet the removal of a categorical military ban, coupled with classified or sensitive partnerships that are only partially disclosed, makes it difficult for the public to know where lines are actually being drawn.
This opacity is colliding with a broader governance gap. While the U.S. now treats leading AI capabilities as central to geopolitical competition, labs developing those capabilities remain private companies with corporate boards, profit motives and limited security clearances. They are not regulated like nuclear facilities or telecommunications backbones, even as their systems move closer to battlefield relevance. Nor is there a settled answer to who ultimately sets red lines when military utility conflicts with platform policies: a cabinet secretary, a contract officer, a lab’s trust-and-safety team, or a CEO under shareholder pressure.
For now, the convergence of OpenAI and the Pentagon is proceeding through pilots, prototypes and standards work rather than fully autonomous kill chains. But the direction of travel is clear. As generative models become embedded in planning, intelligence and medical workflows, they will be treated—and targeted—as national-security infrastructure. That raises urgent questions not just about reliability and resilience, but about democratic control: who gets to decide how far AI goes into the machinery of war, and on whose behalf those decisions are made.