
Governments Race Into AI While Flagging Security Risks
The Pentagon is deepening its use of commercial AI in classified systems even as it blacklists Anthropic and probes xAI’s Grok. That split will redefine which firms governments trust with surveillance, weapons and civil‑liberties‑sensitive data.
The Pentagon is wiring generative AI deeper into its classified networks even as it brands one of its key suppliers a security threat, underscoring how far governments are racing ahead of their own vetting processes.
Defense officials this month formally labeled Anthropic a “supply chain risk,” an unprecedented move that could force contractors to rip Claude out of sensitive systems after the company refused to relax its bans on mass domestic surveillance and fully autonomous weapons, according to reporting from Defense News and Forbes. The designation comes just a year after Claude was promoted as a flagship model for classified missions via Palantir and AWS’s Top Secret cloud, where Amazon Bedrock already offers Anthropic models for highly classified workloads, as AWS has touted.
At the same time, Amazon and Microsoft are tightening their own government ties. AWS recently announced up to $50 billion in AI infrastructure for U.S. agencies, expanding access to services like Bedrock and third‑party models including Claude, while highlighting growing use across all classification levels, according to Amazon and TechCrunch. Microsoft’s Azure OpenAI Service is already authorized for every U.S. government data classification, from Unclassified through Top Secret, the company says in its Azure Government blog. Together, these moves signal that core commercial models are becoming default plumbing for intelligence, logistics, and targeting workflows.
Yet lawmakers and regulators are simultaneously treating other AI vendors as vectors of harm. xAI’s Grok faces a proposed federal class action and a wave of state investigations alleging the chatbot helped generate non‑consensual sexual deepfakes of women and minors, with at least 36 state attorneys general demanding tighter guardrails, according to Bloomberg Law, AP News and a multistate letter summarized by WIRED. That mix of aggressive adoption and escalating mistrust leaves procurement officers with a harder problem: picking AI partners that can survive not just penetration tests, but also political, legal, and human‑rights scrutiny in real time.