
State AI crackdowns collide with Washington power plays
States from Colorado to California are racing to restrict high‑risk AI, especially in insurance, while national leaders move to rein in state authority, setting up a major battle over who writes the rules.
Lawmakers in red and blue states alike are moving to put guardrails on artificial intelligence in critical sectors like insurance — just as national Republicans and the White House explore ways to stop them. The clash is turning AI policy from a niche regulatory project into a live fight over federalism, industry power and how quickly protections reach consumers.
Over the past three years, states have moved much faster than Congress on AI. Colorado’s SB21‑169, enacted in 2021, bars insurers from using external consumer data, algorithms or predictive models in ways that unfairly discriminate on the basis of race and other protected traits, and forces companies to prove their models’ impacts through testing and risk‑management frameworks. (leg.colorado.gov) California followed with a suite of AI laws, including AB 2013, which from January 2026 will require developers of generative AI systems to publicly disclose information about the data used to train their models. (en.wikipedia.org) New York and California have since gone further, passing frontier‑model safety and transparency laws that put direct obligations on large AI developers rather than just users. (en.wikipedia.org)
Insurance regulators across the country are also tightening expectations around algorithmic underwriting. At least a dozen jurisdictions — including Alaska, California, Colorado, Connecticut, Illinois, Iowa and Kentucky — have adopted or closely mirrored the National Association of Insurance Commissioners’ model bulletin on AI. The bulletin pushes insurers to document how they use AI, maintain governance programs and test systems for unfair bias. (bipc.com) That effort has drawn support from both Republican and Democratic insurance commissioners, reflecting rare cross‑party concern that opaque models could drive hidden discrimination in pricing and claims.
Yet even as state momentum builds, national political actors are trying to rein it in. House Republicans last year tucked a 10‑year ban on state and local AI regulation into a flagship economic package, arguing that only Washington can prevent a patchwork of conflicting rules that might stifle innovation. The measure drew support from major tech firms seeking a single federal standard, but also sharp criticism from state officials and consumer advocates who called it a power grab that would delay meaningful protections. (apnews.com) The provision ultimately stalled in the Senate, where members of both parties raised concerns about federal overreach.
The idea of curbing state AI authority has not gone away. A Republican‑led push in the Senate to temporarily bar states from passing AI rules was removed from a budget bill only after an unusual alliance of conservative populists and Democrats rebelled, with critics warning it would tie states’ hands on issues like deepfakes and child safety. (theverge.com) And in November, Politico reported on a draft White House executive order that would direct the Justice Department and federal agencies to challenge or override state AI laws deemed overly burdensome, including by invoking interstate‑commerce arguments and conditioning some federal funds. (politico.com) The administration has called such reports speculative, but the episode underscored how central preemption has become in AI lobbying.
State leaders are pushing back. The National Conference of State Legislatures, a bipartisan body representing every state legislature, sent congressional leaders a letter in October 2025 “reaffirming” its opposition to any federal proposal that preempts state authority over AI. The group argued that states are acting as “laboratories of democracy” on issues ranging from algorithmic bias in health and insurance to children’s online safety, and warned that sweeping preemption would undercut both innovation and consumer protection. (ncsl.org) Attorneys general from both parties have also championed state‑level rules on deepfakes and misuse of biometric data, positioning themselves as front‑line AI enforcers if federal efforts stall.
The stakes go well beyond turf wars. If Congress or the White House succeeds in broadly preempting state AI laws without putting strong national standards in place, industries such as insurance, banking and health care could face fewer near‑term constraints on how they deploy AI — but at the cost of leaving consumers with patchier protections and fewer local avenues for redress. Conversely, if states continue to legislate aggressively, companies may confront a fragmented compliance landscape, with divergent disclosure and safety obligations that reshape where and how they roll out AI‑driven services.
For now, businesses and civil‑society groups are operating in limbo: state requirements are multiplying, federal rules remain largely aspirational, and the battle over who controls AI governance is only intensifying. As more sectors adopt algorithmic decision‑making — from pricing loans and insurance policies to screening tenants and job applicants — the outcome of that fight will determine not just the speed of AI deployment, but who gets to decide when powerful systems cross the line from innovation to abuse.