
AI Tactics Race Ahead of Election Laws Worldwide
From deepfake robocalls in the US to synthetic campaign images in New Zealand, AI is reshaping elections faster than regulators can respond.
Synthetic voices, cloned faces and auto‑generated campaign messages are already flowing into election cycles across the world. But the legal systems meant to safeguard fair contests still largely treat this as tomorrow’s problem, even as major national votes in 2026 draw closer.
In the United States, the danger moved from hypothetical to concrete in January 2024, when thousands of New Hampshire voters received robocalls using an AI‑generated clone of President Joe Biden’s voice, urging Democrats not to vote in the state’s primary. State investigators traced the operation to a political consultant and Texas telecom firms, and the US Federal Communications Commission (FCC) responded by declaring AI‑generated voice cloning in robocalls illegal under existing consumer protection law. The FCC later proposed a multimillion‑dollar fine against the consultant involved. Yet the calls went out only days before voting began, and the regulatory response came only afterwards, illustrating how quickly AI tools can be weaponised before existing rules catch up.
Other democracies are seeing similar pressure points. In Australia, independent senator David Pocock released fabricated videos of the prime minister and opposition leader in 2024 to demonstrate how easily generative AI can impersonate political figures; he then called for a ban on AI deepfakes in election campaigns, noting there were effectively no laws stopping such material being produced. In Queensland, Premier Steven Miles condemned an AI‑generated TikTok attack ad created by the opposition and pledged not to use synthetic media in his own campaign, even though state electoral rules do not specifically address AI.
Europe is ahead on paper but not yet in practice. The EU’s Artificial Intelligence Act and Digital Services Act (DSA) create transparency and risk‑management duties for big platforms and AI providers, including requirements to label manipulated or AI‑generated content and assess systemic risks to elections and civic discourse. But many of these obligations are still being phased in, and enforcement remains patchy. The European Commission recently found TikTok in preliminary breach of DSA ad‑transparency rules, partly because the platform failed to maintain a usable repository of political advertising, a tool regulators see as essential for tracking misinformation and covert influence around elections.
New Zealand has emerged as a case study in how fast AI can seep into campaigns while electoral law remains stuck in a pre‑generative‑AI era. In October 2023, the ACT Party drew criticism for using undisclosed AI‑generated campaign imagery on social media, including synthetic images depicting ethnic minorities. Academics at the University of Waikato warned that the lack of transparency and the potential for harmful stereotypes underscored the need for explicit rules covering AI in political advertising.
Those rules still do not exist. New Zealand’s Electoral Act regulates “election advertisements” across media and requires promoter statements and spending caps. But as scholars Andrew Lensen and Andrew Geddis note, it largely ignores content standards: there is no duty to disclose AI use, and no general ban on misleading or even outright false election ads. Only narrow provisions apply in the final days before polling, and the archaic offence of “undue influence” has rarely been tested against digital deception.
Parties are beginning to fill the gap with voluntary norms. In February 2026, New Zealand’s Labour Party told 1News it would avoid AI‑generated visuals altogether and urged cross‑party transparency about synthetic content. The Green Party pledged not to use generative AI in its 2026 campaign on ethical and environmental grounds, while New Zealand First said it might use AI in limited, clearly labelled creative visuals but not for scripts or strategy. Te Pāti Māori has warned that AI‑driven manipulation and harassment are already harming Māori communities.
Globally, researchers and regulators worry less about a single spectacular deepfake than about a steady stream of AI‑assisted “slop” that fills feeds with cheap, targeted and often unverifiable political messaging. Scholars in New Zealand have pointed to studies showing that people are more likely to infer guilt when shown AI‑generated images linked to alleged crimes, even when explicitly told the images are fabricated. Combined with micro‑targeting, that bias could allow campaigns or third‑party actors to tailor deceptive narratives to specific demographics in near real time.
Some jurisdictions are now trying to move faster. A growing number of US states have passed or proposed rules on AI in political ads, typically requiring disclosure labels or banning deceptive deepfakes close to election day. The European framework, once fully in force, will require very large platforms to build watermarking, provenance and detection systems for synthetic media, especially around elections. But those measures will arrive after several key national votes, including in countries where trust in institutions is already fragile.
The result is a narrowing window. Campaigns, consultants and opportunistic actors can already access powerful text, image and voice generators at minimal cost, while election watchdogs are working with statutes drafted for leaflets, radio spots and human phone‑bankers. Without faster, coordinated action on transparency, content standards and platform accountability, societies risk discovering too late how effectively AI‑driven persuasion can bend perceptions during the few critical weeks when voters make up their minds.