AI deepfakes blur reality in 2026 US midterm campaigns

Realistic "deepfake" advertisement videos are among those that campaigns are deploying ahead of November's elections. PHOTO: X

As the video opens, Democratic Texas State Representative James Talarico appears to stand in front of a Texas flag, beaming.

"Radicalised white men are the greatest domestic terrorist threat in our country," the US Senate candidate seems to say into the camera. As a voice whispers "white men," Talarico continues: "So true. So true."

But Talarico never filmed that video. Instead, the clip is an AI-generated ad from the National Republican Senatorial Committee (NRSC), the party's Senate campaign arm, featuring a computer-altered Talarico reciting social media posts he wrote years ago. The words "AI-generated" show up in easy-to-miss font in the lower right-hand corner.

The realistic video is among a vanguard of "deepfake" advertisements that some campaigns are already deploying ahead of November's midterm elections, taking advantage of AI tools that are improving at a breakneck pace.

The ads are being introduced into a media landscape with few guardrails. There is no federal regulation constraining the use of AI in political messaging, leaving only a patchwork of largely untested state laws. And while social media companies like Meta and X label certain AI-generated content, they have scrapped professional fact-checking systems in favour of user-generated notes.

Political experts worry such videos could leave voters confused, or even deceived. The stakes are high: the election will determine which party controls Congress for the final two years of Republican President Donald Trump's term, with Democrats seemingly well positioned to capture a majority in the US House of Representatives but facing longer odds in the US Senate.

The ads appear to be effective, political strategists and experts said. One 2025 study, published in the peer-reviewed Journal of Creative Communications, found that people struggle to identify deepfake videos and that their opinions are affected by this type of misinformation.

So far, Republicans appear to be utilising the technology more frequently than Democrats this election cycle, according to political experts and a Reuters review of publicly available ads.

The Republicans are following the lead of Trump's White House, which has released scores of AI-generated videos and gaming-inspired memes on social media that do everything from disparaging protesters to hyping up the Iran war.

The Talarico ad, for instance, is one of three recent ads created by national Republicans that use deepfake technology – realistic yet fabricated videos made by AI algorithms that have become increasingly easy to create.

NRSC Communications Director Joanna Rodriguez defended the ad in a statement to Reuters, saying Democrats were "panicking after seeing and hearing James Talarico's own words."

JT Ennis, a spokesperson for Talarico's campaign, said that while his opponents "spend their time making deepfake videos to mislead Texans, we are uniting the people of Texas to win in November."

Among Democrats, the most notable user of AI-generated videos is California Governor Gavin Newsom, a potential 2028 presidential candidate who has frequently employed deepfake videos to troll Trump. But the Democratic Party's national campaign committees have not yet sought to mirror the NRSC's efforts in midterm campaigns.

The campaign of Republican US Representative Mike Collins of Georgia, who is vying to challenge Democratic Senator Jon Ossoff in November, created a deepfake video in which Ossoff appears to say, “I just voted to keep the government shut down. They say it would hurt farmers, but I wouldn’t know. I’ve only seen a farm on Instagram.”

In a statement, Collins’ campaign spokesperson said that as technology evolves, the campaign “will be at the forefront embracing new tactics and strategies that pierce through lopsided legacy media coverage and deliver our message directly to voters.”

A spokesperson for Ossoff’s campaign declined to comment on the ad. Days after the video ran, the campaign said "yes" when asked by the Atlanta Journal-Constitution if it would “commit to not using deepfakes that misattribute or fabricate words or actions of their opponents to mislead voters.”

Daniel Schiff, a Purdue University professor who has studied thousands of deepfakes, said the growing use of political content that spreads misinformation risks further eroding US voter trust in institutions.

"I think that the types of damage that we can do to the rigour and credibility of elections and democratic systems – and the ability to misinform people about candidates or social issues – very much risks being supercharged," he said.

Still, political strategists say AI-generated videos can be persuasive as well as time- and cost-effective, though they stressed the need for ethical use. The technology can be a tool for political satire in a visual format that lends itself to watching and sharing on social media.

With essentially no federal regulation in place, states have been playing catch-up. Twenty-eight states have passed legislation addressing the use of AI in political ads, with most focused on disclosure rather than an outright ban, according to Ilana Beller, who leads state legislative work on AI at the liberal consumer advocacy group Public Citizen.

But those laws face limits. Many apply only to political campaigns, not to social media users who might spread AI-generated misinformation. Research also suggests that disclaimers are not effective in preventing voters from being persuaded by false ads, Schiff noted.

AI technology is inexpensive and accessible enough that down-ballot candidates and local political groups are using it, said Brady Smith, a national Republican political strategist.

For example, in February, the Republican Committee for Loudoun County in northern Virginia released three AI-generated ads attacking Democratic Governor Abigail Spanberger, who took office in January.

One video showed footage of Spanberger’s response to Trump’s State of the Union address, interspersed with AI-generated video of her appearing to say things like “working hard to bring in commie socialist Marxism, free stuff for illegals, gun grabs and erasing gender norms.”

A spokesperson for Spanberger declined to comment. A representative for the Loudoun County Republican Committee did not reply to a request for comment.

Other videos are more obviously fake. An ad for Republican Texas Attorney General Ken Paxton’s primary campaign against Senator John Cornyn shows an AI-generated version of Cornyn dancing with Democratic Representative Jasmine Crockett, as a narrator says: “Publicly, they’re opponents. Privately, they’re perfectly in step.”

A disclosure in small font appears at the end, stating some AI-generated content “is satire that does not represent real events.”

Cornyn’s campaign responded by releasing an AI-generated ad of Paxton driving a convertible with women depicted as “Mistress #1” and “Mistress #2”, highlighting allegations of infidelity that have dogged the attorney general during his run.

Spokespeople for Paxton and Cornyn’s campaigns did not respond to requests for comment.

The exchange reflects how quickly AI-generated attacks are becoming part of routine campaign messaging, despite concerns about their impact on the electoral system.

"It's harmful for politicians and campaigns to continue normalising this," Schiff said.