
Deepfakes raise alarm about AI in elections

Experts, officials and observers alike are sounding alarms about the dangers deepfakes pose for 2024 as it becomes easier to use artificial intelligence (AI) to create and spread synthetic content that could stoke disinformation and confuse voters in a critical election year.


Last week, a local Arizona newsletter released an AI-generated deepfake video of Senate candidate Kari Lake in order to warn readers “just how good this technology is getting.” In Georgia, lawmakers advocated for a bill that would bar deepfakes in political communications by playing clips of fabricated endorsements. 

AI is “supercharging” threats to the election system, said Nicole Schneidman, a technology policy strategist at the nonprofit watchdog group Protect Democracy. “Disinformation, voter suppression — what generative AI is really doing is making it more efficient to be able to execute such threats.” 

The advanced tech, which can generate images, audio and video and digitally alter likenesses and voices, is rapidly developing, leaving scholars and lawmakers scrambling to catch up. 

It has also left everyday voters trying to navigate an election landscape where it’s increasingly difficult to gauge the authenticity of pictures, posts and videos. 

“We’re already at the point where I don’t think voters can rely on their senses to be able to distinguish the synthetic from the authentic,” Schneidman said.

In Arizona, a Substack newsletter last week sought to highlight that “any idiot with a computer” can put together and disseminate relatively convincing deepfake content at no cost. 

“Hi, I’m Kari Lake. Subscribe to the Arizona Agenda for hard-hitting, real news and a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting,” says the face and voice of the Senate candidate in a video.  

Newsletter author Hank Stephenson asked viewers to consider whether it took “a second for your brain to catch up even after our ‘Deep Fake Kari Lake’ told you she was fake.”

A second video shows a rendering of Lake explaining how the face-swap, audio-cloning and lip-syncing technology works. 

What might have taken a studio budget and a production team to produce a few years ago can now be put together by everyday users with just a few clicks, said Barry Burden, a political science professor and director of the Elections Research Center at the University of Wisconsin-Madison. And with the ubiquity of social media platforms, fabricated content can be widely spread, with few formal checks in place. 

“As we get closer to Election Day, I think the risk becomes greater because it could influence voters or election outcomes and not be detected or corrected until after the votes have been cast and counted,” Burden said. 

The narrow window before November is also a tight timeline to push through new legislative controls.

In Georgia, the state House approved a bill that aims to crack down on “materially deceptive media,” or content that appears to depict “a real individual’s speech or conduct that did not occur in reality,” in political communication. 

Presenting the idea to the Georgia Senate last week, state Rep. Brad Thomas (R) played an audio clip that purported to show opponents of the bill switching to endorse it, and then warned that AI might be used to misrepresent officials’ positions or launch false campaign announcements. 

“Some people just learn the hard way, and I guess seeing is believing,” Thomas told The Hill, when asked about the deepfake he played for his fellow lawmakers, arguing the tech is “a big, potential destabilizer” for elections. State Rep. Colton Moore (R), whose voice was represented in the clip, has opposed the bill online as an attack on “memes” and free speech.  

“I think the demonstrations that people are doing … creating deepfakes to demonstrate the potency of deepfakes, is really helpful, because I think the public and a lot of lawmakers have not realized how fast this technology is advancing,” Burden said. 

In January, a robocall mimicked President Biden’s voice to urge thousands of New Hampshire voters not to cast their ballots in the Granite State’s primary. 

Schneidman called the New Hampshire robocall a “milestone” example of how synthetic content can be deployed for voter suppression. The Associated Press reported that the man behind the calls has claimed he was trying to warn people about AI, rather than influence the race.

In yet another example of AI’s increasing presence in the elections space, fake AI images of Black voters supporting former President Trump have circulated online as he courts the key demographic ahead of a November showdown with Biden. 

“We are entering the first-ever AI election, in which our information ecosystems are going to be swamped with fake video, images, audio, robocalls, etc. And voters are not going to know what they can trust,” said Jonathan Mehta Stein, the executive director of California Common Cause, a nonprofit watchdog organization.

But while the fake Biden robocall drew national media coverage and was swiftly debunked, Stein said he was more concerned about how AI could influence local-level governments and elections, where the same sort of call could go unchecked. 

“The power of generative AI to swing a local election or some state legislative election, I think, is really grave, particularly in an era of declining local press. And so the threat to our local democracy may be even more extreme than the threat to our national democracy,” Stein said. 

Hany Farid, a professor at the University of California, Berkeley, School of Information, said another concern is whether AI might be used as a scapegoat. 

“We can create fake content to try to harm a candidate, to try to discourage people from voting. But then, when you really do get caught in something dumb or illegal or embarrassing, you get to cry ‘fake,’ and that means there is no more reality, right? Everything is suspect now,” said Farid, who runs a project that’s tracking deepfakes in the 2024 cycle.  

“It used to be, if there’s a video of you saying something, that was that. There was no more discussion. But that’s not true anymore,” Farid said. “That is a really dangerous world we’re entering, where nobody knows what to believe anymore.” 

Activists and policymakers are closing in on AI from multiple angles, advocating for digital literacy and pushing forward legislation and guidelines. 

Biden issued an executive order on AI last year that included plans to develop guidelines on content authentication and watermarking, and the Federal Communications Commission moved last month to target AI-generated robocalls after the New Hampshire incident. A group of tech companies last month pledged to combat deceptive AI content in this year’s elections.

The California Initiative for Technology and Democracy, a project of Common Cause, is sponsoring a package of state-level bills, including plans to require social media platforms to label deepfakes, that the group hopes will inspire similar action nationwide. In Wisconsin, a law passed just last week requires a disclaimer on political ads in the state that use AI.

Experts stressed that though AI can be used for malign purposes, the tool itself isn’t necessarily harmful — and may even be a benefit for campaigns crafting messaging or developing content. 

Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, said that “the idea is: address harms, not technology.” 

But despite experts’ alarm bells, Schneidman underscored that AI concerns shouldn’t be extrapolated into widespread worries about the U.S. election system. 

“Even as voters should be aware of the advent of generative AI and the fact that they will likely encounter, this cycle, synthetic content related to the election, they should not call into question the integrity of election administration in this country,” she said, pointing voters to their local election officials for information before casting their ballots.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

 
