Aug 17, 2023

Deceptive Mirage: Unraveling the Threat of Election Deepfakes

‘Tis the season for political pundits, patriotic advertisements, presidential debates…and deepfakes?!

If you’ve been online in the past year, chances are you’ve witnessed the meteoric rise of generative AI. From ChatGPT to DALL-E, AI has infiltrated every aspect of our digital lives. 

As a tool, AI can act as an outlet for creative expression or information gathering; as a weapon, it can distort reality and spread mis- and disinformation to the masses. Earlier this year, a picture of the Pentagon in flames, later identified as AI-generated, went viral on Twitter, leading to a half-trillion-dollar drop in the stock market. More recently, a PAC-sponsored ad for Republican presidential candidate Ron DeSantis featured a series of doctored images of former President Donald Trump intimately embracing Dr. Anthony Fauci. 

“This is not the first use of generative AI in the upcoming election, and it certainly won’t be the last,” said UC Berkeley School of Information Professor Hany Farid in an interview with CNN. “These are threats to our very democracies,” he told Forbes.
The upcoming election has indeed already seen more than its fair share of artificial content, with major players such as the Republican National Committee and Trump’s campaign team putting out their own advertisements using voice cloning technology and AI to generate images and videos. 

In fact, Farid and his students at UC Berkeley, concerned about how deepfakes are weaponized in politics, are maintaining a site cataloging known examples of deepfakes in the upcoming 2024 presidential election.

“It’s not new that politicians are going to lie to you or the voters. It’s not new that we are going to distort reality. But what is new is the democratized access to technology that allows anyone…to create images, audio, and video that are incredibly realistic,” added Farid in an interview with CNBC’s Squawk Box.

As a result of this access, the “realistic” nature of AI-generated content blurs the line between real and fake, leading to what’s known as the “liar’s dividend,” wherein if anything can be faked, then nothing has to be real. The existence of synthetic material can sow doubt about the authenticity of a piece of content, allowing people to dismiss genuine content as fake. “When we entered this age of deepfakes,” Farid explained to NPR, “anybody can deny reality.”

To combat the influx of fake AI-generated content, Farid advocates for regulation by AI companies, social media sites, and election campaigns themselves. He believes that tools such as watermarks, cryptographic signatures, and fingerprints can make it easier to distinguish the real from the fake. Projects like the Content Authenticity Initiative are crucial to standardizing the authentication process. 
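To make the cryptographic-signature idea concrete, here is a minimal sketch in Python using the open-source cryptography library. It illustrates the general approach of binding content to its source at publication time; it is not the Content Authenticity Initiative’s actual specification, and the image bytes shown are placeholders.

```python
# Minimal sketch: a publisher signs an image at publication time, and anyone
# holding the publisher's public key can later check that the bytes are unchanged.
# Illustrative only -- not the Content Authenticity Initiative's C2PA standard.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair; the public key is distributed openly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Placeholder standing in for the raw bytes of a published image.
original_bytes = b"<raw image bytes would go here>"

# Sign the original bytes at publication time.
signature = private_key.sign(original_bytes)

def is_authentic(image_bytes: bytes, sig: bytes) -> bool:
    """Return True only if the bytes match what the publisher signed."""
    try:
        public_key.verify(sig, image_bytes)  # raises if the content was altered
        return True
    except InvalidSignature:
        return False

print(is_authentic(original_bytes, signature))                  # True
print(is_authentic(original_bytes + b"tampered", signature))    # False
```

Watermarks and fingerprints work differently (they embed or derive an identifier from the content itself), but the verification goal is the same: giving viewers a way to check provenance rather than trusting their eyes.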

At times like this, seeing is no longer believing. So as the election slowly approaches, it’s about time to start paying attention…or you might just be deceived by a deepfake.

Hany Farid (Photo by Brittany Hosea-Small for the School of Information)
Farid maintains a website where he and his team track various uses of deepfakes in election materials.

Last updated: August 22, 2023