By Irene Benedicto
Earlier this year, on the eve of Chicago’s mayoral election, a video of moderate Democratic candidate Paul Vallas appeared online. Tweeted by "Chicago Lakefront News," it appeared to show Vallas railing against lawlessness in Chicago and suggesting that there was a time when "no one would bat an eye" at fatal police shootings.
The video, which appeared authentic, was widely shared before the Vallas campaign denounced it as an AI-generated fake and the two-day-old Twitter account that posted it disappeared. While it's impossible to say whether the video had any impact on Vallas’s loss to progressive Brandon Johnson, a former teacher and union organizer, it is a lower-stakes glimpse of the high-stakes AI deceptions that could muddy public discourse during the upcoming presidential election. And it raises a key question: How will platforms like Facebook and Twitter mitigate them?
That's a daunting challenge. With no laws regulating how AI can be used in political campaigns, it falls to the platforms to determine which deepfakes users will see in their feeds, and right now, most are struggling with how to self-regulate. “These are threats to our very democracies,” Hany Farid, an electrical engineering and computer science professor at UC Berkeley, told Forbes. “I don't see the platforms taking this seriously..."