By Vittoria Elliott
IN MAY, A fake image of an explosion near the Pentagon went viral on Twitter. It was soon followed by images seeming to show explosions near the White House as well. Experts in mis- and disinformation quickly flagged that the images seemed to have been generated by artificial intelligence, but not before the stock market had started to dip.
It was only the latest example of how fake content can have troubling real-world effects. The boom in generative artificial intelligence has meant that tools to create fake images and videos, and pump out huge amounts of convincing text, are now freely available. Misinformation experts say we are entering a new age where distinguishing what is real from what isn’t will become increasingly difficult...
“Obviously, manipulated media is not fundamentally bad if you're making TikTok videos and they're meant to be fun and entertaining,” says Hany Farid, a professor at the UC Berkeley School of Information, who has worked with software company Adobe on its Content Authenticity Initiative. “It's the context that is going to really matter here. That will continue to be exceedingly hard, but platforms have been struggling with these issues for the last 20 years...”