From NPR All Things Considered
As Iran and Israel fought, people turned to AI for facts. They didn’t find many
By Huo Jingnan and Lisa Hagen
In the first days after Israel’s surprise airstrikes on Iran, a video began circulating on X: a newscast, narrated in Azeri, showing drone footage of a bombed-out airport. The video has received almost 7 million views.
Hundreds of users tagged X’s integrated AI bot Grok to ask: Is this real?
It’s not — the video was created with generative AI. But Grok’s responses varied wildly, sometimes from minute to minute. “The video likely shows real damage,” said one response; “likely not authentic,” said another...
“I don’t know why I have to tell people this, but you don’t get reliable information on social media or an AI bot,” said Hany Farid, a professor who specializes in media forensics at the University of California, Berkeley.
Farid, who pioneered techniques to detect digital synthetic media, warned against casually using chatbots to verify the authenticity of an image or video. “If you don’t know when it’s good and when it’s not good and how to counterbalance that with more classical forensic techniques, you’re just asking to be lied to.”
He has used some of these chatbots in his work. “It’s actually good at object recognition and pattern recognition,” Farid said, noting that chatbots can analyze the style of buildings and type of cars typical to a place...
Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley.