Nonconsensual deepfake porn puts AI in spotlight
By Donie O'Sullivan
In its annual “worldwide threat assessment,” top US intelligence officials have warned in recent years of the threat posed by so-called deepfakes – convincing fake videos made using artificial intelligence.
“Adversaries and strategic competitors,” they warned in 2019, might use this technology “to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners...”
“I am baffled by how awful people are to each other on the Internet in a way that I don’t think they would be face to face,” Hany Farid, a professor at the University of California, Berkeley, and digital forensics expert, told CNN.
“I think we have to start sort of trying to understand, why is it that this technology, this medium, allows and brings out seemingly the worst in human nature? And if we’re going to have these technologies ingrained in our lives the way they seem to be, I think we’re going to have to start to think about how we can be better human beings with these types of devices,” he said.
Hany Farid is a professor at the UC Berkeley School of Information and the Department of EECS. He specializes in digital forensics.