From Los Angeles Times
By Evan Halper
As the 2020 election approaches, election officials and social media firms are preparing for a potent weapon of disinformation: doctored videos, known as “deepfakes,” that can be nearly impossible to detect as inauthentic. Technologists on the front lines of cybersecurity are increasingly worried about the threat as the technology for making convincing fakes becomes more widely available.
Leaders in artificial intelligence have unveiled a new tool to counter the threat: scanning software that UC Berkeley has been developing in partnership with the U.S. military, which will be made available to journalists and political operatives. The goal is simple: give the media and campaigns a chance to screen suspect videos before they can throw an election into chaos.
“We have to get serious about this,” said Hany Farid, a computer science professor at UC Berkeley who is working with a San Francisco nonprofit called the AI Foundation to confront the threat of deepfakes.
“Given what we have already seen with interference, it does not take a stretch of imagination to see how easy it would be,” Farid added. “There is real power in video imagery.”
Farid is a professor at the UC Berkeley School of Information and in the Department of Electrical Engineering and Computer Sciences (EECS). He specializes in digital forensics.