Apr 5, 2021

“What? So What? Now What?” Episode 3: A Video on Deepfakes featuring Prof. Hany Farid

The Center for Long-Term Cybersecurity has produced an animated “explainer” video about deepfakes and misinformation, featuring perspectives from Dr. Hany Farid, Associate Dean and Head of School of the UC Berkeley School of Information and a Senior Faculty Advisor for the Center for Long-Term Cybersecurity.

Produced as part of the “What? So What? Now What?” explainer video series, this short video provides an overview of what deepfakes are, why they matter, and what can be done to mitigate potential risks associated with “fake” content.

“Deepfake is a general term that encompasses synthesized content,” Professor Farid explains. “That content can be text, it can be images, it can be audio, or it could be video. And it is synthesized by an AI or machine learning algorithm to, for example, create an article by a computer, just given a headline. Create an image of a person who doesn’t exist. Synthesize audio of another person’s speech. Or make somebody say and do something in a video that they never said.”
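
To make the mechanism concrete, here is a minimal sketch in Python (with PyTorch) of the generative step Farid describes: a neural network that maps random noise to synthesized pixels. The toy generator below is untrained, so its output is meaningless; production systems such as GAN- or diffusion-based models use large trained networks, and every layer size here is an arbitrary assumption.

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps a random latent vector to a 64x64 RGB image (untrained toy)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0),  # 1x1 -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),          # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),           # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, 2, 1),           # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, 2, 1),            # 32x32 -> 64x64
            nn.Tanh(),                                     # pixels in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

generator = ToyGenerator()
z = torch.randn(1, 100, 1, 1)   # random noise is the only input
fake_image = generator(z)       # a synthesized 3x64x64 image
print(fake_image.shape)         # torch.Size([1, 3, 64, 64])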

As Farid notes, deepfakes are dangerous in part because they give rise to the so-called “liar’s dividend”: in a world where everything can be faked, nothing has to be accepted as real, and anyone caught on video has plausible deniability.

“What happens when we enter a world where we can’t believe anything?” Farid says. “Anything can be faked. The news story, the image, the audio, the video. In that world, nothing has to be real. Everybody has plausible deniability. This is a new type of security problem, which is sort of information security. How do we trust the information that we are seeing, reading, and listening to on a daily basis?”

To address the challenge of deepfakes, Farid suggests that new regulation may be necessary to encourage content publishers to help screen or identify synthetic media. “Our regulators have to start getting smart about how to have modest regulation that will require the social media companies and the internet companies of the world to do better dealing with the harms that are coming from their services.”

He also suggests that these companies’ advertising-driven business model is part of the problem, as it rewards sharing through outrage and clickbait, whether or not content is authentic or accurate. “When you are in the pure engagement-driving business, this is the inevitable outcome,” Farid says.

Technological solutions will also be important. “We have to innovate in this space,” Farid says. “We have been developing technologies that can look at videos and audio and determine with a reasonable degree of certainty whether they have been deepfaked or manipulated or not.”
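
As an illustration of one generic detection pattern (not a representation of Farid’s own forensic techniques), the sketch below scores sampled video frames with a binary real/fake classifier and averages the results. The detector model is a hypothetical placeholder, and the 224x224 input size, frame stride, and 0.5 decision threshold are assumptions for illustration.

import cv2      # pip install opencv-python
import torch

def frame_scores(video_path, model, device="cpu", stride=30):
    """Return per-frame fake probabilities, sampling every `stride` frames."""
    scores = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            # Resize and convert BGR -> RGB to match the (assumed) model input.
            rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
            x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                # Assume the model outputs a single logit; sigmoid gives P(fake).
                scores.append(torch.sigmoid(model(x.to(device))).item())
        index += 1
    cap.release()
    return scores

# Hypothetical usage -- any frame-level binary classifier would slot in here:
# model = load_pretrained_detector()   # placeholder, not a real API
# probs = frame_scores("clip.mp4", model)
# verdict = "likely manipulated" if sum(probs) / len(probs) > 0.5 else "likely authentic"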

Ultimately, it is up to users to be on the lookout for deepfakes and other misinformation. “We have to change our behavior, we have to change our patterns, and we have to become better digital citizens,” Farid says. “If you are going to share something that seems particularly outrageous and particularly unlikely, take a breath. This is really the best advice.”


Watch other videos in the “What? So What? Now What?” series, including videos on adversarial machine learning and differential privacy.

Videos

Deepfakes: What? So What? Now What?

