By Hany Farid
Rumors quickly spread in Trent, Italy, that members of the Jewish community murdered a young boy and drained and drank his blood to celebrate Passover. Before long, the city’s entire Jewish community is arrested and tortured; 15 are found guilty and executed. The year was 1475.
Fast forward to 2018. Rumors quickly spread in Athimoor-Kaliyam, India, that roving gangs are kidnapping children. Over a period of several months, nearly two dozen innocent people are dragged from their vehicles and killed. The rumors this time spread through WhatsApp instead of word of mouth.
Fake news is not new, nor are its deadly consequences. What is new, thanks to the internet and social media, is its reach and frequency. Today, misinformation propagates around the world at the speed of light. From small- and large-scale fraud to sowing civil unrest, interfering with democratic elections, and inciting violence, misinformation campaigns are leading to dangerous and deadly outcomes.
Add to this phenomenon the ability to create increasingly compelling and sophisticated fake videos of anybody saying and doing anything, and the threat only increases. This is the landscape that awaits us in 2019 and beyond.
Advances in artificial intelligence have led to computer systems that are able to synthesize images of people who don’t exist, videos of people doing things they never did, and audio recordings of them saying things they never said.
These so-called “deepfakes” are a dangerous addition to an already volatile online world in which rumors, conspiracies, and misinformation spread often and quickly. Given millions of images of people, a machine-learning system can learn to synthesize realistic images of people who don’t exist.
It is likely that we have already seen the first seemingly nefarious use of this technology to create a fraudulent identity. Similar technologies can, in live-stream videos, convert an adult’s face into a child’s face, raising concerns that this technology will be used by child predators.
With just hundreds of images of someone, a machine-learning system can learn to insert them into any video.
This face-swap deepfake can be highly entertaining, as in its use to insert Nicolas Cage into movies in which he never appeared. The same technology, however, can also be used to create non-consensual pornography or to impersonate a world leader.
Similar technologies can also be used to alter a video to make a person’s mouth consistent with a new audio recording of them saying something that they never said. When paired with highly realistic voice synthesis technologies, these lip-sync deepfakes can make a CEO announce that their profits are down, leading to global stock manipulation; a world leader announce military action, leading to global conflict; or a presidential candidate confess complicity in a crime, leading to the disruption of an election.
What is perhaps most alarming about these deepfake technologies is that they are not only in the hands of sophisticated Hollywood studios. Software to generate fake content is widely and freely available online, putting the ability to create increasingly compelling and sophisticated fakes in the hands of many.
Coupled with the speed and reach of social media, convincing fake content can instantaneously reach millions. How do we manage a digital landscape when it becomes increasingly difficult to believe not just what we read, but also what we see and hear with our own eyes and ears? How do we manage a digital landscape where, if anything can be fake, then everyone has plausible deniability to claim that any digital evidence is fake?
To begin, the major social media platforms must more aggressively and proactively deploy technologies to combat misinformation campaigns, and more aggressively and consistently enforce their policies. For example, Facebook’s terms of service state that users may not use its products to share anything that is “unlawful, misleading, discriminatory or fraudulent.” This is a sensible policy; Facebook now needs to enforce it.
Second, researchers who are developing technologies that we now know can be weaponized should give more thought to how they can put proper safeguards in place so that their technologies are not misused.
Third, researchers need to continue to develop and deploy technologies to detect deepfakes. This includes technologies to detect fakes at the point of upload as well as control-capture technologies that can authenticate content at the point of recording.
Fourth, following House Intelligence Committee hearings, Congress should continue to ensure that they understand the quickly advancing deepfake technology and its potential threat to our society, democracy, and national security.
And lastly, we each have to become better digital citizens. We have to move from the Information Age to the Knowledge Age. This will require us all to learn how and where to consume more trustworthy information, how to distinguish the real from the fake, and how to interact more respectfully with each other online, even with those with whom we disagree.
Originally published as “Hany Farid: Deepfakes give new meaning to the concept of ‘fake news,’ and they’re here to stay” by Fox News on June 16, 2019. Reprinted with the author’s permission.
Hany Farid is a professor at the UC Berkeley School of Information and EECS specializing in digital forensics.