Feb 17, 2026

Scientific American Asks Hany Farid Questions About What It Will Take To Rebuild Trust in the Deepfake Era

From Scientific American

A deepfake can ruin you before breakfast

By Eric Sullivan

Deepfakes first spread as a tool of a specific and devastating kind of abuse: nonconsensual sexual imagery. Early iterations were often technically crude, with obvious doctoring or voices that didn’t quite sound real. What’s changed is the engine behind them. Generative artificial intelligence has made convincing imitation faster and cheaper to create and vastly easier to scale—turning what once took time, skill and specialized tools into something that can be produced on demand. Today’s deepfakes have seeped into the background of modern life: a scammer’s shortcut, a social media weapon, a video-call body double borrowing someone else’s authority. Deception has become a consumer feature, capable of mimicking a child’s voice on a 2 A.M. phone call before a parent is even fully awake. In this environment, speed is the point: by the time a fake is disproved, the damage is already done.

Hany Farid, a digital forensics researcher at the University of California, Berkeley, has spent years studying the traces these systems leave behind, the tells that give them away and why recognizing them is never the entire solution. He’s skeptical of the AI mystique (he prefers the term “token tumbler”) and even less convinced of the idea that we can simply filter our way back to truth. His argument is plainer and harder: if we want a world where evidence still counts, we must rebuild the rules of liability and go after the choke points that make digital deception cheap and profitable. Scientific American spoke with Farid about where deepfakes are headed and what works to blunt them...

Read more...

Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley.

Last updated: February 25, 2026