Going Deep on Deepfakes (feat. Hany Farid)
By Perry Carpenter and Mason Amadeus
Welcome back to The FAIK Files! In this week’s episode, we sit down with deepfake expert Hany Farid to discuss the real-world harms of synthetic media, including:
- The physics of deepfake detection, and why real-time streams might be easier to defend.
- The dangers of using AI to “enhance” images and hallucinate hidden details.
- Solutions like C2PA and watermarking, and the pressing need for platform accountability.
*** NOTES AND REFERENCES ***
What Keeps Hany Farid Up at Night?:
- The rising harms of non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) generated by AI.
- Voice cloning being weaponized for individual fraud and real-time deepfakes used by state-sponsored actors.
- Why the specific tool (like Sora or face swap) matters less than the overall threat vector and resulting harm.
Deepfake Detection - APIs vs. Physics:
- Hany's work at UC Berkeley and his company Get Real Security.
- Why detecting real-time manipulated video can actually be easier than identifying well-crafted, file-based deepfakes online.
- How physical camera imperfections (noise) differ from the artifacts introduced by AI upsampling and diffusion models.
Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley.
