From Rolling Stone
‘It’s Personality Theft’: How Creators Are Fighting Back Against AI Deepfakes
By Ella Chakarian
Yanina Oyarzo spends most of her days behind a mic in a Los Angeles studio, where she records episodes for her podcast about self-confidence, dating, and everything in between. The content creator has built an audience of 90,000 on Instagram, where she posts beauty and lifestyle content. So when she recently came across a video of herself promoting a national personal injury law firm based in Arizona, she was stunned.
“Have you or someone you love had serious problems after getting a chemo or PowerPort implanted?” an AI-generated avatar resembling Oyarzo asked in the clip. The backdrop of the video had the same neutral colorway as her L.A. recording studio. The voice was not hers, but the face looked like an overly filtered version of herself...
What content creators are experiencing, says Hany Farid, cofounder and chief science officer at cybersecurity company GetReal Security, is less a deepfake problem than “an identity problem.” As generative AI tools improve and see widespread adoption by everyday users, anyone with a brief clip or single image online can have their likeness stolen, regardless of follower count. For smaller creators and management teams, ongoing monitoring and takedown efforts can turn into full-time jobs. “Anybody can create an avatar of you, and then anybody can monetize that,” Farid says. “Are you going to be the Internet’s police and keep looking for your face and your likeness?”
Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley.
