Feb 19, 2026

Hany Farid Speaks on Microsoft AI-Deception Blueprint

From MIT Technology Review

Microsoft has a new plan to prove what’s real and what’s AI online  

By James O’Donnell

AI-enabled deception now permeates our online lives. There are the high-profile cases you may easily spot, like when White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about it. Other times, it slips quietly into social media feeds and racks up views, like the videos that Russian influence campaigns are currently spreading to discourage Ukrainians from enlisting.

It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what’s real online.

An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today’s most worrying AI developments, like interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that can be adopted by AI companies and social media platforms...

Hany Farid, a professor at UC Berkeley who specializes in digital forensics but wasn’t involved in the Microsoft research, says that if the industry adopted the company’s blueprint, it would be meaningfully more difficult to deceive the public with manipulated content. Sophisticated individuals or governments can work to bypass such tools, he says, but the new standard could eliminate a significant portion of misleading material.

“I don’t think it solves the problem, but I think it takes a nice big chunk out of it,” he says...


Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley. 

Last updated: February 25, 2026