Feb 14, 2022

Scientific American Covers Berkeley Research on Trustworthiness of AI-Generated Faces

From Scientific American

Humans Find AI-Generated Faces More Trustworthy Than the Real Thing

Emily Willingham 

When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a coin disappear and enjoying a lollipop, the account name was the only obvious clue that this wasn’t the real deal. The creator of the “deeptomcruise” account on the social media platform was using “deepfake” technology to show a machine-generated version of the famous actor performing magic tricks and having a solo dance-off...

A new study published in the Proceedings of the National Academy of Sciences USA provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces—and even interpret them as more trustworthy than the genuine article. “We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that “these faces could be highly effective when used for nefarious purposes.” ...

The researchers were not expecting these results. “We initially thought that the synthetic faces would be less trustworthy than the real faces,” says study co-author Sophie Nightingale...

Read more...

Hany Farid is a professor at the University of California, Berkeley, with a joint appointment in electrical engineering & computer sciences and the School of Information.

Sophie Nightingale is a former postdoctoral scholar at the UC Berkeley School of Information.

Last updated: February 16, 2022