Nov 9, 2023

Hany Farid Addresses Harms of AI to Marginalized Communities at Obama Foundation’s Democracy Forum

From NBC News

‘Big risks’: Obama and tech experts address harms of AI to marginalized communities

By Claretta Bellamy

More must be done to curb AI's potential to harm, and further marginalize, people of color, a panel of experts weighing the ever-widening reach of AI warned last week.

The warning came during a panel discussion at the Obama Foundation's Democracy Forum, an annual event where thought leaders exchange ideas on how to create a more equitable society. This year's forum focused on the advances and challenges of AI...

During the panel, Hany Farid, a professor at the University of California, Berkeley, said that predictive AI in hiring, in the criminal legal system and even in banking can sometimes perpetuate human biases. 

“That predictive AI is based on historical data,” Farid said. “So, if your historical data is biased, which it is — against people of color, against women, against the LGBTQ community — well guess what? Your AI is going to be biased. So, when we push these systems without fully understanding them, all we are doing is repeating history...”


Hany Farid is a professor in the Department of Electrical Engineering & Computer Sciences and the School of Information at UC Berkeley. He specializes in digital forensics.

Last updated:

November 14, 2023