Oct 2, 2023

“AI Will Have Different Flavors in Different Cultures”: Lecturer Nina Beguš Speaks to Scientific American About AI

From Scientific American

The Assumptions You Bring into Conversation with an AI Bot Influence What It Says

By Nick Hilden

Do you think artificial intelligence will change our lives for the better or threaten the existence of humanity? Consider carefully—your position on this may influence how generative AI programs such as ChatGPT respond to you, prompting them to deliver results that align with your expectations.

“AI is a mirror,” says Pat Pataranutaporn, a researcher at the M.I.T. Media Lab and co-author of a new study that exposes how user bias drives AI interactions. In it, researchers found that the way a user is “primed” for an AI experience consistently impacts the results. Experiment subjects who expected a “caring” AI reported having a more positive interaction, while those who presumed the bot to have bad intentions recounted experiencing negativity—even though all participants were using the same program...

According to Nina Beguš, a researcher at the University of California, Berkeley, and author of the upcoming book Artificial Humanities: A Fictional Perspective on Language in AI, who was not involved in the M.I.T. Media Lab paper, it is “a good first step. Having these kinds of studies, and further studies about how people will interact under certain priming, is crucial...”

Read more...

Nina Beguš is a Postdoctoral Fellow at the Center for Science, Technology, Medicine & Society and an incoming lecturer at the I School. She specializes in artificial humanities.
