By Vittoria Elliott
WHEN GOOGLE ANNOUNCED the launch of its Bard chatbot, a competitor to OpenAI’s ChatGPT, last month, it came with some ground rules. An updated safety policy banned the use of Bard to “generate and distribute content intended to misinform, misrepresent or mislead.” But a new study of Google’s chatbot found that with little effort from a user, Bard will readily create that kind of content, breaking its maker’s rules.
Researchers from the Center for Countering Digital Hate, a UK-based nonprofit, say they could push Bard to generate “persuasive misinformation” in 78 of 100 test cases, including content denying climate change, mischaracterizing the war in Ukraine, questioning vaccine efficacy, and calling Black Lives Matter activists actors...
Hany Farid, a professor at UC Berkeley’s School of Information, says that these issues are largely predictable, particularly when companies are jockeying to keep up with or outdo each other in a fast-moving market. “You can even argue this is not a mistake,” he says. “This is everybody rushing to try to monetize generative AI. And nobody wanted to be left behind by putting in guardrails. This is sheer, unadulterated capitalism at its best and worst...”