By Jason Pohl
In just a few years, AI has gone from a tech novelty to a society-altering force. It has revolutionized scientific research, classrooms and workplaces. People around the planet have turned to it to break through writer’s block and to contest parking tickets.
But that growth has not come without risks. The economy is increasingly propped up by bets about the technology’s future — 80% of U.S. stock gains last year came from AI companies. Policy battles are ramping up as some seek to rein in tech companies and others opt for an unregulated Wild West. Deepfake and explicit videos are causing harm and blurring what is real in an already fragmented information environment.
A global leader in the development of AI technology, as well as in research on the ethics, policies and practices around its use, UC Berkeley is at the forefront of this rapidly changing field. Below, UC Berkeley News asked some of the campus’s leading AI experts to summarize, in 100 words and a short phrase, the developments they’ll be monitoring in 2026.
‘Will the AI bubble burst?’
Current and planned spending on data centers represents the largest technology project in history. Yet many observers describe a bubble that is about to burst: revenues are underwhelming, the performance of large language models seems to have plateaued, and there are clear theoretical limits on their ability to learn straightforward concepts efficiently.
If the bubble bursts, the economic damage will be severe. But for the bubble not to burst, breakthroughs will need to happen that take us close to artificial general intelligence. AI developers have no cogent proposal for how to control such systems, leading to risks far greater than economic damage.
— Stuart Russell, professor of electrical engineering and computer sciences
‘Can we trust anything anymore?’
I will be watching the accelerating erosion of trust driven by increasingly convincing AI-generated media. In 2026, deepfakes will no longer be novel; they will be routine, scalable, and cheap, blurring the line between the real and the fake. This has profound implications for journalism, democracies, economies, courts and personal reputation.
I am especially concerned about the asymmetry: It takes little effort to create a fake, but enormous effort to debunk it after it spreads. How society adapts — technically, legally and culturally — to a world where seeing is no longer believing will be critical.
— Hany Farid, professor of information
‘AI-enabled discoveries that benefit people’
Major technology paradigm shifts like AI come with significant benefits and risks, and I expect AI will become an ever-greater part of our daily lives in 2026.
Individuals and industries are finding exciting new uses for personalized agents and related technologies. For example, AI accelerates scientific discovery in ways that were previously unimaginable.
Conversations about the responsible and ethical use of AI should be prioritized across sectors and civil society. We must work collaboratively to mitigate AI’s potential harms and find inclusive ways to empower people.
Our challenge is to apply AI to advance knowledge, expand understanding and benefit humanity.
— Jennifer Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society
‘Privacy risks created by chatbot logs’
People use AI chatbots for emotional support, spiritual guidance, relationship counseling, legal advice and intimacy, and they turn over reams of information about their follies, fantasies and fears.
Adam Raine’s ChatGPT logs cataloging struggles before his death by suicide are central to his family’s lawsuit. While Adam’s chats are being used to address harms, users’ logs risk disclosure in more troubling settings. A recent court order directed OpenAI to save all chats for a copyright lawsuit, and the Department of Homeland Security successfully demanded a user’s prompts.
Expect more demands on AI companies for personal data and lawsuits challenging how they collect and use it.
— Deirdre Mulligan, professor of information
Originally published as “11 things UC Berkeley AI experts are watching for in 2026” by Berkeley News on January 13, 2026.
