Lecture

The Hidden Governance of AI and Other Threats to Democracy

Wednesday, April 8, 2026
12:10 pm - 1:30 pm PDT
Abigail Jacobs

The values embedded in and around AI systems shape our lives. However, those values are enacted through obscured, diffuse, and disorganized design decisions. Technical, organizational, and critical interventions require locating these decisions and understanding what happens when technical systems displace organizational processes. Yet existing perspectives consistently fail to identify what values are being enacted, where, and to what ends.

I put forward a sociotechnical perspective on how to systematically uncover those values. For technologists and non-technologists alike, I argue that this offers paths to better evaluate systems, mitigate harms, and empower more people with the ability and authority to contest the governance shaping their lives. For people trying to live in the world, this perspective lets us see how legitimacy, objectivity, and authority are laundered, power is reorganized, and expertise is displaced.


This lecture will also be live streamed via Zoom. You are welcome to join us either in South Hall or online.

For online participants

Online participants must have a Zoom account and be logged in. Sign up for a free account at zoom.us. If this is your first time using Zoom, please allow a few extra minutes to download and install the browser plugin or mobile app.

Join the lecture online

Speaker

Abigail Jacobs

Abigail Jacobs is an assistant professor of information and of complex systems at the University of Michigan. Jacobs is a 2024 Microsoft Research AI & Society fellow and was selected for the 2025 Schmidt Sciences Humanities & AI Virtual Institute. At Michigan, she is affiliated with the Center for Ethics, Society, and Computing and the Michigan Institute for Data & AI in Society. She received a B.A. in mathematical methods in the social sciences and mathematics from Northwestern University and a Ph.D. in computer science from the University of Colorado Boulder. She was previously a postdoctoral researcher at UC Berkeley, an NSF GRFP fellow, and a board member of Women in Machine Learning, Inc.

With social scientists, humanists, and legal scholars, she takes a sociotechnical approach to AI, uncovering the hidden assumptions built into seemingly objective machine learning systems and tracing their technical and social implications. With computer scientists, she uses the lens of measurement to improve AI evaluation and governance.


Last updated: April 13, 2026