Introducing the Agentic AI Risk Management Profile
Organized by the Center for Long-Term Cybersecurity
One of the most important recent developments in artificial intelligence (AI) has been the emergence of agentic AI, or AI agents: systems that can act autonomously to plan and carry out tasks. While these systems present many of the same risks as other advanced AI systems, their ability to operate independently introduces new challenges that demand tailored governance and risk-management approaches.
Join the AI Security Initiative (AISI) at the UC Berkeley Center for Long-Term Cybersecurity (CLTC) for the launch event of the new Agentic AI Risk Management Standards Profile (Agentic AI Profile), a report that examines the unique risks posed by agentic AI and introduces effective approaches for assessing, managing, and mitigating those risks. The panel will explore how agentic AI risk management differs from general-purpose AI risk management, and what it will take to develop and deploy agentic AI systems safely and securely.
This webinar will feature a presentation from Deepika Raman, AI Standards Development Researcher at AISI and Non-Resident Research Fellow at CLTC, followed by a panel discussion moderated by Nada Madkour, Senior AI Standards Development Researcher at AISI and Non-Resident Research Fellow at CLTC. The panel will include:
Panelists
- Alan Chan: Research Fellow, Centre for the Governance of AI (GovAI)
- Marta Bienkiewicz: Policy and Partnerships Manager, Cooperative AI Foundation
- Benjamin Larsen: Initiatives Lead, AI Systems and Safety, World Economic Forum
- Krystal Jackson: AI Standards Development Researcher and Non-Resident Research Fellow, UC Berkeley CLTC
