Computational Forensics in the Age of AI

Monday, November 17, 2025
11:40 am - 1:00 pm PST
Justin Norman

Exploring Novel Methods for Deepfake Detection, Technical Methods for Biometric System Evaluation, and the Complexity of Data Privacy and Consent in High-Stakes Situations

As artificial intelligence systems become increasingly integrated into high-stakes use cases, fundamental questions emerge about their reliability, their handling of the human data used to train them, the ethics of capturing and manipulating that data, and our ability to detect AI-generated deceptions. This dissertation talk explores these challenges through three interconnected research trajectories.

First, a comprehensive evaluation methodology for forensic facial recognition systems is introduced, revealing that state-of-the-art biometric models can suffer dramatic accuracy degradation under real-world conditions such as poor resolution, extreme poses, and occlusions.

Follow-on work investigates whether AI-powered image enhancement techniques, including super-resolution, 2D and 3D head-pose correction, and facial image restoration, can improve recognition outcomes. This research also proposes new techniques for predicting biometric recognition system failures and classifying their failure modes.

Second, the talk interrogates the risks of integrating human-facsimile synthetic data into high-stakes machine learning tasks. This work illuminates how even synthetic data handling practices can consolidate power while decoupling data from the people it represents.

Third, a novel forensic technique for detecting deepfake video impersonations is introduced. By leveraging unnatural patterns in facial biometric characteristics, traditional machine learning techniques can reliably identify face-swap, lip-sync, and avatar-based deepfakes while maintaining robustness against adversarial manipulation.

Together, these research streams establish that the development of truly trustworthy high-stakes AI systems depends on deliberate technical investment and innovation in content authenticity and the evaluation of biometric systems, alongside critical examination of data practices and governance frameworks.


This lecture will also be live streamed via Zoom. You are welcome to join us either in South Hall or online.

For online participants

Online participants must have a Zoom account and be logged in. Sign up for a free account here. If this is your first time using Zoom, please allow a few extra minutes to download and install the browser plugin or mobile app.

Join the lecture online

Speaker

Justin D. Norman

Justin Norman is a fifth-year Ph.D. candidate at the School of Information, where he is advised by Hany Farid. His research interests include computer vision, generative AI (LLMs and VLMs), and machine learning engineering. He is a recipient of the Marcus Foster Fellowship and the FASPE Design & Technology Fellowship.

His Ph.D. projects focus on deepfake detection, content authenticity, improving the robustness of generative computer vision systems, and LLM hallucination detection, mitigation, and prevention.

Justin is currently CTO and head of AI at Vera AI. Previously, he served as VP of data science, analytics, and data products at Yelp. Before that, he was director of research and ML/AI at Cloudera Fast Forward Labs, head of applied machine learning at Fitbit, global head of Cisco’s enterprise data science office, and a big data systems engineer with Booz Allen Hamilton. In another life, he served as a Marine Corps officer with a focus on systems analytics and intelligence.

Last updated: October 24, 2025