The Accuracy, Fairness, and Limits of Predicting Recidivism
Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a future crime. These predictions inform pretrial, parole, and sentencing decisions. Proponents of these systems argue that big data and advanced machine learning make such predictions more accurate and less biased than those made by humans. Opponents, however, argue that predictive algorithms may introduce further racial bias into the criminal justice system. I will discuss an in-depth analysis of one widely used commercial predictive algorithm to determine its appropriateness for use in our courts. (This presentation is based on joint work with Julia Dressel.)
Hany Farid is the Albert Bradley 1915 Third Century Professor at Dartmouth. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989 and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth in 1999. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and is a fellow of the National Academy of Inventors. He is also the chief technology officer and co-founder of Fourandsix Technologies and a senior advisor to the Counter Extremism Project.
Later this year, Farid will join Berkeley as a professor in the School of Information and the Department of Electrical Engineering and Computer Science.