Discriminatory Discretion: Theory and Evidence From Use of Pretrial Algorithms
Cosponsored by the School of Information and the Goldman School of Public Policy
This talk examines the biased usage of an algorithm, an understudied topic relative to the massive body of research that examines how algorithms may be biased. Using highly detailed administrative data, I study a large sample of high-stakes decision makers — New Jersey police and judicial officers — who are armed with a freely available algorithm.
When officers consider requesting a warrant for a defendant’s detention, they have complete discretion over whether to consult an algorithmic risk score that predicts both the defendant’s likelihood of failing to appear in court and their likelihood of being rearrested if released. I find that officers frequently choose not to look at this information, even though it is free, simple, and non-binding.
Moreover, the choice of whether to view the algorithm is far from random. Controlling for underlying risk, officers are less likely to consult the risk score for black defendants (relative to white defendants) accused of lesser crimes, but the relationship is reversed for severe crimes.
Then, once the risk scores are viewed, officers are more likely to issue warrants for black defendants, again controlling for risk. The black-white warrant gap is smallest for the lowest- and highest-risk defendants and largest for those of moderate risk.
I organize these empirical facts in a novel taste-based discrimination framework in which agents are averse to certain groups but also averse to appearing prejudiced. The key prediction of this avoidant animus is that agents discriminate more in ambiguous situations, where doing so is easier to reconcile with their preferred self-image. I conclude by discussing policy implications for prejudice reduction, automation, and the discretionary use of decision aids.
Diag Davenport is a presidential postdoctoral research associate at Princeton University. His research focuses on shaping the societal impact of artificial intelligence by accounting for how people actually make decisions as they interact with these tools.
At the heart of his research lies the conviction that algorithms will revolutionize decision-making, thereby reshaping the landscape of many complex social problems. He is dedicated to proactively guiding this transformation in a positive direction. Consequently, he prioritizes research that not only pushes the frontiers of existing theory but also affords the opportunity to devise and evaluate policy.