From CityLab
New York Just Set a ‘Dangerous Precedent’ on Algorithms, Experts Warn
By Kate Kaye
It was supposed to be groundbreaking. When New York City’s task force to develop policy on algorithmic technologies was introduced two years ago, it was praised as a beacon of transparent and equitable government, one that would inform other policymakers grappling with how to address their own use of automated technologies that make decisions in place of humans.
But for all its good intentions, the effort was bogged down in a bureaucratic morass. The task force failed to complete even the first necessary step in its work: getting access to basic information about automated systems already in use, according to task force members and observers.
“The fact they were unable to even get information about what tools the city was using is very problematic,” said Deirdre Mulligan, associate professor at the UC Berkeley School of Information. Algorithmic tools “in and of themselves embed really significant policy choices.”