By Jason Bloomberg
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’” wrote computing pioneer Charles Babbage in 1864. “I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
And thus, the fundamental software principle of ‘garbage in, garbage out’ was born. Today, however, artificial intelligence (AI) has raised the stakes on Babbage’s conundrum, as the ‘garbage out’ from AI leads to appalling examples of bias.
AI – in particular, both machine learning and deep learning – takes large data sets as input, distills the essential lessons from those data, and delivers conclusions based on them...
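To see how biased input turns into biased output, consider a minimal sketch (a toy illustration, not any system discussed in the article, with entirely made-up data): a "model" that simply memorizes the majority historical outcome for each group in its training records. If those records reflect biased decisions, the model faithfully reproduces that bias.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, was_hired).
# The data itself is skewed against "group_b" - garbage in.
training_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", False), ("group_b", True),
]

def train(records):
    """Learn the majority historical outcome per group."""
    votes = {}
    for group, hired in records:
        votes.setdefault(group, Counter())[hired] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in votes.items()}

model = train(training_data)
# The "trained" model now recommends hiring group_a and rejecting
# group_b - the bias in the data comes straight back out.
print(model)  # {'group_a': True, 'group_b': False}
```

Real machine-learning models are vastly more sophisticated than this lookup table, but the underlying dynamic is the same: they optimize to fit the patterns in their training data, including the discriminatory ones.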
Whether the AI algorithms are themselves biased is also an open question. “[Machine-learning algorithms] haven’t been optimized for any definition of fairness,” says Deirdre Mulligan, associate professor, UC Berkeley School of Information. “They have been optimized to do a task.”