Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions

Kluttz, D., Kohli, N., & Mulligan, D. (2020). Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. In K. Werbach (Ed.), After the Digital Tornado: Networks, Algorithms, Humanity (pp. 137-152). Cambridge: Cambridge University Press.

Abstract

The standard response to concerns about “black box” algorithms is to make those algorithms transparent or explainable. Driven in part by growing recognition of the limits of transparency for fostering human understanding of algorithmic systems, and in part by the pursuit of other goals such as safety and human compatibility, researchers and regulators are shifting their focus to techniques and incentives that produce machine-learning systems able to explain themselves to their human users; explainability has thus become an additional design goal. We argue that explanation alone leaves users in a passive role, and that systems should instead be designed for contestability: contestability fosters engagement rather than passivity, questioning rather than acquiescence. Professionals appropriate technologies differently, employing them in everyday work practice as informed by routines, habits, norms, values, and the ideas and obligations of professional identity. Drawing attention to the structures that shape the adoption of technological systems opens up new opportunities for intervention.
