Algorithmic Bias and Corporate Responsibility: How companies hide behind the false veil of the technological imperative
Ethics of Data and Analytics. Kirsten Martin (Ed.). Taylor & Francis.
19 pages. Posted: 16 Aug 2021; last revised: 23 Aug 2021.
Date Written: August 14, 2021
Abstract
In this chapter, I argue that acknowledging the value-laden biases inscribed in the design of algorithms allows us to identify the associated responsibility of the corporations that design, develop, and deploy them. Put another way, claiming that algorithms are neutral, or that the design decisions of computer scientists are neutral, obscures the morally important decisions those computer and data scientists make. I focus on the implications of technological imperative arguments: framing algorithms as evolving under their own inertia, as providing more efficient and accurate decisions, and as lying outside the realm of critical examination or moral evaluation. I argue specifically that judging AI solely on efficiency, while pretending algorithms are inscrutable, produces a veil of the technological imperative that shields corporations from being held accountable for the value-laden decisions made in the design, development, and deployment of algorithms. While there is always more to be researched and understood, we already know quite a lot about how to test algorithms. I then outline how the development of algorithms should be critically examined to elucidate the value-laden biases encoded in design and development. The moral examination of AI pierces the (false) veil of the technological imperative.
Keywords: algorithms, AI, ethics, technological imperative, corporate responsibility