Accountability in Computer Systems
The Oxford Handbook of the Ethics of AI. Dubber, Markus D., Frank Pasquale, and Sunit Das, Eds. Oxford University Press. 2020.
16 pages. Posted: 19 Jun 2020.
Date Written: June 30, 2020
Capturing human values such as fairness, privacy, and justice in software systems is challenging. Values are abstract and may be contested, or at least viewed differently by different stakeholders, so they resist both definition and the concrete specification necessary to build machines or engineered systems. Choices made in designing systems to embody values are political and implicate structures beyond the system in question, trading off benefits and costs among different stakeholders. But this does not place computer systems beyond governance: the creators, operators, and controllers of such systems can and must be held accountable for the outcomes their systems effect. Accountability is a relationship of answerability: one agent or entity is accountable to another for certain outcomes in certain contexts. Operationalizing that accountability relationship requires keeping records – accounts – of how systems were created and how they operated. The entity to which an agent is held accountable can then determine responsibility, assigning praise or blame for the relevant outcomes, allocating consequences, and ascribing moral valence to the agent’s actions and the resultant outcomes. Most abstractly, judgements about responsibility serve to establish the fidelity of system behaviors to operative social, political, legal, and moral norms. Accountability is the best framework for considering the governance of values in computer systems, providing a concrete and achievable approach to engaging abstract questions around values and ideals.
Keywords: Accountability, Artificial Intelligence, Governance, Audit, Responsibility