AI as a Constituted System: Accountability Lessons from an LLM Experiment
Nabben, K. (2024). AI as a constituted system: accountability lessons from an LLM experiment. Data & Policy, 6, e57. doi:10.1017/dap.2024.58
21 Pages · Posted: 23 Oct 2023 · Last revised: 1 Dec 2024
Date Written: September 1, 2023
Abstract
This study explores the integration of a pre-trained Large Language Model (LLM) with an organisation's Knowledge Management System (KMS) via a chat interface, focusing on the practicalities of establishing and maintaining AI infrastructure for data storage and access, and the considerations needed to ensure responsible governance. The study adopts the concept of 'AI as a constituted system' to emphasise the social, technical, and institutional factors that contribute to AI's governance and accountability. Through an ethnographic approach, the paper details the iterative processes of negotiation, decision-making, and reflection among stakeholders as they develop, implement, and manage the AI system. The study reveals that LLMs can be effectively governed and held accountable to the interests of stakeholders within specific contexts when clear institutional boundaries foster innovation while navigating risks related to data privacy and AI misbehaviour. This is attributed to distinct policy creation processes guiding the AI's operation, clear lines of responsibility, and localised feedback loops that ensure clear accountability for actions taken. This research offers a foundational perspective for better understanding the accountability and governance of algorithms within organisational contexts. It also suggests a future in which AI is not universally scaled but instead consists of localised, customised LLMs tailored to stakeholder interests.
Keywords: AI, accountability, governance, ethnography, LLM
JEL Classification: O30, O32, Z13