A Shallow Defence of a Technocracy of Artificial Intelligence: Examining the political harms of algorithmic governance in the domain of government
Sætra HS, A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government, Technology in Society (2020), doi: https://doi.org/10.1016/j.techsoc.2020.101283.
10 pages. Posted: 13 Dec 2019. Last revised: 15 Jun 2020.
Date Written: June 8, 2020
Artificial intelligence (AI) has proven superior to human decision-making in certain areas, particularly those that demand advanced strategic reasoning and the analysis of vast amounts of data to solve complex problems. Few human activities fit this description better than politics. Politics confronts some of the most complex issues humans face: short-term and long-term consequences must be balanced, and decisions are made in the knowledge that their consequences are not fully understood. I examine an extreme case of the application of AI in the domain of government, and use this case to examine a subset of the potential harms associated with algorithmic governance. I focus on five objections grounded in political theory and the potential political harms of an AI technocracy: the idea of 'political man', participation as a prerequisite for legitimacy, the non-morality of machines, and the value of transparency and of accountability. I conclude that these objections do not successfully derail AI technocracy, provided that mechanisms for control and backup are in place, and that the system is designed so that humans retain control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities for policy formation assumed here become reality, may in theory provide better means of participation, greater legitimacy, and more efficient government.
Keywords: Artificial intelligence, democracy, technocracy, legitimacy, participation