Algorithms and the Propagation of Gendered Cultural Norms
Forthcoming (in French) in "IA, Culture et Médias" (2022), edited by Véronique Guèvremont and Colette Brin, Presses de l'Université Laval.
13 Pages. Posted: 15 Dec 2021
Date Written: December 7, 2021
Abstract
Artificial intelligence is increasingly being used to create technological interfaces: chatbots, personal assistants, and robots whose function is to interact with humans. They offer services, answer questions, and even undertake domestic tasks, such as buying groceries or controlling the temperature in the home.
In a study of personal assistants with female voices, such as Amazon's Alexa and Apple's Siri, the United Nations Educational, Scientific and Cultural Organization (UNESCO) argued that these technologies could have significant negative effects on gender equality. In addition to the fact that these artificial intelligence (AI) systems are trained on gendered models, these female-voiced assistants all feature stereotypical female attributes. The problem is compounded by the fact that these systems were probably created primarily by male developers. Such gendered assistants can pose a threat through the biased representation of women they generate, especially as they become increasingly ubiquitous in our daily lives. It is predicted that by the end of 2021 there will be more voice assistants on the planet than human beings.
Given the increasing use of voice assistants trained on biased language models, the potential impact on gender norms is cause for concern. As isolation has increased significantly during the COVID-19 pandemic, there is a risk that some people's main 'female' interaction is with these voice assistants. If we are not careful, sexist representations of women, totally out of step with real women, will intrude into the privacy of the home and onto our smartphones, anywhere, anytime. Moreover, the underlying models are essentially the same everywhere, leading to the reproduction of a single 'standard' and a cultural homogenisation of human-machine interaction that denies the diversity of the users of these products around the world.
While some have argued that learning algorithms may be less biased than humans, who are often influenced by discriminatory cultural norms of which they may not be aware, this overlooks the fact that AI is necessarily created by human beings, whose ways of thinking it incorporates. Indeed, it is easy to underestimate the importance of cultural norms in human decision-making. Artificial intelligence mimics the social biases of the data it has been given unless it is explicitly designed with different principles. It is therefore not surprising that artificial intelligence developed without built-in values merely reflects already biased social norms.
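To make this concrete, the kind of sexist word association at issue can be measured directly in pretrained word embeddings. The following is a minimal illustrative sketch, not the chapter's own methodology; it assumes the gensim library and its downloadable "glove-wiki-gigaword-50" vectors, and the occupation words chosen are merely examples.

```python
# Minimal sketch: measuring gendered word associations in pretrained
# GloVe embeddings (assumes the gensim library is installed).
import gensim.downloader as api

# Load a small pretrained embedding model (downloaded on first use).
model = api.load("glove-wiki-gigaword-50")

# Compare how strongly occupation words associate with "she" vs. "he".
for occupation in ["nurse", "engineer", "homemaker", "programmer"]:
    bias = model.similarity(occupation, "she") - model.similarity(occupation, "he")
    print(f"{occupation:12s} she-vs-he association: {bias:+.3f}")

# Positive values indicate the occupation sits closer to "she" in the
# embedding space, reflecting gendered patterns in the training corpus.
```

Because such embeddings are learned purely from co-occurrence statistics in large text corpora, they absorb whatever associations the corpus contains, biased or not, exactly as described above.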
This chapter explores the ambiguous impact of learning algorithms, which threaten to propagate biased gender norms even while promising to eliminate them. Two main questions are posed here: How can we better understand and document gender biases, which may be intertwined with other forms of domination? How can the technology itself be better used to mitigate these negative effects?
The objectives of the research are twofold. On the one hand, it analyses the cultural biases embedded in machine learning models, such as sexist word associations and stereotypical representations of women, from different cultural and geographical perspectives (1). On the other, it considers the ways in which machine learning could help reverse gender-biased norms by producing predictions and decisions free of systemic discrimination, or by minimising negative discriminatory effects (2); one family of mitigation techniques is sketched below.
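As an illustration of the mitigation side, one well-known family of techniques projects a learned 'gender direction' out of word vectors before they are used downstream (the "neutralize" step of hard debiasing, in the sense of Bolukbasi et al., 2016). The sketch below uses hypothetical toy vectors standing in for real embeddings; it is an assumption-laden illustration, not the approach taken in this chapter.

```python
# Minimal sketch: neutralising a gender direction in word vectors
# (toy three-dimensional vectors for illustration only).
import numpy as np

def neutralize(v: np.ndarray, gender_dir: np.ndarray) -> np.ndarray:
    """Remove the component of v that lies along the gender direction."""
    g = gender_dir / np.linalg.norm(gender_dir)
    return v - np.dot(v, g) * g

# Hypothetical embeddings; in practice these come from a trained model.
vec_he = np.array([0.9, 0.1, 0.3])
vec_she = np.array([-0.8, 0.2, 0.3])
vec_nurse = np.array([-0.5, 0.7, 0.4])  # leans toward "she" on axis 0

gender_direction = vec_she - vec_he
nurse_debiased = neutralize(vec_nurse, gender_direction)

# After neutralisation, the projection of "nurse" onto the gender
# direction is ~0: the word no longer leans toward either pole.
print(np.dot(nurse_debiased, gender_direction / np.linalg.norm(gender_direction)))
```

Techniques of this kind do not eliminate bias at its social source; they intervene in the representation itself, which is precisely why their scope and limits deserve the critical analysis undertaken here.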