A Framework for Systematically Applying Humanistic Ethics when Using AI as a Design Material
Temes de Disseny 35: 178-197, https://doi.org/10.46467/TdD35.2019.178-197
10 pages. Posted: 26 Sep 2019. Last revised: 9 Oct 2020.
Date Written: July 1, 2019
As machine learning and AI systems gain greater capabilities and are deployed more widely, we, as designers, developers, and researchers, must consider both the positive and negative implications of their use. In light of this, PARC's researchers recognize the need to remain vigilant against harm caused by artificial intelligence, whether through intentional or inadvertent discrimination, unjust treatment, or physical danger to individuals or groups of people. Because AI-supported and autonomous decision making can have widespread negative personal, social, and environmental effects, we aim to take a proactive stance: upholding human rights, respecting individuals' privacy, protecting personal data, and enabling freedom of expression and equality.
Technology is not inherently neutral; it reflects the decisions and trade-offs made by the designers, researchers, and engineers who develop it and use it in their work. Datasets often encode historical biases. AI systems that hire people, evaluate their job performance, deliver their healthcare, and mete out penalties are obvious sites for systematic algorithmic errors that result in unfair or unjust treatment. Because nearly all technology involves trade-offs and embodies the values and judgments of the people creating it, it is imperative that researchers are aware of the value judgments they make and are transparent about them with all stakeholders involved.
Keywords: Ethics, Technology and Society, AI Design