Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents

Cognitive Science, 45(10), e13032 (2021)

15 Pages. Posted: 7 Mar 2023

Markus Kneer

University of Zurich - Institute of Philosophy

Date Written: 2021

Abstract

The potential capacity of robots to deceive has received considerable attention recently. Many papers explore the technical possibility of a robot engaging in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes, as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

Suggested Citation

Kneer, Markus, Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents (2021). Cognitive Science, 45(10), e13032. Available at SSRN: https://ssrn.com/abstract=4364040 or http://dx.doi.org/10.2139/ssrn.4364040

Markus Kneer (Contact Author)

University of Zurich - Institute of Philosophy

Paper statistics

Downloads: 21. Abstract Views: 144.