Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (We Robot 2016)

We Robot 2016 Working Paper

26 Pages · Posted: 3 Apr 2016 · Last revised: 21 Jul 2018

M. C. Elish

Data & Society Research Institute

Date Written: March 20, 2016


A prevailing rhetoric in human-robot interaction holds that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside them and supplement their work. Even when most of a system is automated, the concept of keeping a “human in the loop” offers assurance that human judgment can always trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to issues of accountability.

While control has become distributed across multiple actors, our social and legal conceptions of responsibility generally remain focused on an individual. If there is an accident, we intuitively, and our laws in practice, want someone to take the blame. The result of this mismatch is that humans may emerge as “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component, accidentally or intentionally, that is intended to bear the brunt of the moral and legal penalties when the overall system fails.

This paper employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems. The paper examines two historical cases of “moral crumple zones” in the fields of aviation and nuclear energy, articulating the dimensions of distributed control at stake and mapping the degree to which control over an action and responsibility for it remain proportionate. The argument suggests that an analysis of the dimensions of accountability in automated and robotic systems must contend with how and why accountability may be misapplied and how structural conditions enable this misapplication. How do non-human actors in a system effectively deflect accountability onto other human actors? And how might future models of robotic accountability require this deflection to be controlled? At stake, ultimately, is the potential to protect against new forms of consumer and worker harm.

This paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, we argue for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.

Keywords: HRI, ethics, driverless cars, human factors, automation, autonomy, human in the loop, HCI

Suggested Citation

Elish, M. C., Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (We Robot 2016) (March 20, 2016). We Robot 2016 Working Paper.

M. C. Elish (Contact Author)

Data & Society Research Institute ( email )

36 West 20th Street
New York, NY
United States


