Blaming Automated Vehicles in Difficult Situations
22 Pages · Posted: 15 Oct 2020 · Publication Status: Published
The third driverless car competition of the DARPA Grand Challenge (the 2007 Urban Challenge) saw six autonomous vehicle teams finish the event successfully. Since then, Automated Vehicles (AVs) have made huge strides towards large-scale deployment. Despite all this progress, AVs continue to make mistakes, some of which have resulted in the deaths of passengers and pedestrians. These crashes received wide media coverage and painted a correspondingly bleak picture of the public’s lack of enthusiasm for this technology. However, not all mistakes are equal. While some mistakes are avoidable, others are hard to avoid even for highly experienced professional drivers. As these incidents continue to shape citizens’ attitudes towards AVs, we need to understand whether people differentiate between different types of error, and whether they treat these errors proportionally. In this paper, we ask two questions: 1) when an automated car makes a mistake, does the perceived difficulty or novelty of the situation predict the blame attributed to it? 2) How does that blame attribution compare to a human driving a regular car? Through two studies, we find that the amount of blame people attribute to machine drivers and human drivers is sensitive to the difficulty of the situation. However, while some situations may be more difficult for machine drivers and others harder for human drivers, people blamed machine drivers more regardless. Our results provide insights into a crucial, yet under-studied, angle in understanding the psychological barriers impeding the public’s adoption of AVs.