Ethics and Public Health of Driverless Vehicle Collision Programming

41 Pages Posted: 16 Apr 2019

Date Written: December 30, 2018


Driverless vehicles present a core ethical dilemma: there is a public health necessity and moral imperative to encourage the widespread adoption of driverless vehicles once they become demonstrably more reliable than human drivers, given their potential to dramatically reduce automobile fatalities, increase autonomy for disabled people, and improve land use and commutes. However, the very technologies that could enable autonomous vehicles to drive more safely than human drivers also imply greater moral responsibility for adverse outcomes. While human drivers must make split-second decisions in automobile collision scenarios, driverless car programmers have the luxury of time to reflect and deliberately choose how their vehicles should behave in collision scenarios. This implies greater responsibility and culpability, as well as the potential for greater scrutiny and regulation. Programmers must make premeditated decisions regarding whose safety to prioritize in inevitable collision scenarios—situations where a vehicle cannot avoid a collision altogether but can choose between colliding with different vehicles, objects, or persons.

With the recent bipartisan passage of the SELF DRIVE Act in the House and the rapid development of driverless vehicle technology, we are now entering a critical time frame for considering what priorities should govern driverless car inevitable collision behavior. This Article argues that prescribed “ethics” programming must be regulated by law in order to avoid the likely collective action problem of a marketplace that will reward “occupant-favoring” designs, despite a probable public preference (and arguable moral necessity) for occupant-indifferent designs. This Article then considers a variety of different options for systems of driverless vehicle ethics programming. The most justifiable ethics programming system would be one where road users are discouraged from externalizing the dangers incurred by their transportation choices onto those whose transportation choices, if more widely adopted, would comparatively improve aggregate safety. This ethical programming system, which I term “incentive-weighted programming,” would promote public safety while also striking the most equitably justifiable balance between different road users’ interests.

Keywords: driverless vehicles, self-driving cars, autonomous vehicles, public health, ethics, applied ethics, public ethics, utilitarianism, trolley problems, tunnel problem, public health law, risk management, tort law, administrative law, bias, transportation law, artificial intelligence, robotics

JEL Classification: K23, K13, K32, I14, I12, I31, R41, R42, L62, L63, O31, O32, O33, O38

Suggested Citation

Godwin, Samantha, Ethics and Public Health of Driverless Vehicle Collision Programming (December 30, 2018). 86 Tenn. L. Rev. 135 (2018), Yale Law School, Public Law Research Paper No. 666, Available at SSRN:

Samantha Godwin (Contact Author)

Yale University - Law School

P.O. Box 208215
New Haven, CT 06520-8215
United States

