Taking Turing by Surprise? Designing Autonomous Systems for Morally-Loaded Contexts

23 Pages · Posted: 28 Aug 2017 · Last revised: 30 Aug 2018

Sylvie Delacroix

University of Birmingham - Birmingham Law School; Alan Turing Institute

Date Written: August 1, 2017

Abstract

There is much to learn from Turing’s translation of Lady Lovelace’s ‘objection’ - ‘computers cannot originate anything new’ - into a question about surprises. Surprises, and the model changes they often generate, are the object of renewed interest in the machine learning literature: given the tendency of Bayesian model certainty to approach zero as the number of data samples increases, optimizing the learning performance of systems within dynamic environments requires that those systems be made to look for so-called ‘black swan’ events, in a bid to preserve an adequate degree of model plasticity. For humans this plasticity is at least as desirable, yet it is typically compromised by the weight of habit rather than by statistical certainty, and shaking off the former is a more painful experience than questioning the latter. This paper aims to throw light on this asymmetry in the mechanisms underlying model change in humans vs. machines for two reasons. First, because this asymmetry has important implications for the challenges inherent in designing autonomous systems meant for morally loaded contexts. Current efforts to ethically ‘train’ such systems pay little attention (if any) to the difficulties that stem from the unavoidable need for change in those systems’ moral stances. To appreciate the significance of these difficulties, one needs to understand both the distinct role of habit reversal in the mechanisms underlying moral change in humans and the fact that machines are unlikely ever to experience habit reversal in that way. Once one takes on board the implications of this asymmetry, it becomes clear that, no matter how ‘ethically aligned’ they may have been initially, such systems are likely to ‘leap morally away’ - most likely past a point of mutual intelligibility - thus calling their desirability into question.

Keywords: Machine Learning, Turing, autonomous systems, decision-support systems, ethics, surprise, habit, value-alignment problem, Lady Lovelace

Suggested Citation

Delacroix, Sylvie, Taking Turing by Surprise? Designing Autonomous Systems for Morally-Loaded Contexts (August 1, 2017). Available at SSRN: https://ssrn.com/abstract=3025626 or http://dx.doi.org/10.2139/ssrn.3025626

Sylvie Delacroix (Contact Author)

University of Birmingham - Birmingham Law School

Edgbaston
Birmingham, B15 2TT
United Kingdom

HOME PAGE: https://www.birmingham.ac.uk/staff/profiles/law/delacroix-sylvie.aspx

Alan Turing Institute

96 Euston Road
London, NW1 2DB
United Kingdom


Paper statistics

Downloads: 92 (rank: 264,376)
Abstract Views: 577