Algorithms that Remember: Model Inversion Attacks and Data Protection Law
Philosophical Transactions of the Royal Society A, vol. 376, art. 20180083, 2018. DOI: 10.1098/rsta.2018.0083
15 pages. Posted: 6 Aug 2018; last revised: 19 Oct 2018.
Date Written: July 12, 2018
Abstract: Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature on 'model inversion' and 'membership inference' attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
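To make the intuition behind these attacks concrete, the sketch below shows a minimal confidence-thresholding membership inference attack in the spirit of this literature (e.g. Shokri et al. 2017): an overfit model is systematically more confident on records it was trained on, so an attacker who can query the model can guess which records were in the training set. The dataset, model, and threshold here are illustrative assumptions using scikit-learn, not the authors' experimental setup.

```python
# A minimal sketch of a confidence-based membership inference attack.
# All names and values are illustrative; this is not the paper's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Build a synthetic dataset and deliberately overfit a model to one half of it.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The attacker only queries the model and records its confidence in the
# predicted class for each record of interest.
def top_confidence(clf, X):
    return clf.predict_proba(X).max(axis=1)

conf_members = top_confidence(model, X_train)   # records that were in the training set
conf_nonmembers = top_confidence(model, X_out)  # records the model never saw

# Decision rule: guess "member" when confidence exceeds a threshold.
# A real attacker would tune this threshold, e.g. via shadow models.
threshold = 0.8
tpr = (conf_members > threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
print(f"member hit rate: {tpr:.2f}, non-member false-alarm rate: {fpr:.2f}")
```

If the gap between the two rates is large, the model leaks information about who was in its training data, which is the sense in which training is "not one-way".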
Keywords: data protection, GDPR, machine learning, model inversion, membership inference, algorithmic accountability