Sensitive-by-Distance: Quasi-Health Data in the Algorithmic Era

21 Pages | Posted: 31 Mar 2017 | Last revised: 14 Aug 2017

Gianclaudio Malgieri

Universiteit Leiden, eLaw; Vrije Universiteit Brussel (VUB) - Faculty of Law

Date Written: March 31, 2017

Abstract

“Quantified Self” apps and wearable devices collect and process an enormous amount of “quasi-health” data: information that does not fall within the legal definition of “health data” but is nonetheless revelatory of individuals’ past, present, and future health status, such as information about sleep-wake schedules or eating habits.

This article offers a new perspective on the boundaries between health and non-health data: the “data-sensitiveness-by-(computational)-distance” approach, or, more simply, the “sensitive-by-distance” approach. This approach takes into account two variables: the intrinsic sensitiveness of personal data (a static variable) and the computational distance (a dynamic variable) between certain kinds of personal data and pure health (or sensitive) data, which depends on the computational capacity available at a given stage of technological and scientific development.

Computational distance should be considered both objectively and subjectively. From an objective perspective, it depends on at least three factors: (1) the state of development of data-retrieval technologies at a given moment; (2) the availability of “accessory data” (personal or non-personal information); and (3) the applicable legal restraints on processing (or re-processing) the data. From a subjective perspective, computational capacity depends on the specific data-mining efforts undertaken by a given data controller (or its ability to invest in them): economic resources, human resources, and the use of accessory data.
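As a rough illustration of how the two variables might interact, the following minimal sketch models effective sensitiveness as a function of an intrinsic score and a computational distance that shrinks as data-retrieval technology, accessory data, and a controller's resources grow; all factor names, weights, and scales are hypothetical assumptions introduced here for illustration only, not a formalisation taken from the article.

# Hypothetical sketch of the "sensitive-by-distance" idea: the effective
# sensitiveness of a data item grows as the computational distance to pure
# health data shrinks. All factors and weights are illustrative assumptions.

def computational_distance(tech_level, accessory_data, legal_restraints,
                           controller_resources):
    """Return a distance in [0, 1]; smaller means health data is easier to infer.

    tech_level, accessory_data, controller_resources: 0 (none) .. 1 (maximal)
    legal_restraints: 0 (no restraints) .. 1 (strict limits on re-processing)
    """
    inference_capacity = (tech_level + accessory_data + controller_resources) / 3
    # Legal restraints on (re-)processing push the distance back up.
    return max(0.0, min(1.0, 1.0 - inference_capacity * (1.0 - legal_restraints)))

def effective_sensitiveness(intrinsic, distance):
    """Combine the static intrinsic score with the dynamic distance (both 0..1)."""
    return intrinsic + (1.0 - intrinsic) * (1.0 - distance)

# Example: sleep-tracking data with low intrinsic sensitiveness approaches
# health-data sensitiveness once a well-resourced controller holds rich
# accessory data and faces weak legal restraints.
d = computational_distance(tech_level=0.8, accessory_data=0.9,
                           legal_restraints=0.2, controller_resources=0.7)
print(effective_sensitiveness(intrinsic=0.2, distance=d))

On this toy model the same data item scores as near-sensitive for a controller with high inference capacity and as ordinary personal data for one without it, mirroring the subjective dimension of computational distance described above.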

A direct consequence of the expansion of augmented humanity in collecting and inferring personal data is an increasing loss of “legibility” of health data processing for data subjects. Consequently, the first challenge to be addressed when searching for a balancing test between individual interests and other (public or commercial) interests is achieving a higher level of legibility of health data processing, and thereby empowering individuals’ role in that processing. This is already possible by exploiting existing legal tools that empower data subjects, for instance by supporting the full exercise of the right of access (i.e., awareness of the purposes of the processing and the logic involved in automated profiling), the right to data portability, and the right not to be subject to automated profiling.

Keywords: Privacy, Data protection, Quantified Self, Algorithms, Quasi-IP, Trade Secret

Suggested Citation

Malgieri, Gianclaudio, Sensitive-by-Distance: Quasi-Health Data in the Algorithmic Era (March 31, 2017). Available at SSRN: https://ssrn.com/abstract=2944123

Gianclaudio Malgieri (Contact Author)

Universiteit Leiden, eLaw

Steenschuur 25
Leiden, 2311
Netherlands

Vrije Universiteit Brussel (VUB) - Faculty of Law

Brussels
Belgium

HOME PAGE: http://www.vub.ac.be/LSTS/members/malgieri/

Paper statistics: 17 Downloads | 272 Abstract Views