Are You AI's Favorite? EU Legal Implications of Biased AI Systems in Clinical Genetics and Genomics

European Pharmaceutical Law Review, Lexxion Publisher, Issue 4/2021, Forthcoming

24 Pages Posted: 11 Apr 2022

Anastasiya Kiseleva

Vrije Universiteit Brussel (VUB)

Paul Quinn

Vrije Universiteit Brussel (VUB)

Date Written: August 24, 2021

Abstract

The article provides a legal overview of biased AI systems in clinical genetics and genomics. Bias is considered from two perspectives: societal and statistical. The paper explores how bias can be defined from each perspective and how biases can generally be classified. Building on these two perspectives, the paper examines three negative consequences of bias in AI systems: discrimination and stigmatization (primarily societal concepts) and inaccuracy of an AI system's decisions (more closely related to the statistical perception of bias). Each of these consequences is analyzed within the legal framework it corresponds to.

Recognizing inaccuracy as a harm caused by biased AI systems is one of the article's most important contributions. It is argued that bias in an AI system, once identified, indicates possible inaccuracy in its outcomes. The article demonstrates this through an analysis of the medical devices framework: whether it applies to AI applications used in genetics and genomics, how it defines bias, and what requirements it imposes to prevent bias. The paper also examines how this framework can work together with anti-discrimination and anti-stigmatization rules, especially in light of the upcoming general legal framework on AI. The authors conclude that all of these frameworks should be taken into account in combating bias in AI systems, because they reflect different approaches to the nature of bias and thus provide a broader range of mechanisms to prevent or minimize it.

Note:
Funding Information: Paul Quinn works as a Professor at Vrije Universiteit Brussel. Anastasiya Kiseleva (the first author) is supported by the EUTOPIA co-tutelle program.

Declaration of Interests: None to declare.

Keywords: Artificial Intelligence (AI); genetics; genomics; healthcare; bias; discrimination; medical devices; stigmatisation

Suggested Citation

Kiseleva, Anastasiya and Quinn, Paul, Are You AI's Favorite? EU Legal Implications of Biased AI Systems in Clinical Genetics and Genomics (August 24, 2021). European Pharmaceutical Law Review, Lexxion Publisher, Issue 4/2021, Forthcoming. Available at SSRN: https://ssrn.com/abstract=4039764 or http://dx.doi.org/10.2139/ssrn.4039764

Anastasiya Kiseleva (Contact Author)

Vrije Universiteit Brussel (VUB)

Brussels
Belgium

Paul Quinn

Vrije Universiteit Brussel (VUB)

Pleinlaan 2
http://www.vub.ac.be/
Brussels, 1050
Belgium
