Artificial Intelligence in Canadian Healthcare: Will the Law Protect Us from Algorithmic Bias Resulting in Discrimination?
Bradley Henderson, Colleen M. Flood, and Teresa Scassa, “Artificial Intelligence in Canadian Healthcare: Will the Law Protect Us from Algorithmic Bias Resulting in Discrimination?” (2021) 19:2 Canadian Journal of Law and Technology (Forthcoming)
24 Pages. Posted: 2 Nov 2021
Date Written: October 4, 2021
Artificial intelligence (AI) in healthcare has the potential to improve clinical outcomes, healthcare quality, and objectivity in clinical decision-making. However, it also has the potential to perpetuate or exacerbate discrimination in healthcare by producing outputs on the basis of arbitrary traits such as race, sex, and sexual orientation with no clinical, moral, or legal justification. Discrimination in healthcare-related AI may arise from algorithmic bias (i.e., the unwarranted skewing of outputs due to coding or machine learning bias), from data bias (i.e., the use of non-representative training data or the masking of systemic discrimination), or from a combination of the two. Several prominent examples of AI in healthcare have highlighted the potential discriminatory nature and effects of biased healthcare-related AI applications. If these problems are not mitigated, the benefits of healthcare-related AI are likely to be realized at the expense of marginalized groups. Existing ex post legal protections (responses that kick in after harm has occurred), such as claims in negligence, under human rights legislation, or under the Charter, offer some redress from healthcare-related AI discrimination; however, they are likely insufficient to adequately deter and remedy such discrimination. Not only are these litigation mechanisms costly, lengthy, and uncertain, but the opacity of algorithms and the difficulty of accessing algorithms and their training data also create additional evidentiary barriers, especially in proving causation between a harm and a healthcare-related AI decision. Existing ex ante legal protections (i.e., the regulation of AI applications in healthcare), namely the federal regulation of medical devices, provide a general oversight framework for many healthcare-related AI applications.
But these frameworks should be further modernized: regulators could employ secondary software to monitor AI in healthcare in near real time to rapidly identify issues, and they should treat potential discrimination as a safety issue. Overall, algorithmic and data bias resulting in discrimination is a complex problem that requires broad, concerted responses from all levels of government and several private actors. Federal and provincial governments must enable access to high-quality, robust, and representative training data; continue to improve healthcare data governance, including data gathering and standardization; and implement or modernize public and private sector data protection legislation — greater access to healthcare data must be accompanied by stronger patient privacy protections. Innovators will bear a high degree of moral responsibility in the development of healthcare-related AI over the coming years and should develop algorithms in highly interdisciplinary teams, so that algorithms exclude sensitive patient characteristics where unwarranted and include clinically meaningful characteristics where warranted. The status quo of Canadian healthcare is not perfect, and healthcare-related AI applications offer promising innovations, yet we must also be careful not to perpetuate, exacerbate, or create new sources of bias that may result in discrimination in healthcare.
Keywords: Artificial Intelligence, Healthcare, Health Data, Algorithmic Bias, Data Bias, Algorithmic Discrimination, Health Data Governance, Medical Device Regulation