A System of Governance for Artificial Intelligence through the Lens of Emerging Intersections between AI and EU Law
in A. De Franceschi - R. Schulze (eds.), Digital Revolution – New challenges for Law, 2019
55 Pages Posted: 25 Apr 2019 Last revised: 21 Nov 2019
Date Written: April 9, 2019
The work provides an overview of, and commentary on, the Communication on Artificial Intelligence (AI) adopted by the European Commission in April 2018. By offering a bird’s-eye view of the law and policy areas potentially relevant for, or affected by, AI, the AI Communication sets the stage for understanding how pervasively and extensively AI is likely to be mainstreamed into our economies and societies. Whether the issue is product safety, liability, consumer protection, personal data protection or the foundational values, principles and rights on which the European project is based, AI is rapidly cutting across domains. The work identifies and investigates some of the many intersections between AI and EU law. Two main “disrupting” trends emerge.
Under the first trend, AI exerts pressure on existing regulatory frameworks, such as those governing product safety, liability and consumer protection.
As regards product safety, the main concerns revolve around the unpredictability of AI. While certain factual characteristics (and possibly limitations) of AI as it functions today cannot and should not be denied, the policy debate on AI safety should focus on which potential risks brought about by AI (or rather by specific AI applications) can be considered socially acceptable when weighed against potential benefits. Even though the challenges posed by AI may generate some pressure on the existing EU product safety frameworks, EU safety law as a broader normative field has at its disposal a varied set of regulatory tools and approaches that can serve as relevant sources of inspiration and reference for a discussion on the safety of AI-powered products.
In the field of product liability, the Product Liability Directive (PLD) is not necessarily the only tool that victims can invoke in cases of risks and damage linked to AI-powered products. Nevertheless, there are elements suggesting that AI (in general, or with regard to certain of its product-specific applications) may strain the continued suitability of the PLD’s technology-neutral design, or at least some of its provisions, to the extent that the PLD is expected to apply, in its current form, to both “smart” and “non-smart” products.
The protection of consumers against profiling and targeting practices in business-to-consumer transactions is an area where the General Data Protection Regulation (GDPR) is particularly relevant. To the extent that GDPR rules effectively empower data subjects vis-à-vis traders and/or curtail the ability of traders to engage in manipulative and unfair practices, there may be less need to fine-tune dedicated consumer law instruments (such as the Unfair Commercial Practices Directive, the Consumer Rights Directive and the Directive on Unfair Terms in Consumer Contracts) to take account of the specificities of commercial transactions mediated by sophisticated algorithms. At the same time, consumer protection could be an interesting testing ground for the potential of AI to empower consumers and civil society in general: the very tools, techniques and methods used by companies to pursue their commercial interests could also serve to rebalance the traditional asymmetry of information, power and knowledge that negatively affects consumers.
Contrary to what happens in the legal domains mentioned above, a different “disrupting” trend emerges in the field of personal data protection. Here, the several intersections between AI and the GDPR can essentially be framed in terms of the law disrupting certain technological uses and applications of AI. Because AI applications in commercial transactions, and more generally in the algorithm-mediated economic, social and political life of individuals, rely extensively on the processing of personal data, the GDPR emerges as a key piece of legislation in this space. While data protection authorities and the courts will certainly specify and fine-tune its principles and provisions as appropriate, the GDPR presents itself as a robust framework poised to capture and effectively curb at least those uses and applications of AI that appear most egregious and intolerable in light of the degree of legal protection for individual rights and freedoms that citizens currently expect in our European society.
The work argues that, even if each legal or policy area where AI surfaces is confronted with distinct normative questions that may not necessarily be relevant for other areas, a connective tissue is needed. This should take the form of a system of AI governance, or cabine de régie (“control room”), which should combine, on an ongoing basis, up-to-date scientific and technical knowledge, internal legal and policy expertise specific to each sector, and the authority to impart policy direction and to arbitrate, across the board, between the societal opportunities and the societal concerns that underlie the composite interaction between AI and the law.