Artificial Intelligence & Damages: Assessing Liability and Calculating the Damages
Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law, Forthcoming
Posted: 7 Mar 2020
Date Written: February 8, 2020
Establishing liability for damages caused by AI used to be rather straightforward when only one or a few stakeholders were involved, or when the AI could only take a limited range of pre-defined decisions in accordance with specific parameters defined by a human programmer. However, AI usually involves several stakeholders and components (e.g. sensors and hardware, software and applications, data itself and data services, connectivity features), and recent forms of AI are increasingly able to learn without human supervision, which makes it difficult to allocate liability between all stakeholders. This contribution maps various possibilities, identifies their challenges and explores lines of thought to develop new solutions or close the gaps, all from a global perspective.

Existing liability regimes already offer basic protection to victims, to the extent that the specific characteristics of emerging technologies are taken into account. Consequently, instead of considering new liability principles (solutions that would require amendments to the current liability regimes), one should consider simply adapting current fault-based liability regimes with enhanced duties of care and clarifications regarding shared liability and solidarity between tortfeasors, which could potentially be done through case-law in most jurisdictions.

When it comes to the calculation of damages, given the difficulties in quantifying the harm and the need to take into account the specificities of IPR or privacy rights, economic methods may be considered to calculate damages in general, such as the Discounted Cash Flow Method (DCF) and the Financial Indicative Running Royalty Model (FIRRM), as well as the Royalty Rate Method and case-law on Fair, Reasonable and Non-Discriminatory license terms (FRAND).
This path will lead to a certain “flat-rating” of damages (“barémisation” or “forfaitisation”), at least when IPR and personal data are illegally used by AI tools and remain mostly invisible, hence barely quantifiable in terms of damages.
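The paper does not spell out a formula, but the DCF method it mentions reduces, at its core, to discounting a stream of expected future cash flows (e.g. lost licensing income) back to present value. A minimal sketch of that arithmetic follows; the function name, figures and discount rate are purely illustrative assumptions, not taken from the paper:

```python
def discounted_cash_flow(cash_flows, discount_rate):
    """Present value of a stream of expected future cash flows.

    cash_flows: expected cash flows for years 1..n
    discount_rate: annual discount rate, e.g. 0.10 for 10%
    """
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical example: lost licensing income of 100, 110 and 121
# over three years, discounted at 10% per year.
damages = discounted_cash_flow([100, 110, 121], 0.10)
print(round(damages, 2))  # prints 272.73
```

In a damages context, the cash flows would be the income the rightholder could reasonably have expected from the IPR or data absent the infringement, and the discount rate would reflect risk and the time value of money; both inputs are typically the contested part of the valuation.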