Key Elements of Responsible Artificial Intelligence - Disruptive Technologies and Human Rights

Freiburger Informationspapiere, January 2020

28 Pages Posted: 6 Mar 2020

Date Written: January 1, 2020


One major challenge facing humankind in the 21st century is the widespread use of Artificial Intelligence (AI). Hardly a day passes without news about the disruptive force of AI – both good and bad. Some warn that AI could be the worst event in the history of our civilization. Others stress the opportunities of AI, for instance in diagnosing cancer or supporting humans in the form of autonomous cars. Because AI is so disruptive, however, the call for its regulation is widespread, including the call by some actors for international treaties banning, for instance, so-called “killer robots”. Nevertheless, until now there has been no consensus on how and to what extent we should regulate AI. This paper examines whether we can identify key elements of responsible AI, spells out what already exists as part of “top-down” regulation, and asks how new guidelines, such as the 2019 OECD Recommendations on AI, can be part of a solution to regulating AI systems. In the end, a solution is proposed that is coherent with international human rights and frames the challenges posed by AI that lie ahead of us without undermining science and innovation; reasons are given why and how a human rights-based approach to responsible AI should inspire a new declaration at the international level.

Keywords: Artificial Intelligence; OECD Recommendations on AI; Human Rights-based Approach

Suggested Citation

Voeneky, Silja, Key Elements of Responsible Artificial Intelligence - Disruptive Technologies and Human Rights (January 1, 2020). Freiburger Informationspapiere, January 2020, Available at SSRN:

Silja Voeneky (Contact Author)

University of Freiburg - Faculty of Law ( email )

D-79098 Freiburg

