Analyzing Dilemmas Posed by Artificial Intelligence and 4IR Technologies Requires Using All Available Models, Including the Existing International Human Rights Framework and Principles of AI Ethics

57 Pages · Posted: 6 Jul 2021 · Last revised: 21 Jul 2021

Date Written: June 25, 2021

Abstract

The epoch in which we now live is referred to as the ‘4th Industrial Revolution’. The 1st, 2nd, and 3rd industrial revolutions were characterized, respectively, by the use of water and steam power to mechanize production (circa 1750-1820), the use of electricity to power mass production (circa 1870-1920), and the use of electronics and information technologies to automate production and processing at scale (circa 1950-). The 4th Industrial Revolution (4IR) builds upon the third (circa 1990-) and is characterized by a fusion of technologies that blurs the boundaries between the digital, physical, and biological spheres (e.g., cyberspace, virtual and augmented reality, body-machine interfaces, and robotics).

The ubiquitous adoption of these technologies is certain, as is their increasing use and normalization in everyday life, government service provision, and industry. Futurism, or general artificial intelligence, refers to the philosophical and science-fiction discussions that are emerging as a result of these changes (e.g., debates around the ‘singularity’, transhumanism, and posthumanism, often framed in utopian or dystopian terms). The definition of digital ethics can thus be expanded and expressed in terms of the impacts of new digital technologies, through analysis of potential opportunities and risks in contemporary and future contexts (i.e., it is an applied ethics).

Many are working on forward-looking policy frameworks and governance protocols, with broad multistakeholder engagement and buy-in, to accelerate the adoption of emerging technologies in the global public interest, such as artificial intelligence (AI) and machine learning (ML), blockchain, 5G, data analytics, quantum computing, autonomous vehicles, synthetic biology, the internet of things (IoT), and autonomous weapons systems (AWS), sometimes called killer robots. We have gained insight into the unequal distribution of the positive and negative impacts of AI on human rights throughout society, and have begun to explore the power of the human rights framework to address these disparate impacts.

Although internationally recognized laws and standards on human rights provide a common standard of achievement for all people in all countries, more work is needed to understand how they can be best applied in the context of disruptive technology.

AI systems raise myriad questions for society and democracy, only some of which are covered or addressed by existing laws. In order to fill these perceived gaps, a vocal group of governments, industry players, academics, and civil society actors have been promoting principles or frameworks for ethical AI.

COVID-19 accelerated the use of AI in all countries and all fields, hastening the transition to a society increasingly based on AI. It also magnified the threats and new risks to human rights that arise when AI is deployed. The pandemic likewise accelerated a dramatic decline in global internet freedom: state and nonstate actors in many countries exploited opportunities it created to shape online narratives, censor critical speech, and build new technological systems of social control.

The question of whether corporations can act ethically is particularly relevant for Big Tech. Many of these firms are oligopolies on which individuals and governments alike depend completely, even though those who depend on them have little to no capacity to independently remedy issues when they arise, as Project Maven showed. Artificial intelligence and automated decision-making tools are growing in power and centrality, and technology companies retain large troves of private data that they sell. These companies are at the forefront of technological innovation and may be caught up in the factual question of what can be done rather than the normative question of whether it should be done. All of these issues arise in a field with little to no government regulation or intervention. The threats AI poses to society are so new that the legal system is struggling to impose sufficient values and restrictions. A coherent approach to addressing AI ethics, values, and consequences is therefore urgently needed.

In May 2019, 42 countries adopted the Organization for Economic Co-operation and Development (OECD) AI Principles, an intergovernmental recommendation comprising five principles and five recommendations on the use of AI. To support implementation of the Principles, the OECD launched the AI Policy Observatory in February 2020. The Observatory publishes practical guidance on how to implement the AI Principles and maintains a live database of AI policies and initiatives worldwide. It also compiles metrics and measurements of global AI development and uses its convening power to bring together the private sector, governments, academia, and civil society.

The AI ethics and governance initiatives discussed are cause for optimism that the global community will use all available models and brainpower for analysis and ultimately global governance of AI.

Though some argue that the process of ethical norm diffusion into hard domestic law sidelines traditional international law, the two should not be viewed as competing or oppositional.

The existing human rights framework provides a substantial foundation for policy development and analysis to address the wide range of societal concerns about AI. If we hope to establish a human-centric basis for global governance of AI, we should treat the existing human rights and AI ethics frameworks as synergistic, not competing, approaches. The human rights community and the AI ethics community need to continue working with and learning from each other, so that the many AI-specific ethics insights already developed can be embedded in human rights doctrine. More cross-disciplinary, cross-sector collaboration and education is required. AI developers need more exposure to the existing human rights framework, just as the human rights community needs a greater understanding of how AI works in order to develop the emerging concept of human rights by design.

We are living in the 4th Industrial Revolution, an era of exponential technological innovation. Policymakers are aware of problems regarding human rights, algorithmic bias and built-in discrimination, data access, and autonomous weapons. We must urgently continue working to solve these problems and to set global technology standards together. Countries with compatible political systems and values need to collaborate on the techno-socio-economic and ethical issues surrounding AI in order to maximize the benefits expected from Trustworthy AI.

Keywords: Artificial Intelligence, AI Ethics, Project Maven, COVID-19, Project Dragonfly, Human-Centric AI, International Human Rights, Responsible AI, Trustworthy AI, 4th Industrial Revolution, 4IR

JEL Classification: K33, O32, O39, I19, O20, K00, K10, K20, K30, K39

Suggested Citation

von Struensee, Susan, Analyzing Dilemmas Posed by Artificial Intelligence and 4IR Technologies Requires Using All Available Models, Including the Existing International Human Rights Framework and Principles of AI Ethics (June 25, 2021). Available at SSRN: https://ssrn.com/abstract=3874279 or http://dx.doi.org/10.2139/ssrn.3874279

Susan Von Struensee (Contact Author)

Global Research Initiative
