ChatGPT: More than a “Weapon of Mass Deception” Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective

31 pages. Posted: 7 May 2023. Last revised: 3 Jul 2023.

Alejo José G. Sison

University of Navarra, School of Economics and Business

Marco Tulio Daza

University of Navarra, School of Economics and Business, DATAI; Universidad de Guadalajara, Departamento de Sistemas de Información, CUCEA

Roberto Gozalo-Brizuela

Comillas Pontifical University - Department of Quantitative Methods

Eduardo César Garrido Merchán

Comillas Pontifical University - Department of Quantitative Methods

Date Written: April 6, 2023

Abstract

This article explores the ethical problems arising from the use of ChatGPT as a form of generative AI and suggests responses based on the Human-Centered Artificial Intelligence (HCAI) framework. The HCAI framework is appropriate because it understands technology above all as a tool to empower, augment, and enhance human agency while referring to human wellbeing as a “grand challenge”, thus perfectly aligning itself with ethics, the science of human flourishing. Further, HCAI provides objectives, principles, procedures, and structures for reliable, safe, and trustworthy AI, which we apply to our assessment of ChatGPT. The main danger ChatGPT presents is its propensity to be used as a “weapon of mass deception” (WMD) and an enabler of criminal activities involving deceit. We review its technical specifications to better comprehend its potential and limitations. We then suggest both technical measures (watermarking, stylemes, detectors, and fact-checkers) and non-technical measures (terms of use, transparency, educator considerations, human-in-the-loop (HITL) oversight) to mitigate ChatGPT misuse or abuse, and recommend best uses (creative writing, non-creative writing, teaching and learning). We conclude with considerations regarding the role of humans in ensuring the proper use of ChatGPT for individual and social wellbeing.
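
As an illustrative aside, and not a method taken from the paper, the watermarking and detector measures listed above generally reduce to a statistical test on generated text. The Python sketch below assumes a hypothetical “green list” watermarking scheme in which the generator favors a pseudo-randomly selected half of the vocabulary at each step; a detector then checks whether green tokens occur more often than chance would allow. The function names and the 50/50 split are assumptions made for illustration only.

    # Illustrative sketch of green-list watermark detection (assumed scheme,
    # not the paper's): the generator is presumed to bias roughly half of the
    # vocabulary ("green" tokens) at each step; the detector tests whether
    # green tokens appear more often than the chance rate gamma.
    import hashlib
    import math

    def is_green(prev_token: str, token: str) -> bool:
        # Pseudo-random 50/50 vocabulary split, seeded by the previous token.
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] % 2 == 0

    def watermark_z_score(tokens: list[str], gamma: float = 0.5) -> float:
        # z-score of the observed green-token count against the chance rate gamma.
        hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
        n = len(tokens) - 1
        expected = gamma * n
        std = math.sqrt(n * gamma * (1 - gamma))
        return (hits - expected) / std

    # Unwatermarked human text should score near 0; watermarked machine text
    # would score several standard deviations higher.
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(round(watermark_z_score(sample), 2))

A z-score near zero is what ordinary human writing would produce; values several standard deviations above chance would point to watermarked machine output, which is the intuition behind the watermark detectors the abstract refers to.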

This is an original manuscript of an article published by Taylor & Francis in the International Journal of Human-Computer Interaction on June 27, 2023, available online: https://doi.org/10.1080/10447318.2023.2225931

Keywords: ChatGPT, generative AI, HCAI, combating disinformation, AI ethics

JEL Classification: M15, I20, C00, J00

Suggested Citation

Sison, Alejo Jose G. and Daza, Marco Tulio and Gozalo-Brizuela, Roberto and Garrido Merchán, Eduardo César, ChatGPT: More than a “Weapon of Mass Deception” Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective (April 6, 2023). Available at SSRN: https://ssrn.com/abstract=4423874 or http://dx.doi.org/10.2139/ssrn.4423874

Alejo José G. Sison

University of Navarra, School of Economics and Business

Campus Universitario
Pamplona, 31080
Spain

HOME PAGE: https://alejosison.wordpress.com/

Marco Tulio Daza (Contact Author)

University of Navarra, School of Economics and Business, DATAI

Campus Universitario
Pamplona, Navarra 31009
Spain

HOME PAGE: http://www.unav.edu

Universidad de Guadalajara, Departamento de Sistemas de Información, CUCEA

Periférico Norte N° 799
Zapopan, 45100
Mexico

HOME PAGE: https://www.cucea.udg.mx

Roberto Gozalo-Brizuela

Comillas Pontifical University - Department of Quantitative Methods

Eduardo César Garrido Merchán

Comillas Pontifical University - Department of Quantitative Methods

Paper statistics

Downloads: 366
Abstract Views: 1,358
Rank: 152,723