Responding to ethics being a data protection building block for AI
Journal of AI, Robotics and Workplace Automation, Volume 1, Number 1, 2021
15 pages. Posted: 2 Nov 2021
Date Written: September 23, 2021
As a key driver of the Fourth Industrial Revolution, AI has increasing effects on all areas of human life. AI utilises and interacts with large volumes of data, including personal data or data related to individuals, which inevitably raises privacy and data protection concerns. Data protection authorities (DPAs) continue to stress that AI must comply with a set of data protection principles first introduced nearly half a century ago, while acknowledging these principles’ limitations in protecting individual rights under AI. These limitations manifest as bias, discrimination, a sense of losing control, the threat of surveillance, fears of the erosion of choice and free will, and more. Before a set of AI-proven data protection principles can be developed and agreed internationally, DPAs have turned to ethics as an interim measure. Unlike data protection principles, however, ethics are elusive to demonstrate at best and potentially impossible to agree upon at worst. This paper explains the issues facing AI that led DPAs to use ethics as a data protection building block. It then surveys worldwide efforts to provide ethical guidance related to AI and identifies ethical impact assessment (EIA) as a way to demonstrate commitment. At the same time, the practical disconnect between knowing ethics and acting ethically, and the reasons behind it, are elaborated to illustrate the challenges. The paper concludes with a discussion of how AI practitioners should continuously monitor public sentiment, government initiatives and regulatory frameworks, and take proactive action in conducting EIAs to demonstrate their commitment to and respect for ethics.
Keywords: Artificial Intelligence, data analytics, data protection, privacy, ethics