Exploring the Concept of Explainable AI and Developing Information Governance Standards for Enhancing Trust and Transparency in Handling Customer Data
25 Pages
Posted: 3 Jul 2024
Date Written: June 27, 2024
Abstract
The increasing integration of Artificial Intelligence (AI) systems across diverse sectors has raised concerns regarding transparency, trust, and ethical data handling. This study investigates how Explainable AI (XAI) models and robust information governance standards enhance trust, transparency, and the ethical use of customer data. A mixed-methods approach was employed, combining a comprehensive literature review with a survey of 342 respondents across various industries. The findings reveal that implementing XAI significantly increases user trust in AI systems relative to black-box models. A strong positive correlation was also found between XAI adoption and the ethical use of customer data, underscoring the importance of transparency frameworks and governance mechanisms. The study further highlights the critical role of user education in fostering trust and enabling informed decision-making about AI interactions. The results emphasize the need for organizations to prioritize the integration of XAI techniques, establish robust information governance frameworks, invest in user education, and cultivate a culture of transparency and ethical data use. Together, these recommendations offer a roadmap for harnessing the benefits of AI while mitigating risks and ensuring responsible, trustworthy AI practices.
Keywords: Explainable AI, information governance, trust, transparency, ethical data use