How to Build Trust in Answers Given by Generative AI for Specific and Vague Financial Questions
Journal of Electronic Business & Digital Economics, 2024. DOI: 10.1108/JEBDE-11-2023-0028
15 Pages. Posted: 25 Sep 2024. Last revised: 26 Oct 2024
Date Written: February 01, 2024
Abstract
Purpose — Generative artificial intelligence (GenAI) has progressed in capability and has seen explosive growth in adoption. However, the consumer's perspective on its use, particularly in specific scenarios such as financial advice, is unclear. This research develops a model of how to build trust in the advice given by GenAI when answering financial questions.
Design/methodology/approach — The model is tested with survey data using structural equation modelling (SEM) and multi-group analysis (MGA). The MGA compares two scenarios: one where the consumer asks a specific question and one where a vague question is asked.
Findings — This research identifies that building trust for consumers differs when they ask a specific financial question compared to a vague one. Humanness has a different effect in the two scenarios. When a financial question is specific, human-like interaction does not strengthen trust, while when a question is vague, humanness builds trust (1). The four ways to build trust in both scenarios are human oversight and being in the loop (2), transparency and control (3), accuracy and usefulness (4) and, finally, ease of use and support (5).
Originality/value — This research contributes to a better understanding of the consumer's perspective when using GenAI for financial questions and highlights the importance of understanding GenAI in specific contexts from the perspective of specific stakeholders.
Keywords: Trust, Privacy, Generative AI, AI, Large language models, LLM, Fintech, Finance, WealthTech
Paper type: Research paper
JEL Classification: G4, D14, Z22, E42, D11, D1, D83, E21, E44, G1, G17, G40, G41, M1, M15