Challenging the Notion of Trust Around ChatGPT in the High-Stakes Use Case of Insurance
7 Pages · Posted: 12 Sep 2023
The public discourse around (dis)trust in ChatGPT and other applications based on large language models (LLMs) is loaded with generic, dread-risk terms, while the heterogeneity of relevant theoretical concepts and empirical measurements of trust further impedes in-depth analysis. A more nuanced understanding of the factors driving trust judgments is therefore crucial to avoid unwarranted trust. In this commentary paper, we propose to bring more specificity to this debate by challenging the notion of trust in LLM-based systems across the insurance industry. The concept and role of trust are germane to this particular setting due to the highly intangible nature of the product, coupled with elevated levels of risk, complexity, and information asymmetry. Moreover, widespread use of LLMs in this sector is to be expected, given the vast array of text documents involved, particularly general policy conditions and claims protocols. Insurance as a practice is highly relevant to citizen welfare and has numerous spillover effects on wider public policy areas. We therefore argue that a domain-specific approach to good governance is essential to avoid negative externalities around financial inclusion. Indeed, as a constitutive element of trust, vulnerability is particularly challenging within this high-stakes set of transactions, with LLM adoption adding to the socio-ethical risks. In light of this, the present commentary establishes a valuable baseline to support regulators and policymakers in the implementation of regulatory systems.
Keywords: AI governance, ChatGPT, insurance, large language models, trust