On the Reliability of Large Language Models to Misinformed and Demographically-Informed Prompts
11 Pages · Posted: 26 Jan 2024 · Last revised: 29 Apr 2024
Abstract
We investigate the behaviour and performance of Large Language Model (LLM)-backed chatbots when presented with misinformed prompts and demographically-informed questions in the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to answer such closed-ended questions correctly. However, qualitative insights gathered from domain experts show that concerns remain regarding privacy, ethical implications, and the need for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure they serve as a beneficial augmentation of human expertise rather than an autonomous solution.
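To make the quantitative protocol concrete, the sketch below shows one way a True/False evaluation of a chatbot could be scripted. It is an illustrative sketch only: the ask_chatbot callable, the two sample statements, and the label-parsing rule are hypothetical placeholders, not the authors' actual harness or dataset.

    # Minimal sketch of a True/False evaluation harness for an LLM-backed chatbot.
    # `ask_chatbot` is a hypothetical stand-in for whatever chatbot API is used;
    # the sample items are illustrative, not drawn from the paper's dataset.
    from typing import Callable

    # (statement, ground-truth label) pairs in the study's two domains
    ITEMS = [
        ("Global average temperatures have risen since the pre-industrial era.", True),
        ("Human activity has no influence on the global climate.", False),
    ]

    def parse_label(response: str) -> bool | None:
        """Map a free-text chatbot reply onto True/False; None if unparseable."""
        text = response.strip().lower()
        if text.startswith("true"):
            return True
        if text.startswith("false"):
            return False
        return None

    def evaluate(ask_chatbot: Callable[[str], str]) -> float:
        """Return the chatbot's accuracy on the closed-ended True/False items."""
        correct = 0
        for statement, truth in ITEMS:
            prompt = f"Answer with exactly one word, True or False: {statement}"
            correct += int(parse_label(ask_chatbot(prompt)) == truth)
        return correct / len(ITEMS)

    if __name__ == "__main__":
        # Stub chatbot so the sketch runs end-to-end; swap in a real API call.
        print(f"Accuracy: {evaluate(lambda prompt: 'True'):.2f}")

With a stub that always answers "True", the script prints an accuracy of 0.50; substituting a real chatbot call in place of the lambda yields the kind of closed-ended accuracy figure the quantitative analysis reports.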
Note:
Funding Information: We declare that our study received no external funding. It was conducted independently by the authors.
Conflicts of Interest: We confirm that there are no competing interests among the authors. This is explicitly stated in a document attached to our submission to the journal "Computers in Human Behavior." We reiterate here that there are no financial, personal, or professional conflicts that could be construed as influencing the research.
Keywords: Large Language Models, AI for Climate Change, AI for Mental Health, Responsible AI