Disclosures & Disclaimers: Investigating the Impact of Transparency Disclosures and Reliability Disclaimers on Learner-LLM Interactions
Date Written: August 29, 2024
Abstract
Large Language Models (LLMs) are increasingly used in educational settings to assist students with assignments and with learning new concepts. For LLMs to be effective learning aids, students must develop appropriate levels of trust in and reliance on these tools. Misaligned trust and reliance can lead to suboptimal learning outcomes and decreased engagement with LLMs. Despite their growing presence, there is limited understanding of how to achieve optimal transparency and reliance calibration in the educational use of LLMs. In a 3×2 between-subjects experiment conducted in a university classroom, we tested the effects of two transparency disclosures (System Prompt and Goal Summary) and an in-conversation Reliability Disclaimer on a GPT-4-based chatbot tutor provided to students for an assignment. Our findings suggest that disclaimer messages included in responses may effectively mitigate learners' overreliance on the LLM Tutor when it gives incorrect advice. While the transparency disclosures did not significantly affect performance, seeing the System Prompt appeared to calibrate students' confidence in their answers and to reduce how often they copy-pasted the exact assignment question to the LLM Tutor. Student feedback further indicated a preference for guaranteed reliability of LLM tools, tutorials demonstrating effective prompting techniques, and transparency around performance-based metrics. Our work provides empirical insights into the design of transparency and reliability mechanisms for LLMs in classroom settings.