Disclosures & Disclaimers: Investigating the Impact of Transparency Disclosures and Reliability Disclaimers on Learner-LLM Interactions

13 Pages

Jessica Bo

affiliation not provided to SSRN

Harsh Kumar

University of Toronto - Department of Computer Science

Michael Liut

University of Toronto at Mississauga

Ashton Anderson

University of Toronto, Toronto, Canada

Date Written: August 29, 2024

Abstract

Large Language Models (LLMs) are increasingly being used in educational settings to assist students with assignments and learning new concepts. For LLMs to be effective learning aids, students must develop appropriate levels of trust and reliance on these tools. Misaligned trust and reliance can lead to suboptimal learning outcomes and decreased engagement with LLMs. Despite their growing presence, there is limited understanding of how to achieve optimal transparency and reliance calibration in the educational use of LLMs. In a 3x2 between-subjects experiment conducted in a university classroom, we tested the effect of two transparency disclosures (System Prompt and Goal Summary) and an in-conversation Reliability Disclaimer on a GPT-4-based chatbot tutor provided to students for an assignment. Our findings suggest that disclaimer messages included in responses may effectively mitigate learners' overreliance on the LLM Tutor when incorrect advice is given. While transparency disclosures did not significantly affect performance, seeing the System Prompt appeared to calibrate students' confidence in their answers and reduce the frequency of copy-pasting the exact assignment question to the LLM Tutor. Further student feedback indicated that they would prefer to receive guaranteed reliability of LLM tools, tutorials demonstrating effective prompting techniques, and transparency around performance-based metrics. Our work provides empirical insights into the design of transparency and reliability mechanisms for using LLMs in classroom settings.

Suggested Citation

Bo, Jessica and Kumar, Harsh and Liut, Michael and Anderson, Ashton, Disclosures & Disclaimers: Investigating the Impact of Transparency Disclosures and Reliability Disclaimers on Learner-LLM Interactions (August 29, 2024). Available at SSRN: https://ssrn.com/abstract=

Jessica Bo (Contact Author)

affiliation not provided to SSRN

Harsh Kumar

University of Toronto - Department of Computer Science

Sandford Fleming Building
King’s College Road, Room 3302
Toronto, Ontario M5S 3G4
Canada

Michael Liut

University of Toronto at Mississauga

Ashton Anderson

University of Toronto, Toronto, Canada
