Math Education with Large Language Models: Peril or Promise?
11 Pages · Posted: 1 Dec 2023 · Last revised: 14 Nov 2024
Date Written: November 22, 2023
Abstract
The widespread availability of large language models (LLMs) has provoked both fear and excitement in education. On one hand, there is the concern that students will offload their coursework to LLMs, limiting what they themselves learn. On the other hand, there is the hope that LLMs might serve as scalable, personalized tutors. We conducted a set of large, pre-registered experiments to investigate how LLM-based explanations affect learning. Participants first practiced high-school-level math problems under varying forms of assistance. All participants were then asked to answer new but similar test questions without any assistance. In the first experiment, we found that LLM-based explanations improved learning relative to seeing only correct answers, and that the benefits were largest for participants who attempted problems on their own before consulting LLM explanations. In the second experiment, despite the presence of arithmetic errors and incorrect final answers, learning gains from incorrect LLM-based explanations fell between those from seeing only the correct answer and those from seeing correct LLM-based explanations. These results suggest that, when used appropriately, LLM explanations can foster learning gains and that certain types of errors may not be as detrimental as previously thought.
Keywords: education, large language models, LLMs, AI, generative AI, math, tutoring, human-AI interaction