Math Education with Large Language Models: Peril or Promise?

11 Pages · Posted: 1 Dec 2023 · Last revised: 14 Nov 2024

Harsh Kumar

University of Toronto - Department of Computer Science

David M. Rothschild

Microsoft Research

Daniel G. Goldstein

Microsoft Corporation - Microsoft Research, New York City

Jake M. Hofman

Microsoft Research, New York City

Date Written: November 22, 2023

Abstract

The widespread availability of large language models (LLMs) has provoked both fear and excitement in education. On one hand, there is concern that students will offload their coursework to LLMs, limiting what they themselves learn. On the other hand, there is hope that LLMs might serve as scalable, personalized tutors. We conducted a set of large, pre-registered experiments to investigate how LLM-based explanations affect learning. Participants first practiced high school-level math problems, where we varied the assistance they received. Then all participants answered new but similar test questions without any assistance. In the first experiment, we found that LLM-based explanations improved learning relative to seeing only correct answers, and that benefits were largest for those who attempted problems on their own before consulting LLM explanations. In the second experiment, despite the presence of arithmetic errors and incorrect final answers, learning gains from incorrect LLM-based explanations fell between those from seeing only the correct answer and those from seeing correct LLM-based explanations. These results suggest that, when used appropriately, LLM explanations can foster learning gains and that certain types of errors may not be as detrimental as previously thought.

Keywords: education, large language models (LLMs), AI, generative AI, math, tutoring, human-AI interaction

Suggested Citation

Kumar, Harsh and Rothschild, David M. and Goldstein, Daniel G. and Hofman, Jake, Math Education with Large Language Models: Peril or Promise? (November 22, 2023). Available at SSRN: https://ssrn.com/abstract=4641653 or http://dx.doi.org/10.2139/ssrn.4641653

Harsh Kumar

University of Toronto - Department of Computer Science

Sandford Fleming Building
King’s College Road, Room 3302
Toronto, Ontario M5S 3G4
Canada

David M. Rothschild

Microsoft Research

New York, NY 10011
United States

Daniel G. Goldstein

Microsoft Corporation - Microsoft Research, New York City

300 Lafayette St
New York, NY 10012
United States

Jake Hofman (Contact Author)

Microsoft Research, New York City

300 Lafayette St
New York, NY 10012
United States

Paper statistics

Downloads: 4,337
Abstract Views: 21,745
Rank: 4,928