Prompting Science Report 1: Prompt Engineering is Complicated and Contingent
10 Pages
Posted: 5 May 2025
Date Written: March 04, 2025
Abstract
This is the first in a series of short reports that seek to help business, education, and policy leaders understand the technical details of working with AI through rigorous testing. In this report, we demonstrate two things:
- There is no single standard for measuring whether a Large Language Model (LLM) passes a benchmark, and the standard you choose has a large impact on how well the LLM appears to perform (see the illustrative sketch below). The right standard depends on your goals for using an LLM in a particular case.
- It is hard to know in advance whether a particular prompting approach will help or harm the LLM's ability to answer any particular question. Specifically, we find that being polite to the LLM sometimes improves performance and sometimes lowers it. We also find that constraining the AI's answers helps performance in some cases but lowers it in others.
Taken together, these findings suggest that benchmarking AI performance is not one-size-fits-all, and that particular prompting formulas or approaches, like being polite to the AI, are not universally valuable.
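To make the first point concrete, here is a minimal sketch of how the choice of grading standard changes a benchmark result. The standards, question counts, and trial results below are hypothetical illustrations, not figures from the paper.

```python
# Hypothetical sketch: the same per-question trial results can look very
# different depending on the grading standard a benchmark adopts.

def grade(trials_correct: list[int], n_trials: int, standard: str) -> float:
    """Return the fraction of questions counted as correct under a given standard.

    trials_correct: number of correct answers out of n_trials for each question.
    standard: 'any' (correct at least once), 'majority' (correct in more than
              half of trials), or 'all' (correct in every trial).
    """
    thresholds = {"any": 1, "majority": n_trials // 2 + 1, "all": n_trials}
    cutoff = thresholds[standard]
    passed = sum(1 for c in trials_correct if c >= cutoff)
    return passed / len(trials_correct)

# Hypothetical results: 5 questions, each asked 10 times.
results = [10, 9, 6, 5, 1]
for standard in ("any", "majority", "all"):
    print(standard, grade(results, n_trials=10, standard=standard))
# 'any' -> 1.0, 'majority' -> 0.6, 'all' -> 0.2: same model, very different scores.
```

Which of these scores is the "right" one depends on the use case: a brainstorming aid may only need to be correct occasionally, while a compliance tool may need to be correct every time.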
Keywords: llm, large language models, benchmarking
Suggested Citation:
Meincke, Lennart and Mollick, Ethan R. and Mollick, Lilach and Shapiro, Dan, Prompting Science Report 1: Prompt Engineering is Complicated and Contingent (March 04, 2025). Available at SSRN: https://ssrn.com/abstract=5165270 or http://dx.doi.org/10.2139/ssrn.5165270