A Novel Multi-Step-Prompt Approach for LLM-based Q&As on Banking Supervisory Regulations
26 Pages · Posted: 13 May 2025
Date Written: April 23, 2025
Abstract
This paper investigates the use of large language models (LLMs) to analyse and answer questions on banking supervisory regulations. We propose a multi-step-prompt approach that enriches the context provided to the LLM with relevant articles from the Capital Requirements Regulation (CRR). We compare our method against standard ‘zero-shot’ prompting, where the LLM’s answers rely solely on its pre-trained knowledge, and standard ‘few-shot’ prompting, where the LLM is given only a limited number of example questions and answers each time. To assess the quality of the answers returned by the LLM, we also build an ‘LLM evaluator’ that, for each question, compares the correctness and completeness of the answers produced by our multi-step-prompt approach and by the two standard prompting methods against the official answer published by the European Banking Authority (EBA), which is taken as the benchmark. Our findings on inquiries concerning Liquidity Risk rules indicate that the multi-step approach significantly improves the quality of LLM-generated answers, offering the analyst a valuable starting point for formulating appropriate answers to particularly complex questions.
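The core idea of the multi-step-prompt approach can be sketched as follows. This is a minimal illustration, not the authors’ implementation: the retrieval step here is a naive keyword-overlap ranking over a toy article corpus, and all function names, article texts, and the prompt template are hypothetical stand-ins for the paper’s actual pipeline over the CRR.

```python
def retrieve_articles(question, corpus, top_k=2):
    """Step 1 (illustrative): rank CRR articles by naive keyword overlap.

    `corpus` maps article identifiers to article text. A real system
    would use a proper retriever (e.g. embeddings), not word overlap.
    """
    q_terms = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_multi_step_prompt(question, corpus):
    """Step 2 (illustrative): enrich the prompt with retrieved articles
    before the question is sent to the LLM."""
    articles = retrieve_articles(question, corpus)
    context = "\n\n".join(f"[{aid}] {text}" for aid, text in articles)
    return (
        "Answer using only the CRR articles below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Usage with a toy two-article corpus (texts are invented placeholders):
corpus = {
    "Art. 412": "Institutions shall hold liquid assets to cover liquidity outflows.",
    "Art. 92": "Institutions shall at all times satisfy own funds requirements.",
}
prompt = build_multi_step_prompt(
    "How must institutions cover liquidity outflows?", corpus
)
```

The resulting `prompt` string would then be passed to the model (GPT-4o in the paper); the zero-shot baseline corresponds to sending the bare question with no retrieved context.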
Keywords: regulatory Q&A, banking supervisory regulation, Artificial Intelligence, GenAI, GPT-4o, RAG, LLM evaluator
JEL Classification: C45, C63, G21, G28