A General-Purpose Deep Reinforcement Learning Approach for Dynamic Inventory Control
25 Pages Posted: 4 Apr 2024
Date Written: March 12, 2024
Abstract
Solving real-world inventory problems poses many challenges: estimating uncertain parameters (e.g., demand), dealing with the problems' dynamic nature, which often results in prohibitive computational complexity, and handling objective functions that are mathematically complex and not well-behaved. Deep Reinforcement Learning (DRL)-based approaches promise to address these challenges: they have been applied in other domains with similar characteristics and have recently also been applied to selected inventory management problems.
This paper presents a novel DRL-based approach to dynamic inventory control that can effectively leverage contextual information (features) and is versatile in the sense that it can be successfully applied to various types of (dynamic) inventory control problems. The versatility of our approach stems from its ability to handle complex (non-continuous) loss functions and to account for partial observability of the state space, an important property of many dynamic inventory control problems that has so far not received adequate attention. Based on a large set of numerical studies for three distinct problems (the Newsvendor, Lost Sales, and Multi-Period Fixed-Cost problems), we show that our approach is not only versatile but also delivers superior performance compared to state-of-the-art benchmarks.
Keywords: Dynamic inventory control, prescriptive analytics, machine learning, reinforcement learning
JEL Classification: M11