Deep Reinforcement Learning for Supply Chain Synchronization

Posted: 25 Apr 2025

Ilya Jackson

Massachusetts Institute of Technology (MIT) - Center for Transportation & Logistics

Date Written: January 01, 2022

Abstract

Supply chain synchronization can prevent the "bullwhip effect" and significantly mitigate ripple effects caused by operational failures. This paper demonstrates how deep reinforcement learning agents based on the proximal policy optimization (PPO) algorithm can synchronize inbound and outbound flows when end-to-end visibility is provided. The paper concludes that the proposed solution has the potential to perform adaptive control in complex supply chains. Furthermore, the proposed approach is general, task-agnostic, and adaptive in the sense that no prior knowledge about the system is required. © 2022 IEEE Computer Society. All rights reserved.
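The abstract does not detail the paper's environment or training setup, but the PPO mechanism it builds on can be illustrated in a minimal sketch. The code below implements PPO's clipped surrogate objective together with a toy single-echelon inventory step in which an inbound order and outbound demand determine the reward. All function names and cost parameters here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective (Schulman et al., 2017):
    L = min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.minimum(ratio * advantage, clipped * advantage)

def inventory_step(inventory, order, demand, hold_cost=1.0, backlog_cost=5.0):
    """Toy single-echelon step: inbound `order` arrives, outbound `demand`
    is served; the reward penalizes held stock and backlogged demand.
    (Illustrative only -- the paper's environment is not specified here.)"""
    inventory = inventory + order - demand
    cost = hold_cost * max(inventory, 0) + backlog_cost * max(-inventory, 0)
    return inventory, -cost

# Clipping caps how much a favorable probability ratio can improve the
# objective, keeping each policy update "proximal" to the old policy.
print(clipped_surrogate(1.5, 1.0))   # capped at 1.2 (upper clip)
print(clipped_surrogate(0.5, -1.0))  # floored at -0.8 (lower clip)
print(inventory_step(10, 5, 8))      # inventory 7, reward -7.0
```

A full agent would wrap `inventory_step` in an episode loop and maximize the clipped objective by gradient ascent over a policy network; the clipping keeps updates conservative, which is one reason PPO suits the noisy, delayed rewards typical of supply chain control.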

Suggested Citation

Jackson, Ilya, Deep Reinforcement Learning for Supply Chain Synchronization (January 01, 2022). MIT Center for Transportation & Logistics Research Paper No. 2022/022, Available at SSRN: https://ssrn.com/abstract=5230419

Ilya Jackson (Contact Author)

Massachusetts Institute of Technology (MIT) - Center for Transportation & Logistics

United States
