Learning-Augmented Vehicle Dispatching with Slack Times for High-Capacity Ride-Pooling
30 Pages. Posted: 20 Apr 2024
Abstract
Pooled mobility-on-demand (MoD) services offer numerous benefits, including reduced carbon emissions and affordable transportation options. However, designing request-vehicle matching algorithms for pooling services is more challenging than for ride-hailing services because of the additional complexities of shared rides. One significant challenge is deciding the optimal maximum detour time to allocate to a ride---a larger buffer increases the possibility of accommodating extra passengers en route, but imposes greater inconvenience on riders already on board due to the increased travel time. In response to this challenge, we present a matching algorithm that determines request-specific slack times as well as the optimal matching pairs. Our framework builds on a state-of-the-art centralized online matching algorithm with a sequential decision-making process. A key contribution of our approach is its ability to quantify and integrate both the acceptance probability and the future value of candidate detour assignments. To this end, we leverage discrete choice modeling and a reinforcement learning framework, balancing the immediate acceptance rate of passengers against the future potential to serve additional passengers. To validate our approach, we conducted extensive experiments on the Manhattan network using real-world data. Our matching algorithm consistently outperforms baseline models, and the trained model yields a revenue increase of up to 15.1%. Furthermore, we demonstrate the generalizability of our model through zero-shot transfer performance and scalability up to 500 vehicles and 5,000 customers. This research offers a promising avenue for addressing some of the challenges faced by pooled MoD, ultimately contributing to more efficient and appealing transportation solutions.
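For illustration only, the following minimal Python sketch shows one way the trade-off described above could be scored: a binary-logit acceptance probability stands in for the discrete choice model, and a placeholder learned future value stands in for the reinforcement learning component; the two are combined to rank candidate slack times for a request-vehicle pair. All function names, coefficients, and the value function here are hypothetical assumptions for exposition and are not taken from the paper.

import math

def acceptance_probability(fare, detour_min, slack_min, beta_fare=-0.08,
                           beta_detour=-0.15, beta_slack=-0.05, asc=2.0):
    """Binary-logit acceptance model (illustrative coefficients, not from the paper)."""
    utility = asc + beta_fare * fare + beta_detour * detour_min + beta_slack * slack_min
    return 1.0 / (1.0 + math.exp(-utility))

def assignment_score(fare, detour_min, slack_min, future_value, gamma=0.95):
    """Expected immediate reward plus discounted future value of the post-assignment state."""
    p_accept = acceptance_probability(fare, detour_min, slack_min)
    return p_accept * fare + gamma * future_value

def best_slack(fare, detour_min, slack_options, value_fn):
    """Choose the slack time that maximizes the combined score for one candidate match."""
    return max(
        slack_options,
        key=lambda s: assignment_score(fare, detour_min, s, value_fn(s)),
    )

if __name__ == "__main__":
    # value_fn is a stand-in for a trained value estimate of the post-assignment vehicle state;
    # here, larger buffers are (hypothetically) assumed to keep more pooling options open.
    value_fn = lambda slack: 0.4 * slack
    chosen = best_slack(fare=12.5, detour_min=4.0, slack_options=[2, 4, 6, 8], value_fn=value_fn)
    print(f"Chosen slack time: {chosen} minutes")

In this toy setup, longer slack lowers the acceptance probability through the choice model but raises the assumed future value, so the selected slack reflects the balance the abstract describes between immediate acceptance and future serving potential.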
Keywords: High Capacity Mobility-on-Demand, Ride Pooling, Request-Vehicle Matching Algorithm, Reinforcement Learning, Discrete Choice Modeling