Cooperative bus holding and stop-skipping: A deep reinforcement learning framework
Document Type
Journal Article
Publication Date
2023
Subject Area
place - north america, place - urban, mode - bus, operations - coordination, technology - intelligent transport systems, planning - service improvement, planning - service level
Keywords
Bus control, multi-agent reinforcement learning (MARL)
Abstract
The bus control problem that combines holding and stop-skipping strategies is formulated as a multi-agent reinforcement learning (MARL) problem. Traditional MARL methods, designed for settings in which agents take actions jointly, are incompatible with the asynchronous nature of at-stop control tasks. On the other hand, a fully decentralized approach leads to environment non-stationarity, since the state transition of an individual agent may be distorted by the actions of other agents. To address this, we propose a design of the state and reward function that increases the observability of the impact of agents’ actions during training. An event-based mesoscopic simulation model is built to train the agents. We evaluate the proposed approach in a case study on a complex route from the Chicago transit network. The proposed method is compared to a standard headway-based control and to a policy trained with MARL but without cooperative learning. The results show that the proposed method not only improves the level of service but is also more robust to uncertainties in operations, such as travel times and operator compliance with the recommended action.
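The asynchronous, event-based setting the abstract describes can be sketched as a discrete-event loop in which each bus triggers an independent at-stop decision (hold, skip, or do nothing) when it arrives at a stop. This is a minimal illustrative sketch only: the stop/bus counts, headway, dwell and holding times, and the rule-based `choose_action` that stands in for the learned MARL policy are all assumptions, not the paper's implementation.

```python
import heapq
import random

random.seed(7)

N_STOPS = 8
N_BUSES = 3
HEADWAY = 300.0        # assumed scheduled headway (s)
ACTIONS = ("no_op", "hold", "skip")

def choose_action(headway_dev):
    # Stand-in for the learned policy: a simple headway-deviation rule.
    # In the paper, a trained MARL agent would make this choice instead.
    if headway_dev < -60:
        return "hold"   # gap ahead is closing (bunching) -> hold
    if headway_dev > 120:
        return "skip"   # running late -> skip the stop to catch up
    return "no_op"

def simulate():
    # Event queue of (arrival_time, bus_id, stop_index) tuples; events fire
    # asynchronously, so agents never act jointly at a single timestep.
    events = [(b * HEADWAY, b, 0) for b in range(N_BUSES)]
    heapq.heapify(events)
    last_departure = [None] * N_STOPS   # last departure time seen at each stop
    log = []
    while events:
        t, bus, stop = heapq.heappop(events)
        # Local observation: deviation of the realized headway at this stop.
        prev = last_departure[stop]
        dev = 0.0 if prev is None else (t - prev) - HEADWAY
        action = choose_action(dev)
        dwell = 0.0 if action == "skip" else 30.0   # assumed dwell time
        if action == "hold":
            dwell += 40.0                           # assumed holding time
        depart = t + dwell
        last_departure[stop] = depart
        log.append((round(t, 1), bus, stop, action))
        if stop + 1 < N_STOPS:
            run = random.gauss(120, 20)             # stochastic link travel time
            heapq.heappush(events, (depart + max(60.0, run), bus, stop + 1))
    return log

log = simulate()
```

Because decisions are tied to arrival events rather than a shared clock tick, the sketch reproduces the asynchrony that makes joint-action MARL formulations awkward here, which is the modeling difficulty the abstract highlights.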
Rights
Permission to publish the abstract has been given by Elsevier, copyright remains with them.
Recommended Citation
Rodriguez, J., Koutsopoulos, H. N., Wang, S., & Zhao, J. (2023). Cooperative bus holding and stop-skipping: A deep reinforcement learning framework. Transportation Research Part C: Emerging Technologies, 155, 104308.
Comments
Transportation Research Part C Home Page:
http://www.sciencedirect.com/science/journal/0968090X