Intelligent Ramp Control for Incident Response Using Dyna-Q Architecture

Chao Lu*, Yanan Zhao, Jianwei Gong

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Reinforcement learning (RL) has shown great potential for motorway ramp control, especially under congestion caused by incidents. However, existing applications are limited to single-agent tasks and are based on Q-learning, which has inherent drawbacks for dealing with coordinated ramp control problems. To solve these problems, a Dyna-Q based multiagent reinforcement learning (MARL) system named Dyna-MARL has been developed in this paper. Dyna-Q is an extension of Q-learning that combines model-free and model-based methods to obtain the benefits of both. The performance of Dyna-MARL is tested on a simulated motorway segment in the UK with real traffic data collected during AM peak hours. Compared with Isolated RL and the non-controlled situation, the test results show that Dyna-MARL achieves superior performance in improving traffic operation: it increases total throughput and reduces total travel time and CO2 emissions. Moreover, with a suitable coordination strategy, Dyna-MARL can maintain a highly equitable motorway system by balancing the travel times of road users from different on-ramps.
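As context for the abstract, the Dyna-Q architecture it refers to combines direct (model-free) Q-learning updates from real experience with planning steps that replay transitions from a learned model. Below is a minimal sketch of standard tabular Dyna-Q in Python; the state/action spaces, reward signal, and hyperparameters (ALPHA, GAMMA, EPSILON, PLANNING_STEPS) are illustrative assumptions and do not reproduce the paper's multiagent ramp-control formulation.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (assumed, not from the paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
PLANNING_STEPS = 20

Q = defaultdict(float)   # Q[(state, action)] -> action-value estimate
model = {}               # model[(state, action)] -> (reward, next_state)


def choose_action(state, actions):
    """Epsilon-greedy action selection over current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])


def dyna_q_step(state, action, reward, next_state, actions):
    """One Dyna-Q iteration: direct RL update, model update, then planning."""
    # (1) Model-free Q-learning update from the real transition.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    # (2) Update the (deterministic) model with the observed transition.
    model[(state, action)] = (reward, next_state)

    # (3) Planning: replay simulated transitions sampled from the model.
    for _ in range(PLANNING_STEPS):
        (s, a), (r, s2) = random.choice(list(model.items()))
        best = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])
```

The planning loop is what distinguishes Dyna-Q from plain Q-learning: each real interaction is supplemented by several simulated updates, which is the "benefits of both" model-free and model-based learning mentioned in the abstract.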

Original language: English
Article number: 896943
Journal: Mathematical Problems in Engineering
Volume: 2015
Publication status: Published - 2015
