Multi-agent reinforcement learning enabling dynamic pricing policy for charging station operators

Ye Han, Xuefei Zhang, Jian Zhang, Qimei Cui, Shuo Wang, Zhu Han

Research output: Contribution to journal › Conference article › peer-review

8 Citations (Scopus)

Abstract

The development of plug-in electric vehicles (PEVs) brings lucrative opportunities for charging station operators (CSOs). To attract more CSOs to the PEV market, offering a reasonable pricing policy is of great importance. However, dynamic environments and the uncertain behavior of competitors make the pricing problem of CSOs challenging. In this paper, we focus on a dynamic pricing policy that maximizes the long-term profits of CSOs. First, we propose a hierarchical framework to describe the economic structure of the PEV market, composed of, from top to bottom, the smart grid, CSOs, and the charging stations (CSs) serving PEVs. Next, we leverage a Markov game to model the CSO layer as a competitive market. Finally, we design a dynamic pricing policy algorithm (DPPA) based on multi-agent reinforcement learning to achieve higher long-term profits for CSOs. Experiments based on real PEV data from Beijing show that DPPA significantly improves the long-term profit of CSOs, with gains that increase over time. Moreover, DPPA effectively reduces the profit loss of CSOs as more competitors enter the market.
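The competitive-pricing setting described in the abstract can be sketched with independent Q-learning agents, a common baseline for Markov-game pricing. The sketch below is not the paper's DPPA: the price grid, cost, demand-split rule, and two-agent setup are all illustrative assumptions. Each CSO agent picks a price, a price-sensitive rule splits total PEV demand between them, and each agent updates its own Q-values from its realized profit.

```python
import random

# Illustrative sketch only (all constants are assumptions, not from the paper):
# two CSO agents set charging prices; cheaper stations attract more PEV
# demand; each agent runs independent Q-learning, approximating a
# best-response process in the underlying Markov game.

PRICES = [0.8, 1.0, 1.2]   # candidate prices per kWh (assumed)
COST = 0.5                  # wholesale electricity cost (assumed)
TOTAL_DEMAND = 100.0        # PEV charging demand per step (assumed)

def demand_share(own_price, rival_price):
    """Lower price wins a larger share of total demand (inverse-price split)."""
    own_attr = 1.0 / own_price
    rival_attr = 1.0 / rival_price
    return own_attr / (own_attr + rival_attr)

def profit(own_price, rival_price):
    """Per-step profit: margin times the demand this price attracts."""
    share = demand_share(own_price, rival_price)
    return (own_price - COST) * TOTAL_DEMAND * share

def train(steps=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # one Q-value per (agent, price action); state is abstracted away here
    q = [[0.0] * len(PRICES) for _ in range(2)]
    for _ in range(steps):
        acts = []
        for agent in range(2):
            if rng.random() < eps:                      # explore
                acts.append(rng.randrange(len(PRICES)))
            else:                                       # exploit
                acts.append(max(range(len(PRICES)), key=lambda a: q[agent][a]))
        rewards = [profit(PRICES[acts[0]], PRICES[acts[1]]),
                   profit(PRICES[acts[1]], PRICES[acts[0]])]
        for agent in range(2):
            a = acts[agent]
            q[agent][a] += alpha * (rewards[agent] - q[agent][a])
    return q

q = train()
best = [max(range(len(PRICES)), key=lambda a: q[agent][a]) for agent in range(2)]
print("learned prices:", [PRICES[b] for b in best])
```

Under this toy demand model both agents settle on prices above cost; the paper's actual algorithm additionally tracks a dynamic environment state and scales to more competitors.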

Original language: English
Article number: 9013999
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
Publication status: Published - 2019
Event: 2019 IEEE Global Communications Conference, GLOBECOM 2019 - Waikoloa, United States
Duration: 9 Dec 2019 - 13 Dec 2019

Keywords

  • Charging station operators
  • Competitive market
  • Hierarchical framework
  • Multi-agent reinforcement learning
  • Pricing policy
