TY - JOUR
T1 - Data-driven bi-level predictive energy management strategy for fuel cell buses with algorithmics fusion
AU - Li, Menglin
AU - Liu, Haoran
AU - Yan, Mei
AU - Wu, Jingda
AU - Jin, Lisheng
AU - He, Hongwen
N1 - Publisher Copyright:
© 2023 The Authors
PY - 2023/10
Y1 - 2023/10
AB - This paper addresses how to effectively integrate data-driven methods into the traditional predictive energy management algorithm rather than replacing it outright. Given the challenge of selecting an appropriate prediction horizon for predictive energy management, this study bridges traditional predictive energy management with machine learning approaches, presenting a novel bi-level predictive energy management strategy for fuel cell buses with multiple prediction horizons. In the upper level, the core parameter of the traditional model predictive control energy management framework, the prediction horizon, is optimized using two distinct data-driven methods. The first method employs deep learning, using deep neural networks to establish a mapping between the vehicle states and the optimal prediction horizon. The second method uses reinforcement learning, in which an intelligent agent explores to find the best prediction horizon under varying vehicle states. In the lower level, predictive energy management of the fuel cell bus is carried out with the prediction horizon selected by the upper level. Finally, the proposed strategy is validated using test data from actual fuel cell buses. The results demonstrate that both data-driven methods, one based on the optimal ΔSoC approximation and the other on deep reinforcement learning, select the prediction horizon most conducive to energy saving according to the vehicle states. The multi-horizon predictive energy management based on deep reinforcement learning reduces energy consumption by 7.62 %, 4.55 %, 4.60 %, and 7.80 % compared with predictive energy management using fixed prediction horizons of 5 s, 10 s, 15 s, and 20 s, respectively, and outperforms the multi-horizon predictive energy management based on the optimal ΔSoC approximation by 3.59 %.
KW - Data-driven
KW - Energy management
KW - Fuel cell buses
KW - Multi-prediction horizons
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85163940161&partnerID=8YFLogxK
U2 - 10.1016/j.ecmx.2023.100414
DO - 10.1016/j.ecmx.2023.100414
M3 - Article
AN - SCOPUS:85163940161
SN - 2590-1745
VL - 20
JO - Energy Conversion and Management: X
JF - Energy Conversion and Management: X
M1 - 100414
ER -