TY - JOUR
T1 - Knowledge-Guided Deep Reinforcement Learning for Multiobjective Energy Management of Fuel Cell Electric Vehicles
AU - Li, Xinyu
AU - He, Hongwen
AU - Wu, Jingda
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2025
Y1 - 2025
N2 - In the realm of fuel cell electric vehicles (FCEVs), deep reinforcement learning (DRL) is increasingly recognized as a key technique for developing effective energy management strategies. Conventional DRL methods, however, struggle with complex multiobjective management tasks due to their limited training efficiency and optimization capacity. In this study, we introduce a novel knowledge-guided DRL approach that significantly improves DRL's performance in complex energy management scenarios. By modifying the learning objective, this method incorporates expert knowledge into the DRL algorithm, leading to enhanced learning efficiency and more effective multiobjective optimization. Our approach, which aims to optimize hydrogen consumption, reduce fuel cell (FC) and lithium battery aging costs, and maintain battery sustainability, is based on a solid FC model and an advanced DRL algorithm, supplemented with expert knowledge from a rule-based energy management strategy (EMS). The proposed method has been thoroughly tested in a variety of scenarios, both familiar and beyond its training range. The results show that our knowledge-guided DRL-based EMS outperforms current advanced standard and knowledge-integrated DRL methods in terms of learning efficiency, optimization ability, and adaptability. Notably, it also surpasses the expert strategy, improving FCEVs' driving economy by 2.8%-7.5%. This study advances the use of DRL in FCEV energy management and establishes a new standard for incorporating expert knowledge into multiobjective optimization.
KW - Deep reinforcement learning (DRL)
KW - energy management
KW - fuel cell electric vehicle (FCEV)
KW - knowledge incorporation
KW - multiobjective optimization
UR - http://www.scopus.com/inward/record.url?scp=85197551937&partnerID=8YFLogxK
U2 - 10.1109/TTE.2024.3421342
DO - 10.1109/TTE.2024.3421342
M3 - Article
AN - SCOPUS:85197551937
SN - 2332-7782
VL - 11
SP - 2344
EP - 2355
JO - IEEE Transactions on Transportation Electrification
JF - IEEE Transactions on Transportation Electrification
IS - 1
ER -