TY - JOUR
T1 - A multi-step on-policy deep reinforcement learning method assisted by off-policy policy evaluation
AU - Zhang, Huaqing
AU - Ma, Hongbin
AU - Mersha, Bemnet Wondimagegnehu
AU - Jin, Ying
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
PY - 2024
Y1 - 2024
N2 - On-policy deep reinforcement learning (DRL) has the inherent advantage of using multi-step interaction data for policy learning. However, on-policy DRL still faces challenges in improving the sample efficiency of policy evaluation. Therefore, we propose a multi-step on-policy DRL method assisted by off-policy policy evaluation (abbreviated as MSOAO), which integrates on-policy and off-policy policy evaluations and constitutes a new type of DRL method. We propose a low-pass filtering algorithm for state-values to perform off-policy policy evaluation so that it efficiently assists on-policy policy evaluation. The filtered state-values and the multi-step interaction data are used as the input of the V-trace algorithm. Then, the state-value function is learned by simultaneously approximating the target state-values obtained from the V-trace output and the action-values of the current policy. The action-value function is learned by using the one-step bootstrapping algorithm to approximate the target action-values obtained from the V-trace output. Extensive evaluation results indicate that MSOAO outperforms state-of-the-art on-policy DRL algorithms, and that the simultaneous learning of the state-value function and the action-value function in MSOAO allows the two to promote each other, thus improving the learning capability of the algorithm.
AB - On-policy deep reinforcement learning (DRL) has the inherent advantage of using multi-step interaction data for policy learning. However, on-policy DRL still faces challenges in improving the sample efficiency of policy evaluation. Therefore, we propose a multi-step on-policy DRL method assisted by off-policy policy evaluation (abbreviated as MSOAO), which integrates on-policy and off-policy policy evaluations and constitutes a new type of DRL method. We propose a low-pass filtering algorithm for state-values to perform off-policy policy evaluation so that it efficiently assists on-policy policy evaluation. The filtered state-values and the multi-step interaction data are used as the input of the V-trace algorithm. Then, the state-value function is learned by simultaneously approximating the target state-values obtained from the V-trace output and the action-values of the current policy. The action-value function is learned by using the one-step bootstrapping algorithm to approximate the target action-values obtained from the V-trace output. Extensive evaluation results indicate that MSOAO outperforms state-of-the-art on-policy DRL algorithms, and that the simultaneous learning of the state-value function and the action-value function in MSOAO allows the two to promote each other, thus improving the learning capability of the algorithm.
KW - Deep reinforcement learning
KW - Low-pass filter
KW - On-policy and off-policy
KW - Policy evaluation
KW - Policy gradient
UR - http://www.scopus.com/inward/record.url?scp=85203382385&partnerID=8YFLogxK
U2 - 10.1007/s10489-024-05508-9
DO - 10.1007/s10489-024-05508-9
M3 - Article
AN - SCOPUS:85203382385
SN - 0924-669X
JO - Applied Intelligence
JF - Applied Intelligence
ER -