TY - JOUR
T1 - Reinforcement learning under temporal logic constraints as a sequence modeling problem
AU - Tian, Daiying
AU - Fang, Hao
AU - Yang, Qingkai
AU - Yu, Haoyong
AU - Liang, Wenyu
AU - Wu, Yan
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2023/3
Y1 - 2023/3
N2 - Reinforcement learning (RL) under temporal logic typically suffers from slow credit-assignment propagation. Inspired by the recent trajectory transformer in machine learning, reinforcement learning under temporal logic (TL) is modeled as a sequence modeling problem in this paper, where an agent uses a transformer to fit the optimal policy satisfying tasks specified in Linear Temporal Logic over finite traces (LTLf). To combat the sparse reward issue, dense reward functions for LTLf are designed. To reduce computational complexity, a sparse transformer with local and global attention is constructed to perform credit assignment automatically, removing the time-consuming value iteration process. The optimal action is found by beam search over the transformer's predictions. The proposed method generates a series of policies fitted by sparse transformers, which achieve consistently high accuracy in fitting the demonstrations. Finally, the effectiveness of the proposed method is demonstrated by simulations in Mini-Grid environments.
AB - Reinforcement learning (RL) under temporal logic typically suffers from slow credit-assignment propagation. Inspired by the recent trajectory transformer in machine learning, reinforcement learning under temporal logic (TL) is modeled as a sequence modeling problem in this paper, where an agent uses a transformer to fit the optimal policy satisfying tasks specified in Linear Temporal Logic over finite traces (LTLf). To combat the sparse reward issue, dense reward functions for LTLf are designed. To reduce computational complexity, a sparse transformer with local and global attention is constructed to perform credit assignment automatically, removing the time-consuming value iteration process. The optimal action is found by beam search over the transformer's predictions. The proposed method generates a series of policies fitted by sparse transformers, which achieve consistently high accuracy in fitting the demonstrations. Finally, the effectiveness of the proposed method is demonstrated by simulations in Mini-Grid environments.
KW - Reinforcement learning
KW - Sparse attention
KW - Temporal logic
KW - Trajectory transformer
UR - http://www.scopus.com/inward/record.url?scp=85146056790&partnerID=8YFLogxK
U2 - 10.1016/j.robot.2022.104351
DO - 10.1016/j.robot.2022.104351
M3 - Article
AN - SCOPUS:85146056790
SN - 0921-8890
VL - 161
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
M1 - 104351
ER -