Reinforcement learning under temporal logic constraints as a sequence modeling problem

Daiying Tian, Hao Fang*, Qingkai Yang, Haoyong Yu, Wenyu Liang, Yan Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-reviewed

2 citations (Scopus)

Abstract

Reinforcement learning (RL) under temporal logic typically suffers from slow reward propagation during credit assignment. Inspired by the recent Trajectory Transformer in machine learning, reinforcement learning under temporal logic (TL) is modeled as a sequence modeling problem in this paper, where an agent uses a transformer to fit the optimal policy satisfying tasks specified in Linear Temporal Logic over finite traces (LTLf). To combat the sparse reward issue, dense reward functions for LTLf are designed. To reduce computational complexity, a sparse transformer with local and global attention is constructed to perform credit assignment automatically, which removes the time-consuming value iteration process. The optimal action is found by beam search performed in the transformer. The proposed method generates a series of policies fitted by sparse transformers, which achieve sustained high accuracy in fitting the demonstrations. Finally, the effectiveness of the proposed method is demonstrated by simulations in Mini-Grid environments.
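The local-plus-global attention pattern mentioned in the abstract can be illustrated with a small sketch. The function name, window size, and choice of global token below are assumptions for illustration only; the paper's actual sparse attention layout may differ.

```python
import numpy as np

def sparse_attention_mask(seq_len, window, global_idx):
    """Boolean mask: True where attention is allowed (illustrative sketch).

    Combines a local sliding window (each position attends to neighbours
    within `window` steps) with a few global positions that attend to,
    and are attended by, every position in the sequence.
    """
    idx = np.arange(seq_len)
    # Local band: positions i and j may attend iff |i - j| <= window.
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    # Global positions: open their full rows and columns.
    mask[global_idx, :] = True
    mask[:, global_idx] = True
    return mask

# Example: 8-step sequence, window of 1, position 0 treated as global.
mask = sparse_attention_mask(seq_len=8, window=1, global_idx=[0])
```

Such a mask keeps the per-layer attention cost roughly linear in sequence length, which is the usual motivation for sparse transformers on long trajectories.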

Original language: English
Article number: 104351
Journal: Robotics and Autonomous Systems
Volume: 161
DOI
Publication status: Published - Mar 2023
