Reinforcement learning under temporal logic constraints as a sequence modeling problem

Daiying Tian, Hao Fang*, Qingkai Yang, Haoyong Yu, Wenyu Liang, Yan Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Reinforcement learning (RL) under temporal logic constraints typically suffers from slow credit assignment. Inspired by a recent advance in machine learning, the Trajectory Transformer, reinforcement learning under Temporal Logic (TL) is modeled as a sequence modeling problem in this paper, where an agent uses a transformer to fit the optimal policy satisfying tasks specified in Linear Temporal Logic over finite traces (LTLf). To combat the sparse-reward issue, dense reward functions for LTLf are designed. To reduce computational complexity, a sparse transformer with local and global attention is constructed to perform credit assignment automatically, removing the time-consuming value iteration process. The optimal action is found by beam search performed in the transformer. The proposed method generates a series of policies fitted by sparse transformers, which achieve consistently high accuracy in fitting the demonstrations. Finally, the effectiveness of the proposed method is demonstrated by simulations in Mini-Grid environments.
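The combination of local and global attention mentioned in the abstract can be illustrated with a minimal sketch of a causal sparse attention mask. This is a generic illustration, not the paper's implementation: the window size, the choice of global token positions, and the function name are all assumptions made for the example.

```python
import numpy as np

def sparse_attention_mask(seq_len, window=4, global_idx=(0,)):
    """Boolean mask combining local (sliding-window) and global attention.

    mask[i, j] == True means token i may attend to token j.
    The mask is causal: no token attends to a future position.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        # Local attention: a causal window over the most recent tokens.
        mask[i, max(0, i - window + 1): i + 1] = True
        # Global attention: designated tokens (e.g. task-specification
        # tokens) remain visible to every later position.
        for g in global_idx:
            if g <= i:  # keep the mask causal
                mask[i, g] = True
    return mask

mask = sparse_attention_mask(8, window=3, global_idx=(0,))
```

In a transformer, such a mask would be passed to the attention layer so that each query position only attends within its local window plus the global tokens, reducing the quadratic attention cost of a dense mask.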

Original language: English
Article number: 104351
Journal: Robotics and Autonomous Systems
Volume: 161
Publication status: Published - Mar 2023

Keywords

  • Reinforcement learning
  • Sparse attention
  • Temporal logic
  • Trajectory transformer
