Locomotion Policy Learning via Diffusion Policy

Yubiao Ma, Xuemei Ren*, Dongdong Zheng

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The emergence of deep reinforcement learning has recently led to remarkable achievements in legged locomotion. Compared to traditional model-based approaches, reinforcement learning-based control methods can improve robustness and generalization in the face of environmental uncertainties. However, due to the complexity of the locomotion policy, the learned gaits are generally conservative and lack naturalness. In this paper, we propose a novel framework for learning locomotion policies that yields gaits characterized by both robustness and generalization. We incorporate a diffusion model into our policy learning framework for legged locomotion. The diffusion model provides a powerful policy representation, enabling multimodal action distributions and sufficient exploration.
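
The abstract only sketches the idea at a high level. As a hedged illustration of what "using a diffusion model as the policy" can mean in practice, the snippet below samples an action by DDPM-style reverse denoising conditioned on the observation, which is what allows the policy to represent multimodal action distributions. All class names, network sizes, step counts, and the noise schedule are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a state-conditioned diffusion policy.
# An action is drawn by starting from Gaussian noise and denoising it step by
# step, conditioned on the current observation. Names and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn


class NoisePredictor(nn.Module):
    """Predicts the noise added to an action, given observation and diffusion step."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.Mish(),
            nn.Linear(hidden, hidden), nn.Mish(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, noisy_action, t):
        # t is the normalized diffusion step, broadcast over the batch.
        return self.net(torch.cat([obs, noisy_action, t], dim=-1))


@torch.no_grad()
def sample_action(model, obs, act_dim: int, n_steps: int = 10):
    """DDPM-style reverse process: start from noise, denoise conditioned on obs."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    a = torch.randn(obs.shape[0], act_dim)  # a_T ~ N(0, I)
    for k in reversed(range(n_steps)):
        t = torch.full((obs.shape[0], 1), k / n_steps)
        eps = model(obs, a, t)  # predicted noise at step k
        # Standard DDPM posterior-mean update for a_{k-1} given a_k.
        a = (a - betas[k] / torch.sqrt(1.0 - alpha_bars[k]) * eps) / torch.sqrt(alphas[k])
        if k > 0:
            a = a + torch.sqrt(betas[k]) * torch.randn_like(a)
    return a
```

Because sampling starts from random noise, repeated calls with the same observation can land in different action modes, in contrast to a unimodal Gaussian policy head.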

Original language: English
Title of host publication: Proceedings of 2024 Chinese Intelligent Systems Conference
Editors: Yingmin Jia, Weicun Zhang, Yongling Fu, Huihua Yang
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 681-690
Number of pages: 10
ISBN (Print): 9789819786572
DOIs
Publication status: Published - 2024
Event: 20th Chinese Intelligent Systems Conference, CISC 2024 - Guilin, China
Duration: 26 Oct 2024 - 27 Oct 2024

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 1285 LNEE
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119

Conference

Conference: 20th Chinese Intelligent Systems Conference, CISC 2024
Country/Territory: China
City: Guilin
Period: 26/10/24 - 27/10/24

Keywords

  • Diffusion model
  • Legged robots
  • Reinforcement learning
