FMimic: Foundation models are fine-grained action learners from human videos

Guangyan Chen, Meiling Wang, Te Cui, Yao Mu, Haoyang Lu, Zicai Peng, Mengxiao Hu, Tianxing Zhou, Mengyin Fu, Yi Yang, Yufeng Yue*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Visual imitation learning (VIL) provides an efficient and intuitive strategy for robotic systems to acquire novel skills. Recent advancements in foundation models, particularly vision-language models (VLMs), have demonstrated remarkable capabilities in visual and linguistic reasoning for VIL tasks. Despite this progress, existing approaches primarily use these models to learn high-level plans from human demonstrations, relying on pre-defined motion primitives to execute physical interactions, which remains a major bottleneck for robotic systems. In this work, we present FMimic, a novel paradigm that harnesses foundation models to directly learn generalizable skills down to the fine-grained action level, using only a limited number of human videos. Specifically, our approach first grounds human-object movements in the demonstration videos, then employs a skill learner that delineates motion properties through keypoints and waypoints, acquiring fine-grained action skills via hierarchical constraint representations. In unseen scenarios, the learned skills are updated through keypoint transfer and iterative comparison within the skill adapter, enabling efficient skill adaptation. To achieve high-precision manipulation, the skill refiner optimizes the extracted and transferred interactions for enhanced precision and employs iterative master-slave contact refinement for pose estimation, enabling the acquisition and accomplishment of even highly constrained manipulation tasks. This concise approach allows FMimic to learn fine-grained actions effectively from human videos, obviating the reliance on pre-defined primitives. Extensive experiments demonstrate that FMimic delivers strong performance with a single human video and significantly outperforms all other methods with five videos. Furthermore, our method exhibits improvements of over 39% in RLBench multi-task experiments and over 29% in real-world manipulation tasks, and exceeds baselines by more than 34% on high-precision tasks and 47% on long-horizon tasks. Code and videos are available on our homepage.
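
To make the abstract's notion of skills represented as keypoint and waypoint constraints more concrete, the following is a minimal, hypothetical Python sketch of such a representation. The class names (Keypoint, WaypointConstraint, Skill), tolerances, and the sequential-satisfaction check are illustrative assumptions for exposition and are not taken from the paper's implementation.

```python
import numpy as np

# Illustrative sketch only: a skill as an ordered set of waypoint constraints
# anchored on object keypoints. All names and numbers are assumptions.

class Keypoint:
    """A semantically meaningful 3D point on an object (e.g., a mug handle)."""
    def __init__(self, name: str, position):
        self.name = name
        self.position = np.asarray(position, dtype=float)

class WaypointConstraint:
    """Satisfied when the end-effector passes within `tolerance` of the keypoint."""
    def __init__(self, target: Keypoint, tolerance: float = 0.01):
        self.target = target
        self.tolerance = tolerance

    def satisfied(self, ee_position) -> bool:
        return np.linalg.norm(np.asarray(ee_position) - self.target.position) <= self.tolerance

class Skill:
    """An ordered list of constraints; a trajectory realizes the skill
    if it meets each constraint in sequence."""
    def __init__(self, constraints):
        self.constraints = constraints

    def satisfied_by(self, trajectory) -> bool:
        idx = 0
        for pose in trajectory:
            if idx < len(self.constraints) and self.constraints[idx].satisfied(pose):
                idx += 1
        return idx == len(self.constraints)

# Usage: a two-waypoint "reach and lift" skill checked against a candidate trajectory.
handle = Keypoint("mug_handle", [0.40, 0.10, 0.05])
lift = Keypoint("lift_point", [0.40, 0.10, 0.25])
skill = Skill([WaypointConstraint(handle, 0.02), WaypointConstraint(lift, 0.05)])
trajectory = np.array([[0.30, 0.10, 0.20], [0.40, 0.10, 0.05], [0.40, 0.10, 0.25]])
print(skill.satisfied_by(trajectory))  # True: both constraints met in order
```

Under this reading, adapting a skill to an unseen scene would amount to re-estimating the keypoint positions (keypoint transfer) while keeping the constraint structure fixed.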

Original language: English
Article number: 02783649251377335
Journal: International Journal of Robotics Research
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • code generation
  • multimodal language models
  • robotic manipulation
  • vision language models
  • visual imitation learning
