Lightweight Multiscale Spatiotemporal Locally Connected Graph Convolutional Networks for Single Human Motion Forecasting

Research output: Journal article › peer-review

2 Citations (Scopus)

Abstract

Human motion forecasting is an important and challenging task in many computer vision application domains. Recent work concentrates on exploiting the temporal processing ability of recurrent neural networks (RNNs) to achieve smooth and reliable results in short-term prediction. However, as evidenced by previous work, RNNs suffer from error accumulation, leading to unreliable results. In this paper, we propose a simple feed-forward deep neural network for motion prediction that accounts for temporal smoothness between frames and spatial dependencies between human body joints. We design Lightweight Multiscale Spatiotemporal Locally Connected Graph Convolutional Networks (MST-LCGCN) for single human motion forecasting to implicitly establish the spatiotemporal dependencies of human movement, where different scales are fused dynamically during training. The entire model is action-agnostic and follows an encoder-decoder framework. The encoder consists of temporal GCNs (TGCNs) that capture motion features between frames and locally connected spatial GCNs (SGCNs) that extract the spatial structure among joints. The decoder uses temporal convolution networks (TCNs) to maintain extensibility for long-term prediction. Extensive experiments show that our approach outperforms previous methods on the Human3.6M and CMU Mocap datasets while requiring far fewer parameters.

Note to Practitioners: Accuracy and real-time performance are the two most significant evaluation factors for the challenge of human motion forecasting. Existing methods tend to use models with a huge number of parameters, sacrificing operation speed to obtain a small increase in accuracy. However, in practical scenarios, this slowdown makes predictions meaningless. Therefore, we propose a lightweight MST-LCGCN network to learn human action patterns over time. To obtain higher accuracy, we extract features from both the spatial and temporal dimensions to capture more information; to obtain faster operation speed, we design our network with as little unnecessary depth as possible. We demonstrate the advantages of our model in terms of efficiency and accuracy through extensive quantitative and qualitative experiments on two datasets. Our network can help robots avoid obstacles in advance and compensate for network delays, and we plan to apply it to real-world scenarios in the future.
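The abstract describes an encoder that combines graph convolutions over body joints (spatial) with convolutions over frames (temporal). As a rough illustration only, and not the authors' implementation, the following minimal NumPy sketch shows the two basic building blocks such a network stacks: a symmetrically normalized spatial graph convolution over the joint adjacency, and a depthwise temporal convolution along the frame axis. All function names, shapes, and the activation choice here are assumptions for illustration.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize a joint adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def spatial_gcn(X, A_norm, W):
    """One spatial graph-conv layer.
    X: (T, J, C) = frames x joints x channels; A_norm: (J, J); W: (C, D).
    Aggregates features over neighboring joints, then mixes channels."""
    return np.tanh(np.einsum('ij,tjc,cd->tid', A_norm, X, W))

def temporal_conv(X, kernel):
    """Depthwise 1D convolution along the time axis (valid padding),
    applied identically to every joint/channel."""
    T, J, C = X.shape
    k = len(kernel)
    out = np.zeros((T - k + 1, J, C))
    for t in range(T - k + 1):
        out[t] = sum(kernel[i] * X[t + i] for i in range(k))
    return out

# Toy usage: a 3-joint kinematic chain, 8 input frames, 4 channels.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.randn(8, 3, 4)
W = np.random.randn(4, 4)
H = spatial_gcn(X, normalize_adjacency(A), W)   # (8, 3, 4)
Y = temporal_conv(H, [0.25, 0.5, 0.25])         # (6, 3, 4)
```

In the actual model these blocks would be interleaved at multiple scales inside the encoder, with a TCN-style decoder producing future frames; the sketch only conveys how spatial and temporal aggregation operate on a (frames, joints, channels) tensor.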

Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: IEEE Transactions on Automation Science and Engineering
DOI
Publication status: Accepted/In press - 2023

