Enhancing Moving Object Segmentation with Spatio-Temporal Information Fusion

Siyu Chen, Yilei Huang, Qilin Li, Ruosong Wang, Zhenhai Zhang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Accurately sensing moving objects provides information about dynamic changes in the environment, and segmenting them further helps autonomous systems make smarter decisions and improves SLAM. Effective utilization of spatio-temporal information is paramount for LiDAR Moving Object Segmentation (LiDAR-MOS). We propose an efficient approach that attains more accurate point cloud segmentation results by leveraging spatio-temporal information from multiple LiDAR scans and their corresponding poses. Specifically, using the acquired pose information, we first transform the point cloud data of the sequence into the coordinate system of the current frame. The aligned point clouds are then discretized to generate a special BEV occupancy representation. Subsequently, we employ a Spatio-Temporal Excitation (STE) module to excite the spatio-temporal features of the superimposed representations and feed them into a Spatio-Temporal Pyramid Network (STPN) for dual-head decoding and result fusion. We trained and evaluated our network on the nuScenes dataset, and the results of comparative and ablation studies demonstrate the advantage of the proposed method.
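The abstract's first two steps, aligning multiple LiDAR scans into the current frame via their poses and discretizing them into a BEV occupancy representation, can be illustrated with a minimal sketch. The grid range (±50 m), 0.2 m resolution, per-scan occupancy channels, and function names below are assumptions for illustration only and are not taken from the paper.

import numpy as np

def align_to_current(points, scan_pose, current_pose):
    # Transform one scan's points (N, 3) from its own frame into the
    # coordinate system of the current frame using 4x4 homogeneous poses.
    rel = np.linalg.inv(current_pose) @ scan_pose
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ rel.T)[:, :3]

def bev_occupancy(scans, poses, current_pose,
                  x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), res=0.2):
    # Stack T aligned scans into a (T, H, W) BEV occupancy grid; each
    # temporal channel marks the cells hit by the corresponding scan.
    H = int((y_range[1] - y_range[0]) / res)
    W = int((x_range[1] - x_range[0]) / res)
    grid = np.zeros((len(scans), H, W), dtype=np.float32)
    for t, (pts, pose) in enumerate(zip(scans, poses)):
        aligned = align_to_current(pts, pose, current_pose)
        ix = ((aligned[:, 0] - x_range[0]) / res).astype(int)
        iy = ((aligned[:, 1] - y_range[0]) / res).astype(int)
        keep = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)
        grid[t, iy[keep], ix[keep]] = 1.0  # occupied BEV cells
    return grid

The resulting (T, H, W) tensor is the kind of superimposed spatio-temporal representation that a feature-excitation module and pyramid-style network could then consume; the actual STE and STPN architectures are described in the paper itself.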

Original language: English
Title of host publication: 2024 IEEE International Conference on Mechatronics and Automation, ICMA 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1783-1788
Number of pages: 6
ISBN (electronic): 9798350388060
DOI
Publication status: Published - 2024
Event: 21st IEEE International Conference on Mechatronics and Automation, ICMA 2024 - Tianjin, China
Duration: 4 Aug 2024 - 7 Aug 2024

Publication series

Name: 2024 IEEE International Conference on Mechatronics and Automation, ICMA 2024

Conference

Conference: 21st IEEE International Conference on Mechatronics and Automation, ICMA 2024
Country/Territory: China
City: Tianjin
Period: 4/08/24 - 7/08/24
