Semantic motion segmentation for urban dynamic scene understanding

Qiu Fan, Yang Yi*, Li Hao, Fu Mengyin, Wang Shunting

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

11 Citations (Scopus)

Abstract

Much recent research has focused on scene parsing and semantic labeling, while few works address joint semantic motion labeling. In this paper, we propose an approach that infers both the object class and the motion status of each pixel in an image. First, we extract and match sparse image features to estimate the ego-motion between two consecutive stereo frames; the resulting grouping of feature points is used to segment moving objects in the U-disparity map. Second, a Fully Convolutional Neural Network is employed for semantic segmentation. Moreover, semantic cues are used to remove pixels that have no potential to move from the motion mask. Finally, we use a fully connected CRF to integrate motion into the semantic segmentation. To validate the effectiveness of the proposed algorithm, we present experimental results on KITTI stereo images that contain moving objects.
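The following Python fragment is a minimal sketch, not the authors' implementation, of two building blocks named in the abstract: ego-motion estimation from sparse feature matches between consecutive frames, and construction of a U-disparity map whose column-wise disparity histogram highlights obstacles. It assumes OpenCV and rectified stereo input; the FCN semantic segmentation and fully connected CRF fusion stages are omitted, and the function names estimate_ego_motion and u_disparity are hypothetical.

# Minimal sketch (not the authors' code) of sparse-feature ego-motion
# estimation and U-disparity construction, assuming OpenCV and NumPy.
import cv2
import numpy as np

def estimate_ego_motion(prev_left, curr_left, K):
    """Estimate relative camera rotation/translation from sparse ORB matches."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_left, None)
    kp2, des2 = orb.detectAndCompute(curr_left, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC on the essential matrix separates inliers (static background,
    # consistent with ego-motion) from outliers (candidate moving points).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    moving_candidates = pts2[inliers.ravel() == 0]
    return R, t, moving_candidates

def u_disparity(disparity, max_disp=128):
    """Column-wise disparity histogram; obstacles appear as bright horizontal segments."""
    h, w = disparity.shape
    u_disp = np.zeros((max_disp, w), dtype=np.uint16)
    valid = disparity > 0
    for col in range(w):
        d = disparity[valid[:, col], col].astype(int)
        d = d[d < max_disp]
        np.add.at(u_disp[:, col], d, 1)
    return u_disp

In this sketch, the feature points rejected by RANSAC stand in for the grouping of potentially moving points, and the U-disparity map provides the column-wise structure in which moving objects are then segmented; the semantic labels would subsequently prune pixels that cannot move and feed the CRF fusion.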

Original language: English
Title of host publication: 2016 IEEE International Conference on Automation Science and Engineering, CASE 2016
Publisher: IEEE Computer Society
Pages: 497-502
Number of pages: 6
ISBN (Electronic): 9781509024094
DOIs
Publication status: Published - 14 Nov 2016
Event: 2016 IEEE International Conference on Automation Science and Engineering, CASE 2016 - Fort Worth, United States
Duration: 21 Aug 2016 - 24 Aug 2016

Publication series

Name: IEEE International Conference on Automation Science and Engineering
Volume: 2016-November
ISSN (Print): 2161-8070
ISSN (Electronic): 2161-8089

Conference

Conference: 2016 IEEE International Conference on Automation Science and Engineering, CASE 2016
Country/Territory: United States
City: Fort Worth
Period: 21/08/16 - 24/08/16
