Multi-human Parsing Based on Dynamic Convolution

Min Yan, Guoshan Zhang, Tong Zhang, Yueming Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Multi-human parsing is attracting increasing attention due to its wide range of applications: it requires not only differentiating human instances but also categorizing each pixel within each instance. In this work, we improve the mask prediction branch of Nondiscriminatory Treatment between Humans and Parts for Human Parsing (NTHP), which regards both humans and parts as objects and performs instance segmentation on both with the same structure. Specifically, we learn the mask feature and the mask kernel separately, and convolve the mask feature with the mask kernel to obtain the binary mask prediction. In addition, to obtain translation variance and better performance, we design a Position-sensitive Global Con (PGC) block and insert it into the mask feature module. Experiments show that our network performs favorably against state-of-the-art methods on the MHP v2.0 and PASCAL-Person-Part datasets.
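
The following is a minimal PyTorch sketch of the dynamic-convolution mask prediction the abstract describes: one branch predicts a per-instance mask kernel, which is convolved with a shared mask feature map to yield that instance's binary mask. The module names, channel sizes, and the coordinate-channel trick used here to inject position sensitivity are assumptions for illustration only; they are not the authors' implementation, and the paper's PGC block is not reproduced.

```python
# Hypothetical sketch of dynamic-convolution mask prediction (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicMaskHead(nn.Module):
    """Predict per-instance masks by applying a dynamically predicted 1x1 kernel
    to a shared mask feature map. Coordinate channels are appended to the
    mask-feature input as one common way to add position sensitivity
    (an assumption here, standing in for the paper's PGC block)."""

    def __init__(self, feat_channels: int = 256, mask_channels: int = 8):
        super().__init__()
        # Mask feature branch: shared feature map over the whole image (+2 coord channels).
        self.mask_feature = nn.Sequential(
            nn.Conv2d(feat_channels + 2, mask_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Mask kernel branch: predicts a 1x1 conv kernel at every spatial location.
        self.mask_kernel = nn.Conv2d(feat_channels, mask_channels, 3, padding=1)

    @staticmethod
    def _coord_channels(features: torch.Tensor) -> torch.Tensor:
        # Normalized (y, x) coordinate maps in [-1, 1].
        n, _, h, w = features.shape
        ys = torch.linspace(-1, 1, h, device=features.device)
        xs = torch.linspace(-1, 1, w, device=features.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        return torch.stack([gy, gx]).unsqueeze(0).expand(n, -1, -1, -1)

    def forward(self, features: torch.Tensor, instance_coords: torch.Tensor) -> torch.Tensor:
        # features: (N, C, H, W); instance_coords: (K, 3) rows of (batch, y, x)
        # giving the location whose predicted kernel represents each instance.
        feat = self.mask_feature(
            torch.cat([features, self._coord_channels(features)], dim=1)
        )                                          # (N, E, H, W)
        kernels = self.mask_kernel(features)       # (N, E, H, W)
        masks = []
        for b, y, x in instance_coords.tolist():
            k = kernels[b, :, y, x].view(1, -1, 1, 1)          # dynamic 1x1 kernel
            masks.append(torch.sigmoid(F.conv2d(feat[b:b + 1], k)))  # (1, 1, H, W)
        if not masks:
            return feat.new_zeros(0, 1, *feat.shape[-2:])
        return torch.cat(masks, dim=0)             # (K, 1, H, W) mask probabilities
```

As a usage sketch, `DynamicMaskHead(256)(torch.randn(1, 256, 64, 64), torch.tensor([[0, 10, 20]]))` returns one 64x64 mask probability map for the instance anchored at location (10, 20).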

Original language: English
Title of host publication: Proceedings of the 40th Chinese Control Conference, CCC 2021
Editors: Chen Peng, Jian Sun
Publisher: IEEE Computer Society
Pages: 7185-7190
Number of pages: 6
ISBN (Electronic): 9789881563804
Publication status: Published - 26 Jul 2021
Externally published: Yes
Event: 40th Chinese Control Conference, CCC 2021 - Shanghai, China
Duration: 26 Jul 2021 - 28 Jul 2021

Publication series

Name: Chinese Control Conference, CCC
Volume: 2021-July
ISSN (Print): 1934-1768
ISSN (Electronic): 2161-2927

Conference

Conference: 40th Chinese Control Conference, CCC 2021
Country/Territory: China
City: Shanghai
Period: 26/07/21 - 28/07/21

Keywords

  • Attention mechanism
  • Dynamic convolution
  • Instance segmentation
  • Multi-human parsing
