PSNet: Perspective-sensitive convolutional network for object detection

Xin Zhang, Yicheng Liu, Chunlei Huo*, Nuo Xu, Lingfeng Wang, Chunhong Pan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Multi-view object detection is challenging because differences in view angle reduce intra-class similarity. The uniform feature representation of traditional detectors couples an object's perspective attribute with its semantic feature, so perspective variations cause intra-class differences. In this paper, a robust perspective-sensitive network (PSNet) is proposed to overcome this problem. The uniform feature is replaced by a perspective-specific structural feature, which makes the network perspective-sensitive. In essence, the network learns multiple perspective spaces; in each perspective space, the semantic feature is decoupled from the perspective attribute and is robust to perspective variations. A perspective-sensitive RoI pooling operation and loss function are proposed for perspective-sensitive learning. Experiments on Pascal3D+ and SpaceNet MVOI show the effectiveness and superiority of PSNet.
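The abstract describes routing each RoI into one of several learned perspective spaces, so that the semantic feature is extracted by a perspective-specific branch. The paper's actual architecture is not reproduced here; the following is a minimal NumPy sketch of the general idea, in which a perspective classifier's probabilities select which perspective-specific head processes the pooled RoI feature. The function names, head shapes, and the hard arg-max routing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def roi_max_pool(feature_map, roi, out_size=2):
    """Standard RoI max pooling: crop the RoI from a (C, H, W) feature
    map and max-pool it into an out_size x out_size grid per channel."""
    c = feature_map.shape[0]
    x1, y1, x2, y2 = roi
    crop = feature_map[:, y1:y2, x1:x2]
    h_edges = np.linspace(0, crop.shape[1], out_size + 1).astype(int)
    w_edges = np.linspace(0, crop.shape[2], out_size + 1).astype(int)
    pooled = np.empty((c, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            cell = crop[:, h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1]]
            pooled[:, i, j] = cell.max(axis=(1, 2))
    return pooled

def perspective_sensitive_pool(feature_map, roi, persp_probs, heads):
    """Illustrative perspective-sensitive pooling: route the pooled RoI
    feature to the perspective-specific head with the highest predicted
    perspective probability, yielding a semantic feature that lives in
    that perspective space."""
    k = int(np.argmax(persp_probs))          # chosen perspective space
    pooled = roi_max_pool(feature_map, roi).reshape(-1)
    return k, heads[k] @ pooled              # semantic feature in space k

# Toy example: an 8-channel feature map and 3 perspective spaces,
# each with its own linear head (all weights random for illustration).
feat = rng.standard_normal((8, 16, 16))
heads = [rng.standard_normal((4, 8 * 2 * 2)) for _ in range(3)]
persp = np.array([0.1, 0.7, 0.2])  # e.g. output of a perspective classifier
k, sem = perspective_sensitive_pool(feat, (2, 3, 10, 12), persp, heads)
```

In this sketch the routing is a hard arg-max over perspective probabilities; the paper's perspective-sensitive loss presumably supervises both the perspective assignment and the per-space semantic feature jointly.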

Original language: English
Pages (from-to): 384-395
Number of pages: 12
Journal: Neurocomputing
Volume: 468
DOIs
Publication status: Published - 11 Jan 2022
Externally published: Yes

Keywords

  • Object detection
  • Perspective-sensitive
  • Structural neural network
