TY - GEN
T1 - Point-Supervised Semantic Segmentation of Natural Scenes via Hyperspectral Imaging
AU - Ren, Tianqi
AU - Shen, Qiu
AU - Fu, Ying
AU - You, Shaodi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Semantic segmentation of natural scenes is an important task in computer vision. Training accurate segmentation models relies heavily on detailed, accurate pixel-level annotations, which are difficult and time-consuming to collect, especially for complicated natural scenes. Weakly-supervised methods can greatly reduce labeling cost, but at the expense of significant performance degradation. In this paper, we explore the possibility of introducing hyperspectral imaging to improve the performance of weakly-supervised semantic segmentation. Taking two challenging hyperspectral datasets of outdoor natural scenes as examples, we randomly label dozens of points with semantic categories to construct a point-supervised semantic segmentation benchmark. We then propose a spectral and spatial fusion method to generate detailed pixel-level annotations, which are used to supervise the semantic segmentation models. Through multiple experiments, we find that hyperspectral information, being more distinctive than RGB, is greatly helpful to point-supervised semantic segmentation. As a result, our proposed method with only point supervision can approach the performance of the fully-supervised method in many cases.
AB - Semantic segmentation of natural scenes is an important task in computer vision. Training accurate segmentation models relies heavily on detailed, accurate pixel-level annotations, which are difficult and time-consuming to collect, especially for complicated natural scenes. Weakly-supervised methods can greatly reduce labeling cost, but at the expense of significant performance degradation. In this paper, we explore the possibility of introducing hyperspectral imaging to improve the performance of weakly-supervised semantic segmentation. Taking two challenging hyperspectral datasets of outdoor natural scenes as examples, we randomly label dozens of points with semantic categories to construct a point-supervised semantic segmentation benchmark. We then propose a spectral and spatial fusion method to generate detailed pixel-level annotations, which are used to supervise the semantic segmentation models. Through multiple experiments, we find that hyperspectral information, being more distinctive than RGB, is greatly helpful to point-supervised semantic segmentation. As a result, our proposed method with only point supervision can approach the performance of the fully-supervised method in many cases.
UR - https://www.scopus.com/pages/publications/85198050913
U2 - 10.1109/CVPRW63382.2024.00143
DO - 10.1109/CVPRW63382.2024.00143
M3 - Conference contribution
AN - SCOPUS:85198050913
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 1357
EP - 1367
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024
PB - IEEE Computer Society
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024
Y2 - 16 June 2024 through 22 June 2024
ER -