TY - GEN
T1 - Automatic foreground seeds discovery for robust video saliency detection
AU - Zhang, Lin
AU - Lu, Yao
AU - Zhou, Tianfei
N1 - Publisher Copyright:
© Springer International Publishing AG, part of Springer Nature 2018.
PY - 2018
Y1 - 2018
N2 - In this paper, we propose a novel algorithm for salient object detection in unconstrained videos. Even though various methods have been proposed for this task, video saliency detection remains challenging due to the difficulty of object discovery as well as the effective utilization of motion cues. Most existing methods adopt a background prior to detect salient objects. However, they are prone to fail when foreground objects are similar to the background. In this work, we aim to discover robust foreground priors as a complement to background priors in order to improve performance. Given an input video, we consider motion and appearance cues separately to generate initial foreground/background seeds. Then, we learn a global object appearance model from the initial seeds and remove unreliable seeds according to their foreground likelihood. Finally, the seeds serve as queries to rank all the superpixels in each frame and generate saliency maps. Experimental results on a challenging public dataset demonstrate the advantage of our algorithm over state-of-the-art algorithms.
AB - In this paper, we propose a novel algorithm for salient object detection in unconstrained videos. Even though various methods have been proposed for this task, video saliency detection remains challenging due to the difficulty of object discovery as well as the effective utilization of motion cues. Most existing methods adopt a background prior to detect salient objects. However, they are prone to fail when foreground objects are similar to the background. In this work, we aim to discover robust foreground priors as a complement to background priors in order to improve performance. Given an input video, we consider motion and appearance cues separately to generate initial foreground/background seeds. Then, we learn a global object appearance model from the initial seeds and remove unreliable seeds according to their foreground likelihood. Finally, the seeds serve as queries to rank all the superpixels in each frame and generate saliency maps. Experimental results on a challenging public dataset demonstrate the advantage of our algorithm over state-of-the-art algorithms.
KW - Appearance model
KW - Foreground seeds discovery
KW - Graph ranking
KW - Video saliency
UR - http://www.scopus.com/inward/record.url?scp=85047492451&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-77383-4_9
DO - 10.1007/978-3-319-77383-4_9
M3 - Conference contribution
AN - SCOPUS:85047492451
SN - 9783319773827
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 89
EP - 97
BT - Advances in Multimedia Information Processing – PCM 2017 - 18th Pacific-Rim Conference on Multimedia, Revised Selected Papers
A2 - Zeng, Bing
A2 - Li, Hongliang
A2 - Huang, Qingming
A2 - El Saddik, Abdulmotaleb
A2 - Jiang, Shuqiang
A2 - Fan, Xiaopeng
PB - Springer Verlag
T2 - 18th Pacific-Rim Conference on Multimedia, PCM 2017
Y2 - 28 September 2017 through 29 September 2017
ER -