TY - GEN
T1 - A deep learning based distributed compressive video sensing reconstruction algorithm for small reconnaissance UAV
AU - Zhen, Chen
AU - De-Rong, Chen
AU - Jiu-Lu, Gong
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11/27
Y1 - 2020/11/27
N2 - Distributed compressive video sensing (DCVS) is an effective method for small reconnaissance Unmanned Aerial Vehicles (UAVs) to obtain high-quality video on the battlefield. However, existing deep-learning-based reconstruction algorithms fail to make full use of the temporal correlation of videos, resulting in low reconstruction quality. In this paper, a measurement information compensation network called MCINet is used to compensate for the information in non-key-frame measurements with the help of key-frame measurements before initial recovery. At the joint reconstruction stage, a neural network called ECLDNet, which combines an autoencoder with a recurrent neural network (RNN) to make full use of high-quality key frames, is adopted: the encoder extracts temporal-spatial features from key and non-key frames, the RNN uses key-frame features to compensate for missing details in non-key-frame features, and the decoder reconstructs images symmetrically with the encoder. Experimental results indicate that our model achieves an additional gain of more than 1.5 dB in peak signal-to-noise ratio (PSNR) without any changes at the encoding end. The reconstruction runtime of our model increases slightly but remains much shorter than that of iterative reconstruction algorithms, owing to the non-iterative nature of deep learning.
AB - Distributed compressive video sensing (DCVS) is an effective method for small reconnaissance Unmanned Aerial Vehicles (UAVs) to obtain high-quality video on the battlefield. However, existing deep-learning-based reconstruction algorithms fail to make full use of the temporal correlation of videos, resulting in low reconstruction quality. In this paper, a measurement information compensation network called MCINet is used to compensate for the information in non-key-frame measurements with the help of key-frame measurements before initial recovery. At the joint reconstruction stage, a neural network called ECLDNet, which combines an autoencoder with a recurrent neural network (RNN) to make full use of high-quality key frames, is adopted: the encoder extracts temporal-spatial features from key and non-key frames, the RNN uses key-frame features to compensate for missing details in non-key-frame features, and the decoder reconstructs images symmetrically with the encoder. Experimental results indicate that our model achieves an additional gain of more than 1.5 dB in peak signal-to-noise ratio (PSNR) without any changes at the encoding end. The reconstruction runtime of our model increases slightly but remains much shorter than that of iterative reconstruction algorithms, owing to the non-iterative nature of deep learning.
KW - Deep learning
KW - Distributed compressive video sensing
KW - Small reconnaissance UAV
KW - Temporal correlation
UR - http://www.scopus.com/inward/record.url?scp=85098990336&partnerID=8YFLogxK
U2 - 10.1109/ICUS50048.2020.9274972
DO - 10.1109/ICUS50048.2020.9274972
M3 - Conference contribution
AN - SCOPUS:85098990336
T3 - Proceedings of 2020 3rd International Conference on Unmanned Systems, ICUS 2020
SP - 668
EP - 672
BT - Proceedings of 2020 3rd International Conference on Unmanned Systems, ICUS 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd International Conference on Unmanned Systems, ICUS 2020
Y2 - 27 November 2020 through 28 November 2020
ER -