TY - GEN
T1 - NeRFE
T2 - International Conference on Guidance, Navigation and Control, ICGNC 2022
AU - Zhang, Bo
AU - Han, Yuqi
AU - Suo, Jinli
AU - Dai, Qionghai
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2023
Y1 - 2023
N2 - Full-view perception of the surroundings and free view synthesis are of great importance for navigation. Although multiple conventional cameras can provide multi-view stereo of the external environment, their limited sensitivity and dynamic range prohibit their application in scenarios with extreme lighting conditions, such as at night or in a tunnel. As a bio-inspired device, the event camera intrinsically enjoys ultra-fast response, low latency and high dynamic range, but free view synthesis is nontrivial for its sparse and noisy events. To address this issue, we present a framework for freely synthesizing novel views from event sequences. Specifically, we introduce a deep network that represents the neural radiance field of the scene’s event signals for 3D structure encoding. To leverage the sparsity of event data, we introduce an edge-based loss function into the optimization process. The learned deep neural network can render novel views that provide additional structures and details beyond the raw data in the original views. We envision that this newly retrieved information can be exploited for further downstream tasks such as object detection, tracking and mapping.
AB - Full-view perception of the surroundings and free view synthesis are of great importance for navigation. Although multiple conventional cameras can provide multi-view stereo of the external environment, their limited sensitivity and dynamic range prohibit their application in scenarios with extreme lighting conditions, such as at night or in a tunnel. As a bio-inspired device, the event camera intrinsically enjoys ultra-fast response, low latency and high dynamic range, but free view synthesis is nontrivial for its sparse and noisy events. To address this issue, we present a framework for freely synthesizing novel views from event sequences. Specifically, we introduce a deep network that represents the neural radiance field of the scene’s event signals for 3D structure encoding. To leverage the sparsity of event data, we introduce an edge-based loss function into the optimization process. The learned deep neural network can render novel views that provide additional structures and details beyond the raw data in the original views. We envision that this newly retrieved information can be exploited for further downstream tasks such as object detection, tracking and mapping.
KW - Event camera
KW - Navigation
KW - Neural radiance field
UR - http://www.scopus.com/inward/record.url?scp=85151162222&partnerID=8YFLogxK
U2 - 10.1007/978-981-19-6613-2_653
DO - 10.1007/978-981-19-6613-2_653
M3 - Conference contribution
AN - SCOPUS:85151162222
SN - 9789811966125
T3 - Lecture Notes in Electrical Engineering
SP - 6776
EP - 6784
BT - Advances in Guidance, Navigation and Control - Proceedings of 2022 International Conference on Guidance, Navigation and Control
A2 - Yan, Liang
A2 - Duan, Haibin
A2 - Deng, Yimin
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 5 August 2022 through 7 August 2022
ER -