TY - GEN
T1 - EDeRF
T2 - 17th Asian Conference on Computer Vision, ACCV 2024
AU - Liang, Zhaoxiang
AU - Guo, Wenjun
AU - Yang, Yi
AU - Liu, Tong
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - NeRF provides high reconstruction accuracy but is slow for dynamic scenes. Editable NeRF speeds up dynamic reconstruction by editing static scenes, reducing retraining, and has succeeded in autonomous driving simulation. However, the lack of depth cameras and the difficulty of obtaining precise vehicle poses make real-time dynamic road scene reconstruction challenging, particularly in swiftly and accurately reconstructing new vehicles entering the scene and their trajectories. We propose EDeRF, a method for real-time dynamic road scene reconstruction from fixed cameras, such as traffic surveillance, through collaboration of sub-NeRFs and cross-field editing. We decompose the scene space and select key areas to update new vehicles by sharing parameters and local training with sub-fields. These vehicles are then integrated into the complete scene and achieve dynamic motion by warping the sampling rays across different fields, where vehicles’ six degrees of freedom (6-DOF) are estimated based on inter-frame displacement and rigid body contact constraints. We have conducted physical experiments simulating traffic monitoring scenes. Results show that EDeRF outperforms comparative methods in efficiency and accuracy in reconstructing the appearance and movement of newly entered vehicles.
AB - NeRF provides high reconstruction accuracy but is slow for dynamic scenes. Editable NeRF speeds up dynamic reconstruction by editing static scenes, reducing retraining, and has succeeded in autonomous driving simulation. However, the lack of depth cameras and the difficulty of obtaining precise vehicle poses make real-time dynamic road scene reconstruction challenging, particularly in swiftly and accurately reconstructing new vehicles entering the scene and their trajectories. We propose EDeRF, a method for real-time dynamic road scene reconstruction from fixed cameras, such as traffic surveillance, through collaboration of sub-NeRFs and cross-field editing. We decompose the scene space and select key areas to update new vehicles by sharing parameters and local training with sub-fields. These vehicles are then integrated into the complete scene and achieve dynamic motion by warping the sampling rays across different fields, where vehicles’ six degrees of freedom (6-DOF) are estimated based on inter-frame displacement and rigid body contact constraints. We have conducted physical experiments simulating traffic monitoring scenes. Results show that EDeRF outperforms comparative methods in efficiency and accuracy in reconstructing the appearance and movement of newly entered vehicles.
KW - Editable Radiance Fields
KW - Intelligent Traffic Monitoring
KW - Real-time 3D Reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85213348030&partnerID=8YFLogxK
U2 - 10.1007/978-981-96-0972-7_4
DO - 10.1007/978-981-96-0972-7_4
M3 - Conference contribution
AN - SCOPUS:85213348030
SN - 9789819609710
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 56
EP - 73
BT - Computer Vision – ACCV 2024 - 17th Asian Conference on Computer Vision, Proceedings
A2 - Cho, Minsu
A2 - Laptev, Ivan
A2 - Tran, Du
A2 - Yao, Angela
A2 - Zha, Hongbin
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 December 2024 through 12 December 2024
ER -