TY - GEN
T1 - Neural Radiance Field with Composite Loss Function Supervision Mechanism
AU - Gong, Zhuohao
AU - Wang, Xurong
AU - Hu, Wenxin
AU - Wang, Qianqian
AU - Shangguan, Zixuan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Optimizing the underlying continuous volumetric scene function using sparse input view collections is crucial for applications in modern industrial production and virtual reality technologies. However, existing technologies in this domain continue to exhibit significant shortcomings in specific areas. Therefore, this paper proposes a method that leverages neural radiance fields as a scene representation, employing an efficient and robust backend penalty loss algorithm to supervise model convergence. This approach achieves high-quality 3D reconstruction from images captured from surrounding views, surpassing existing methods that rely on explicit volumetric representations. Additionally, CL-NeRF incorporates a straightforward tracking and mapping system that adjusts based on the underlying point cloud representation of the neural radiance field. This method is independent of scene size and avoids issues related to sub-map capacity, making it suitable for reconstructing larger scenes. CL-NeRF offers several advantages over previous models, including faster rendering and higher-quality optimization.
KW - NeRF
KW - scene representation
KW - view synthesis
UR - http://www.scopus.com/inward/record.url?scp=85216538246&partnerID=8YFLogxK
U2 - 10.1109/SmartIoT62235.2024.00055
DO - 10.1109/SmartIoT62235.2024.00055
M3 - Conference contribution
AN - SCOPUS:85216538246
T3 - Proceedings - 2024 IEEE International Conference on Smart Internet of Things, SmartIoT 2024
SP - 317
EP - 324
BT - Proceedings - 2024 IEEE International Conference on Smart Internet of Things, SmartIoT 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th IEEE International Conference on Smart Internet of Things, SmartIoT 2024
Y2 - 14 November 2024 through 16 November 2024
ER -