TY - JOUR
T1 - Efficient ray sampling for radiance fields reconstruction
AU - Sun, Shilei
AU - Liu, Ming
AU - Fan, Zhongyi
AU - Jiao, Qingliang
AU - Liu, Yuxue
AU - Dong, Liquan
AU - Kong, Lingqin
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2024/2
Y1 - 2024/2
N2 - Accelerating the training of neural radiance fields holds substantial practical value. The ray sampling strategy profoundly influences network convergence, so more efficient ray sampling can directly improve the training efficiency of existing NeRF models. We propose a novel ray sampling approach for neural radiance fields that improves training efficiency while retaining photorealistic rendering results. First, we analyze the relationship between the pixel-loss distribution of sampled rays and rendering quality, revealing redundancy in the original NeRF's uniform ray sampling. Guided by this finding, we develop a sampling method that leverages pixel regions and depth boundaries. Our main idea is to sample fewer rays per training view while making each ray more informative for scene fitting. Sampling probability increases in pixel areas exhibiting significant color and depth variation, greatly reducing wasted rays from other regions without sacrificing precision. This method not only accelerates network convergence but also captures the spatial geometry of a scene more accurately. Rendering outputs are enhanced, especially in texture-complex regions. Experiments demonstrate that our method significantly outperforms state-of-the-art techniques on public benchmark datasets.
AB - Accelerating the training of neural radiance fields holds substantial practical value. The ray sampling strategy profoundly influences network convergence, so more efficient ray sampling can directly improve the training efficiency of existing NeRF models. We propose a novel ray sampling approach for neural radiance fields that improves training efficiency while retaining photorealistic rendering results. First, we analyze the relationship between the pixel-loss distribution of sampled rays and rendering quality, revealing redundancy in the original NeRF's uniform ray sampling. Guided by this finding, we develop a sampling method that leverages pixel regions and depth boundaries. Our main idea is to sample fewer rays per training view while making each ray more informative for scene fitting. Sampling probability increases in pixel areas exhibiting significant color and depth variation, greatly reducing wasted rays from other regions without sacrificing precision. This method not only accelerates network convergence but also captures the spatial geometry of a scene more accurately. Rendering outputs are enhanced, especially in texture-complex regions. Experiments demonstrate that our method significantly outperforms state-of-the-art techniques on public benchmark datasets.
KW - Efficient ray sampling
KW - Neural radiance field
KW - Training efficiency
UR - http://www.scopus.com/inward/record.url?scp=85181726739&partnerID=8YFLogxK
U2 - 10.1016/j.cag.2023.11.005
DO - 10.1016/j.cag.2023.11.005
M3 - Article
AN - SCOPUS:85181726739
SN - 0097-8493
VL - 118
SP - 48
EP - 59
JO - Computers and Graphics (Pergamon)
JF - Computers and Graphics (Pergamon)
ER -