Neural Radiance Fields 3D Reconstruction Based on Ray Guidance and Surface Optimization
26 Pages · Posted: 22 Dec 2024
Abstract
Neural Radiance Fields (NeRF) have emerged as a powerful implicit scene representation method, demonstrating remarkable performance across a wide range of domains. NeRF can generate photorealistic renderings and synthesize novel views. However, its rendering process requires sampling a large number of spatial points, of which only a small portion significantly contributes to rendering quality. Processing these non-contributing points demands substantial computation, which reduces NeRF's overall efficiency and lengthens training. In this work, we propose a novel approach to 3D reconstruction for NeRF that combines surface optimization with ray guidance. First, to reduce the number of sampled points, we use sparse voxels to guide the rays during rendering, enabling the model to skip sampling points that do not intersect any object. Second, we use depth information to restrict the sampling locations to within the object, improving reconstruction quality. Experimental results confirm that the proposed strategy considerably increases training speed while maintaining performance comparable to the original method. Furthermore, the added depth constraints improve rendering quality by reducing reconstruction artifacts such as ghosting and blurring.
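To make the two ideas in the abstract concrete, the following is a minimal, illustrative sketch of voxel-guided and depth-constrained sampling along a single ray. It is not the authors' implementation: the occupancy grid, depth estimate, and all function names and parameters here are hypothetical assumptions introduced only for illustration.

```python
import numpy as np

def voxel_guided_samples(origin, direction, occupancy, grid_min, voxel_size,
                         num_samples=64, near=0.0, far=4.0):
    """Sample points along a ray, keeping only those inside occupied sparse voxels.

    occupancy: boolean array (Nx, Ny, Nz), True where the (assumed precomputed)
    sparse voxel grid marks space that may contain the object.
    """
    t = np.linspace(near, far, num_samples)                       # candidate depths
    points = origin[None, :] + t[:, None] * direction[None, :]    # (N, 3) positions
    idx = np.floor((points - grid_min) / voxel_size).astype(int)  # voxel indices
    in_bounds = np.all((idx >= 0) & (idx < np.array(occupancy.shape)), axis=1)
    keep = np.zeros(num_samples, dtype=bool)
    ib = np.where(in_bounds)[0]
    keep[ib] = occupancy[idx[ib, 0], idx[ib, 1], idx[ib, 2]]      # skip empty space
    return points[keep], t[keep]

def depth_constrained_samples(origin, direction, depth, num_samples=32, margin=0.1):
    """Concentrate samples in a narrow band around an estimated surface depth."""
    t = np.linspace(depth - margin, depth + margin, num_samples)
    return origin[None, :] + t[:, None] * direction[None, :], t

# Toy usage: occupied voxels approximate a unit sphere centered at the origin.
res, size = 64, 4.0 / 64
grid_min = np.array([-2.0, -2.0, -2.0])
centers = grid_min + (np.indices((res, res, res)).transpose(1, 2, 3, 0) + 0.5) * size
occ = np.linalg.norm(centers, axis=-1) < 1.0
pts, ts = voxel_guided_samples(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                               occ, grid_min, size, num_samples=128, far=6.0)
```

In this sketch, samples falling in unoccupied voxels are simply discarded, and a separate band of samples is placed around an estimated per-ray depth; how the paper integrates these steps into the NeRF training and rendering pipeline is described in the full text.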
Keywords: Neural radiance fields, 3D reconstruction, volume rendering, depth optimization.