Abstract
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering. Drawing inspiration from voxel-based representations with levels of detail (LoD), we introduce a multi-scale tri-plane scene representation that captures the LoD of both the signed distance function (SDF) and the spatial radiance. Our representation aggregates spatial features from a multi-convolved featurization within a conical frustum along each ray and optimizes the LoD feature volume through differentiable rendering. Additionally, we propose an error-guided sampling strategy to guide SDF growth during optimization. Both qualitative and quantitative evaluations demonstrate that our method achieves superior surface reconstruction and photorealistic view synthesis compared to state-of-the-art approaches.
Overview
- We present a multi-scale tri-plane position encoding that optimizes features at multiple resolutions to effectively capture different levels of detail (a minimal sketch follows this list);
- We design a multi-convolved featurization within a conical frustum that approximates cone sampling along a ray, enabling anti-aliased recovery of finer 3D geometric details (see the second sketch below);
- We develop a refinement strategy based on error-guided sampling that facilitates SDF growth for thin surfaces (see the third sketch below).
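To make the multi-scale tri-plane encoding concrete, here is a minimal PyTorch sketch of one plausible implementation. All names and hyper-parameters (`TriPlaneEncoder`, `n_levels`, `base_res`, `feat_dim`) are illustrative assumptions, not the paper's released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneEncoder(nn.Module):
    """Hypothetical multi-scale tri-plane position encoding.

    Each level stores three learnable feature planes (xy, xz, yz); a 3D
    point is encoded by bilinearly sampling each plane and concatenating
    the per-level features, giving a coarse-to-fine LoD encoding.
    """
    def __init__(self, n_levels=3, base_res=64, feat_dim=8):
        super().__init__()
        self.planes = nn.ParameterList()
        for lvl in range(n_levels):
            res = base_res * (2 ** lvl)  # resolution doubles per level
            # Three planes per level, stored as one tensor: (3, C, R, R).
            self.planes.append(nn.Parameter(0.1 * torch.randn(3, feat_dim, res, res)))

    def forward(self, x):
        """x: (P, 3) points in [-1, 1]^3 -> (P, n_levels * 3 * feat_dim)."""
        feats = []
        # Project each point onto the xy, xz, and yz planes: (3, P, 2).
        coords = torch.stack([x[:, [0, 1]], x[:, [0, 2]], x[:, [1, 2]]], dim=0)
        grid = coords.unsqueeze(1)  # (3, 1, P, 2), as grid_sample expects
        for planes in self.planes:
            # Bilinear lookup on all three planes at once: (3, C, 1, P).
            f = F.grid_sample(planes, grid, mode='bilinear', align_corners=True)
            feats.append(f.squeeze(2).permute(2, 0, 1).reshape(x.shape[0], -1))
        return torch.cat(feats, dim=-1)
```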
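For the conical-frustum featurization, the sketch below is a simplified stand-in: instead of pre-convolving the plane features as the multi-convolved featurization does, it Monte-Carlo averages point queries inside the frustum's footprint. The function name, `cone_radius`, and `n_offsets` are assumptions, and `encoder` refers to the hypothetical `TriPlaneEncoder` above:

```python
import torch

def frustum_features(encoder, origins, dirs, t, cone_radius=0.01, n_offsets=4):
    """Approximate cone sampling by averaging encoder features over
    jittered points inside the conical frustum at distance t.

    origins, dirs: (R, 3) ray origins and unit directions; t: (R,) depths.
    Assumes points are normalized to the encoder's [-1, 1]^3 domain.
    """
    centers = origins + t[..., None] * dirs      # (R, 3) sample centers
    radius = cone_radius * t[..., None]          # footprint grows with depth
    offsets = torch.randn(n_offsets, *centers.shape, device=centers.device)
    pts = centers.unsqueeze(0) + radius.unsqueeze(0) * offsets  # (K, R, 3)
    feats = encoder(pts.reshape(-1, 3))          # query all jittered points
    # Average over the K jittered queries to approximate the cone integral.
    return feats.reshape(n_offsets, centers.shape[0], -1).mean(dim=0)
```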
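Finally, one simple way to realize error-guided sampling is to draw training rays with probability proportional to the current per-pixel rendering error, so optimization concentrates on regions such as thin structures where the SDF has not yet grown to fit the images. This is a hedged sketch of that idea, not the paper's exact refinement strategy:

```python
import torch

def error_guided_ray_indices(error_map, n_rays):
    """Sample flat pixel indices with probability proportional to error.

    error_map: (H, W) non-negative per-pixel rendering error.
    Returns n_rays indices into the flattened image.
    """
    probs = error_map.flatten()
    probs = probs / probs.sum().clamp_min(1e-8)  # normalize to a distribution
    return torch.multinomial(probs, n_rays, replacement=True)

# Example usage (hypothetical tensors): sample 1024 rays where the
# current render disagrees most with the ground-truth image.
# idx = error_guided_ray_indices((pred - gt).abs().mean(dim=-1), 1024)
```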
Method
Comparison
Novel View Synthesis
Citation
Acknowledgements
The website template was kindly provided by Michaël Gharbi.
Special thanks to Master Yayuan Wang for her support.