Abstract

We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering. Drawing inspiration from voxel-based representations with levels of detail (LoD), we introduce a multi-scale tri-plane scene representation that captures the LoD of both the signed distance function (SDF) and the space radiance. Our representation aggregates spatial features from a multi-convolved featurization within a conical frustum along a ray and optimizes the LoD feature volume through differentiable rendering. Additionally, we propose an error-guided sampling strategy that guides the growth of the SDF during optimization. Both qualitative and quantitative evaluations demonstrate that our method achieves superior surface reconstruction and photorealistic view synthesis compared to state-of-the-art approaches.


Overview


Our method reveals the importance of addressing aliasing for achieving high-fidelity reconstruction:
  • We present a tri-plane position encoding that optimizes multi-scale features to effectively capture different levels of detail;
  • We design a multi-convolved featurization within a conical frustum to approximate cone sampling along a ray, enabling anti-aliased recovery of finer 3D geometric details;
  • We develop a refinement strategy with error-guided sampling to facilitate SDF growth for thin surfaces.
In experiments, our method outperforms state-of-the-art NeuS-based approaches in high-quality surface reconstruction and view synthesis, particularly for objects and scenes with high-frequency details and thin surfaces. Please refer to our paper for more implementation details.
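The multi-scale tri-plane encoding above can be sketched as follows: a 3D point is projected onto three orthogonal feature planes at each LoD level, the interpolated plane features are combined, and the per-level results are concatenated. This is a minimal numpy illustration only; the function names, the sum-then-concatenate combination, and the resolutions are assumptions, not the paper's implementation.

```python
import numpy as np

def bilerp(plane, u, v):
    """Bilinearly interpolate an (H, W, C) feature plane at continuous (u, v)."""
    H, W, _ = plane.shape
    u = np.clip(u, 0.0, W - 1 - 1e-6)
    v = np.clip(v, 0.0, H - 1 - 1e-6)
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[v0, u0]
            + du * (1 - dv) * plane[v0, u0 + 1]
            + (1 - du) * dv * plane[v0 + 1, u0]
            + du * dv * plane[v0 + 1, u0 + 1])

def triplane_lod_feature(planes_per_level, p):
    """Query a multi-scale tri-plane at a 3D point p in [0, 1]^3.

    planes_per_level: list of (plane_xy, plane_xz, plane_yz) tuples, one per
    LoD level, each of shape (H, W, C). Per level, the three projected
    features are summed; levels are concatenated (illustrative choices).
    """
    x, y, z = p
    feats = []
    for plane_xy, plane_xz, plane_yz in planes_per_level:
        H, W, _ = plane_xy.shape
        f = (bilerp(plane_xy, x * (W - 1), y * (H - 1))
             + bilerp(plane_xz, x * (W - 1), z * (H - 1))
             + bilerp(plane_yz, y * (W - 1), z * (H - 1)))
        feats.append(f)
    return np.concatenate(feats)

# Two LoD levels: a coarse 8x8 and a finer 16x16 tri-plane, 4 channels each.
rng = np.random.default_rng(0)
levels = [tuple(rng.standard_normal((r, r, 4)) for _ in range(3)) for r in (8, 16)]
feat = triplane_lod_feature(levels, np.array([0.3, 0.6, 0.9]))
print(feat.shape)  # (8,) = 4 channels x 2 levels
```

In a full pipeline, the concatenated feature would be decoded by small MLPs into an SDF value and radiance, with all plane features optimized through differentiable rendering.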

Method

Aggregation of the LoD feature, combining multi-convolved featurization with discrete cone sampling. We obtain the feature of any sample within the conical frustum by blending the features of its surrounding vertices. Additionally, accounting for the footprint of each sampled point, we introduce multi-convolved features obtained with Gaussian kernels to efficiently represent ray sampling within a cone. Combining both, we aggregate the LoD feature of any sample in a continuous manner.
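The Gaussian-kernel idea can be sketched as follows: feature planes are pre-convolved with Gaussians of increasing width, and a cone sample's feature is linearly blended between the two convolved copies whose widths bracket its projected footprint. The sigma schedule, the nearest-texel lookup, and all names here are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def gaussian_blur(plane, sigma):
    """Separable Gaussian blur of an (H, W, C) feature plane, reflect-padded."""
    if sigma == 0:
        return plane
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    p = np.pad(plane, ((r, r), (0, 0), (0, 0)), mode="reflect")
    out = sum(k[i] * p[i:i + plane.shape[0]] for i in range(2 * r + 1))
    p = np.pad(out, ((0, 0), (r, r), (0, 0)), mode="reflect")
    return sum(k[i] * p[:, i:i + plane.shape[1]] for i in range(2 * r + 1))

def multiconv_feature(plane, uv, footprint, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """Feature at texel `uv` for a cone sample whose projected footprint
    radius is `footprint` texels: blend the two pre-convolved planes whose
    Gaussian widths bracket the footprint (a mip-like interpolation)."""
    pyramid = [gaussian_blur(plane, s) for s in sigmas]
    s = float(np.clip(footprint, sigmas[0], sigmas[-1]))
    i = min(int(np.searchsorted(sigmas, s, side="right")) - 1, len(sigmas) - 2)
    w = (s - sigmas[i]) / (sigmas[i + 1] - sigmas[i])
    v, u = uv
    return (1 - w) * pyramid[i][v, u] + w * pyramid[i + 1][v, u]

rng = np.random.default_rng(1)
plane = rng.standard_normal((16, 16, 4))
near = multiconv_feature(plane, (8, 8), footprint=0.2)  # narrow footprint near the apex
far = multiconv_feature(plane, (8, 8), footprint=3.0)   # wide footprint farther along the ray
print(near.shape, far.shape)
```

Because the footprint grows with distance along the ray, far samples automatically draw from smoother (lower-frequency) feature content, which is the anti-aliasing effect the cone sampling is designed to provide.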

Comparison

We compare the visual quality of our reconstructed meshes against state-of-the-art baselines, demonstrating that our approach reconstructs superior detail.
NeuS
HF-NeuS
Ours

Novel View Synthesis

We showcase a comparison of our novel view synthesis with the ground truth.
Ours
Normal
Ground Truth

Citation

Acknowledgements

The website template was kindly provided by Michaël Gharbi.
Special thanks to Master Yayuan Wang for her support.