Creating a 360° parametric model of a human head is a very challenging task.
While recent advancements have demonstrated the efficacy of leveraging synthetic data for building such parametric head models,
their performance remains inadequate in crucial areas such as expression-driven animation,
hairstyle editing, and text-based modifications.
In this paper, we build a dataset of artist-designed, high-fidelity human heads (Synhead360)
and propose to create a novel 360°-renderable parametric head model from it.
Our scheme decouples facial motion/shape from facial appearance,
representing them by a classic parametric 3D mesh model and an attached neural texture, respectively.
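To make the decoupling concrete, the sketch below shows one way such a representation could be wired up in PyTorch: per-pixel UV coordinates rasterized from the posed parametric mesh index into a learnable neural texture, so geometry/motion and appearance live in separate components. This is a minimal illustration under our own assumptions (module names, texture resolution, decoder), not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTextureHead(nn.Module):
    # Illustrative sketch: appearance is stored in a learnable neural
    # texture, while shape and expression-driven motion are carried by the
    # parametric mesh that produces the UV map (assumed, not shown here).
    def __init__(self, tex_channels=16, tex_res=512):
        super().__init__()
        self.neural_texture = nn.Parameter(
            torch.randn(1, tex_channels, tex_res, tex_res) * 0.01
        )
        # Small decoder turning sampled texture features into RGB.
        self.decoder = nn.Sequential(
            nn.Conv2d(tex_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv):
        # uv: (B, H, W, 2) per-pixel UVs in [-1, 1], assumed to come from
        # rasterizing the posed parametric mesh under the target
        # expression and viewpoint.
        B = uv.shape[0]
        tex = self.neural_texture.expand(B, -1, -1, -1)
        feats = F.grid_sample(tex, uv, align_corners=False)
        return self.decoder(feats)  # (B, 3, H, W) rendered image
```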
We further propose a training method that decomposes hairstyle from facial appearance,
allowing free swapping of hairstyles.
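One plausible way to realize such a decomposition, purely illustrative since the renderer, hair segmenter, and latent-code interfaces below are assumptions rather than the paper's actual design, is to condition rendering on separate face and hair codes and train with a swap penalty that forces a hairstyle swap to leave the non-hair region unchanged:

```python
import torch

def hair_swap_loss(renderer, hair_mask_fn, img_a, img_b,
                   face_a, hair_a, face_b, hair_b):
    # Hypothetical interfaces: renderer(face_code, hair_code) -> image,
    # hair_mask_fn(image) -> soft mask that is 1 inside the hair region.
    swapped_a = renderer(face_a, hair_b)   # A's face with B's hairstyle
    swapped_b = renderer(face_b, hair_a)   # B's face with A's hairstyle
    mask_a = hair_mask_fn(img_a)
    mask_b = hair_mask_fn(img_b)
    # Outside the hair region, swapping hairstyles must not change the
    # facial appearance; this is what encourages the decomposition.
    face_keep = ((1 - mask_a) * (swapped_a - img_a)).abs().mean() + \
                ((1 - mask_b) * (swapped_b - img_b)).abs().mean()
    return face_keep
```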
A novel inversion fitting method is presented that operates on a single input image with high generalization and fidelity.
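A common analysis-by-synthesis recipe for such fitting, sketched below with assumed code dimensions and a hypothetical model.render interface, optimizes the latent codes by gradient descent against the single target photo:

```python
import torch

def fit_single_image(model, target, steps=500, lr=1e-2):
    # Assumed latent sizes; the real model's parametric space may differ.
    shape_code = torch.zeros(1, 100, requires_grad=True)
    appearance_code = torch.zeros(1, 256, requires_grad=True)
    opt = torch.optim.Adam([shape_code, appearance_code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = model.render(shape_code, appearance_code)
        # Photometric loss only; a practical fitter would likely add
        # landmark and perceptual terms for robustness.
        loss = (rendered - target).abs().mean()
        loss.backward()
        opt.step()
    return shape_code.detach(), appearance_code.detach()
```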
To the best of our knowledge, our model is the first parametric 3D full-head model that achieves 360° free-view synthesis,
image-based fitting, appearance editing, and animation within a single framework.
Experiments show that facial motions and appearances are well disentangled in the parametric space,
leading to state-of-the-art performance in rendering and animation quality.