Head360: Learning a Parametric 3D
Full-Head for Free-View Synthesis in 360°

Yuxiao He1, Yiyu Zhuang1, Yanwen Wang1, Yao Yao1, Siyu Zhu2, Xiaoyu Li3, Qi Zhang3, Xun Cao1, Hao Zhu+1

1 State Key Laboratory for Novel Software Technology, Nanjing University, China
2 Fudan University, Shanghai, China
3 Tencent AI Lab, Shenzhen, China

Supplementary video

Abstract

We construct a dataset of artist-designed, high-fidelity human heads, and develop a novel framework to learn a 360° parametric head model from this synthetic Synhead360 dataset.

Creating a 360° parametric model of a human head is a very challenging task. While recent advances have demonstrated the efficacy of leveraging synthetic data for building such parametric head models, their performance remains inadequate in crucial areas such as expression-driven animation, hairstyle editing, and text-based modification.

In this paper, we build a dataset of artist-designed, high-fidelity human heads and propose to create a novel 360° renderable parametric head model from it. Our scheme decouples facial motion/shape from facial appearance, representing them with a classic parametric 3D mesh model and an attached neural texture, respectively.

We further propose a training method that decomposes hairstyle and facial appearance, allowing free swapping of hairstyles. A novel inversion fitting method based on a single input image is also presented, offering high generalization and fidelity.

To the best of our knowledge, our model is the first parametric 3D full-head model that achieves 360° free-view synthesis, image-based fitting, appearance editing, and animation within a single model. Experiments show that facial motions and appearances are well disentangled in the parametric space, leading to SOTA performance in rendering and animation quality.

Synhead360 Dataset

We create a high-quality, artist-designed 3D head dataset containing 100 different subjects with various hairstyles. The male-to-female ratio is 1:1, and ages are fairly evenly distributed between 16 and 70.

The Synhead360 dataset encompasses 374,400 calibrated high-resolution images and 5,200 mesh models, covering 52 expressions for each identity. The 3D heads are rendered by 72 head-centric virtual cameras spanning 3 pitch angles and 24 horizontal rotation angles, as sketched below.
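The counts are consistent: 100 identities × 52 expressions give 5,200 meshes, and 72 views per mesh give 374,400 images. The following minimal Python sketch reproduces such a head-centric camera rig; the 15° yaw spacing follows from 360°/24, but the concrete pitch values and orbit radius are illustrative assumptions rather than values taken from the dataset.

import numpy as np

def look_at(cam_pos, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    # World-to-camera pose for a camera at cam_pos looking at target
    # (OpenGL convention: the camera looks down its -z axis).
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    R = np.stack([right, true_up, -forward])  # rows = camera axes in world coords
    t = -R @ cam_pos
    return R, t

# 24 yaw angles x 3 pitch angles = 72 head-centric cameras.
yaws = np.deg2rad(np.arange(24) * 15.0)   # 15 deg spacing = 360 / 24
pitches = np.deg2rad([-20.0, 0.0, 20.0])  # assumed pitch values
radius = 1.0                              # assumed orbit radius

cameras = [
    look_at(radius * np.array([np.cos(p) * np.sin(y),
                               np.sin(p),
                               np.cos(p) * np.cos(y)]))
    for p in pitches for y in yaws
]
assert len(cameras) == 72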

Data Request

To download the dataset, please complete the License Agreement and send it to nju3dv@nju.edu.cn with the email subject [Synhead100 Dataset Request]. Once licensed, the download link will be sent. This dataset is for non-commercial research use only; requests from commercial companies will not be licensed.

Method

Our model is represented by a neural radiance field built on hex-planes, conditioned on a generative neural texture and a parametric 3D mesh model. In this way, the facial appearance, shape, and motion are parameterized as a texture code t, a shape code s, and blendshape parameters b, respectively. RefineNet, a conditional GAN, is introduced to further improve the details of the generated faces. A minimal sketch of this conditioning structure follows.
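The PyTorch sketch below shows how t, s, and b enter the pipeline and where RefineNet sits. Every sub-network is a tiny placeholder: the stand-in MLP replaces the actual hex-plane radiance field and volume renderer, camera conditioning is omitted for brevity, and all dimensions except the 52 blendshape parameters (mirroring the dataset's 52 expressions) are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Head360Sketch(nn.Module):
    # Minimal sketch of the conditioning structure; all sub-networks and
    # dimensions are illustrative placeholders, not the authors' code.

    def __init__(self, dim_t=256, dim_s=100, dim_b=52, n_verts=1024, res=128):
        super().__init__()
        self.res = res
        # Texture code t -> generative neural texture attached to the mesh.
        self.texture_generator = nn.Sequential(
            nn.Linear(dim_t, 512), nn.ReLU(), nn.Linear(512, 4096))
        # Linear 3DMM-style mesh: mean shape + shape basis + blendshapes.
        self.mean_verts = nn.Parameter(torch.zeros(n_verts * 3))
        self.shape_basis = nn.Parameter(torch.randn(dim_s, n_verts * 3) * 1e-3)
        self.blendshapes = nn.Parameter(torch.randn(dim_b, n_verts * 3) * 1e-3)
        # Stand-in for the hex-plane radiance field + volume renderer
        # (camera conditioning omitted for brevity).
        self.renderer = nn.Sequential(
            nn.Linear(4096 + n_verts * 3, 256), nn.ReLU(),
            nn.Linear(256, 3 * res * res))
        # Stand-in for RefineNet (a conditional GAN in the paper).
        self.refine_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, t, s, b):
        texture = self.texture_generator(t)                # appearance
        verts = self.mean_verts + s @ self.shape_basis + b @ self.blendshapes
        coarse = self.renderer(torch.cat([texture, verts], dim=-1))
        coarse = coarse.view(-1, 3, self.res, self.res)    # coarse render
        return self.refine_net(coarse)                     # detail refinement

model = Head360Sketch()
img = model(torch.randn(2, 256), torch.randn(2, 100), torch.randn(2, 52))
print(img.shape)  # torch.Size([2, 3, 128, 128])

In this structure, driving a fixed identity (fixed t and s) with different b vectors corresponds to expression-driven animation, while swapping t with s and b held fixed corresponds to appearance editing.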


BibTeX

@inproceedings{he2024head360,
  title={Head360: Learning a Parametric 3D Full-Head for Free-View Synthesis in 360 degrees},
  author={He, Yuxiao and Zhuang, Yiyu and Wang, Yanwen and Yao, Yao and Zhu, Siyu and Li, Xiaoyu and Zhang, Qi and Cao, Xun and Zhu, Hao},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024},
}