The results here show 3DMM renderings produced from the 3DMM parameters predicted by our networks for single-face and multi-face images. During retargeting, we transfer only the expression and rotation parameters to the 3D characters, as shown in the teaser image at the top of this page. In the paper, we demonstrate that our approach disentangles these parameters from the remaining 3DMM parameters, yielding accurate retargeted facial motion on 3D characters.
Acknowledgements

We would like to thank Pai Zhang, Muscle Wu, Xiang Yan, Zeyu Chen, and other members of the Visual Intelligence Group at Microsoft Research AI for their help with the project. We would also like to thank Linda Shapiro, Alex Colburn, and Barbara Mones from the UW Graphics and Imaging Laboratory for their valuable discussions. Thanks to https://github.com/experiencor/keras-yolo2 for sharing their code and to https://richzhang.github.io/colorization/ for this webpage template.