Manipulating the latent space of generative models has recently attracted considerable interest. Recent research shows that latent directions can be used to manipulate images towards certain attributes. However, controlling the generation process of 3D generative models remains a challenge. In this work, we propose a novel 3D manipulation method that can manipulate both the shape and texture of a 3D model using text- or image-based prompts such as 'a young face' or 'a surprised face'. We leverage the power of the Contrastive Language-Image Pre-training (CLIP) model and a pre-trained 3D GAN designed to generate face avatars, and build a fully differentiable rendering pipeline to manipulate meshes. More specifically, our method takes an input latent code and modifies it such that the target attribute specified by the text or image prompt is present or enhanced, while other attributes remain largely unaffected. Our method requires only 5 minutes per manipulation, and we demonstrate the effectiveness of our approach with extensive results and comparisons.
An overview of our framework (using the text prompt ‘happy human’ as an example). The manipulation direction Δc corresponding to the text prompt is optimized by minimizing the CLIP-based loss L_CLIP, the identity loss L_ID, and the L2 loss L_L2.
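The optimization described above can be sketched in a few lines. This is a minimal NumPy illustration, not the actual pipeline: the random linear map `W` stands in for the differentiable renderer followed by the CLIP image encoder, `t` stands in for the CLIP text embedding of the prompt, the identity loss L_ID is omitted, and a finite-difference gradient replaces backpropagation. All names and weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins (assumptions): W plays the role of render + CLIP image encoder,
# c is the input latent code, t the CLIP text embedding of the prompt.
DIM_LATENT, DIM_EMB = 16, 32
W = rng.normal(size=(DIM_EMB, DIM_LATENT))
c = rng.normal(size=DIM_LATENT)
t = rng.normal(size=DIM_EMB)
t /= np.linalg.norm(t)

LAMBDA_L2 = 0.1  # illustrative weight of the L2 regularizer on delta c

def clip_loss(delta):
    """Cosine distance between the 'rendered' embedding and the prompt."""
    e = W @ (c + delta)
    return 1.0 - e @ t / np.linalg.norm(e)

def total_loss(delta):
    # L = L_CLIP + lambda * ||delta||^2  (identity loss omitted in this sketch)
    return clip_loss(delta) + LAMBDA_L2 * delta @ delta

def grad(delta, eps=1e-5):
    """Finite-difference gradient; the real pipeline backpropagates instead."""
    g = np.zeros_like(delta)
    for i in range(delta.size):
        d = np.zeros_like(delta)
        d[i] = eps
        g[i] = (total_loss(delta + d) - total_loss(delta - d)) / (2 * eps)
    return g

# Plain gradient descent on the manipulation direction delta c.
delta = np.zeros(DIM_LATENT)
initial = total_loss(delta)
for _ in range(200):
    delta -= 0.05 * grad(delta)
final = total_loss(delta)
```

The L2 term keeps Δc small so that attributes unrelated to the prompt stay largely unchanged; in the full method the identity loss serves the same purpose for facial identity.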
The results of our method with the text prompts ‘Happy’, ‘Sad’, ‘Surprised’, ‘Angry’, ‘Afraid’, and ‘Disgusted’. The top row shows the original outputs, the second row shows the TBGAN-conditioned expressions, and the third row shows our manipulation results; the target text prompt is shown above each column.
Comparison between PCA-based manipulations and text-driven manipulations using our method. The top row shows the PCA-based results, and the bottom row shows the results of our method.
@misc{canfes2022latent3d,
title={Text and Image Guided 3D Avatar Generation and Manipulation},
author={Canfes, Zehranaz and Atasoy, M Furkan and Dirik, Alara and Yanardag, Pinar},
year={2022}
}
This publication has been produced benefiting from the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No: 118C321). We also acknowledge the support of NVIDIA Corporation through the donation of a TITAN RTX GPU, and GCP research credits from Google.