Text and Image Guided 3D Avatar Generation and Manipulation

Boğaziçi University
Teaser: before/after sliders showing manipulations of a generated avatar: Original, Child, Old, Happy, Surprised, Beyonce.

Abstract

The manipulation of latent space has recently become an interesting topic in the field of generative models. Recent research shows that latent directions can be used to manipulate images towards certain attributes. However, controlling the generation process of 3D generative models remains a challenge. In this work, we propose a novel 3D manipulation method that can manipulate both the shape and texture of the model using text- or image-based prompts such as 'a young face' or 'a surprised face'. We leverage the power of the Contrastive Language-Image Pre-training (CLIP) model and a pre-trained 3D GAN model designed to generate face avatars, and create a fully differentiable rendering pipeline to manipulate meshes. More specifically, our method takes an input latent code and modifies it such that the target attribute specified by a text or image prompt is present or enhanced, while leaving other attributes largely unaffected. Our method requires only 5 minutes per manipulation, and we demonstrate the effectiveness of our approach with extensive results and comparisons.


Our Framework

An overview of our framework (using the text prompt 'happy human' as an example). The manipulation direction Δc corresponding to the text prompt is optimized by minimizing the CLIP-based loss L_CLIP, the identity loss L_ID, and the L2 loss L_L2.
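The combined objective can be sketched in a few lines. The snippet below is a minimal NumPy illustration on mock embedding vectors; the loss weights are hypothetical, and in the actual pipeline the rendered-image and identity embeddings are produced by CLIP and a face-recognition network, with gradients flowing back to Δc through the differentiable renderer.

```python
import numpy as np

def cosine_distance(a, b):
    # CLIP-style loss: 1 - cosine similarity between two embeddings.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def total_loss(e_render, e_target, e_id_orig, e_id_edit, delta_c,
               w_clip=1.0, w_id=0.1, w_l2=0.01):
    # L = w_clip * L_CLIP + w_id * L_ID + w_l2 * ||Δc||^2
    l_clip = cosine_distance(e_render, e_target)   # rendered image vs. text/image prompt
    l_id = cosine_distance(e_id_orig, e_id_edit)   # identity before vs. after the edit
    l_l2 = np.sum(delta_c ** 2)                    # keep the latent offset small
    return w_clip * l_clip + w_id * l_id + w_l2 * l_l2
```

When the rendered embedding already matches the target, the identity is unchanged, and Δc is zero, the loss vanishes; each term penalizes one kind of deviation from that ideal.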

  • Fast Manipulation: We propose a fast text-guided 3D generation and manipulation method that performs complex shape and texture manipulations in 5 minutes.
  • Differentiable Rendering: We create a fully differentiable pipeline to render images of generated 3D shapes and compute a CLIP loss between the embedding of the rendered image and the input text/image. We augment the rendered images with multiple views to enforce view consistency.
  • Prompt Engineering: We perform prompt engineering to augment the input text that describes the target manipulation. We augment the original text prompt t by embedding it in sentence templates such as 'a rendering of a ...' or 'a face of a ...'.
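The prompt-augmentation step can be illustrated with a small helper. The template list below is an assumption based on the two examples given above (the third template is added for illustration); in the real pipeline each augmented sentence is encoded with CLIP's text encoder and the mean embedding serves as the manipulation target.

```python
# Hypothetical sketch of the prompt-engineering step described above.
TEMPLATES = [
    "a rendering of a {}",
    "a face of a {}",
    "a photo of a {}",  # extra template, assumed for illustration
]

def augment_prompt(t):
    """Embed the target text t in several sentence templates."""
    return [tpl.format(t) for tpl in TEMPLATES]
```

Averaging the CLIP embeddings of these augmented sentences makes the target direction less sensitive to the exact wording of any single template.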

A Variety of Manipulations

Manipulation results of our method with various inputs and text prompts on two different 3D faces: 'Old', 'Child', 'Big Eyes', 'Thick Eyebrows', 'Makeup'. The leftmost column shows the original outputs, the adjacent columns show the manipulation results, and the target text prompt is shown above each column.
Manipulation results of our method with various inputs and text prompts: 'Asian', 'Indian', 'Woman', 'Man'. The top row shows the original outputs, the bottom row shows the manipulation results, and the target text prompt is shown above each column.

Image-Guided Manipulation

Image-based manipulations with the target image on the left and manipulations with different strengths for identity loss on the right.
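For image-guided manipulation, the CLIP text embedding is simply replaced by the CLIP image embedding of the target image, and the identity-loss weight controls how strongly the edit is pulled back toward the original identity. A minimal sketch on mock embeddings (the weight values are hypothetical):

```python
import numpy as np

def cos_dist(a, b):
    # 1 - cosine similarity between two embeddings.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def image_guided_loss(e_render, e_target_img, e_id_orig, e_id_edit, w_id):
    # Same objective as the text-guided case, but the CLIP target is the
    # embedding of a reference image rather than of a text prompt.
    # Larger w_id penalizes identity drift more, yielding weaker but
    # more identity-preserving edits (the "strengths" in the figure).
    return cos_dist(e_render, e_target_img) + w_id * cos_dist(e_id_orig, e_id_edit)
```

Sweeping w_id over a few values reproduces the strength variation shown on the right of the figure: high weights keep the face close to the original, low weights follow the target image more aggressively.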

Comparisons


Expression Manipulation Comparison with TBGAN

The results of our method with the text prompts 'Happy', 'Sad', 'Surprised', 'Angry', 'Afraid', and 'Disgusted'. The top row shows the original outputs, the second row shows the TBGAN-conditioned expressions, the third row shows our manipulation results, and the target text prompt is shown above each column.


Comparison with PCA Baseline

Comparison between PCA-based manipulations and text-driven manipulations using our method. The top row shows the PCA-based results, the bottom row shows the results with our method.
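The PCA baseline can be reproduced in a few lines: sample a batch of latent codes, center them, and take the top principal components as candidate edit directions. This is a generic sketch, not the exact baseline implementation; the number of samples and components is arbitrary.

```python
import numpy as np

def pca_directions(latents, n_dirs=5):
    """Return the top principal directions of a set of latent codes.

    latents: (N, d) array of sampled GAN latent codes.
    """
    centered = latents - latents.mean(axis=0, keepdims=True)
    # SVD of the centered matrix: the rows of vt are the (unit-norm)
    # principal directions, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_dirs]

# A baseline edit then moves along one direction: z_edit = z + alpha * direction.
# Unlike our optimized Δc, these directions are not tied to any text prompt.
```

Because PCA directions are unsupervised, which attribute each one controls must be found by inspection, whereas our method targets the attribute named in the prompt directly.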


BibTeX

@misc{canfes2022latent3d,
    title={Text and Image Guided 3D Avatar Generation and Manipulation},
    author={Canfes, Zehranaz and Atasoy, M Furkan and Dirik, Alara and Yanardag, Pinar},
    year={2022}
}

Acknowledgments

This publication has been produced benefiting from the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No: 118c321). We also acknowledge the support of NVIDIA Corporation through the donation of the TITAN RTX GPU and GCP research credits from Google.