FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations

Cemre Karakas, Alara Dirik, Eylul Yalcinkaya, Pinar Yanardag
Boğaziçi University

Abstract

Recent advances in generative adversarial networks have shown that it is possible to generate high-resolution and hyperrealistic images. However, the images produced by GANs are only as fair and representative as the datasets on which they are trained. In this paper, we propose a method for directly modifying a pre-trained StyleGAN2 model that can be used to generate a balanced set of images with respect to one (e.g., eyeglasses) or more attributes (e.g., gender and eyeglasses). Our method takes advantage of the style space of the StyleGAN2 model to perform disentangled control of the target attributes to be debiased. Our method does not require training additional models and directly debiases the GAN model, paving the way for its use in various downstream applications. Our experiments show that our method successfully debiases the GAN model within a few minutes without compromising the quality of the generated images. To promote fair generative models, we share the code and debiased models at https://github.com/catlab-team/fairstyle.


FairStyle Framework

An overview of the FairStyle architecture. z denotes a random vector drawn from a Gaussian distribution, and w denotes the latent vector produced by the mapping network of StyleGAN2. Given a target attribute at, si,j represents the style channel with layer index i and channel index j that controls the target attribute. We introduce fairstyle bias tensors into the GAN model and edit the corresponding style channel si,j for debiasing. The edited vectors are then fed into the generator to produce a new batch of images, from which we obtain updated classifier results for at. The fairstyle bias tensors are iteratively edited until the GAN model produces a balanced distribution with respect to the target attribute. The debiased GAN model can then be used for sampling or directly as a generative backbone in downstream applications.
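The iterative loop described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: `toy_generate` and `toy_classify` are hypothetical stand-ins for StyleGAN2 sampling and the attribute classifier, and a single scalar bias plays the role of the fairstyle bias tensor for one style channel. The loop nudges the channel bias until the classifier reports a balanced (50/50) attribute distribution.

```python
import numpy as np

def debias_channel(generate, classify, target_rate=0.5,
                   lr=2.0, batch_size=512, max_iters=50, tol=0.03):
    """Iteratively adjust a single style-channel bias until the
    classifier's positive rate for the target attribute matches
    target_rate. Sketch only: `generate` and `classify` stand in
    for StyleGAN2 sampling and an attribute classifier."""
    bias = 0.0
    rate = 0.0
    rng = np.random.default_rng(0)
    for _ in range(max_iters):
        samples = generate(bias, batch_size, rng)  # batch of "images"
        rate = classify(samples).mean()            # fraction with attribute
        if abs(rate - target_rate) < tol:
            break                                  # distribution is balanced
        bias += lr * (target_rate - rate)          # nudge the channel bias
    return bias, rate

# Toy stand-ins (assumptions, not the paper's models): the attribute is
# "present" when a latent score exceeds 0, and the pre-trained model is
# biased via a -1.0 offset, so only ~16% of samples start with it.
def toy_generate(bias, n, rng):
    return rng.normal(loc=bias - 1.0, scale=1.0, size=n)

def toy_classify(scores):
    return (scores > 0).astype(float)

bias, rate = debias_channel(toy_generate, toy_classify)
```

In the toy setup the loop drives the bias toward the value that cancels the built-in offset, after which roughly half the generated batch carries the attribute.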

Our main contributions are as follows:
  • We first propose a simple method that debiases the GAN model with respect to a single attribute, such as gender or eyeglasses.
  • We then extend our method to jointly debias multiple attributes, such as gender and eyeglasses.
  • To handle more complex attributes such as race, we propose a third method based on CLIP, where we debias StyleGAN2 with text-based prompts such as 'a black person' or 'an asian person'.
  • We perform extensive comparisons between our proposed method and other approaches to enforce fairness for a variety of attributes. We empirically show that our method is very effective in debiasing the GAN model to produce balanced datasets without compromising the quality of the generated images.
  • To promote fair generative models and encourage further research on this topic, we provide our source code and debiased StyleGAN2 models for various attributes at http://github.com/catlab-team/fairstyle.
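The joint-debiasing extension above can be illustrated with a small sketch. As with the single-attribute case, this is a toy under stated assumptions rather than the paper's code: `toy_generate` and `toy_classify` are hypothetical stand-ins, and we assume disentangled style channels, i.e. channel i mainly controls attribute i, so each attribute's marginal can be balanced by its own channel bias.

```python
import numpy as np

def debias_joint(generate, classify, n_attrs=2, target=0.5,
                 lr=2.0, batch=512, iters=60, tol=0.03):
    """Jointly adjust one style-channel bias per attribute until every
    attribute's marginal rate reaches `target`. Assumes disentangled
    channels (channel i mainly moves attribute i)."""
    biases = np.zeros(n_attrs)
    rates = np.zeros(n_attrs)
    rng = np.random.default_rng(1)
    for _ in range(iters):
        labels = classify(generate(biases, batch, rng))  # (batch, n_attrs)
        rates = labels.mean(axis=0)                      # per-attribute rate
        if np.all(np.abs(rates - target) < tol):
            break                                        # all marginals balanced
        biases += lr * (target - rates)                  # nudge each channel
    return biases, rates

# Toy disentangled stand-ins (assumptions): attribute i is "present"
# when latent score i exceeds 0; each attribute starts from a different
# built-in dataset bias (offsets below).
def toy_generate(biases, n, rng):
    offsets = np.array([-1.0, 0.7])
    return rng.normal(biases + offsets, 1.0, size=(n, len(biases)))

def toy_classify(scores):
    return (scores > 0).astype(float)

biases, rates = debias_joint(toy_generate, toy_classify)
```

With disentangled channels and balanced marginals, each joint combination (e.g., gender x eyeglasses) approaches a 25% share of the generated batch.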

Results

Distribution of single and joint attributes before and after debiasing StyleGAN2 model with our methods.

Qualitative results for fair image generation in GANs with Gender+Eyeglasses and Black+Eyeglasses attributes


BibTeX

@article{karakas2022fairstyle,
    title={FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations},
    author={Cemre Karakas and Alara Dirik and Eylul Yalcınkaya and Pinar Yanardag},
    journal={ArXiv},
    year={2022},
    volume={abs/2202.06240}
}

Acknowledgments

This publication was produced with the support of the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No: 118c321). We also acknowledge the support of NVIDIA Corporation through the donation of a TITAN RTX GPU and of Google through GCP research credits.