MIDS Capstone Project Spring 2023

Fair Transformer GAN

Problem

As the use of AI and other machine learning models becomes more prevalent in our day-to-day lives, it becomes increasingly important to scrutinize the datasets these models are trained on so that we avoid perpetuating potential biases.

To address this, we created a model that generates synthetic training data to mitigate various forms of bias in datasets. Our approach advances the state-of-the-art FairGAN+ by incorporating a transformer architecture, resulting in more accurate and fair data generation. These modifications have the potential to improve the quality and equity of synthetic data, which is crucial in various applications, such as data augmentation and privacy-preserving machine learning.

Data Science Approach

Fairness and accuracy are critical factors in generating high-quality synthetic data for various applications in machine learning. To address these challenges, we made modifications to the generator architecture of FairGAN+. By leveraging the transformer architecture, we were able to generate more accurate and fair synthetic data.

We integrated a transformer into the original generator architecture to capture relationships, such as collinearity, among the features, the outcome variable, and the protected attribute. An attention mechanism is applied to the output of the FairGAN+ generator encoder to produce contextualized vectors, which are then fed into the FairGAN+ generator decoder to generate the synthetic data.
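The project's code is not reproduced here, but the core idea of the modification can be illustrated with a single-head scaled dot-product self-attention step over the encoder's per-feature outputs. The shapes and function name below are hypothetical, purely for illustration: each row of `H` stands for one encoded feature (including the outcome variable and protected attribute), and the returned contextualized vectors are what would be passed on to the decoder.

```python
import numpy as np

def self_attention(H):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    H: (n_features, d) matrix of encoder outputs, one row per feature.
    Returns a matrix of the same shape in which each row is a weighted
    mix of all rows, so every feature's representation can reflect its
    relationship to the outcome variable and the protected attribute.
    """
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)                   # (n_features, n_features)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ H                              # contextualized vectors

# Toy example: 4 encoded "features", embedding dimension 8
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
Z = self_attention(H)   # same shape as H; would be fed to the decoder
print(Z.shape)          # (4, 8)
```

A full transformer block would add learned query/key/value projections, multiple heads, and a feed-forward layer on top of this core operation; the sketch only shows how attention turns independent encoder outputs into contextualized vectors.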

Evaluation

To evaluate fairness, we focus on protected attributes: qualities or characteristics that, by law, cannot be used as grounds for discrimination (e.g., race, gender). We aim to minimize disparate treatment and disparate impact, meaning the outcome should not depend on the protected attribute directly, or indirectly through features correlated with it. We also aim to optimize for equality of odds, meaning a model should perform equally well for each class of a protected attribute. We compare the fairness metrics of our model, Fair Transformer GAN, against both the original dataset and FairGAN+-generated data.
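As a concrete illustration of these criteria, the two metrics can be computed for a binary protected attribute roughly as follows. This is a minimal sketch with hypothetical function names, not the project's actual evaluation code: the disparate impact ratio compares positive-prediction rates across groups (a value near 1 indicates independence from the protected attribute), and the equalized-odds gap measures how much true- and false-positive rates differ between groups.

```python
import numpy as np

def disparate_impact(y_pred, a):
    """Ratio of positive-prediction rates between the two protected groups.

    A ratio near 1 means predictions are nearly independent of the
    protected attribute a (coded 0/1); the common "80% rule" flags
    ratios below 0.8 as evidence of disparate impact.
    """
    rate0 = y_pred[a == 0].mean()
    rate1 = y_pred[a == 1].mean()
    return min(rate0, rate1) / max(rate0, rate1)

def equalized_odds_gap(y_true, y_pred, a):
    """Largest gap in TPR or FPR between the two protected groups."""
    def rates(g):
        tpr = y_pred[(a == g) & (y_true == 1)].mean()
        fpr = y_pred[(a == g) & (y_true == 0)].mean()
        return tpr, fpr
    (tpr0, fpr0), (tpr1, fpr1) = rates(0), rates(1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

# Toy labels/predictions with a binary protected attribute
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
a      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_impact(y_pred, a))            # 1.0 (equal positive rates)
print(equalized_odds_gap(y_true, y_pred, a))
```

In practice, one would compute these metrics for classifiers trained on the original data, on FairGAN+ output, and on Fair Transformer GAN output, and compare the results alongside a utility metric such as accuracy.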

Key Takeaways

Fair Transformer GAN is generally fairer and is better at minimizing disparate impact: it is harder for a new classifier trained on the generated data to predict the protected attribute. However, this can come at the cost of minor utility degradation on some datasets, because the classifier can no longer rely as heavily on correlations between the protected attribute and other features to predict outcomes.

In future work, we hope that Fair Transformer GAN will:

  • Handle any number of protected attribute classes (we currently support up to 5 classes)
  • Allow for multiple protected attributes, enabling intersectional analysis (we currently support a single protected attribute column)
  • Generalize the model given access to more socio-economic data (the current architecture was trained on 3 open-source datasets)

Acknowledgments

We would like to acknowledge the influential work of Depeng Xu, Shuhan Yuan, Lu Zhang, and Xintao Wu from the University of Arkansas, whose research on FairGAN+: Achieving Fair Data Generation and Classification through Generative Adversarial Nets greatly influenced our project.


Last updated: April 20, 2023