Fair Mixup: Fairness via Interpolation

Training classifiers under fairness constraints such as group fairness regularizes the disparities of predictions between the groups. Nevertheless, even though the constraints are satisfied during training, they might not generalize at evaluation time. To improve the generalizability of fair classifiers, we propose fair mixup, a new data augmentation strategy for imposing the fairness constraint. In particular, we show that fairness can be achieved by regularizing the model on paths of interpolated samples between the groups. We use mixup, a powerful data augmentation strategy, to generate these interpolates. We analyze fair mixup and empirically show that it ensures better generalization for both accuracy and fairness measurements on tabular, vision, and language benchmarks.

Fair Mixup: Fairness via Interpolation, ICLR 2021 [paper]
Ching-Yao Chuang and Youssef Mroueh

Prerequisites

  • Python 3.7
  • PyTorch 1.3.1
  • aif360
  • sklearn

Implementation

The code for the Adult and CelebA experiments can be found in the corresponding folders.
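To illustrate the idea behind fair mixup, here is a minimal NumPy sketch (the repository's actual implementation is in PyTorch). It interpolates paired batches from the two groups at several mixing coefficients and penalizes how much the model's mean prediction changes along the interpolation path, a finite-difference surrogate for the path-smoothness regularizer described above. The function name `fair_mixup_penalty` and the finite-difference approximation are illustrative, not the repo's API.

```python
import numpy as np

def fair_mixup_penalty(model, x_group0, x_group1, n_points=5):
    """Finite-difference sketch of the fair-mixup penalty.

    model     -- callable mapping a batch (n, d) to predictions (n,)
    x_group0  -- batch of samples from group 0, shape (n, d)
    x_group1  -- batch of samples from group 1, shape (n, d)
    Samples are paired by index; batches must have equal size.
    """
    ts = np.linspace(0.0, 1.0, n_points)
    means = []
    for t in ts:
        # Mixup interpolate between the two groups at coefficient t.
        x_mix = t * x_group0 + (1.0 - t) * x_group1
        means.append(model(x_mix).mean())
    # Penalize changes in the mean prediction between adjacent t's,
    # approximating the derivative of the prediction along the path.
    return sum(abs(means[i + 1] - means[i]) for i in range(n_points - 1))
```

During training this penalty would be added, with a weight, to the usual classification loss; a model whose average output is flat along the interpolation path (i.e., equal mean predictions for both groups) incurs zero penalty.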

Citation

If you find this repo useful for your research, please consider citing the paper:

@inproceedings{
chuang2021fair,
title={Fair Mixup: Fairness via Interpolation},
author={Ching-Yao Chuang and Youssef Mroueh},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=DNl5s5BXeBn}
}

For any questions, please contact Ching-Yao Chuang (cychuang@mit.edu).
