OxonFair: A Flexible Toolkit for Algorithmic Fairness
37 Pages. Posted: 18 Jul 2024. Last revised: 5 Nov 2024.
Date Written: June 22, 2024
Abstract
We present OxonFair, a new open-source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making our approach robust to a wide range of overfitting challenges. (iii) Our approach can optimize any measure based on True Positives, False Positives, False Negatives, and True Negatives. This makes it easily extensible and much more expressive than existing toolkits; it supports all 9 and all 10 of the decision-based group metrics of two popular review articles. (iv) We jointly optimize a performance objective alongside fairness constraints. This minimizes degradation while enforcing fairness, and even improves the performance of inadequately tuned unfair baselines. OxonFair is compatible with standard ML toolkits, including sklearn, AutoGluon, and PyTorch, and is available at https://github.com/oxfordinternetinstitute/oxonfair.
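As a rough illustration of the workflow the abstract describes, the sketch below wraps a standard classifier, chooses decision thresholds on held-out validation data, and enforces a fairness constraint alongside an accuracy objective. The class, module, and metric names (FairPredictor, oxonfair.utils.group_metrics, fit) follow the project README but should be treated as assumptions, as should the placeholder data splits; consult the repository for the exact API.

```python
# Minimal usage sketch -- names follow the OxonFair README and may differ
# from the current API; data splits below are placeholders, not real data.
from sklearn.ensemble import RandomForestClassifier
from oxonfair import FairPredictor
from oxonfair.utils import group_metrics as gm

# Train any standard classifier on the training split (placeholder arrays).
base_clf = RandomForestClassifier().fit(train_X, train_y)

# Wrap it with held-out validation data and a protected attribute.
# Thresholds are selected on this validation split, which is what makes
# the approach robust to overfitting in the underlying classifier.
fpredictor = FairPredictor(base_clf, validation_data, groups="sex")

# Maximize accuracy subject to a fairness constraint (here, a 2% bound).
# Any metric built from True Positives, False Positives, False Negatives,
# and True Negatives can be plugged into either the objective or constraint.
fpredictor.fit(gm.accuracy, gm.equal_opportunity, 0.02)

# Inspect performance and fairness on the validation data, then predict.
fpredictor.evaluate()
fpredictor.evaluate_fairness()
preds = fpredictor.predict(test_data)
```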
Keywords: Fairness Toolkit, Algorithmic Fairness, Bias, Machine Learning, Trustworthy AI
Suggested Citation:
Delaney, Eoin and Fu, Zihao and Wachter, Sandra and Mittelstadt, Brent and Russell, Chris, OxonFair: A Flexible Toolkit for Algorithmic Fairness (June 22, 2024). Available at SSRN: https://ssrn.com/abstract=4894794 or http://dx.doi.org/10.2139/ssrn.4894794