Portable Fair Decision Making Through Modular Approach
16 Pages · Posted: 4 Dec 2024
Abstract
Influenced by real-world biases, machine learning-based decision systems are prone to discriminatory outcomes. This problem has garnered considerable attention and led to numerous bias mitigation methods, such as adversarial debiasing and fair representation learning. However, because these methods rely on predefined fairness standards, they cannot adapt to evolving fairness requirements. To address this issue, we propose a modular fairness enhancement approach. Each fairness requirement is modeled as a distinct optimization task, with the corresponding model parameters encapsulated in an independent sub-module, while a main module captures the classification features shared across all fairness tasks. This multi-task learning architecture enables the decision-making system to meet multiple fairness requirements without retraining. We evaluate our approach on three real-world datasets, comparing its portability and four fairness metrics against state-of-the-art methods. The results demonstrate that our method addresses diverse fairness requirements with superior portability compared to existing methods.
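To make the described architecture concrete, below is a minimal sketch (not the authors' implementation) of one way such a modular design could look in PyTorch: a shared main module learns classification features, and each fairness requirement gets its own small sub-module, so switching requirements at deployment means selecting a different sub-module rather than retraining. All class, parameter, and task names here are hypothetical.

```python
import torch
import torch.nn as nn

class ModularFairClassifier(nn.Module):
    """Hypothetical sketch of a modular fairness architecture:
    a shared main module plus one sub-module per fairness requirement."""

    def __init__(self, in_dim, hidden_dim, fairness_tasks):
        super().__init__()
        # Main module: classification features shared across all fairness tasks.
        self.main = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One independent sub-module per fairness requirement,
        # e.g. "demographic_parity" or "equalized_odds".
        self.sub = nn.ModuleDict({
            name: nn.Linear(hidden_dim, 1) for name in fairness_tasks
        })

    def forward(self, x, task):
        # Selecting a sub-module swaps the fairness requirement without retraining.
        return torch.sigmoid(self.sub[task](self.main(x)))

# Usage sketch: each sub-module would be trained with its own
# fairness-regularized loss; at deployment the sub-module matching the
# current requirement is selected.
model = ModularFairClassifier(
    in_dim=32, hidden_dim=64,
    fairness_tasks=["demographic_parity", "equalized_odds"],
)
x = torch.randn(8, 32)
p_dp = model(x, task="demographic_parity")  # predictions under one requirement
p_eo = model(x, task="equalized_odds")      # same inputs, different fairness head
```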
Keywords: Artificial intelligence ethics, Fairness, Portability, Machine learning, Multi-task learning