Contribution-Wise Byzantine-Robust Aggregation for Class-Balanced Federated Learning
36 Pages Posted: 25 Oct 2023
Abstract
Federated learning (FL) is a promising approach that allows many clients to jointly train a model without sharing their raw data. Because clients have different preferences, class imbalance frequently arises in real-world FL problems and opens the door to poisoning attacks against existing FL methods. In this work, we first propose a new attack, the Class Imbalance Attack, which can degrade the testing accuracy of a particular class to as low as 0 even under state-of-the-art robust FL methods. To defend against such attacks, we further propose a Class-Balanced FL method. In the proposed method, an honest score and a contribution score are dynamically assigned to each client based on the server model. These two scores are then used to compute a weighted average of the client gradients at each training iteration. Since the weighting accounts for both the "potential contribution" and "honesty" of each client, our Class-Balanced FL ensures that the global model dynamically assimilates information from a variety of honest clients and the classes they carry. Experiments are conducted on five different datasets against several state-of-the-art poisoning attacks, including the Class Imbalance Attack. The empirical results demonstrate the effectiveness of the proposed Class-Balanced FL method.
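The score-based weighted aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the honest score is approximated here by the (clipped) cosine similarity between each client gradient and a server-side reference gradient, and the contribution score by the fraction of classes a client's data covers; the function and variable names (`aggregate`, `class_coverage`) are hypothetical.

```python
import numpy as np

def aggregate(client_grads, server_grad, class_coverage):
    """Weighted average of client gradients using two per-client scores.

    Hypothetical sketch: honest score = clipped cosine similarity to a
    server-side reference gradient; contribution score = fraction of
    classes the client's local data covers.
    """
    # Honest score: cosine similarity to the server reference gradient.
    honest = np.array([
        float(np.dot(g, server_grad) /
              (np.linalg.norm(g) * np.linalg.norm(server_grad) + 1e-12))
        for g in client_grads
    ])
    honest = np.clip(honest, 0.0, None)  # suppress gradients opposing the server model

    # Contribution score: assumed proxy based on class coverage.
    contribution = np.asarray(class_coverage, dtype=float)

    # Combine the two scores and normalize into aggregation weights.
    weights = honest * contribution
    if weights.sum() == 0.0:
        weights = np.ones(len(client_grads))  # fall back to uniform weights
    weights = weights / weights.sum()

    return np.sum([w * g for w, g in zip(weights, client_grads)], axis=0)
```

In this sketch, a client whose gradient points opposite to the server reference receives an honest score of zero and is excluded from the round, while honest clients are weighted by how many classes they can contribute.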
Keywords: Federated Learning (FL), Poisoning Attack, Byzantine-Robust Aggregation, Adversarial Machine Learning, Non-Independent and Identically Distributed (Non-IID)