Personalized Federated Learning with Domain Generalization
36 Pages · Posted: 25 Oct 2023
Abstract
Personalized Federated Learning (pFL) develops customized models for personalized data from multiple distributed domains. In real-world scenarios, some testing data may originate from new target domains (unseen domains) outside the federated network, giving rise to another learning task called Federated Domain Generalization (FedDG). In this paper, we tackle a new problem, named Personalized Federated Domain Generalization (pFedDG), which not only preserves personalization but also obtains a general model for unseen target domains. We observe that the pFL and FedDG objectives can conflict, which makes it challenging to address both tasks simultaneously. To sufficiently moderate this conflict, we first develop a unified framework that decouples the two tasks via two separate loss functions and uses an integrated predictor to serve both learning tasks. We then design a new method, named Personalized Federated Decoupled Representation (pFedDR), to instantiate our framework. Specifically, we use batch normalization layers linked to the two tasks to learn decoupled representations, and we design a pFedDG relative entropy loss to encourage a generalized (conflict-optimized) representation for the DG task. Extensive experiments show that our pFedDR method achieves state-of-the-art performance on both tasks while incurring almost no increase in communication cost. Code is available at https://github.com/CSU-YL/pFedDR.
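The core mechanism the abstract describes (task-specific batch normalization branches feeding a shared predictor, with a relative entropy term coupling their outputs) can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation; all function and variable names (`batch_norm`, `loss_dr`, the per-branch affine parameters) are assumptions made for illustration.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature with batch statistics, then apply an affine transform.
    mu, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # Relative entropy KL(p || q), averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))  # a client's mini-batch of features

# Two BN branches with separate affine parameters: one for the
# personalized (pFL) path, one for the generalization (DG) path.
gamma_p, beta_p = np.ones(4), np.zeros(4)
gamma_g, beta_g = 0.5 * np.ones(4), np.zeros(4)

h_personal = batch_norm(x, gamma_p, beta_p)   # decoupled representation 1
h_general = batch_norm(x, gamma_g, beta_g)    # decoupled representation 2

# A single integrated predictor head serves both learning tasks.
W = rng.normal(size=(4, 3))
p_personal = softmax(h_personal @ W)
p_general = softmax(h_general @ W)

# Relative entropy term encouraging the DG branch's predictions to stay
# consistent with the personalized branch, moderating the task conflict.
loss_dr = kl_divergence(p_personal, p_general)
```

In a real federated setup the DG-branch parameters and the shared predictor would be aggregated across clients, while the personalized BN parameters stay local; this sketch only shows the per-client forward pass and loss term.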
Keywords: Federated learning, Personalization, Domain Generalization