FedPDM: Representation-Enhanced Federated Learning with Privacy-Preserving Diffusion Models
38 Pages Posted: 24 May 2025
Abstract
Most existing semi-parameter-sharing federated learning (FL) frameworks utilize generative models to achieve partial parameter sharing with the server, which strengthens the data privacy of each client. However, these generative models often suffer from model utility degradation due to poor representation robustness. Meanwhile, representation inconsistency between local and global models exacerbates the client drift problem under non-IID scenarios. Furthermore, existing semi-parameter-sharing FL frameworks overlook the representation leakage risks associated with generator sharing and fail to balance privacy and utility. To address these challenges, we propose FedPDM, a semi-parameter-sharing FL framework built upon a privacy-preserving diffusion model (PDM). Specifically, our proposed PDM enables model alignment with features from the privacy extractor without requiring direct exposure of this extractor, effectively mitigating utility degradation caused by poor representation robustness. Moreover, a feature-level penalty term is introduced into the optimization objective of PDM to avoid representation leakage. We further design a two-stage aggregation strategy that addresses representation inconsistency through initialization correction with a Gaussian constraint for knowledge distillation. Finally, we provide the first theoretical convergence analysis for semi-parameter-sharing FL, demonstrating that our framework converges at a rate of O(1/T). Extensive experiments on four datasets show that FedPDM achieves average accuracy improvements of 1.78% to 5.56% over various state-of-the-art baselines.
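The abstract describes a PDM objective that augments the standard denoising loss with a feature-level penalty term to limit representation leakage. The paper does not give the exact form of this penalty, so the following is only a minimal illustrative sketch: it assumes an MSE denoising loss and, as a hypothetical choice, penalizes cosine similarity between the diffusion model's internal features and the privacy extractor's features, weighted by a coefficient `lam`. All function names and the penalty form here are assumptions, not the authors' actual formulation.

```python
import numpy as np

def diffusion_loss(eps_pred, eps_true):
    # Standard denoising objective: MSE between predicted and true noise.
    return float(np.mean((eps_pred - eps_true) ** 2))

def feature_penalty(feat_model, feat_private):
    # Hypothetical feature-level penalty: cosine similarity between the
    # diffusion model's features and the privacy extractor's features.
    # A high value means the model's representation closely mirrors the
    # private one, which the penalty term discourages.
    num = float(np.dot(feat_model, feat_private))
    den = float(np.linalg.norm(feat_model) * np.linalg.norm(feat_private)) + 1e-8
    return num / den

def pdm_objective(eps_pred, eps_true, feat_model, feat_private, lam=0.1):
    # Combined objective: denoising loss plus weighted feature penalty.
    return diffusion_loss(eps_pred, eps_true) + lam * feature_penalty(
        feat_model, feat_private
    )
```

In this sketch, minimizing `pdm_objective` trades off reconstruction quality against representation leakage, with `lam` controlling the privacy/utility balance the abstract refers to.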
Keywords: federated learning, diffusion model, privacy protection, split learning
Suggested Citation