LW-FedSSL: Resource-Efficient Layer-Wise Federated Self-Supervised Learning

14 Pages
Posted: 7 Sep 2024

Ye Lin Tun

Kyung Hee University - Department of Computer Science and Engineering

Chu Myaet Thwal

Kyung Hee University - Department of Computer Science and Engineering

Huy Le

Kyung Hee University

Minh N. H. Nguyen

Kyung Hee University - Department of Computer Science and Engineering

Choong Seon Hong

Kyung Hee University - Department of Computer Science and Engineering

There are 3 versions of this paper

Abstract

Many studies integrate federated learning (FL) with self-supervised learning (SSL) to take advantage of raw data distributed across edge devices. However, edge devices often struggle with the high computation and communication costs imposed by SSL and FL algorithms. To address this challenge, we propose LW-FedSSL, a layer-wise federated self-supervised learning approach that allows edge devices to incrementally train a single layer of the model at a time. We introduce server-side calibration and representation alignment mechanisms to ensure LW-FedSSL delivers performance on par with conventional federated self-supervised learning (FedSSL) while significantly lowering resource demands. In a pure layer-wise training scheme, training one layer at a time may limit effective interaction between different layers of the model. The server-side calibration mechanism takes advantage of the resource-rich FL server to ensure smooth collaboration between different layers of the global model. During local training, the representation alignment mechanism encourages closeness between representations of local models and those of the global model, thereby preserving the layer cohesion established by server-side calibration. With the proposed mechanisms, LW-FedSSL achieves a 3.3× reduction in memory usage, 2.1× fewer computational operations (FLOPs), and a 3.2× lower communication cost while maintaining the same level of performance as its end-to-end training counterpart. Additionally, we explore a progressive training strategy called Prog-FedSSL, which matches end-to-end training in memory requirements but offers a 1.8× reduction in FLOPs and communication costs. Although Prog-FedSSL is not as resource-efficient as LW-FedSSL, its performance improvements make it a suitable candidate for FL environments with more lenient resource constraints.
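The abstract gives enough detail to sketch the client-side update in broad strokes. Below is a minimal, hypothetical PyTorch sketch, assuming a SimCLR-style contrastive objective, an MSE alignment term, and an alignment weight `lambda_align`; these specifics, along with the names `embed`, `nt_xent`, and `local_update`, are illustrative assumptions rather than the paper's actual method, and the server-side calibration step is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder split into stages; the layer-wise scheme trains one stage at a time.
encoder = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU()),
)

def embed(model, x, stage):
    """Forward through stages 0..stage, then pool to a vector representation."""
    h = x
    for i in range(stage + 1):
        h = model[i](h)
    return F.adaptive_avg_pool2d(h, 1).flatten(1)

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss (an assumed stand-in for the SSL objective)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    logits = z @ z.t() / tau
    logits.fill_diagonal_(float("-inf"))        # mask self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(logits, targets)

def local_update(encoder, global_encoder, stage, loader,
                 lambda_align=0.1, lr=1e-2):
    """One client round: train only the active stage (earlier stages frozen),
    with an alignment term pulling local representations toward the global
    model's, as the abstract describes."""
    for p in encoder.parameters():
        p.requires_grad = False
    for p in encoder[stage].parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(encoder[stage].parameters(), lr=lr)
    for x1, x2 in loader:                       # two augmented views per sample
        z1 = embed(encoder, x1, stage)
        z2 = embed(encoder, x2, stage)
        loss = nt_xent(z1, z2)
        with torch.no_grad():                   # global model is a fixed target
            g1 = embed(global_encoder, x1, stage)
        loss = loss + lambda_align * F.mse_loss(z1, g1)  # representation alignment
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder[stage].state_dict()          # only the active stage is uploaded
```

The sketch makes the abstract's resource argument concrete: only the active stage's parameters receive gradients and only that stage's state dict is communicated, which is where the claimed memory, FLOP, and communication savings would come from.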

Keywords: federated learning, self-supervised learning, layer-wise training, resource-efficient

Suggested Citation

Tun, Ye Lin and Thwal, Chu Myaet and Le, Huy and Nguyen, Minh N. H. and Hong, Choong Seon, LW-FedSSL: Resource-Efficient Layer-Wise Federated Self-Supervised Learning. Available at SSRN: https://ssrn.com/abstract=4949862 or http://dx.doi.org/10.2139/ssrn.4949862

Ye Lin Tun (Contact Author)

Kyung Hee University - Department of Computer Science and Engineering

Paper statistics

Downloads: 21
Abstract Views: 173