FedIR: Learning Invariant Representations from Heterogeneous Data in Federated Learning

88 Citations · 2023
Xi Zheng, Hongcheng Xie, Yu Guo
2023 19th International Conference on Mobility, Sensing and Networking (MSN)

FedIR introduces the idea of learning invariant features from domain adaptation so that the aggregated global model can handle data heterogeneity well, and uses Optimal Path Search to assist model training in obtaining better invariant representations.

Abstract

Federated learning has recently emerged as a popular learning paradigm that enables multiple clients to jointly train a high-quality model without sharing their local training datasets. Each client trains a local model on its own dataset, and the global model is aggregated from the local models. However, the training datasets across clients are usually heterogeneous, because they are chosen by the clients themselves. This leads to over-fitting in the local models and thus degrades the performance of the global model. Existing remedies, such as regularization during local optimization or improved model aggregation, introduce additional computation or storage overhead. In this paper, we present FedIR, a novel system design that eliminates the impact of data heterogeneity. FedIR introduces the idea of learning invariant features from domain adaptation so that the aggregated global model can handle data heterogeneity well, and refers to Optimal Path Search to assist model training in obtaining better invariant representations. The modifications to local model structures are very small, with little impact on local training and server aggregation. Extensive experiments demonstrate that FedIR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10 and CIFAR-100, with lower computation cost and fewer communication rounds.
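The local-train / server-aggregate loop the abstract describes can be sketched as a FedAvg-style weighted average of client parameters. This is a minimal illustration of the generic federated aggregation step only, not FedIR's invariant-representation training or its Optimal Path Search; the function name and flat parameter vectors are illustrative assumptions.

```python
# Minimal sketch of server-side aggregation in federated learning
# (FedAvg-style weighted averaging). Hypothetical helper, not the
# FedIR algorithm itself.

def aggregate(client_params, client_sizes):
    """Average per-client parameter vectors, weighted by dataset size.

    client_params: list of flat parameter vectors, one per client.
    client_sizes: number of local training samples per client, so
    clients with more data contribute more to the global model.
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for params, n in zip(client_params, client_sizes):
        weight = n / total
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Two clients with heterogeneous data sizes (1 vs. 3 samples):
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(aggregate(clients, sizes))  # [2.5, 3.5]
```

Because the weights are proportional to dataset size, a client holding more (possibly skewed) data dominates the average; this is exactly the heterogeneity issue the paper targets by making the learned representations invariant across clients.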