
Decentralized Federated Learning via Mutual Knowledge Transfer

116 Citations · 2021
Chengxi Li, Gang Li, Pramod K. Varshney


Abstract

In this paper, we investigate the problem of decentralized federated learning (DFL) in Internet of Things (IoT) systems, where a number of IoT clients train models collectively for a common task without sharing their private training data and in the absence of a central server. Most existing DFL schemes consist of two alternating steps, i.e., model updating and model averaging. However, averaging model parameters directly to fuse different models at the local clients suffers from client drift, especially when the training data are heterogeneous across different clients, which leads to slow convergence and degraded learning performance. As a possible solution, we propose the decentralized federated learning via mutual knowledge transfer (Def-KT) algorithm, in which local clients fuse models by transferring their learnt knowledge to each other. Our experiments on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets reveal that the proposed Def-KT algorithm significantly outperforms the baseline DFL methods with model averaging, i.e., Combo and FullAvg, especially when the training data are not independent and identically distributed (non-IID) across different clients.
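To make the contrast with parameter averaging concrete, the sketch below illustrates a mutual-learning style fusion step in PyTorch: instead of interpolating weights, two client models teach each other on one client's local data by matching softened predictions. The function name, equal loss weighting, and loop structure are illustrative assumptions for this sketch, not the paper's exact Def-KT procedure or hyperparameters.

```python
import torch
import torch.nn.functional as F

def mutual_knowledge_transfer(model_a, model_b, loader,
                              optimizer_a, optimizer_b, epochs=1):
    """Illustrative fusion step in the spirit of Def-KT: rather than
    averaging parameters, each model fits the local labels while also
    mimicking its peer's predictions via a KL-divergence term."""
    for _ in range(epochs):
        for x, y in loader:
            logits_a = model_a(x)
            logits_b = model_b(x)
            # KL term pulls each model toward its (detached) peer's outputs.
            kl_a = F.kl_div(F.log_softmax(logits_a, dim=1),
                            F.softmax(logits_b.detach(), dim=1),
                            reduction="batchmean")
            kl_b = F.kl_div(F.log_softmax(logits_b, dim=1),
                            F.softmax(logits_a.detach(), dim=1),
                            reduction="batchmean")
            # Supervised loss plus mutual-mimicry loss (equal weighting
            # here is an assumption of this sketch).
            loss_a = F.cross_entropy(logits_a, y) + kl_a
            loss_b = F.cross_entropy(logits_b, y) + kl_b
            optimizer_a.zero_grad(); loss_a.backward(); optimizer_a.step()
            optimizer_b.zero_grad(); loss_b.backward(); optimizer_b.step()
```

The intuition for why such a step can mitigate client drift: the fused knowledge is learned through gradient updates on actual data, rather than produced by interpolating parameters of models that may have drifted to incompatible regions of the loss landscape under non-IID data.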