Federated Learning (FL), a distributed variant of Deep Learning (DL), was introduced to address user privacy and the large bandwidth required to send user data to the company servers that run DL models: FL enables on-device training of the models instead. Most FL approaches are entirely centralized and suffer from inherent limitations such as a single point of failure and channel bandwidth bottlenecks. To circumvent these issues, we present an approach to decentralize FL using mobile agents coupled with the Federated Averaging (FedAvg) algorithm. A hybrid model that combines the centralized and decentralized approaches is also presented. Results obtained by running the models on different network topologies indicate that the hybrid version is the better option for an FL implementation.
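For reference, the FedAvg aggregation step named above amounts to a data-size-weighted average of the clients' model parameters. The sketch below is a minimal NumPy illustration of that step only, not the authors' implementation; the function and variable names are illustrative assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (the FedAvg aggregation step).

    client_weights: one list of numpy arrays (layer parameters) per client
    client_sizes:   number of local training samples held by each client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        # Weight each client's layer parameters by its share of the total data.
        layer_avg = sum(
            (n / total) * w[layer] for w, n in zip(client_weights, client_sizes)
        )
        averaged.append(layer_avg)
    return averaged

# Toy usage: three clients with single 2x2 weight matrices and unequal data sizes.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [10, 30, 60]
print(fedavg(clients, sizes))  # weighted mean -> array filled with 2.5
```

In a centralized setup this averaging runs on the server; in the decentralized and hybrid settings described in the abstract, the same computation would instead be carried by mobile agents visiting the participating nodes.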