FedERA, a modular and fully customizable open-source FL framework that addresses the lack of support for heterogeneous edge devices and incorporates both standalone and distributed training approaches, is presented.
Federated Learning (FL) is a distributed machine learning (ML) approach that allows multiple devices to collaboratively train a model while keeping the training data private on the edge devices. However, existing FL libraries fall short in: (i) supporting a wide range of existing FL algorithms, (ii) handling diverse public as well as custom datasets, (iii) discarding unreliable updates from malicious edge devices, (iv) facilitating on-device training, and (v) implementing model compression to reduce communication overhead. Moreover, with the rapid progress of AI, the computational cost of training models on edge devices is rising, which in turn increases carbon emissions (CE); yet most existing FL frameworks provide no module for estimating CE. In this paper, we present FedERA, a modular and fully customizable open-source FL framework that addresses these issues, notably by offering extensive support for heterogeneous edge devices and by incorporating both standalone and distributed training approaches. The integration of software modules unforeseen in existing FL frameworks not only broadens its usability but also fosters environmentally friendly FL. We believe that FedERA will be a valuable tool for researchers and practitioners working in FL.
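To make the FL setting concrete, the sketch below shows the classic FedAvg pattern the abstract alludes to: each client runs gradient descent on its private data, and a server averages only the resulting model parameters, so raw samples never leave the device. This is a generic, minimal illustration of federated averaging, not FedERA's actual API; all function names and the toy linear model are assumptions for illustration.

```python
import random

def local_update(w, data, lr=0.1, epochs=5):
    # One client: gradient descent on its private (x, y) pairs
    # for a toy scalar model y = w * x (mean-squared-error loss).
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # only the updated parameter is shared, never the data

def fed_avg(global_w, client_datasets, rounds=20):
    # Server loop: broadcast the global parameter, collect client
    # updates, and average them (unweighted FedAvg for simplicity).
    for _ in range(rounds):
        updates = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(updates) / len(updates)
    return global_w

random.seed(0)
# Four clients, each holding private samples from y = 3x + noise.
clients = [
    [(x, 3 * x + random.gauss(0, 0.1))
     for x in (random.uniform(-1, 1) for _ in range(50))]
    for _ in range(4)
]
w = fed_avg(0.0, clients)
print(w)  # converges toward the true slope 3
```

Real frameworks replace the scalar parameter with full model weight tensors and add the concerns the abstract lists, such as robust aggregation against malicious updates and compression of the communicated parameters.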