Dive into the world of Federated Learning with our curated list of top research papers. These studies offer insights into the latest advancements, challenges, and solutions in decentralized machine learning. Stay ahead in the field by exploring these pioneering works.
Anvesh Gunuganti
Journal of Artificial Intelligence & Cloud Computing
The study concludes that optimization, privacy, and novel application areas of FL are the directions that warrant further research in the field.
To make Federated Learning feasible, this thesis proposes changes to the optimization process, explains how dedicated compression methods can be employed, and shows how Differential Privacy techniques can ensure that sending weight updates does not leak significant information about individuals.
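The clip-then-noise idea behind this kind of Differential Privacy protection can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the clipping norm and noise scale are assumed parameters.

```python
import math
import random

def privatize_update(update, clip_norm=1.0, noise_std=0.5):
    """Bound a client's weight update by clipping its L2 norm to
    clip_norm, then add Gaussian noise scaled by the clip norm so the
    sent update reveals less about any individual. Illustrative sketch."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    return [u + random.gauss(0.0, noise_std * clip_norm) for u in clipped]
```

Clipping bounds each client's contribution; the noise then masks what remains, which is the standard recipe behind DP-protected weight updates.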
Qiang Yang, Yang Liu, Yong Cheng + 3 more
IEEE Consumer Electronics Magazine
It is shown how federated learning can become the foundation of next-generation machine learning that caters to technological and societal needs for responsible AI development and application.
The authors note that we are now in an era of technological transformation of everyday life, in which data play a key role in decision making and in turning decisions into action.
A. Mitra, Hamed Hassani, George Pappas
2021 60th IEEE Conference on Decision and Control (CDC)
This work proposes FedOMD – an online FL algorithm where, akin to the offline setting, clients perform multiple local processing steps before uploading their model predictions to the server, and proves sublinear regret bounds that match their centralized counterparts (up to constants) for both convex and strongly convex losses.
S. Dhakal, Saurav Prakash, Yair Yona + 2 more
2019 IEEE Globecom Workshops (GC Wkshps)
This paper develops coded federated learning (CFL), a novel coded computing technique for federated learning that mitigates the impact of stragglers, and shows that CFL allows the global model to converge nearly four times faster than an uncoded approach.
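In the same spirit, a classic gradient-coding construction (a related straggler-mitigation technique, not the paper's CFL scheme itself) shows how redundancy lets the server recover the full gradient sum from any 2 of 3 workers, tolerating one straggler:

```python
def encode(worker, g):
    """g = [g1, g2, g3]: per-partition gradients (scalars here for
    simplicity). Each of 3 workers holds 2 partitions and sends one
    coded combination; any 2 messages suffice to recover g1+g2+g3."""
    g1, g2, g3 = g
    if worker == 1:
        return 0.5 * g1 + g2
    if worker == 2:
        return g2 - g3
    return 0.5 * g1 + g3

# Decoding coefficients: for each surviving worker pair, a linear
# combination of their messages equals the full gradient sum.
DECODE = {
    (1, 2): (2.0, -1.0),
    (1, 3): (1.0, 1.0),
    (2, 3): (1.0, 2.0),
}

def decode(pair, msgs):
    a, b = DECODE[pair]
    return a * msgs[0] + b * msgs[1]
```

The redundancy (each partition stored on two workers) is what buys tolerance to one slow or missing worker per round.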
A cheap, simple, and intuitive sampling scheme is proposed that reduces the number of required training iterations by 20-70% while maintaining the same model accuracy, and that mimics well-known resampling techniques under certain conditions.
Y. Sarcheshmehpour, Yu Tian, Linli Zhang + 1 more
journal unavailable
This work develops the theory and algorithmic toolbox for networked federated learning in decentralized collections of local datasets with an intrinsic network structure and reveals an interesting interplay between the convex geometry of local models and the (cluster-) geometry of their network structure.
Xinchi Qiu, Titouan Parcollet, Daniel J. Beutel + 3 more
arXiv: Learning
A rigorous model to quantify the carbon footprint of FL is proposed, facilitating investigation of the relationship between FL design and carbon emissions, and a comparison to traditional centralized learning is provided.
The concept of Green FL is proposed, which involves optimizing FL parameters and making design choices to minimize carbon emissions consistent with competitive performance and training time, and a data-driven approach to quantify the carbon emissions of FL by directly measuring real-world at-scale FL tasks running on millions of phones is adopted.
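A back-of-the-envelope version of such carbon accounting multiplies device energy by grid carbon intensity. All parameter values below are illustrative assumptions, not measurements from either paper:

```python
def fl_carbon_grams(device_power_w, seconds_per_round, rounds,
                    clients_per_round, intensity_g_per_kwh=475.0):
    """Rough CO2 estimate for an FL job: total device energy (kWh)
    times grid carbon intensity (gCO2/kWh). The default intensity is
    an illustrative global-average figure; real accounting would also
    include server-side and communication energy."""
    joules = device_power_w * seconds_per_round * rounds * clients_per_round
    kwh = joules / 3.6e6  # 1 kWh = 3.6 MJ
    return kwh * intensity_g_per_kwh
```

Even this crude model makes the Green FL trade-off visible: emissions scale linearly with rounds and cohort size, so convergence speed and client sampling are the main design levers.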
Qiang Yang, Yang Liu, Tianjian Chen + 1 more
ACM Transactions on Intelligent Systems and Technology (TIST)
This work introduces a comprehensive secure federated-learning framework, which includes horizontal federated learning, vertical federated learning, and federated transfer learning, and provides a comprehensive survey of existing works on the subject.
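In the horizontal setting, the server typically aggregates client models by dataset-size-weighted averaging (FedAvg-style). A minimal sketch, with illustrative names:

```python
def fed_avg(client_weights, client_sizes):
    """Horizontal-FL aggregation sketch: average client parameter
    vectors, weighting each client by its local dataset size so
    larger datasets contribute proportionally more."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            avg[i] += w[i] * n / total
    return avg
```

Vertical FL and federated transfer learning need different machinery (feature-aligned entity matching, encrypted intermediate exchanges), which is why the survey treats the three settings separately.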
Tiantian Feng, Anil Ramakrishna, Jimit Majmudar + 6 more
ArXiv
This work proposes a new algorithm called Partial Federated Learning (PartialFL), in which a machine learning model is trained on data for which a subset of data modalities, or their intermediate representations, can be made available to the server, while the egress of data labels to the cloud is restricted for better privacy.
Sizhuang He, Maria Han Veiga
journal unavailable
This REU project explored federated learning on I.I, a communication-efficient way of learning from decentralized data that is believed to protect data privacy.
FML allows clients to design their own customized models and train independently; the non-IIDness of data is thus no longer a bug but a feature, as clients can be served in a more personalized way and achieve better performance, robustness, and communication efficiency than with alternatives.
FedPAC is presented, a unified framework that leverages PAC learning to quantify multiple objectives in terms of sample complexity that allows us to constrain the solution space of multiple objectives to a shared dimension with the help of a single-objective optimization algorithm.
This paper proposes a practical federated learning framework that leverages intermittent energy arrivals for training, with provable convergence guarantees, and can be applied to a wide range of machine learning settings in networked environments, including distributed and federated learning in wireless and edge networks.
Li Li, Yuxi Fan, Kuo-Yi Lin
2020 IEEE 16th International Conference on Control & Automation (ICCA)
This study reviews related FL studies in order to establish, as a baseline, a universal definition that can guide future work, and identifies four research fronts to enrich the FL literature and help advance understanding of the field.
Boyuan Li, Shengbo Chen, Zihao Peng
Sensors (Basel, Switzerland)
A rigorous mathematical representation of this framework is given, several major challenges faced under the framework are addressed, and the main difficulties of combining incremental learning with federated learning are discussed.
This study proposes Meta Federated Learning (Meta-FL), a novel variant of federated learning that is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
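Secure aggregation protocols of the kind Meta-FL remains compatible with rely on pairwise masks that cancel in the sum, so the server learns only the aggregate. A toy sketch; a real protocol derives masks via key agreement and handles dropouts, not a shared seed:

```python
import random

P = 2**31 - 1  # public modulus for masked arithmetic (illustrative)

def masked_updates(updates, seed=0):
    """For each client pair (i, j), i < j, draw a shared random mask m;
    client i adds m and client j subtracts m (mod P). Individual masked
    values look random, but all masks cancel in the sum."""
    rng = random.Random(seed)
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(P)
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    return masked

def aggregate(masked):
    """Server-side sum of masked updates; masks cancel modulo P."""
    return sum(masked) % P
```

Because the server only ever sees masked values, defenses against backdoors cannot inspect individual updates, which is exactly the tension Meta-FL is designed around.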
The method consistently improves the base algorithms with various numbers of local training epochs and increases the total number of rounds to 500 to ensure convergence.