Dive into the world of Federated Learning with our curated list of top research papers. These studies offer insights into the latest advancements, challenges, and solutions in decentralized machine learning. Stay ahead in the field by exploring these pioneering works.
Jianyong Tuo, Kailun Shang, Xin Ma + 3 more
2024 IEEE 7th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC)
Federated Error Average (FedEA) is a novel federated aggregation method that fuses federated learning with wav2vec 2.0, a well-known self-supervised pre-trained speech recognition model, enabling the aggregated model to effectively handle scenarios in which malicious users inject substantial interference data.
Di Chai, Leye Wang, Liu Yang + 3 more
journal unavailable
This paper proposes a holistic evaluation framework for FL called FedEval, presents a benchmarking study of seven state-of-the-art FL algorithms, and develops an FL evaluation platform with standardized evaluation settings and easy-to-use interfaces.
Ming Hu, Zhihao Yue, Zhiwei Ling + 5 more
ArXiv
Experimental results show that, compared with state-of-the-art FL methods, FedMR can improve inference accuracy in a quicker manner without exposing client privacy.
Xi Zheng, Hongcheng Xie, Yu Guo + 1 more
2023 19th International Conference on Mobility, Sensing and Networking (MSN)
FedIR introduces the idea of learning invariant features in domain adaptation so that the aggregated global model can handle the data heterogeneity well and refers to the Optimal Path Search to assist model training in obtaining better invariant representations.
Vikash Sathiamoorthy, Shuo Huai, Hao Kong + 7 more
Proceedings of the Great Lakes Symposium on VLSI 2024
FedTR is introduced, a novel FL framework incorporating transfer learning designed for Autonomous IVI, focusing on the challenging task of identifying label defects through end-to-end text recognition, and attains performance levels that are on par with those achieved through centralized training.
Hengrun Zhang, K. Zeng, Shuai Lin
IEEE Transactions on Signal Processing
This paper thoroughly investigates centralized-learning-based adaptive learning strategies and proposes FedUR, an adaptive federated learning algorithm targeting the model-parameter update rule, which effectively increases final convergence accuracy with even lower communication overhead.
Ningxin Su, Ba. Li, Bo Li
2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems (MASS)
Federated learning (FL) is a widely acknowledged distributed training paradigm that preserves the privacy of data on participating clients, and has become the de facto standard for distributed machine learning across a large number of edge devices. Conventional FL, however, has a rather rigid design, where the server is the dominant player that selects a subset of its clients to participate in each communication round, and clients are merely followers, and are not offered the freedom to accept or decline invitations from the server to participate. In addition, clients may become unavailable or...
Suyu Ge, Fangzhao Wu, Chuhan Wu + 3 more
arXiv: Computation and Language
A privacy-preserving medical NER method based on federated learning, which leverages labeled data from different platforms to boost the training of a medical NER model and removes the need to exchange raw data among platforms.
Wenjing Qin, Li Yang, Jianfeng Ma
2021 IEEE Global Communications Conference (GLOBECOM)
A new Paillier homomorphic encryption scheme is used to design a gradient-replacement security algorithm, which severs the connection between gradient parameters and users' sensitive data and greatly reduces users' local computing overhead.
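The additive-homomorphic property that such gradient-protection schemes rely on (ciphertexts of gradients can be combined without decryption) can be sketched with a toy Paillier implementation. The key sizes below are insecure and chosen only so the arithmetic is easy to follow; a real deployment would use a vetted library and 2048-bit moduli.

```python
import math
import random

def paillier_keygen(p: int, q: int):
    """Toy Paillier keypair from two small primes (demo only, not secure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n, n + 1), (lam, mu)  # public (n, g), private (lam, mu)

def encrypt(pub, m: int) -> int:
    n, g = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c: int) -> int:
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n  # the L(u) = (u - 1) / n function
    return (l * mu) % n

pub, priv = paillier_keygen(293, 433)
c1, c2 = encrypt(pub, 40), encrypt(pub, 2)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts,
# so a server can aggregate encrypted gradients without seeing them.
agg = (c1 * c2) % (pub[0] ** 2)
total = decrypt(pub, priv, agg)  # 40 + 2 = 42
```

In an FL setting, each client would encrypt its (quantized) gradient entries, the server would multiply the ciphertexts to aggregate, and only the key holder could decrypt the sum.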
Hong Peng, Tongtong Wu, Zhenkui Shi + 1 more
2023 IEEE Symposium on Computers and Communications (ISCC)
FedEF modifies the cross-entropy loss in the model, assigns different loss weights to different classes of data, and corrects the biased classifier to alleviate class imbalance; experiments show that it is an effective solution for FL models trained under heterogeneous and locally class-imbalanced data.
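The class-weighted cross-entropy idea described above can be sketched as follows; the specific weight values are hypothetical stand-ins (e.g. inverse class frequency), not the paper's exact scheme.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Cross-entropy where each class contributes with its own loss weight,
    up-weighting rare classes to counter local class imbalance."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(labels)), labels])    # per-sample NLL
    return float(np.mean(class_weights[labels] * nll))

# Two classes; class 1 is rare on this client, so it gets a larger weight.
logits = np.array([[2.0, 0.5], [0.2, 1.5], [1.8, 0.1]])
labels = np.array([0, 1, 0])
weights = np.array([1.0, 3.0])  # hypothetical, e.g. inverse class frequency
loss = weighted_cross_entropy(logits, labels, weights)
```

Samples from the rare class then contribute more to the local gradient, pushing the client model away from the majority-class bias.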
Devilliers Caleb Dube, Ç. Erdem, Ömer Korçak
journal unavailable
This paper proposes novel data-based and client-based curriculum learning (CL) approaches for federated FR, intending to improve the performance of generic and client-specific personalized models; evaluation of the proposed algorithm shows improved performance when CL is utilized during training.
Scott Kuzdeba
journal unavailable
This talk explores how the heterogeneity that surrounds the potential of federated learning can be tapped into to build better models, including the heterogeneity from device hardware to training paradigms (supervised, unsupervised, reinforcement, self-supervised).
We are now in an era of technological transformation in everyday life, in which data plays a key role in decision making and in turning decisions into action.
Anvesh Gunuganti
Journal of Artificial Intelligence & Cloud Computing
The study concludes that optimization, privacy, and novel application areas of FL warrant further research.
To make federated learning feasible, this thesis proposes changes to the optimization process, explains how dedicated compression methods can be employed, and shows how differential privacy techniques can ensure that sending weight updates does not leak significant information about individuals.
Qiang Yang, Yang Liu, Yong Cheng + 3 more
Synthesis Lectures on Artificial Intelligence and Machine Learning
It is shown how federated learning can become the foundation of next-generation machine learning that caters to technological and societal needs for responsible AI development and application.
Behnaz Soltani, Yipeng Zhou, Venus Haghighi + 1 more
journal unavailable
This paper provides the first comprehensive survey of existing federated evaluation methods, explores various applications of federated evaluation for enhancing FL performance, and presents future research directions by envisioning open challenges.
Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li + 2 more
ArXiv
An algorithm called IncFL is proposed that explicitly maximizes the fraction of clients who are incentivized to use the global model by dynamically adjusting the aggregation weights assigned to their updates; it can also improve the generalization performance of the global model on unseen clients.
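One plausible reading of "dynamically adjusting aggregation weights" can be sketched as below: clients whose loss under the global model is still worse than under their own local model (i.e., not yet incentivized to adopt it) get extra weight in the next aggregation. The `boost` parameter and loss values are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def incentive_aware_weights(global_losses, local_losses, boost=2.0):
    """Up-weight clients the current global model does NOT yet serve better
    than their own local model, pulling the next global model toward them."""
    gap = np.maximum(np.asarray(global_losses) - np.asarray(local_losses), 0.0)
    w = 1.0 + boost * gap
    return w / w.sum()  # normalized aggregation weights

def aggregate(client_models, weights):
    """Weighted average of client parameter vectors."""
    return sum(w * m for w, m in zip(weights, client_models))

# Client 0 is worse off under the global model (0.9 > 0.5), so it gets more weight.
models = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
w = incentive_aware_weights([0.9, 0.4, 0.4], [0.5, 0.6, 0.4])
global_model = aggregate(models, w)
```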
Tianqi Su, Meiqi Wang, Zhongfeng Wang
2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
A novel federated learning model that can protect data privacy from the gradient leakage attack and black-box membership inference attack and can successfully defend diverse external attacks to user-level privacy with negligible accuracy loss is proposed.
Henger Li, Chen Wu, Senchun Zhu + 1 more
ArXiv
This work proposes a general reinforcement learning-based backdoor attack framework where the attacker first trains a (non-myopic) attack policy using a simulator built upon its local data and common knowledge on the FL system, which is then applied during actual FL training.
Sai Praneeth Karimireddy, Narasimha Raghavan Veeraragavan, Severin Elvatun + 1 more
2023 Eighth International Conference on Fog and Mobile Edge Computing (FMEC)
This position paper presents a preliminary comparative analysis of 14 different Federated Learning frameworks, assessing their individual strengths and weaknesses and advocates for a more methodical understanding and selection of FL frameworks, which it believes will substantially benefit both practical applications and future advancements in the field.
Yifei Zeng, Kai Zhao, Fangchao Yu + 3 more
2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD)
The proposed FedGR framework groups clients according to their local data distributions using a genetic algorithm and a relay strategy, and significantly outperforms other state-of-the-art federated learning algorithms on various image classification tasks.
A. Nguyen, Philip H. S. Torr, S. Lim
journal unavailable
This paper proposes a simple yet novel representation learning framework, namely FedSR, which enables domain generalization while still respecting the decentralized and privacy-preserving natures of this FL setting and provides theoretical connections between the above two objectives and representation alignment in domain generalization.
Di Chai, Leye Wang, Kai Chen + 1 more
ArXiv
This paper introduces the ACTPR model, which defines five metrics that cannot be excluded in FL evaluation, including Accuracy, Communication, Time efficiency, Privacy, and Robustness, and designs and implements a benchmarking system called FedEval, which enables the systematic evaluation and comparison of existing works under consistent experimental conditions.
Shuo Han, Chenyu Zhang, Luhan Wang + 2 more
2023 IEEE 34th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)
This paper demonstrates the effect of federated learning (FL) on lowering computing energy consumption, proposes FedECS, an FL algorithm for energy-efficiency-ratio-based client selection, and reduces the total energy consumption of FL through a trade-off between the number of iteration rounds and the energy consumption per round.
Yongxin Guo, Xiaoying Tang, Tao Lin
journal unavailable
This work proposes FedBR, a novel unified algorithm that reduces the local learning bias on features and classifiers to tackle the challenges caused by local updates in supervised FL.
M. Farooq, Thanh Tung Vu, H. Ngo + 1 more
IEEE Transactions on Wireless Communications
Massive multiple-input multiple-output (MIMO) technology is adopted, and the full-duplex (FD)-based scheme outperforms the half-duplex (HD)-based counterpart in scenarios where the self-interference is small or moderate and/or the FL model updates are large.
Tingting Zhang, Mei Cao, Zhenge Jia + 4 more
journal unavailable
Federated learning (FL) is a distributed machine learning approach that allows multiple devices or computing nodes to jointly train models without sharing raw data. However, in real-world application scenarios, FL usually encounters a critical challenge of data heterogeneity. Recent studies have revealed that the client’s model suffers from severe bias between the local model and global model, leading to global performance degradation. Improving the generalization of local learning would inherently reduce bias. It has been proved that self-paced learning on a single device can greatly achieve a b...
Daniel Garcia Bernal, Lodovico Giaretta, Sarunas Girdzijauskas + 1 more
ArXiv
This work shows the viability of training NLP models, specifically Word2Vec, with the federated learning protocol, and shows that neither the quality of the results nor the convergence time of Federated Word2Vec deteriorates compared to centralized Word2Vec.
Jonathan Scott, Hossein Zakerinia, Christoph H. Lampert
journal unavailable
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects: 1) it produces more accurate models, especially in the low-data regime, and not only for clients present during its training phase, but also for any that may emerge in the future; 2) it reduces the amount of on-client computation and client-server communication by providing future clients with ready-to-use personalized models that require no additional finetuning or optimization; 3) it comes with theoretical guarantees that establish generalization from the observed clien...
Yang Liu, Zhuo Ma, Ximeng Liu + 3 more
ArXiv
In RevFRF, a suite of homomorphic-encryption-based secure protocols is designed for federated random forest (RF) construction, prediction, and revocation; the protocols are shown to securely and efficiently implement collaborative training of an RF and to ensure that the memories of a revoked participant in the trained RF are securely removed.
K. Singhal, Hakim Sidahmed, Zachary Garrett + 3 more
journal unavailable
Federated Reconstruction is introduced, the first model-agnostic framework for partially local federated learning suitable for training and inference at scale, and an open-source library is released for evaluating approaches in this setting.
P. Rieger, T. Krauß, Markus Miettinen + 2 more
Proceedings 2024 Network and Distributed System Security Symposium
A novel defense mechanism, CrowdGuard, is presented that effectively mitigates backdoor attacks in FL and overcomes the deficiencies of existing techniques and leverages clients' feedback on individual models, analyzes the behavior of neurons in hidden layers, and eliminates poisoned models through an iterative pruning scheme.
A comprehensive survey of the unique security vulnerabilities exposed by the FL ecosystem is provided, highlighting the vulnerabilities sources, key attacks on FL, defenses, as well as their unique challenges, and discussing promising future research directions towards more robust FL.
FML allows clients to design customized models and train them independently; the non-IIDness of data is thus no longer a bug but a feature, as each client can be served with better personalization and achieve better performance, robustness, and communication efficiency than with alternatives.
Kallista A. Bonawitz, P. Kairouz, H. B. McMahan + 1 more
Queue
Key concepts in federated learning and analytics are introduced with an emphasis on how privacy technologies may be combined in real-world systems and how their use charts a path toward societal benefit from aggregate statistics in new domains and with minimized risk to individuals and to the organizations who are custodians of the data.
A new form of federated learning in which clients train personalized local models and make predictions jointly with the server-side shared model, while still gaining all the performance benefits that joint training provides.
Elsa Rizk, Stefan Vlaski, A. H. Sayed
2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
This work considers a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data, and establishes that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
This work proposes a novel federated learning framework that establishes a firm structure-information alignment across collaborative models, and designs a feature-oriented regulation method to ensure explicit feature-information allocation in different neural network structures.
It is theoretically proven that better local models yield a better global model, and that both continue to improve together as training progresses.
Akul Agrawal, Divya D. Kulkarni, S. B. Nair
2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
An approach to decentralize FL using mobile agents coupled with the Federated Averaging (FedAvg) algorithm is presented, and results obtained by running the model on different network topologies indicate that the hybrid version is the better option for an FL implementation.
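The FedAvg aggregation rule referenced here is standard: client parameter vectors are averaged with weights proportional to local dataset sizes. A minimal sketch, with made-up parameter values:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameters weighted by the
    number of local training samples each client holds."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Three clients with different data volumes; larger clients count more.
params = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
sizes = [10, 30, 60]
global_params = fedavg(params, sizes)  # [2.2, 1.4]
```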
Y. Sarcheshmehpour, Yu Tian, Linli Zhang + 1 more
journal unavailable
This work develops the theory and an algorithmic toolbox for networked federated learning over decentralized collections of local datasets with an intrinsic network structure, and reveals an interesting interplay between the convex geometry of local models and the (cluster) geometry of their network structure.
FedPAC is presented, a unified framework that leverages PAC learning to quantify multiple objectives in terms of sample complexity that allows us to constrain the solution space of multiple objectives to a shared dimension with the help of a single-objective optimization algorithm.
Tiantian Feng, Anil Ramakrishna, Jimit Majmudar + 6 more
ArXiv
This work proposes a new algorithm called Partial Federated Learning (PartialFL), where a machine learning model is trained using data where a subset of data modalities or their intermediate representations can be made available to the server and restricts the egress of data labels to the cloud for better privacy.
The proposed methods adopt adaptivity by using a novel covariance-matrix preconditioner and achieve state-of-the-art performance in both i.i.d. and non-i.i.d. settings, with convergence guarantees provided for the algorithms.
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
Kallista A. Bonawitz, P. Kairouz, H. B. McMahan + 1 more
Communications of the ACM
Building privacy-preserving systems for machine learning and data science on decentralized data.
Li Li, Yuxi Fan, Kuo-Yi Lin
2020 IEEE 16th International Conference on Control & Automation (ICCA)
This study reviews related FL work to establish a universal definition as a baseline to guide future research, and identifies four research fronts to enrich the FL literature and help advance understanding of the field.
A. Mitra, Hamed Hassani, George Pappas
2021 60th IEEE Conference on Decision and Control (CDC)
This work proposes FedOMD, an online FL algorithm where, akin to the offline setting, clients perform multiple local processing steps before uploading their model predictions to the server, and proves sublinear regret bounds that match their centralized counterparts (up to constants) for both convex and strongly convex losses.
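The "multiple local steps, then aggregate" pattern that FedOMD and similar algorithms share can be sketched as follows. This is a simplification, not FedOMD itself: mirror descent with the Euclidean mirror map reduces to plain gradient descent, and the quadratic client losses are illustrative.

```python
import numpy as np

def local_steps(x, grad_fn, lr=0.1, k=5):
    """K local gradient steps (mirror descent with the Euclidean mirror map)."""
    for _ in range(k):
        x = x - lr * grad_fn(x)
    return x

def federated_round(x_server, client_grads, lr=0.1, k=5):
    """Each client refines the server model locally; the server averages."""
    updated = [local_steps(x_server.copy(), g, lr, k) for g in client_grads]
    return np.mean(updated, axis=0)

# Two clients with quadratic losses centered at different optima.
grads = [lambda x: x - np.array([1.0, 0.0]),
         lambda x: x - np.array([0.0, 1.0])]
x = np.zeros(2)
for _ in range(50):
    x = federated_round(x, grads)
# x converges to the average of the client optima, [0.5, 0.5]
```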
S. Dhakal, Saurav Prakash, Yair Yona + 2 more
2019 IEEE Globecom Workshops (GC Wkshps)
This paper develops a novel coded computing technique for federated learning (CFL) to mitigate the impact of stragglers and shows that CFL allows the global model to converge nearly four times faster than an uncoded approach.