Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges
An extensive review of the threats of federated learning is presented, together with their corresponding countermeasures, attacks versus defences, and guidelines for selecting the most adequate defence method according to the category of the adversarial attack.
Abstract
Federated learning is a machine learning paradigm that emerges as a solution to the privacy-preservation demands of artificial intelligence. As in machine learning, federated learning is threatened by adversarial attacks against the integrity of the learning model and the privacy of the data, via its distributed approach to local and global learning. This weakness is exacerbated by the inaccessibility of data in federated learning, which hampers the protection against adversarial attacks and evidences the need to further the research on defence methods, so that federated learning becomes a real solution for safeguarding data privacy. In this paper, we present an extensive review of the threats of federated learning, as well as their corresponding countermeasures, attacks versus defences. This survey provides a taxonomy of adversarial attacks and a taxonomy of defence methods that depict a general picture of this vulnerability of federated learning and how to overcome it. Likewise, we expound guidelines for selecting the most adequate defence method according to the category of the adversarial attack. In addition, we carry out an extensive experimental study from which we draw further conclusions about the behaviour of attacks and defences and about those guidelines. The paper concludes with a discussion of lessons learned and open challenges.
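To make the attack-versus-defence setting concrete, the following minimal sketch (not taken from the survey; the function names, client values, and the choice of coordinate-wise median as the defence are illustrative assumptions) shows how a single poisoned client update can dominate plain federated averaging, while a robust aggregation rule limits its influence:

```python
import numpy as np

def fedavg(updates):
    # Plain federated averaging: coordinate-wise mean of client updates.
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    # One example of a robust aggregation defence: the coordinate-wise
    # median, which bounds the influence of a minority of attackers.
    return np.median(updates, axis=0)

# Three honest clients send similar updates; one attacker sends an
# extreme update to poison the global model (illustrative values only).
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = np.array([100.0, -100.0])
updates = honest + [poisoned]

avg = fedavg(updates)             # dragged far from the honest consensus
med = median_aggregate(updates)   # stays close to the honest updates
```

With these values, the averaged update is pulled to roughly (25.75, -24.25) by the single attacker, while the median stays near the honest consensus of (1, 1), illustrating why the choice of defence must match the category of attack.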