
Learning to Unlearn in Federated Learning

88 Citations • 2024
Yixiong Wang, Jalil Taghia, Selim Ickin
2024 2nd International Conference on Federated Learning Technologies and Applications (FLTA)


Abstract

Machine unlearning emerges as a critical concept in federated learning (FL) systems. For reasons such as privacy concerns, it may become necessary to remove the contribution of certain clients from the global model after the model has been trained. This paper proposes a solution, named learning to unlearn (LTU), for the effective unlearning of data from clients while ensuring privacy and without the need for retraining or accessing the clients' data. For this unlearning task, we introduce a notion of disentanglement and a global contribution model, which is obtained by learning not only the contribution of each client but also how to disentangle it from the contributions of the others. As a result of the enforced disentanglement, once an unlearning request is received for a target client, its contribution can be removed from the global model in a one-step procedure. The method is evaluated on homogeneous and (semi-)heterogeneous data distributions, and the results provide evidence of approximate unlearning, i.e., a reduced capability (with an impact on accuracy as low as -0.3) to classify the unlearned data.
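The one-step removal described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual LTU procedure: it assumes, purely for illustration, that the global model decomposes additively into disentangled per-client contribution vectors, so that "unlearning" a target client amounts to re-aggregating without that client's contribution. All function names and the additive decomposition are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical setup: each client's disentangled contribution is a
# parameter vector, and the global model is a weighted aggregate.
# This additive decomposition is an illustrative assumption, not the
# paper's learned disentanglement.

def aggregate(contributions, weights):
    """Global model as a weighted sum of per-client contributions."""
    return sum(w * c for w, c in zip(weights, contributions))

def unlearn(contributions, weights, target):
    """One-step unlearning: re-aggregate all contributions except the
    target client's, renormalizing the remaining weights."""
    kept = [(w, c) for i, (w, c) in enumerate(zip(weights, contributions))
            if i != target]
    total = sum(w for w, _ in kept)
    return sum((w / total) * c for w, c in kept)

# Toy example with three clients and 2-dimensional "models".
contribs = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 3.0])]
weights = [0.5, 0.3, 0.2]
global_model = aggregate(contribs, weights)
unlearned_model = unlearn(contribs, weights, target=1)  # remove client 1
```

The point of the sketch is the cost model: because the contributions are kept disentangled, removal requires only a single re-aggregation step, with no retraining and no access to the target client's raw data.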