
Federated PAC Learning

2023 · 1 citation
Xiaojin Zhang, Qian Yang
journal unavailable

FedPAC is presented, a unified framework that leverages PAC learning to quantify multiple objectives in terms of sample complexity, constraining the solution space of the objectives to a shared dimension so that the trade-off can be solved with a single-objective optimization algorithm.

Abstract

Federated learning (FL) is a distributed learning framework with privacy, utility, and efficiency as its primary concerns. Existing research indicates that privacy protection, utility, and efficiency are unlikely to be achieved simultaneously. How to find an optimal trade-off among the three factors is a key consideration for trustworthy federated learning. Is it possible to cast the trade-off as a multi-objective optimization problem that minimizes the utility loss and efficiency reduction while constraining the privacy leakage? Existing multi-objective optimization frameworks are not suitable for this goal because they are time-consuming and provide no guarantee that a Pareto frontier of solutions exists. This challenge motivates us to seek an alternative that transforms the multi-objective problem into a single-objective one, which can potentially be solved more cost-effectively while still yielding an optimal trade-off. To this end, in this paper, we present FedPAC, a unified framework that leverages PAC learning to quantify multiple objectives in terms of sample complexity. This unification provides a quantification that allows us to constrain the solution space of the multiple objectives to a shared dimension, so the problem can then be solved with a single-objective optimization algorithm. Specifically, we provide theoretical results and detailed analyses on how to quantify the utility loss, privacy leakage, and efficiency, and show that their trade-off can be attained by imposing the maximal cost on potential attackers from the PAC learning perspective.
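The scalarization idea in the abstract — express every objective as a sample complexity, then minimize utility loss and efficiency cost subject to a privacy constraint — can be illustrated with a toy sketch. This is not the paper's actual quantification: the protection parameter `sigma`, the way it inflates the learner's and the attacker's sample complexities, and the cost model are all illustrative assumptions; only the finite-hypothesis PAC bound is standard.

```python
import math

def pac_samples(eps: float, delta: float, ln_hypotheses: float) -> float:
    """Standard finite-class PAC bound: m >= (ln|H| + ln(1/delta)) / eps."""
    return (ln_hypotheses + math.log(1.0 / delta)) / eps

def solve_tradeoff(attacker_budget: float,
                   eps: float = 0.1, delta: float = 0.05,
                   ln_hypotheses: float = math.log(1e6),
                   comm_cost_per_sample: float = 0.01):
    """Grid-search a protection level sigma that minimizes the learner's
    sample complexity plus a communication cost, subject to the attacker
    needing at least `attacker_budget` samples (the privacy constraint).
    All three objectives live on the shared dimension 'number of samples'."""
    best = None
    for step in range(1, 200):
        sigma = step * 0.05  # protection strength (e.g. noise scale), hypothetical
        # Assumption: protection tightens the learner's effective error
        # target linearly (utility loss grows with sigma)...
        m_learner = pac_samples(eps / (1.0 + sigma), delta, ln_hypotheses)
        # ...and the attacker's quadratically (attacks degrade faster),
        # so stronger protection raises the attacker's required samples.
        m_attacker = pac_samples(eps / (1.0 + sigma) ** 2, delta, ln_hypotheses)
        if m_attacker < attacker_budget:
            continue  # privacy constraint violated at this sigma
        # Single objective: utility loss plus efficiency (communication) cost.
        cost = m_learner + comm_cost_per_sample * m_learner
        if best is None or cost < best[1]:
            best = (sigma, cost, m_learner, m_attacker)
    return best
```

Because the cost grows monotonically with `sigma` under these assumed forms, the optimum is simply the smallest protection level that satisfies the privacy constraint, which matches the abstract's framing of minimizing utility and efficiency loss while constraining leakage.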