Federated learning (FL) has recently emerged as a popular framework for training a model via periodic coordination between a set of clients and a central server. The training task is abstracted as an optimization problem and solved under the premise that clients have offline access to their data samples, and that such samples are generated according to an underlying statistical model. Departing from this paradigm, we initiate the study of FL in uncertain environments, where the clients' local loss functions arrive in an online, streaming manner and are revealed only after the clients make their model predictions. Moreover, unlike the standard FL setting, we make no statistical assumptions on the loss functions, and our performance measure of interest is an appropriately defined collective regret metric. To minimize this regret metric in a communication-efficient manner, we propose FedOMD, an online FL algorithm where, akin to the offline setting, clients perform multiple local processing steps before uploading their model predictions to the server. Crucially, FedOMD differs from existing FL algorithms in the nature of its processing step. We (i) prove sublinear regret bounds for FedOMD that match their centralized counterparts (up to constants) for both convex and strongly convex losses; and (ii) use our regret guarantees to derive high-probability excess risk bounds that characterize the generalization ability of FedOMD. Our analysis reveals in a precise way the trade-offs between intermittent communication and performance measures such as regret and excess risk.
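To make the communication pattern concrete, the following is a minimal sketch of an online federated scheme of the kind the abstract describes: each client runs several local online mirror descent steps against its streaming losses, and the server then averages the local models. This is an illustrative instance only, not the paper's FedOMD; the Euclidean mirror map (so that each local step is plain online gradient descent), the quadratic losses, the step size, and all function and variable names are assumptions made for the example.

```python
import numpy as np

def online_fed_sketch(num_clients=4, rounds=50, local_steps=5,
                      dim=3, eta=0.05, seed=0):
    """Hypothetical sketch: intermittent-communication online FL.

    Each round, every client takes `local_steps` online gradient
    descent steps (OMD with a Euclidean mirror map) against losses
    that are revealed only after the prediction is made, then the
    server averages the resulting local models.
    """
    rng = np.random.default_rng(seed)
    # Illustrative streaming losses: f_t(x) = 0.5 * ||x - b_t||^2,
    # with b_t drawn near a client-specific target each step.
    targets = rng.normal(size=(num_clients, dim))
    x = np.zeros(dim)          # shared model broadcast by the server
    dist_to_mean = []          # crude progress proxy, not the paper's regret
    for _ in range(rounds):
        local_models = []
        for c in range(num_clients):
            x_c = x.copy()     # client starts from the broadcast model
            for _ in range(local_steps):
                # Loss revealed only after predicting with x_c.
                b_t = targets[c] + 0.1 * rng.normal(size=dim)
                grad = x_c - b_t          # gradient of 0.5*||x_c - b_t||^2
                x_c = x_c - eta * grad    # local OMD (Euclidean) step
            local_models.append(x_c)
        x = np.mean(local_models, axis=0)  # server aggregation
        dist_to_mean.append(np.linalg.norm(x - targets.mean(axis=0)))
    return x, dist_to_mean
```

More local steps per round reduce communication but let clients drift toward their own loss streams before the server re-synchronizes them, which is the intermittent-communication trade-off the abstract refers to.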