Abstract: |
This work presents a novel approach to Federated Learning (FL), a collaborative learning paradigm that trains a shared model on data distributed across numerous clients. We establish a duality between the widely studied FL problem and the parallel subspace correction problem, which leads to our accelerated FL algorithm, DualFL-CS. By employing a novel randomized coordinate descent method, the algorithm supports client sampling and admits inexact local solvers, reducing computational costs in both the smooth and non-smooth settings. For smooth FL problems, DualFL-CS achieves optimal linear convergence rates; for non-smooth problems, it attains accelerated sublinear convergence rates. Numerical experiments demonstrate that the algorithm outperforms existing state-of-the-art FL algorithms.