Special Session 34: 

Decentralized consensus algorithms with network-independent step-sizes

Ming Yan
Michigan State University
USA
Co-Author(s): Zhi Li, Wei Shi
Abstract:
We consider the problem of decentralized optimization with a composite objective containing smooth and non-smooth terms. To solve the problem, we study a proximal-gradient scheme in which the smooth and non-smooth terms are handled by a gradient update and a proximal update, respectively. The studied algorithm is closely related to a previous decentralized optimization algorithm, PG-EXTRA, but it has a few advantages. First, in our new scheme, agents use uncoordinated step-sizes, and the stable upper bounds on the step-sizes are independent of the network topology. The step-sizes depend only on the local objective functions, and they can be as large as those of gradient descent. Second, for the special case without non-smooth terms, linear convergence is achieved under the strong convexity assumption. The dependence of the convergence rate on the objective functions and on the network is separated, and the convergence rate of our new scheme is as good as one of the two convergence rates that match the typical rates for general gradient descent and for consensus averaging. We also provide numerical experiments that demonstrate the efficacy of the introduced algorithms and validate our theoretical findings.
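
For concreteness, one standard way to write the composite problem the abstract describes (an assumption on our part, since the abstract does not state the formulation explicitly) is a consensus problem over n agents, where agent i holds a smooth local term f_i and a proximable non-smooth term r_i:

```latex
\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} \bigl( f_i(x) + r_i(x) \bigr),
```

with the gradient of f_i driving the gradient update and the proximal operator of r_i driving the proximal update.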
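
The abstract does not spell out the update, so the following is only a minimal prox-DGD-style sketch of the general pattern it describes (mix with neighbors, local gradient step on the smooth term, local proximal step on the non-smooth term), not the exact algorithm analyzed in the talk; the mixing matrix W, the l1 choice of r_i, and all names here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def decentralized_prox_grad(W, grads, alphas, x0, lam, iters=500):
    """A sketch of one decentralized proximal-gradient loop.

    W      : (n, n) symmetric doubly stochastic mixing matrix
    grads  : list of n callables; grads[i](x) is the gradient of agent i's smooth term f_i
    alphas : (n,) per-agent (uncoordinated) step-sizes
    x0     : (n, d) initial local iterates, one row per agent
    lam    : weight of the shared l1 regularizer r_i = lam * ||.||_1
    """
    x = x0.copy()
    for _ in range(iters):
        mixed = W @ x                      # consensus (mixing) step with neighbors
        for i in range(x.shape[0]):
            g = grads[i](x[i])             # local gradient of the smooth term
            # gradient step on the smooth term, then prox step on the non-smooth term
            x[i] = soft_threshold(mixed[i] - alphas[i] * g, alphas[i] * lam)
    return x

# Toy usage: 4 agents on a ring, local least-squares terms plus a shared l1 penalty.
n, d = 4, 3
rng = np.random.default_rng(0)
A = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]
grads = [(lambda Ai, bi: (lambda x: Ai.T @ (Ai @ x - bi)))(Ai, bi) for Ai, bi in zip(A, b)]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
# Each agent picks its own step-size 1/L_i from its local smoothness constant only.
alphas = np.array([1.0 / np.linalg.norm(Ai.T @ Ai, 2) for Ai in A])
x = decentralized_prox_grad(W, grads, alphas, np.zeros((n, d)), lam=0.1)
```

With fixed step-sizes, a plain sketch like this is generally guaranteed to reach only a neighborhood of the solution; obtaining exact convergence with uncoordinated, network-independent step-sizes of this size is precisely the improvement the abstract claims over schemes such as PG-EXTRA.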