Exact spectral-like gradient method for distributed optimization

We consider unconstrained distributed optimization problems in which N agents form an arbitrary connected network and collaboratively minimize the sum of their local convex cost functions. In this setting, we develop distributed gradient methods in which the agents’ step-sizes are designed according to rules akin to those used in spectral gradient methods. The proposed method converges to the exact minimizer of the aggregate cost function. The numerical performance of the proposed distributed methods is illustrated on several application examples.
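The abstract does not specify the step-size rule; as background, the following is a minimal sketch of the classical (centralized) Barzilai-Borwein spectral step-size rule that spectral gradient methods are built on. The function names, test problem, and parameters are illustrative assumptions, not details from the talk.

```python
import numpy as np

def spectral_gradient(grad, x0, iters=100, alpha0=1.0):
    """Gradient descent with the Barzilai-Borwein (BB1) spectral step-size.

    Illustrative sketch only; the distributed method in the talk adapts
    step-size rules of this flavor to the multi-agent setting.
    """
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s = x_new - x          # difference of consecutive iterates
        y = g_new - g          # difference of consecutive gradients
        sy = s @ y
        # BB1 rule: alpha = (s's)/(s'y), guarded against tiny denominators
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x

# Example: minimize f(x) = 0.5 x'Ax - b'x for a (hypothetical) SPD matrix A,
# whose unique minimizer is the solution of Ax = b.
A = np.diag([1.0, 4.0, 9.0])
b = np.array([1.0, 1.0, 1.0])
x_star = spectral_gradient(lambda x: A @ x - b, np.zeros(3))
```

The spectral step-size approximates the inverse of a local curvature estimate along the latest step, which is what makes these methods attractive as building blocks for distributed schemes with cheap, local step-size computation.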

This presentation is part of the Minisymposium “MS13 - Optimization for Imaging and Big Data (2 parts)”,
organized by: Margherita Porcelli (University of Firenze), Francesco Rinaldi (University of Padova).

Authors:
Nataša Krejić (University of Novi Sad)
Keywords:
distributed optimization, machine learning, nonlinear optimization, spectral gradient method