Beyond the worst-case convergence analysis of the forward-backward algorithm

The forward-backward splitting algorithm is nowadays one of the most popular iterative procedures for solving convex minimization problems arising in machine learning and inverse problems. Its worst-case convergence rate on the function values is well known to be O(1/n). In practice, however, the forward-backward algorithm is often much faster. The goal of this talk is to show that, under suitable geometric assumptions that are often satisfied in practice, the forward-backward algorithm converges at much faster rates.
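To fix ideas, here is a minimal sketch of the forward-backward splitting iteration applied to the lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁, where the proximal (backward) step reduces to soft-thresholding. The problem data, step size, and iteration count below are illustrative choices, not taken from the talk.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    """Forward-backward splitting for 0.5*||Ax - b||^2 + lam*||x||_1."""
    # Step size 1/L, with L the Lipschitz constant of the gradient
    # of the smooth part: L = ||A||_2^2 (squared spectral norm).
    L = np.linalg.norm(A, 2) ** 2
    gamma = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # forward (gradient) step
        x = soft_threshold(x - gamma * grad,     # backward (proximal) step
                           gamma * lam)
    return x

# Illustrative data: a sparse signal recovered from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = forward_backward(A, b, lam=0.1)
```

On well-conditioned instances like this one the iterates converge far faster than the O(1/n) worst-case bound suggests, which is precisely the phenomenon the geometric assumptions in the talk explain.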

This presentation is part of the Minisymposium “MS13 - Optimization for Imaging and Big Data (2 parts)”,
organized by Margherita Porcelli (University of Firenze) and Francesco Rinaldi (University of Padova).

Silvia Villa (Politecnico di Milano)
Lorenzo Rosasco (University of Genoa, Istituto Italiano di Tecnologia; Massachusetts Institute of Technology)
Guillaume Garrigos (Ecole Normale Supérieure, CNRS)
Keywords: conditioning, convex optimization, forward-backward splitting, inverse problems, machine learning, nonlinear optimization