The forward-backward splitting algorithm is nowadays one of the most popular iterative procedures for solving convex minimization problems arising in machine learning and inverse problems. Its worst-case convergence rate on the function values is well known to be O(1/n). In practice, however, the forward-backward algorithm is often much faster. The goal of this talk is to show that, under suitable geometric assumptions that are often satisfied in practice, the forward-backward algorithm converges at much faster rates.
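As background, the forward-backward iteration alternates a gradient (forward) step on the smooth term with a proximal (backward) step on the nonsmooth term. A minimal sketch for the lasso problem, where the proximal step is soft-thresholding, is given below; the data `A`, `b` and the parameter `lam` are illustrative, not taken from the talk.

```python
import numpy as np

# Forward-backward (proximal gradient) iteration for the lasso problem
#   min_x  0.5*||A x - b||^2 + lam*||x||_1
# Illustrative sketch: A, b, and lam are synthetic, not from the talk.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                # forward (gradient) step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20) * (rng.random(20) < 0.3)   # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)
x = forward_backward(A, b, lam=0.5)
obj = 0.5 * np.sum((A @ x - b) ** 2) + 0.5 * np.sum(np.abs(x))
```

With the step size 1/L the objective is guaranteed to decrease monotonically; the faster rates discussed in the talk concern how quickly the function values approach the minimum under additional geometric assumptions.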
This presentation is part of Minisymposium “MS13 - Optimization for Imaging and Big Data (2 parts)”
organized by Margherita Porcelli (University of Firenze) and Francesco Rinaldi (University of Padova).