In this talk we focus on variable metric forward-backward algorithms, in which the metric underlying the backward (proximal) step may change at each iteration to better capture the local features of the objective function and constraints. We describe the main theoretical convergence properties in the convex and nonconvex cases, and discuss practical implementation issues.
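As a minimal illustration of the idea, the sketch below applies a variable metric forward-backward iteration to the LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. The diagonal metric chosen here (row sums of |A^T A|, which majorizes the Hessian by Gershgorin's theorem) is only one simple possibility, not the specific metric strategy discussed in the talk.

```python
import numpy as np

def soft_threshold(y, t):
    # Componentwise proximal operator of t*||.||_1 (t may be a vector).
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def vm_forward_backward(A, b, lam, n_iter=500):
    """Variable metric forward-backward for the LASSO.

    One iteration: x+ = prox_g^D( x - D^{-1} grad f(x) ), where
    f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1, and D is diagonal.
    With a diagonal metric, the prox in the metric D reduces to a
    componentwise soft-threshold with thresholds lam / d_i.
    """
    n = A.shape[1]
    x = np.zeros(n)
    # Illustrative metric: D = diag(row sums of |A^T A|), so D >= A^T A
    # and the surrogate majorizes f, giving a monotone decrease.
    d = np.maximum(np.sum(np.abs(A.T @ A), axis=1), 1e-8)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)        # forward (gradient) step
        y = x - grad / d                # scaled by D^{-1}
        x = soft_threshold(y, lam / d)  # backward (proximal) step in metric D
    return x
```

In practice the metric would be updated at each iteration (e.g., from curvature information); keeping it fixed here keeps the sketch short.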
This presentation is part of the minisymposium “MS79 - From optimization to regularization in inverse problems and machine learning”,
organized by Silvia Villa (Politecnico di Milano) and Lorenzo Rosasco (University of Genoa; Istituto Italiano di Tecnologia; Massachusetts Institute of Technology).