Optimization in inverse problems via iterative regularization

In the context of linear inverse problems, we propose and study a general iterative regularization method that accommodates large classes of regularizers and data-fit terms. We are particularly motivated by non-smooth data-fit terms, such as a Kullback-Leibler divergence or an L1 distance. We treat these problems with an algorithm, based on a primal-dual diagonal descent method, designed to solve hierarchical optimization problems. The key point of our approach is that, in the presence of noise, the number of iterations of the algorithm acts as a regularization parameter. In practice this means that the algorithm must be stopped after a certain number of iterations. This is what is called regularization by early stopping, an approach that has gained popularity in statistical learning. Our main results establish convergence and stability of the algorithm, and are illustrated by experiments on image denoising, comparing our approach with a more classical Tikhonov regularization method.
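To make the early-stopping idea concrete, the following is a minimal sketch, not the authors' primal-dual diagonal algorithm: a standard Chambolle-Pock primal-dual iteration applied to a small synthetic problem with an L1 data fit and an L1 regularizer. The forward operator, noise level, and regularization weight are assumptions chosen only for illustration; the oracle error against the ground truth is tracked to show that, with noisy data, the reconstruction error typically improves for a while and then degrades, so the iteration count itself acts as a regularization parameter.

```python
# Hypothetical illustration (not the method of the abstract): early stopping
# of a Chambolle-Pock iteration for  min_x ||A x - b||_1 + lam * ||x||_1
# with noisy data b.
import numpy as np

rng = np.random.default_rng(0)

n, m = 200, 100
A = rng.standard_normal((n, m)) / np.sqrt(n)      # assumed forward operator
x_true = np.zeros(m)
x_true[rng.choice(m, 10, replace=False)] = 1.0    # sparse ground truth
b = A @ x_true + 0.05 * rng.standard_normal(n)    # noisy measurements

lam = 0.01                                        # small regularization weight (assumed)
L = np.linalg.norm(A, 2)                          # operator norm of A
tau = sigma = 0.9 / L                             # step sizes with tau*sigma*||A||^2 < 1

x = np.zeros(m)
x_bar = x.copy()
y = np.zeros(n)
best_err, best_k = np.inf, 0

for k in range(1, 2001):
    # dual step: prox of the conjugate of ||. - b||_1 is a shifted projection onto [-1, 1]
    y = np.clip(y + sigma * (A @ x_bar) - sigma * b, -1.0, 1.0)
    # primal step: prox of lam*||.||_1 is soft-thresholding
    x_new = x - tau * (A.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
    x_bar = 2 * x_new - x                         # extrapolation (theta = 1)
    x = x_new

    # oracle error, used here only to exhibit the semi-convergence behaviour
    err = np.linalg.norm(x - x_true)
    if err < best_err:
        best_err, best_k = err, k

print(f"best reconstruction error {best_err:.3f} reached at iteration {best_k}")
```

In practice the ground truth is unknown, so the stopping iteration is chosen by an a priori rule or a discrepancy-type criterion rather than by oracle error as in this sketch.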

This presentation is part of the Minisymposium “MS13 - Optimization for Imaging and Big Data (2 parts)”,
organized by Margherita Porcelli (University of Firenze) and Francesco Rinaldi (University of Padova).

Authors:
Guillaume Garrigos (Ecole Normale Supérieure, CNRS)
Silvia Villa (Politecnico di Milano)
Lorenzo Rosasco (University of Genoa, Istituto Italiano di Tecnologia; Massachusetts Institute of Technology)
Keywords:
image deblurring, inverse problems, nonlinear optimization, regularization