MS79: From optimization to regularization in inverse problems and machine learning

Classical approaches to processing and classifying data often reduce to designing and minimizing empirical objective functions. The challenge is, on the one hand, to incorporate the structural information that may be available about the problem at hand and, on the other hand, to develop optimization schemes that can encompass and exploit such structure. In this minisymposium we present state-of-the-art approaches in this direction, in both machine learning and inverse problems. The goal is to discuss the interplay between estimation and optimization principles.
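As a minimal illustration of this interplay, consider a noisy linear inverse problem Ax = y solved by early-stopped gradient descent (Landweber iteration), where the iteration count itself acts as the regularization parameter. The sketch below is illustrative only and is not drawn from any of the talks; all problem sizes and parameter values are arbitrary choices for demonstration.

```python
# Illustrative sketch only: iterative regularization of a noisy linear
# inverse problem A x = y via early-stopped gradient descent (Landweber
# iteration). Problem sizes and parameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 100                                   # underdetermined problem
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = rng.standard_normal(d)
y = A @ x_true + 0.1 * rng.standard_normal(n)    # noisy measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2           # step size ensuring convergence
x = np.zeros(d)
errors = []
for t in range(2000):
    x -= step * A.T @ (A @ x - y)                # gradient step on 0.5*||Ax - y||^2
    errors.append(np.linalg.norm(x - x_true))

# The reconstruction error typically decreases and then increases
# (semiconvergence): stopping early regularizes, with the iteration
# count playing the role of the regularization parameter.
print(f"best error at iteration {int(np.argmin(errors))}: {min(errors):.3f}")
print(f"error at final iteration: {errors[-1]:.3f}")
```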

Parameter learning for total variation type regularisation schemes
Carola-Bibiane Schönlieb (University of Cambridge)
Inexact variable metric forward-backward methods for convex and nonconvex optimization
Silvia Bonettini (University of Modena and Reggio Emilia)
A random block-coordinate Douglas–Rachford splitting method with low computational complexity for binary logistic regression
Emilie Chouzenoux (Université Paris-Est Marne-la-Vallée)
Iterative optimization and regularization: convergence and stability
Lorenzo Rosasco (University of Genoa; Istituto Italiano di Tecnologia; Massachusetts Institute of Technology)
Organizers:
Lorenzo Rosasco (University of Genoa; Istituto Italiano di Tecnologia; Massachusetts Institute of Technology)
Silvia Villa (Politecnico di Milano)
Keywords:
convex optimization, inverse problems, iterative regularization, machine learning, nonlinear optimization, parameter learning, splitting methods