Analysis, Optimization, and Applications of Machine Learning in Imaging (MS50)

Energy minimization methods have been among the most powerful tools for tackling ill-posed image processing problems. They are extremely versatile, can model the data formation process explicitly, and allow a detailed mathematical analysis of the solution properties. An alternative approach considers a parameterized function, a network, that maps directly from the input data to the desired solution, and learns the optimal parameters of this mapping from a set of training data. While such learning-based approaches have recently outperformed energy minimization methods on many image processing problems, several challenging mathematical questions regarding their training, as well as the analysis and control of the produced outputs, are not yet well understood. The goal of this minisymposium is to bring together experts from machine learning, image processing, and optimization to discuss novel approaches to the design of networks, to the nested non-convex and non-smooth optimization problems that must be solved for their training, and to the analysis of the resulting solutions.

PART 1
Global Optimality in Matrix, Tensor Factorization, and Deep Learning
Rene Vidal (Johns Hopkins University)
Texture modeling with scattering transform
Sixin Zhang (Ecole Normale Supérieure Paris)
Learning for Compressed Sensing CT Reconstruction
Erich Kobler (Graz University of Technology)
Deep inversion: convolutional neural networks meet neuroscience
Christoph Brune (University of Twente)
PART 2
Breaking the Curse of Dimensionality with Convex Neural Networks
Francis Bach (Département d'Informatique de l'École Normale Supérieure; Centre de Recherche INRIA de Paris)
Prediction Methods for training Generative Image Models
Tom Goldstein (University of Maryland)
An Optimal Control Framework for Efficient Training of Deep Neural Networks
Lars Ruthotto (Department of Mathematics and Computer Science, Emory University)
Are neural networks convergent regularisation methods?
Martin Benning (University of Cambridge)
PART 3
Fast Learning and Inference for Computational Imaging
Peyman Milanfar (Google Research)
Unraveling the mysteries of stochastic gradient descent on deep networks
Pratik Chaudhari (University of California, Los Angeles (UCLA))
Proximal Backpropagation
Thomas Frerix (Technical University of Munich)
A shearlet-based deep learning approach to limited-angle tomography
Gitta Kutyniok (Technische Universität Berlin)
Organizers:
Gitta Kutyniok (Technische Universität Berlin)
Michael Moeller (University of Siegen)
Keywords:
deep learning, machine learning, nonlinear optimization