Are neural networks convergent regularisation methods?

We propose a generic Fejér-monotonicity result for neural network architectures with added constraints. This result can be interpreted as a first step towards establishing an analogy between neural network architectures and convergent, iterative regularisation methods, and can therefore help to stabilise such architectures. We apply our result to architectures such as variational networks and the learned iterative soft-thresholding algorithm (LISTA), and demonstrate how the theory dictates additional constraints on the learning of the network parameters. This is joint work with Thomas Pock and Peter Maaß.
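As a brief illustration (a sketch, not the construction from the talk), an ISTA/LISTA-style update alternates a linear step with soft-thresholding, and one common way to enforce nonexpansiveness of the learned linear operator, as Fejér-monotonicity arguments typically require, is to clip its spectral norm. The function names and the specific projection below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def soft_threshold(x, theta):
    # proximal operator of theta * ||.||_1 (elementwise shrinkage)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista_step(x, A, b, tau, lam):
    # one ISTA iteration: gradient step on 0.5 * ||A x - b||^2,
    # followed by the soft-thresholding prox; LISTA replaces the
    # fixed matrices here with learned ones
    return soft_threshold(x - tau * A.T @ (A @ x - b), tau * lam)

def project_spectral_norm(W, bound=1.0):
    # illustrative constraint: project a learned matrix onto the set
    # of matrices with spectral norm <= bound, via SVD clipping
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, bound)) @ Vt
```

After each training update of a LISTA-style layer, a projection like `project_spectral_norm` could be applied to its linear weights to keep the layer nonexpansive; this is one example of the kind of parameter constraint such convergence theory suggests.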

This presentation is part of Minisymposium “MS50 - Analysis, Optimization, and Applications of Machine Learning in Imaging (3 parts)”, organized by Michael Moeller (University of Siegen) and Gitta Kutyniok (Technische Universität Berlin).

Martin Benning (University of Cambridge)
Keywords: deep learning, inverse problems, machine learning, nonlinear optimization