Prediction Methods for Training Generative Image Models

Adversarial neural networks solve many important problems in image science, and can be used to build sophisticated image models and priors. However, these models are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points. This makes adversarial networks less likely to "collapse", and enables faster training with larger learning rates.
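The abstract does not spell out the modification, but the idea of stabilizing alternating gradient updates at a saddle point can be illustrated on a toy bilinear problem. In the sketch below, the "prediction" step extrapolates the minimizing player's iterate before the maximizing player updates; this particular update rule and the toy objective min_x max_y x*y are illustrative assumptions, not necessarily the authors' exact method. Plain alternating gradient steps merely oscillate around the saddle at the origin, while the extrapolated update contracts toward it:

```python
def alternating_sgd(steps=500, lr=0.2, predict=True):
    """Alternating gradient descent/ascent on L(x, y) = x * y.

    The saddle point is (0, 0).  With predict=True, the max player
    sees an extrapolated ("predicted") copy of the min player's
    iterate, which damps the oscillation; with predict=False the
    iterates circle the saddle without converging.
    """
    x, y = 1.0, 1.0
    for _ in range(steps):
        # Gradients of L = x*y:  dL/dx = y,  dL/dy = x.
        x_new = x - lr * y                # descent step for the min player
        if predict:
            # Prediction: extrapolate the min player's trajectory
            # one step ahead before the max player responds.
            x_bar = x_new + (x_new - x)
        else:
            x_bar = x_new
        y = y + lr * x_bar                # ascent step for the max player
        x = x_new
    return x, y

x_p, y_p = alternating_sgd(predict=True)
x_n, y_n = alternating_sgd(predict=False)
print("with prediction:   ", (x_p**2 + y_p**2) ** 0.5)  # shrinks toward 0
print("without prediction:", (x_n**2 + y_n**2) ** 0.5)  # stays order 1
```

For this bilinear case the effect can be checked by hand: the plain alternating update has a unit-modulus iteration matrix (bounded oscillation), while the predicted update's eigenvalues have modulus sqrt(1 - lr^2) < 1, so the iterates contract to the saddle regardless of how the learning rate is tuned below 1.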

This presentation is part of Minisymposium “MS50 - Analysis, Optimization, and Applications of Machine Learning in Imaging (3 parts)”,
organized by: Michael Moeller (University of Siegen), Gitta Kutyniok (Technische Universität Berlin).

Tom Goldstein (University of Maryland)
Zheng Xu (University of Maryland)
Sohil Shah (University of Maryland)
Abhay Kumar (University of Maryland)
computer vision, deep learning, image reconstruction, image representation, machine learning, nonlinear optimization