Breaking the Curse of Dimensionality with Convex Neural Networks

We consider neural networks with a single hidden layer and non-decreasing positively homogeneous activation functions, such as rectified linear units, and study their statistical and computational properties in a high-dimensional set-up where the number of hidden units grows unbounded. While these networks are provably adaptive to low-dimensional structures, they require an efficient solver for the non-convex subproblem of adding a new unit.
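The incremental scheme described above can be illustrated with a small sketch: units are added one at a time, each chosen to correlate with the current residual, and output weights are refit by least squares. The random-search heuristic for the new-unit subproblem is an assumption for illustration only; it stands in for the efficient solver the abstract calls for, and all function names here (`add_unit`, `incremental_fit`) are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def add_unit(X, residual, n_candidates=200, rng=None):
    """Heuristic for the non-convex subproblem: among random unit-norm
    directions, pick the ReLU feature most correlated with the residual.
    (A stand-in for an exact solver, for illustration only.)"""
    rng = np.random.default_rng(rng)
    best_w, best_score = None, -np.inf
    for _ in range(n_candidates):
        w = rng.standard_normal(X.shape[1])
        w /= np.linalg.norm(w)  # positive homogeneity: scale is free
        score = abs(relu(X @ w) @ residual)
        if score > best_score:
            best_score, best_w = score, w
    return best_w

def incremental_fit(X, y, n_units=8, seed=0):
    """Greedily grow a single hidden layer, one ReLU unit at a time."""
    rng = np.random.default_rng(seed)
    W, H = [], []
    residual = y.copy()
    for _ in range(n_units):
        w = add_unit(X, residual, rng=rng)
        W.append(w)
        H.append(relu(X @ w))
        # refit output weights by least squares over all units so far
        Phi = np.column_stack(H)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ c
    return np.array(W), c

# toy data with low-dimensional structure: y depends on one direction only
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = relu(X[:, 0]) - 0.5 * relu(-X[:, 0])
W, c = incremental_fit(X, y)
mse = np.mean((y - np.column_stack([relu(X @ w) for w in W]) @ c) ** 2)
```

Because the target varies along a single direction, a handful of greedily chosen units already drives the residual well below the variance of `y`, illustrating the adaptivity to low-dimensional structure.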

This presentation is part of Minisymposium MS50 “Analysis, Optimization, and Applications of Machine Learning in Imaging (3 parts)”, organized by Michael Moeller (University of Siegen) and Gitta Kutyniok (Technische Universität Berlin).

Francis Bach (Département d'Informatique de l'École Normale Supérieure, Centre de Recherche INRIA de Paris)
Keywords: deep learning, machine learning, nonlinear optimization