Memory-Optimal Deep Neural Networks

In this talk, we will be concerned with the question of how well a function, for instance one encoding a classification task, can be approximated by a neural network with sparse connectivity. We will derive a fundamental lower bound on the sparsity of a neural network that is independent of the learning algorithm, and also demonstrate how networks attaining this bound can be constructed, leading to memory-optimal deep neural networks.
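As a purely illustrative sketch (not from the talk itself), sparse connectivity is commonly quantified as the number of nonzero weights in the network. The snippet below, with all names and the magnitude-pruning heuristic being assumptions for illustration, shows one simple way to obtain a sparsely connected layer under a given connectivity budget:

```python
import numpy as np

# Illustrative sketch: measure "connectivity" as the number of nonzero
# weights, and prune a dense layer down to a given budget by keeping only
# the largest-magnitude entries. This is a generic heuristic, not the
# construction from the talk.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # dense weight matrix of one layer

def prune_to_budget(W, budget):
    """Keep the `budget` largest-magnitude weights of W; zero the rest."""
    flat = np.abs(W).ravel()
    if budget >= flat.size:
        return W.copy()
    threshold = np.partition(flat, -budget)[-budget]
    return np.where(np.abs(W) >= threshold, W, 0.0)

W_sparse = prune_to_budget(W, budget=16)
print(np.count_nonzero(W_sparse))  # 16 nonzero connections remain
```

The number of nonzero weights of such a pruned network is exactly the quantity the talk's lower bound constrains: no pruning or training scheme can approximate the target function well if the budget falls below that bound.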

This presentation is part of the minisymposium “MS36 - Computational Methods for Large-Scale Machine Learning in Imaging (2 parts)”, organized by Matthias Chung (Virginia Tech) and Lars Ruthotto (Department of Mathematics and Computer Science, Emory University).

Gitta Kutyniok (Technische Universität Berlin)
Helmut Bölcskei (ETH Zürich)
Philipp Grohs (Universität Wien)
Philipp Petersen (Technische Universität Berlin)
deep learning