In this talk, we will address the question of how well a function, for instance one encoding a classification task, can be approximated by a neural network with sparse connectivity. We will derive a fundamental lower bound on the sparsity of a neural network that is independent of the learning algorithm, and also demonstrate how networks attaining this bound can be constructed, leading to memory-optimal deep neural networks.
This presentation is part of the Minisymposium “MS36 - Computational Methods for Large-Scale Machine Learning in Imaging (2 parts)”,
organized by Matthias Chung (Virginia Tech) and Lars Ruthotto (Department of Mathematics and Computer Science, Emory University).