A Generative Adversarial Network (GAN) converts a fixed distribution in the latent space to the data distribution in the image space, which can be interpreted using the optimal mass transportation framework and is governed by the Monge-Ampère equation. In turn, L2 optimal transportation has a close relation to convex geometry. In this talk, we expose this relation and give a geometric method for Wasserstein-GAN. Furthermore, we address the following questions: Does a deep neural network learn a function or a probability measure? Do neural networks really memorize the samples, or do they learn? Why are neural networks easily fooled?
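As a small illustration of the optimal transport viewpoint mentioned above (not part of the talk itself), the 1-D Wasserstein distance between an empirical "latent" sample and an empirical "data" sample can be computed directly; the use of SciPy and the Gaussian toy distributions here are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two empirical 1-D samples standing in for a latent distribution
# and a data distribution (hypothetical toy choice).
rng = np.random.default_rng(0)
latent = rng.normal(loc=0.0, scale=1.0, size=1000)  # fixed latent distribution
data = rng.normal(loc=2.0, scale=1.0, size=1000)    # target "data" distribution

# For two Gaussians with equal variance, the true W1 distance is the
# absolute difference of the means, here 2; the empirical estimate
# should be close to that value.
w1 = wasserstein_distance(latent, data)
print(w1)
```

In higher dimensions the transport map is no longer explicit, which is where the Monge-Ampère equation and the convex-geometric methods discussed in the talk come in.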
This presentation is part of the Minisymposium "MS29 - Geometry and Learning in 3D Shape Analysis",
organized by Ronald Lui (Chinese University of Hong Kong) and Rongjie Lai (Rensselaer Polytechnic Institute).