We have shown that neural networks can recover the geometric structure of a human face from a single image. In this talk I will review the steps we took towards that goal, starting with a simple projection onto the Blanz-Vetter linear model coupled with a shape-from-shading reconstruction that recovers the fine details. Then, by training a network to learn the axiomatic shape-from-shading process, that part was also translated to the deep learning world. Finally, I will show how one can remove the restriction of a target linear subspace and train a holistic reconstruction network. Based on joint papers with Matan Sela, Elad Richardson, and Roy Or-El.
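To illustrate the first step, here is a minimal sketch of the linear-model idea: a face shape is expressed as a mean shape plus a linear combination of basis shapes, and a target shape is projected onto that subspace by least squares. All arrays, dimensions, and function names below are hypothetical placeholders, not the actual Blanz-Vetter model data or the authors' implementation; the subsequent shape-from-shading refinement is not shown.

```python
import numpy as np

def project_onto_linear_model(shape, mean_shape, basis):
    """Least-squares projection of a flattened 3D face shape
    (concatenated xyz coordinates) onto a linear morphable-model
    subspace spanned by the columns of `basis`."""
    coeffs, *_ = np.linalg.lstsq(basis, shape - mean_shape, rcond=None)
    return coeffs

def reconstruct_from_coefficients(mean_shape, basis, coeffs):
    """Coarse reconstruction: mean shape plus a linear combination of
    basis shapes. Fine details would come from a subsequent
    shape-from-shading refinement, as outlined above."""
    return mean_shape + basis @ coeffs

# Example with random placeholder data: n_coords = 3 * number of
# vertices, k = number of basis shapes (values chosen arbitrarily).
n_coords, k = 3 * 5000, 99
rng = np.random.default_rng(0)
mean_shape = rng.standard_normal(n_coords)
basis = rng.standard_normal((n_coords, k))
target = mean_shape + basis @ rng.standard_normal(k)

alpha = project_onto_linear_model(target, mean_shape, basis)
coarse = reconstruct_from_coefficients(mean_shape, basis, alpha)
```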
This presentation is part of the minisymposium "MS70 - Innovative Challenging Applications in Imaging Sciences (2 parts)",
organized by: Roberto Mecca (University of Bologna and University of Cambridge), Giulia Scalet (Dept. Civil Engineering and Architecture, University of Pavia), Federica Sciacchitano (Dept. Mathematics, University of Genoa).