A novel generative model to synthesize realistic training images

Computer vision increasingly leverages deep learning to advance the state of the art in its most challenging tasks. However, these approaches require huge datasets of annotated images for training. As manually obtaining such datasets is tedious and expensive, recent works have proposed generating synthetic training images with state-of-the-art computer graphics techniques. Yet, such synthetic training samples turn out to be significantly different from the real images processed at test time, which gives rise to the well-known issue referred to in the machine learning literature as domain shift. We propose to mitigate this shift by learning a generative model that solves a domain-to-domain translation problem between synthetic and real data. Peculiar to our approach is the use of semantic cues as a constraint during the image translation process: according to our formulation, the semantic structure of a translated image should be indistinguishable from that of the corresponding input image. This formulation allows us to shrink the performance gap between training a deep learning model on expensive real data and leveraging easily attainable synthetic data, thereby providing an efficient alternative to manual labeling of training images.
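The abstract does not spell out how the semantic constraint is enforced, but a minimal sketch of one plausible formulation is shown below: a frozen segmentation network (a hypothetical `segmenter`, not specified in the abstract) produces pseudo-labels on the synthetic input, and the translated image is penalized when its predicted semantic map deviates from them. This is an illustrative assumption, not the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(segmenter, synthetic, translated):
    """Penalize changes in semantic structure during translation.

    `segmenter` is an assumed frozen semantic segmentation network
    returning per-pixel class logits of shape (B, C, H, W); the abstract
    does not specify its architecture or the exact loss used.
    """
    with torch.no_grad():
        # Pseudo-labels: semantic map predicted on the untranslated input.
        target = segmenter(synthetic).argmax(dim=1)  # (B, H, W)
    logits = segmenter(translated)                   # (B, C, H, W)
    # The translated image should be semantically indistinguishable
    # from its source, so we penalize disagreement per pixel.
    return F.cross_entropy(logits, target)
```

In such a scheme, this term would be added to the usual adversarial translation objective, so the generator is rewarded for producing realistic images while being penalized for altering the scene's semantic layout.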

This is poster number 43 in the Poster Session.

Authors:
Luigi Di Stefano (Dept. Computer Science and Engineering, University of Bologna)
Alessio Tonioni (Dept. Computer Science and Engineering, University of Bologna)
Pierluigi Zama Ramirez (Dept. Computer Science and Engineering, University of Bologna)