Exploiting Low-Quality Visual Data using Deep Networks

While many sophisticated models have been developed for visual information processing, very few consider their usability in the presence of data quality degradations. Most successful models are trained and evaluated on high-quality visual datasets. In practical scenarios, however, high-quality data sources often cannot be assured. For example, video surveillance systems must rely on cameras with very limited resolution, because installing high-definition cameras everywhere is prohibitively expensive, which creates the practical need to recognize objects reliably from very low-resolution images. Other quality factors, such as occlusion, motion blur, missing data, and bad weather conditions, are also ubiquitous in the wild. This seminar will present a comprehensive and in-depth review of recent advances in the robust sensing, processing, and understanding of low-quality visual data using deep learning methods. I will mainly show how image/video restoration and visual recognition can be jointly optimized as one pipeline. Such end-to-end optimization consistently outperforms traditional multi-stage pipelines. I will also demonstrate how our proposed approach substantially improves a number of real-world applications.
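To illustrate the idea of jointly optimizing restoration and recognition under one objective (a minimal sketch only, not the speaker's actual model: linear maps stand in for the deep restoration and recognition networks, the data is synthetic, and the weighting `lam` is an assumed hyperparameter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a "clean" signal, a degraded (noisy) observation, and binary labels.
clean = rng.normal(size=(8, 16))                  # 8 samples, 16 features
noisy = clean + 0.5 * rng.normal(size=clean.shape)
labels = (clean.sum(axis=1) > 0).astype(float)

# Two linear "networks": a restorer R and a classifier w (stand-ins for deep nets).
R = np.eye(16) + 0.01 * rng.normal(size=(16, 16))
w = 0.01 * rng.normal(size=16)

def forward(x):
    """One pipeline: restoration stage feeding the recognition stage."""
    restored = x @ R          # restoration stage
    logits = restored @ w     # recognition stage operates on the restored output
    return restored, logits

def joint_loss(x, y, target, lam=0.1):
    """Single end-to-end objective: classification loss + lam * restoration loss."""
    restored, logits = forward(x)
    probs = 1.0 / (1.0 + np.exp(-logits))
    cls = -np.mean(y * np.log(probs + 1e-9) + (1 - y) * np.log(1 - probs + 1e-9))
    rec = np.mean((restored - target) ** 2)       # restoration fidelity term
    return cls + lam * rec

loss = joint_loss(noisy, labels, clean)
print(f"joint loss: {loss:.4f}")
```

Because both stages contribute to a single scalar loss, gradients from the recognition error flow back into the restoration stage, which is what distinguishes this end-to-end formulation from a multi-stage pipeline where the restorer is trained in isolation.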

This presentation is part of Minisymposium “MS24 - Data-driven approaches in imaging science (3 parts)”,
organized by: Jae Kyu Choi (Institute of Natural Sciences, Shanghai Jiao Tong University), Chenglong Bao (Yau Mathematical Sciences Center, Tsinghua University).

Zhangyang Wang (Department of Computer Science and Engineering, Texas A&M University (TAMU))
computer vision, deep learning, image deblurring, image enhancement, inverse problems, machine learning