Online Convolutional Dictionary Learning for Multimodal Imaging

Computational imaging methods that can exploit multiple modalities have the potential to enhance traditional sensing systems. We propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation regularization. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets. We illustrate the benefit of our approach for joint intensity-depth imaging.
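The abstract names the ingredients of the formulation but does not state the objective explicitly. The following is a minimal sketch of one plausible way these ingredients could be combined, not the authors' stated model: y_m and H_m denote the measurements and linear sensing operator of modality m, d_k the shared convolutional filters, x_{m,k} the corresponding coefficient maps, and lambda, tau illustrative regularization weights. All notation and the exact coupling are assumptions.

% Hypothetical objective: per-modality data fidelity for linear measurements,
% a convolutional synthesis model with shared filters, a group-sparsity term
% coupling coefficient maps across modalities at each pixel p, and total
% variation on each reconstructed image.
\begin{equation*}
\min_{\{x_{m,k}\}} \;
\sum_{m=1}^{M} \frac{1}{2}
\Bigl\| y_m - H_m \sum_{k=1}^{K} d_k * x_{m,k} \Bigr\|_2^2
\;+\; \lambda \sum_{k=1}^{K} \sum_{p}
\Bigl( \sum_{m=1}^{M} x_{m,k}[p]^2 \Bigr)^{1/2}
\;+\; \tau \sum_{m=1}^{M} \mathrm{TV}\Bigl( \sum_{k=1}^{K} d_k * x_{m,k} \Bigr).
\end{equation*}

In an online dictionary-learning setting, the filters d_k would additionally be updated from a stream of training samples rather than a fixed batch; the details of that update are not specified in the abstract.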

This presentation is part of the minisymposium “MS40 - Recent Advances in Convolutional Sparse Representations”,
organized by Giacomo Boracchi (Politecnico di Milano), Alessandro Foi (Tampere University of Technology), and Brendt Wohlberg (Los Alamos National Laboratory).

Authors:
Ulugbek Kamilov (Washington University in St. Louis)
Kévin Degraux (Université catholique de Louvain)
Keywords:
convolutional dictionary learning, image reconstruction, machine learning, nonlinear optimization