Plenary Presentations

Raymond H. Chan

Department of Mathematics, The Chinese University of Hong Kong

Flexible methodology for image segmentation IP1

In this talk, we introduce a SaT (Smoothing and Thresholding) method for multiphase segmentation of images corrupted by different degradations: noise, information loss and blur. In the first stage, a convex variant of the Mumford-Shah model is applied to obtain a smooth image. We show that the model has a unique solution under different degradations. In the second stage, we apply clustering and thresholding techniques to find the segmentation. The number of phases is required only in the last stage, so users can modify it without repeating the first stage. The methodology can be applied to various kinds of segmentation problems, including color image segmentation, hyper-spectral image classification, and point cloud segmentation. Experiments demonstrate that our SaT method gives excellent results in terms of segmentation quality and CPU time in comparison with other state-of-the-art methods. Joint work with: X.H. Cai (UCL), M. Nikolova (ENS, Cachan) and T.Y. Zeng (CUHK)
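
As a rough illustration of the two-stage idea (not the authors' actual model), the sketch below replaces the convex Mumford-Shah variant with plain Tikhonov (H1) smoothing and uses 1-D k-means for the clustering/thresholding stage; all function names and parameters are illustrative:

```python
import numpy as np

def smooth(f, lam=1.0, iters=200, tau=0.15):
    """Stage 1 (stand-in): gradient descent on the quadratic energy
    0.5*||u - f||^2 + 0.5*lam*||grad u||^2.  The SaT method uses a convex
    Mumford-Shah variant; this Tikhonov surrogate only illustrates the idea."""
    u = f.copy()
    for _ in range(iters):
        # discrete Laplacian with replicated (Neumann-like) boundaries
        up = np.pad(u, 1, mode='edge')
        lap = up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u
        u -= tau * ((u - f) - lam * lap)
    return u

def kmeans_threshold(u, k, iters=50):
    """Stage 2: cluster the smoothed intensities into k phases (Lloyd's
    algorithm in 1-D) and return per-pixel phase labels.  The number of
    phases k enters only here, so it can be changed without re-smoothing."""
    vals = u.ravel()
    centers = np.quantile(vals, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vals[labels == j].mean()
    return labels.reshape(u.shape)
```

Changing the number of phases only reruns `kmeans_threshold` on the already-smoothed image, which is the practical benefit the abstract highlights.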

Chair: Omar Ghattas (The University of Texas at Austin)

  Tue 05 June at 10:00 Aula Magna (Santa Lucia, floor 0)

Anna Michalak

Department of Global Ecology, Carnegie Institution for Science, Stanford

The expanding role of inverse problems in informing climate science and policy IP2

Climate change is driven primarily by anthropogenic emissions of greenhouse gases, chief among them carbon dioxide and methane. The two most fundamental challenges in carbon cycle science are to develop approaches (1) to quantify human emissions of greenhouse gases at scales ranging from the individual to the globe and from hours to decades, and (2) to anticipate how the “natural” (e.g. oceans, land) components of the carbon cycle will act to mitigate or to amplify the impact of human emissions. Developing science that addresses decision maker needs lies at the core of both challenges. Spatiotemporal variability in observations of atmospheric concentrations of greenhouse gases can be used to tackle both challenges, because the atmosphere preserves signatures of emissions and uptake (a.k.a. fluxes) of greenhouse gases at the earth’s surface. Information about these fluxes can be recovered through the solution of an inverse problem by coupling atmospheric observations with a model of atmospheric dynamics. This talk will give an overview of the use of inverse problems in carbon cycle science, as well as discuss methodological challenges associated with a shift from focusing on simple quantification of fluxes to mechanistic attribution of inferred spatiotemporal flux variability.
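
The flux-inversion step described above can be sketched as a standard Gaussian (Bayesian) linear inversion: observations y are linked to surface fluxes s through a transport operator H, and the posterior mean balances the data misfit against a prior. The operator H, the covariances Q and R, and the dimensions below are illustrative stand-ins, not the speaker's actual setup:

```python
import numpy as np

def bayesian_inversion(H, y, s_prior, Q, R):
    """Posterior-mean flux estimate for the linear-Gaussian model
    y = H s + noise, with prior s ~ N(s_prior, Q) and obs error ~ N(0, R).
    Solves (H' R^-1 H + Q^-1) s = H' R^-1 y + Q^-1 s_prior."""
    Qi = np.linalg.inv(Q)
    Ri = np.linalg.inv(R)
    A = H.T @ Ri @ H + Qi
    b = H.T @ Ri @ y + Qi @ s_prior
    return np.linalg.solve(A, b)
```

In practice H comes from an atmospheric transport model and the covariances encode spatiotemporal structure; this toy version only shows the algebraic shape of the inverse problem.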

Chair: Gitta Kutyniok (Technische Universität Berlin)

  Tue 05 June at 11:00 Aula Magna (Santa Lucia, floor 0)

Christoph Schnörr

Institute of Applied Mathematics, University of Heidelberg

Image Segmentation and Understanding: A Challenge for Mathematicians IP3

The tremendous need to analyze massive image data sets in many application areas has in recent years mainly promoted pragmatic approaches to image analysis: adopt a computational model with adjustable parameters and predictive power. This development poses a challenge to the mathematical imaging community: (i) shift the focus from low-level problems (like denoising) to mid- and high-level problems of image analysis (a.k.a. image understanding); (ii) devise mathematical approaches and algorithms that advance our understanding of structure detection in image data beyond a set of rules for adjusting the parameters of black-box approaches. The purpose of this talk is to stimulate the corresponding discussion by sketching past and current major trends, including our own recent work.

Chair: Stacey Levine (Duquesne University)

  Wed 06 June at 08:15 Room A (Palazzina A - Building A, floor 0)

Yonina Eldar

Department of EE, Technion, Israel Institute of Technology, Haifa

Fast analog to digital compression for high resolution imaging IP4

The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. However, in many modern applications, signal bandwidths have increased tremendously, while acquisition capabilities have not scaled sufficiently fast. Consequently, conversion to digital has become a serious bottleneck. Furthermore, the resulting high-rate digital data requires storage, communication and processing at very high rates, which is computationally expensive and requires large amounts of power. In the context of medical imaging, sampling at high rates often translates to high radiation dosages, increased scanning times, bulky medical devices, and limited resolution. In this talk, we present a framework for sampling and processing a wide class of wideband analog signals at rates far below Nyquist by exploiting signal structure and the processing task, and show several demos of real-time sub-Nyquist prototypes. We then consider applications of these ideas to a variety of problems in medical and optical imaging, including fast and quantitative MRI, wireless ultrasound, fast Doppler imaging, and correlation-based super-resolution in microscopy and ultrasound, which combines high spatial resolution with short integration time. We end by discussing several modern methods for structure-based phase retrieval, which has applications in several areas of optical imaging.
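
A minimal sketch of the underlying principle — structural assumptions (here, sparsity) let far fewer measurements than Nyquist determine a signal — using generic iterative soft-thresholding rather than the speaker's actual sub-Nyquist hardware schemes; all parameter choices below are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.02, iters=2000):
    """Iterative soft-thresholding (ISTA) for the lasso problem
    min_x 0.5*||Ax - y||^2 + lam*||x||_1.  Recovers a sparse x from
    far fewer measurements than unknowns."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

For example, a 5-sparse signal of length 200 is typically recovered from only 60 random linear measurements — a 200-sample "Nyquist" grid is not needed once sparsity is exploited.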

Chair: Gabriele Steidl (University of Kaiserslautern)

  Thu 07 June at 08:15 Room A (Palazzina A - Building A, floor 0)

Francis Bach

Département d'Informatique, École Normale Supérieure, and Centre de Recherche INRIA de Paris

Linearly-convergent stochastic gradient algorithms IP5

Many machine learning and signal processing problems are traditionally cast as convex optimization problems where the objective function is a sum of many simple terms. In this situation, batch algorithms compute gradients of the objective function by summing all individual gradients at every iteration and exhibit a linear convergence rate for strongly-convex problems. Stochastic methods, in contrast, select a single function at random at every iteration, classically leading to cheaper iterations but with a convergence rate which decays only as the inverse of the number of iterations. In this talk, I will present the stochastic average gradient (SAG) algorithm, which is dedicated to minimizing finite sums of smooth functions; it has a linear convergence rate for strongly-convex problems, but with an iteration cost similar to stochastic gradient descent, thus leading to faster convergence for machine learning and signal processing problems. I will also mention several extensions, in particular to saddle-point problems, showing that this new class of incremental algorithms applies more generally.
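
A minimal sketch of the SAG update for a finite sum of n smooth terms: keep the last gradient seen for each term and step along their running average, so each iteration costs one gradient evaluation like SGD. Helper names and the step size below are illustrative, not the speaker's reference implementation:

```python
import numpy as np

def sag(grad_i, x0, n, step, iters, rng):
    """Stochastic average gradient for min_x (1/n) * sum_i f_i(x).
    grad_i(x, i) returns the gradient of the i-th term."""
    x = x0.copy()
    table = np.zeros((n, x.size))   # last stored gradient of each term
    total = np.zeros(x.size)        # running sum of stored gradients
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(x, i)
        total += g - table[i]       # replace the old contribution of term i
        table[i] = g
        x -= step * total / n       # step along the average stored gradient
    return x
```

On a strongly-convex least-squares sum this converges linearly to the minimizer while each iteration touches a single data point, which is the key trade-off the abstract describes.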

Chair: Lars Ruthotto (Department of Mathematics and Computer Science, Emory University)

  Fri 08 June at 08:15 Room A (Palazzina A - Building A, floor 0)