In this talk, we introduce a SaT (Smoothing and Thresholding) method for multiphase segmentation of images corrupted by different degradations: noise, information loss and blur. In the first stage, a convex variant of the Mumford-Shah model is applied to obtain a smooth image, and we show that the model has a unique solution under these degradations. In the second stage, we apply clustering and thresholding techniques to find the segmentation. The number of phases is required only in the last stage, so users can change it without repeating the first stage. The methodology can be applied to various kinds of segmentation problems, including color image segmentation, hyper-spectral image classification, and point cloud segmentation. Experiments demonstrate that our SaT method gives excellent results in terms of segmentation quality and CPU time in comparison with other state-of-the-art methods. Joint work with X.H. Cai (UCL), M. Nikolova (ENS Cachan) and T.Y. Zeng (CUHK).
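The two-stage structure can be sketched in a few lines. This is only an illustrative toy, not the talk's actual method: a simple diffusion smoother stands in for the convex Mumford-Shah variant, and a 1-D k-means plays the role of the clustering/thresholding stage; the function names and parameters are ours. Note how the number of phases `k` enters only in the second stage, so it can be changed without redoing the expensive smoothing.

```python
import numpy as np

def smooth(image, iterations=20, lam=0.25):
    """Stand-in smoother (explicit heat diffusion). The SaT method uses a
    convex Mumford-Shah variant instead; this is purely illustrative."""
    u = image.astype(float).copy()
    for _ in range(iterations):
        # 4-neighbour Laplacian with replicated borders
        up = np.roll(u, -1, axis=0);    up[-1] = u[-1]
        down = np.roll(u, 1, axis=0);   down[0] = u[0]
        left = np.roll(u, -1, axis=1);  left[:, -1] = u[:, -1]
        right = np.roll(u, 1, axis=1);  right[:, 0] = u[:, 0]
        u += lam * (up + down + left + right - 4 * u)
    return u

def threshold_stage(smoothed, k, iterations=50):
    """Stage two: 1-D k-means on the smoothed intensities. The number of
    phases k is needed only here, so it can be changed cheaply."""
    vals = smoothed.ravel()
    centers = np.quantile(vals, np.linspace(0.1, 0.9, k))
    for _ in range(iterations):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vals[labels == j].mean()
    return labels.reshape(smoothed.shape)

# Toy example: a noisy two-phase image, segmented into k = 2 phases.
img = np.zeros((20, 20)); img[:, 10:] = 1.0
rng = np.random.default_rng(0)
noisy = img + 0.1 * rng.standard_normal(img.shape)
seg = threshold_stage(smooth(noisy), k=2)
```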
Climate change is driven primarily by anthropogenic emissions of greenhouse gases, chief among them carbon dioxide and methane. The two most fundamental challenges in carbon cycle science are to develop approaches (1) to quantify human emissions of greenhouse gases at scales ranging from the individual to the globe and from hours to decades, and (2) to anticipate how the “natural” (e.g. oceans, land) components of the carbon cycle will act to mitigate or to amplify the impact of human emissions. Developing science that addresses decision maker needs lies at the core of both challenges. Spatiotemporal variability in observations of atmospheric concentrations of greenhouse gases can be used to tackle both challenges, because the atmosphere preserves signatures of emissions and uptake (a.k.a. fluxes) of greenhouse gases at the earth’s surface. Information about these fluxes can be recovered through the solution of an inverse problem by coupling atmospheric observations with a model of atmospheric dynamics. This talk will give an overview of the use of inverse problems in carbon cycle science, as well as discuss methodological challenges associated with a shift from focusing on simple quantification of fluxes to mechanistic attribution of inferred spatiotemporal flux variability.
Mrs. Clarisse Manjary Mandridake received her PhD in Image and Signal Processing from the University of Bordeaux I, France, for her work on two-dimensional signal decomposition applied to the classification of textured images, carried out at the Laboratoire Automatique, Productique et Traitement du Signal in close connection with the ARIANA project at INRIA Sophia-Antipolis. She joined the research team of Advestigo for a postdoctoral year in 2002. As a researcher at Advestigo and later at Hologram Industries (now renamed SURYS), she developed technologies for the representation, indexing and search of images and videos in large-scale databases. She is now in charge of coordinating research projects for the SURYS digital labs and of the scientific partnerships with university labs. Her expertise covers applied mathematics, image characterization, fingerprinting and authentication on various physical supports, from ID documents to smart labels. More recently, her interest has turned to technological innovation for use by poor or developing countries, to help them put in place what is called "good governance", a sine qua non condition for any future economic development.
The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. However, in many modern applications, signal bandwidths have increased tremendously, while acquisition capabilities have not scaled sufficiently fast. Consequently, conversion to digital has become a serious bottleneck. Furthermore, the resulting high-rate digital data requires storage, communication and processing at very high rates, which is computationally expensive and requires large amounts of power. In the context of medical imaging, sampling at high rates often translates to high radiation dosages, increased scanning times, bulky medical devices, and limited resolution. In this talk, we present a framework for sampling and processing a wide class of wideband analog signals at rates far below Nyquist by exploiting signal structure and the processing task, and show several demos of real-time sub-Nyquist prototypes. We then consider applications of these ideas to a variety of problems in medical and optical imaging, including fast and quantitative MRI, wireless ultrasound, fast Doppler imaging, and correlation-based super-resolution in microscopy and ultrasound, which combines high spatial resolution with short integration time. We end by discussing several modern methods for structure-based phase retrieval, which has applications in several areas of optical imaging.
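The core idea, recovering a structured signal from far fewer samples than Nyquist would require, can be shown on a toy example. This sketch is ours, not one of the talk's prototypes: a signal that is sparse in frequency is sampled at random time instants and its spectrum is recovered by orthogonal matching pursuit, a standard greedy sparse-recovery algorithm.

```python
import numpy as np

N, M, K = 128, 64, 2                  # Nyquist length, sub-Nyquist samples, sparsity
rng = np.random.default_rng(1)
freqs = rng.choice(N, size=K, replace=False)
amps = np.exp(2j * np.pi * rng.random(K))        # unit-magnitude coefficients

F = np.fft.ifft(np.eye(N), axis=0) * np.sqrt(N)  # unitary inverse-DFT dictionary
x_spec = np.zeros(N, dtype=complex)
x_spec[freqs] = amps
signal = F @ x_spec                   # time-domain signal, K-sparse in frequency

idx = rng.choice(N, size=M, replace=False)       # random sub-Nyquist sample times
y, A = signal[idx], F[idx, :]

# Orthogonal matching pursuit: greedily identify the K active frequencies,
# re-fitting the coefficients by least squares at each step.
support, residual = [], y.copy()
for _ in range(K):
    k = int(np.argmax(np.abs(A.conj().T @ residual)))
    support.append(k)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(N, dtype=complex)
x_hat[support] = coef                 # recovered sparse spectrum
```

Here only M = 64 of 128 Nyquist samples are used, yet the K-sparse spectrum is recovered exactly; the hardware prototypes discussed in the talk realise analogous ideas directly in analog acquisition.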
The tremendous need to analyse massive image data sets in many application areas has, in recent years, mainly promoted pragmatic approaches to image analysis: adopt a computational model with adjustable parameters and predictive power. This development poses a challenge to the mathematical imaging community: (i) shift the focus from low-level problems (like denoising) to mid- and high-level problems of image analysis (a.k.a. image understanding); (ii) devise mathematical approaches and algorithms that advance our understanding of structure detection in image data beyond a set of rules for adjusting the parameters of black-box approaches. The purpose of this talk is to stimulate the corresponding discussion by sketching past and current major trends, including our own recent work.
Many machine learning and signal processing problems are traditionally cast as convex optimization problems where the objective function is a sum of many simple terms. In this situation, batch algorithms compute gradients of the objective function by summing all individual gradients at every iteration and exhibit a linear convergence rate for strongly-convex problems. Stochastic methods, by contrast, select a single function at random at every iteration, classically leading to cheaper iterations but with a convergence rate that decays only as the inverse of the number of iterations. In this talk, I will present the stochastic averaged gradient (SAG) algorithm, which is dedicated to minimizing finite sums of smooth functions; it has a linear convergence rate for strongly-convex problems, but with an iteration cost similar to stochastic gradient descent, thus leading to faster convergence for machine learning and signal processing problems. I will also mention several extensions, in particular to saddle-point problems, showing that this new class of incremental algorithms applies more generally.
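The SAG update is simple to state: keep a table of the most recently computed gradient of each term, refresh one randomly chosen entry per iteration, and step along the average of the stored gradients. A minimal sketch on a toy strongly convex least-squares sum (the problem instance and step-size choice are illustrative assumptions of ours):

```python
import numpy as np

def sag(A, b, step, n_iters, seed=0):
    """Stochastic averaged gradient for the finite sum
    f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2,
    a strongly convex least-squares toy problem."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grads = np.zeros((n, d))   # table of stored per-term gradients
    g_sum = np.zeros(d)        # running sum of the stored gradients
    for _ in range(n_iters):
        i = rng.integers(n)
        g_new = (A[i] @ x - b[i]) * A[i]   # gradient of the i-th term only
        g_sum += g_new - grads[i]          # refresh the table entry for i
        grads[i] = g_new
        x -= step * g_sum / n              # step along the averaged gradient
    return x

# Toy problem with a known minimizer (noiseless, so x_true is optimal).
rng = np.random.default_rng(3)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true
L = np.max(np.sum(A**2, axis=1))   # max per-term Lipschitz constant
x_hat = sag(A, b, step=1.0 / (16 * L), n_iters=50000)
```

Each iteration touches a single data point, as in stochastic gradient descent, yet the averaged-gradient step gives the linear convergence rate of a batch method on strongly convex sums, at the cost of O(n·d) memory for the gradient table.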