A common technique for solving ill-posed inverse problems is to add a sparsity or low-rank constraint and pose the problem as a convex optimization, as is done e.g. in compressive sensing. The functional to be minimized often consists of an ℓ2 data-fidelity term plus a convex sparsity-promoting term. For many applications, however, a non-convex term would be more suitable; this option is usually discarded because it raises issues with algorithm convergence, local minima, and so on. I will introduce a new transform to (partially) convexify non-convex functionals of the above type. In some settings this leads to a convex problem, and in more complicated scenarios we can prove that the modified functional shares minimizers with the original one, as well as give concrete conditions under which a given stationary point is indeed the sought global minimum.
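As background, the convex baseline the abstract refers to, an ℓ2 data-fidelity term plus an ℓ1 sparsity term, can be sketched as follows. This is a minimal illustration of minimizing ½‖Ax − b‖² + λ‖x‖₁ with the standard ISTA (proximal gradient) iteration; the problem sizes, the parameter λ, and the iteration count are illustrative assumptions, and the convexifying transform introduced in the talk is not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the l2 fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Compressive-sensing-style demo: recover a 3-sparse signal
# from 40 random linear measurements of a length-100 vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

Because the ℓ1 term is convex, every stationary point of this functional is a global minimizer; replacing it with a non-convex penalty (e.g. ℓp with p < 1) would promote sparsity more strongly but lose that guarantee, which is the gap the convexifying transform is meant to address.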
This presentation is part of Minisymposium “MS37 - Sparse-based techniques in variational image processing (2 parts)”,
organized by: Serena Morigi (Dept. Mathematics, University of Bologna), Ivan Selesnick (New York University), Alessandro Lanza (Dept. Mathematics, University of Bologna).