The field of inverse problems is fertile ground for the development of computational uncertainty quantification methods. On the one hand, inverse problems involve noisy measurements, which lead naturally to statistical, and hence uncertainty, estimation problems. On the other hand, they involve physical models that, upon discretization, are known only up to a high-dimensional vector of parameters, which makes them computationally challenging. Estimating such a high-dimensional parameter vector in a discretized physical model from measurements of the model output defines a computational inverse problem. These problems are typically unstable, in the sense that the estimates do not depend continuously on the measurements. Regularization is a technique that provides stability for inverse problems, and in the Bayesian setting it is synonymous with the choice of the prior probability density function. Once a prior is chosen, the posterior probability density function results, and it is the solution of the inverse problem in the Bayesian setting. The posterior maximizer, known as the MAP estimator, provides a stable estimate of the unknown parameters. However, uncertainty quantification requires extracting more information from the posterior, which often requires sampling. The posterior density functions that arise in typical inverse problems are high-dimensional and often non-Gaussian, making the corresponding sampling problems challenging. In this mini-tutorial, I will begin with a discussion of inverse problems, move on to Bayesian statistics and prior modeling using Markov random fields, and end with a discussion of some Markov chain Monte Carlo methods for sampling from posterior density functions that arise in inverse problems.
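The pipeline the abstract outlines (noisy measurements of a linear model, a Gaussian prior playing the role of Tikhonov regularization, the MAP estimate, and MCMC sampling of the posterior) can be sketched on a small synthetic deblurring problem. The blur operator, the noise level `sigma`, the prior precision `delta`, and the random-walk Metropolis sampler below are all illustrative assumptions, not material from the tutorial itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- A small linear inverse problem: recover x from y = A x + noise. ---
# A is a Gaussian blurring operator; deblurring it is mildly ill-posed.
n = 20
idx = np.arange(n)
A = np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2)   # kernel width 1
x_true = np.sin(np.linspace(0.0, np.pi, n))
sigma = 0.01                                            # noise standard deviation
y = A @ x_true + sigma * rng.normal(size=n)

# --- Bayesian formulation. ---
# Likelihood: y | x ~ N(A x, sigma^2 I).  Prior: x ~ N(0, delta^{-1} I).
# The Gaussian prior acts as Tikhonov regularization, so the MAP estimate
# solves a regularized least-squares problem and is stable under noise.
delta = 1.0                                             # prior precision
H = A.T @ A / sigma**2 + delta * np.eye(n)              # posterior precision
x_map = np.linalg.solve(H, A.T @ y / sigma**2)

def neg_log_post(x):
    """Negative log posterior density, up to an additive constant."""
    r = A @ x - y
    return 0.5 * (r @ r) / sigma**2 + 0.5 * delta * (x @ x)

# --- Random-walk Metropolis: the simplest MCMC posterior sampler. ---
def rwm(x0, n_steps=2000, step=0.002):
    """Sample the posterior with a symmetric Gaussian proposal.

    The step size is hand-tuned; in high dimensions plain RWM mixes
    slowly, which is what motivates more sophisticated MCMC methods.
    """
    x, f = x0.copy(), neg_log_post(x0)
    samples = np.empty((n_steps, x0.size))
    accepted = 0
    for k in range(n_steps):
        prop = x + step * rng.normal(size=x.size)
        f_prop = neg_log_post(prop)
        # Metropolis accept/reject for a symmetric proposal.
        if np.log(rng.uniform()) < f - f_prop:
            x, f, accepted = prop, f_prop, accepted + 1
        samples[k] = x
    return samples, accepted / n_steps

chain, acc_rate = rwm(x_map)
```

In realistic inverse problems the parameter dimension is far larger and the posterior is often non-Gaussian, so the chain of samples (here started at the MAP point) mixes poorly under such a naive proposal; the sample moments one would compute from `chain` are exactly the "more information from the posterior" that uncertainty quantification asks for.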
Chair: Julianne Chung (Virginia Tech)
Wed 06 June at 09:30 Room B (Palazzina A - Building A, floor 1)