Randomized Newton and quasi-Newton methods for large linear least squares problems

We describe randomized Newton and randomized quasi-Newton approaches to efficiently solve large linear least-squares problems, where very large data sets present a significant computational burden. In our proposed framework, stochasticity is introduced to overcome these computational limitations, and we consider probability distributions that can exploit structure and/or sparsity. Our results show that randomized Newton iterates, in contrast to randomized quasi-Newton iterates, may not converge to the desired least-squares solution.
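As a rough illustration of the kind of randomized iteration the abstract alludes to, the sketch below applies a simple row-sampling (block-Kaczmarz / sketched-Newton-type) step to a toy least-squares problem. This is a minimal sketch under assumed choices (uniform row sampling, fixed block size, synthetic data), not the authors' actual method; all variable names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: A, b, and all sizes below are illustrative.
m, n = 2000, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)  # slightly inconsistent system

x = np.zeros(n)
block = 200  # number of rows sampled per iteration (assumed choice)
for _ in range(200):
    S = rng.choice(m, size=block, replace=False)
    # Randomized Newton-type step: solve the small sketched least-squares
    # subproblem restricted to the sampled rows, then update the iterate.
    r = b[S] - A[S] @ x
    d, *_ = np.linalg.lstsq(A[S], r, rcond=None)
    x = x + d

# Compare against the full least-squares solution.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_ls))
```

For an inconsistent system such as this one, iterations of this type are known to approach the least-squares solution only up to a horizon determined by the noise, which is consistent with the abstract's observation that randomized Newton iterates may not converge exactly to the desired solution.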

This presentation is part of the minisymposium “MS36 - Computational Methods for Large-Scale Machine Learning in Imaging (2 parts)”,
organized by Matthias Chung (Virginia Tech) and Lars Ruthotto (Department of Mathematics and Computer Science, Emory University).

Matthias Chung (Virginia Tech)
Julianne Chung (Virginia Tech)
Luis Tenorio (Colorado School of Mines)
J. Tanner Slagel (Virginia Tech)
David Kozak (Colorado School of Mines)
Keywords: inverse problems