We describe randomized Newton and randomized quasi-Newton approaches for efficiently solving large linear least-squares problems, where very large data sets impose a significant computational burden. In our proposed framework, stochasticity is introduced to overcome these computational limitations, and probability distributions that can exploit structure and/or sparsity are considered. Our results show that randomized Newton iterates, in contrast to randomized quasi-Newton iterates, may not converge to the desired least-squares solution.
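As a rough illustration of the kind of iteration involved (not necessarily the scheme presented in this talk), the following sketch applies a row-subsampled Newton-type update to min_x ||Ax - b||^2: the full Hessian A^T A is replaced at each step by a rescaled sketch (SA)^T (SA) built from a random subset of rows. The function name, sampling scheme, damping term, and step count are all illustrative assumptions.

```python
import numpy as np

def randomized_newton_ls(A, b, num_iters=50, sketch_rows=200, seed=None):
    """Illustrative sketched-Newton iteration for min_x ||Ax - b||^2.

    At each step the Hessian A^T A is approximated by (SA)^T (SA),
    where S samples `sketch_rows` rows of A uniformly at random.
    (Hypothetical sketch, not the authors' exact method.)
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_iters):
        # Sample a random subset of rows (assumes sketch_rows <= m).
        idx = rng.choice(m, size=sketch_rows, replace=False)
        As = A[idx]
        # Full gradient of the least-squares objective.
        g = A.T @ (A @ x - b)
        # Rescaled sketched Hessian estimate of A^T A.
        H = (m / sketch_rows) * (As.T @ As)
        # Damped Newton-type step; the small ridge guards against
        # a rank-deficient sketch.
        x -= np.linalg.solve(H + 1e-8 * np.eye(n), g)
    return x
```

For a dense, well-conditioned A, comparing the returned iterate against np.linalg.lstsq(A, b, rcond=None)[0] over repeated runs gives a feel for the convergence behavior (or lack thereof) that the abstract alludes to for randomized Newton iterates.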
This presentation is part of Minisymposium “MS36 - Computational Methods for Large-Scale Machine Learning in Imaging (2 parts)”,
organized by Matthias Chung (Virginia Tech) and Lars Ruthotto (Department of Mathematics and Computer Science, Emory University).