
Many machine learning and signal processing problems are traditionally cast as convex optimization problems in which the objective function is a sum of many simple terms. In this setting, batch algorithms compute the gradient of the objective by summing all individual gradients at every iteration and exhibit a linear convergence rate for strongly convex problems. Stochastic methods, in contrast, select a single function at random at every iteration, classically leading to cheaper iterations but with a convergence rate that decays only as the inverse of the number of iterations. In this talk, I will present the stochastic average gradient (SAG) algorithm, which is dedicated to minimizing finite sums of smooth functions; it achieves a linear convergence rate for strongly convex problems, but with an iteration cost similar to stochastic gradient descent, thus leading to faster convergence for machine learning and signal processing problems. I will also mention several extensions, in particular to saddle-point problems, showing that this new class of incremental algorithms applies more generally.
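As a rough sketch of the idea described above (not the speaker's reference implementation), SAG keeps a table of the most recently computed gradient of each term and steps along the average of the stored gradients, so each iteration touches only one term yet uses information from all of them. The function name `sag`, the `grad_i` callback, and the ridge-regression example below are illustrative assumptions, not part of the talk.

```python
import numpy as np

def sag(grad_i, n, x0, step, iters, rng=None):
    """Stochastic average gradient (SAG) sketch for minimizing
    (1/n) * sum_i f_i(x), where grad_i(i, x) returns the gradient
    of the i-th term f_i at x."""
    rng = np.random.default_rng(rng)
    x = x0.astype(float).copy()
    table = np.zeros((n, x.size))   # last stored gradient of each term
    g_sum = np.zeros_like(x)        # running sum of the stored gradients
    for _ in range(iters):
        i = rng.integers(n)
        g_new = grad_i(i, x)
        g_sum += g_new - table[i]   # refresh the running sum in O(d)
        table[i] = g_new
        x -= step * g_sum / n       # step along the averaged gradient
    return x
```

The key point, matching the abstract, is that the per-iteration cost is that of a single gradient evaluation (as in stochastic gradient descent), while the averaging over stored gradients is what recovers linear convergence on strongly convex finite sums.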

**Chair:** Lars Ruthotto (Department of Mathematics and Computer Science, Emory University)

The slides are available here.

Fri 08 June at 08:15, Room A (Palazzina A - Building A, floor 0)