Inspired by recent advances in machine learning, in particular the concept of learning an optimizer, we investigate a class of proximal primal-dual optimizers with a fixed amount of memory. We derive convergence criteria and identify several sub-classes corresponding to classical optimization methods, such as the Chambolle-Pock and Douglas-Rachford methods. Finally, we discuss how to choose algorithm instances for concrete problems.
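To make the kind of fixed-memory proximal primal-dual scheme referred to above concrete, the following is a minimal NumPy sketch of the standard Chambolle-Pock iteration applied to a lasso problem. The test problem, the function name chambolle_pock_lasso, and the step-size choice tau = sigma = 0.99/||A|| are illustrative assumptions, not details of the presented work; the extrapolated iterate x_bar is the single extra state the iteration carries, i.e. its one unit of memory.

import numpy as np

def chambolle_pock_lasso(A, b, lam, n_iter=500):
    # Illustrative sketch: solve min_x 0.5*||Ax - b||^2 + lam*||x||_1
    # with f(z) = 0.5*||z - b||^2 dualized and g(x) = lam*||x||_1.
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # operator norm of A
    tau = sigma = 0.99 / L         # tau*sigma*L**2 < 1, the classical convergence criterion
    theta = 1.0                    # extrapolation parameter
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iter):
        # dual step: prox of sigma*f*, where prox(v) = (v - sigma*b)/(1 + sigma)
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: prox of tau*g is soft-thresholding at tau*lam
        x_new = x - tau * (A.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        # extrapolation: the previous primal iterate is the method's fixed memory
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# toy usage: recover a sparse vector from noiseless random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[:5] = 1.0
b = A @ x_true
x_hat = chambolle_pock_lasso(A, b, lam=0.1)
print(np.linalg.norm(x_hat - x_true))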