Special Session 48: Sparse optimization and optimal control in dynamical systems and PDEs
Contents
Sparse optimization typically leads to non-smooth convex minimization problems, which can often be conveniently formulated as convex-concave saddle-point problems. Solving such a saddle-point problem is equivalent to solving a monotone operator equation. In this talk we augment the well-known forward-backward method for monotone inclusions with preconditioning and Nesterov-type acceleration. The resulting method is very flexible: it can tackle non-smooth terms and exploit smooth terms in both the primal and the dual problem. It is successfully applied to various large-scale non-smooth optimization problems.
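As a point of reference (not the speaker's method), the basic ingredient named in the abstract can be sketched as follows: a forward-backward splitting with Nesterov-type acceleration, here in its standard FISTA form applied to the sparse least-squares problem min_x ½‖Ax − b‖² + λ‖x‖₁. The function names and the example problem are illustrative choices, not from the talk.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1: the backward (implicit) step.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    # Accelerated forward-backward splitting (FISTA) for
    #   min_x 0.5*||A x - b||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    tau = 1.0 / L                         # step size
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                           # forward (explicit) step
        x_new = soft_threshold(y - tau * grad, tau * lam)  # backward (proximal) step
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)        # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

The talk's method generalizes this scheme from minimization to monotone inclusions and adds preconditioning, which allows non-smooth terms in both the primal and the dual problem.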