In this talk we present a globalized semismooth Newton method for solving a class of optimization problems whose objective is the sum of a smooth nonconvex term and a nonsmooth convex term. The approach is based on a prox-type fixed point equation that represents the first-order stationarity conditions. In many important situations, including sparse optimization problems arising from $\ell_1$-regularization or tree-/group-sparsity, the corresponding proximity operator can be shown to be semismooth. The method we investigate combines semismooth Newton steps for solving the fixed point equation, a filter, and an embedded, basic, globally convergent method, such as a proximal gradient scheme. We present both global and local convergence results and conclude with numerical examples illustrating the efficiency of the proposed method.
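To make the ingredients concrete, the following is a minimal, hypothetical Python sketch of the basic iteration for the $\ell_1$-regularized case: the fixed-point residual built from the soft-thresholding proximity operator, a semismooth Newton step using an element of its generalized Jacobian, and a proximal gradient fallback. The simple residual-decrease acceptance test, the fixed step size `t`, and all function names are illustrative assumptions and stand in for the filter mechanism and the full globalization strategy described in the talk.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximity operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def residual(x, grad_f, lam, t):
    """Prox-type fixed-point residual F(x) = x - prox_{t*lam*||.||_1}(x - t*grad_f(x))."""
    return x - soft_threshold(x - t * grad_f(x), t * lam)

def semismooth_newton_l1(x0, grad_f, hess_f, lam, t=0.5, tol=1e-8, max_iter=100):
    """Illustrative sketch: semismooth Newton on the residual, with a
    residual-decrease test (standing in for the filter) and a proximal
    gradient fallback step."""
    x = x0.copy()
    n = x.size
    for _ in range(max_iter):
        F = residual(x, grad_f, lam, t)
        if np.linalg.norm(F) <= tol:
            break
        # Element of the generalized Jacobian of soft-thresholding:
        # a 0/1 diagonal matrix selecting the "inactive" components.
        z = x - t * grad_f(x)
        D = np.diag((np.abs(z) > t * lam).astype(float))
        # Corresponding element of the generalized Jacobian of F.
        M = np.eye(n) - D @ (np.eye(n) - t * hess_f(x))
        try:
            dx = np.linalg.solve(M, -F)
        except np.linalg.LinAlgError:
            dx = None
        if dx is not None:
            x_trial = x + dx
            # Accept the Newton step only if the residual norm decreases sufficiently.
            if np.linalg.norm(residual(x_trial, grad_f, lam, t)) < 0.9 * np.linalg.norm(F):
                x = x_trial
                continue
        # Otherwise take a globally convergent proximal gradient step.
        x = soft_threshold(x - t * grad_f(x), t * lam)
    return x

# Example usage on a small l1-regularized quadratic, f(x) = 0.5*||A x - b||^2.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
x_star = semismooth_newton_l1(np.zeros(2),
                              grad_f=lambda x: A.T @ (A @ x - b),
                              hess_f=lambda x: A.T @ A,
                              lam=0.1, t=0.1)
```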