Special Session 16: Recent Development of Stochastic Optimal Control and Differential Games

Optimization Methods Based on Optimal Control

Huanshui Zhang
Shandong University of Science and Technology
People's Republic of China
Co-Author(s):    Hongxia Wang, Yeming Xu, Ziyuan Guo
Abstract:
Optimization is central to applied mathematics and serves as a scientific foundation for engineering and the information sciences. The development of optimization theory spans hundreds of years and features classic algorithms such as gradient descent, Newton's iteration, and their improved variants, including accelerated gradient and quasi-Newton methods. While these algorithms have acknowledged strengths, they also have limitations: gradient descent is stable but typically converges slowly, whereas Newton's iteration converges quickly but can easily diverge, and similar issues persist in their improved versions. This talk introduces a novel optimization algorithm that is both fast-converging and stable, with its core idea rooted in optimal control theory. The update step of the iterative algorithm is treated as a control input, designed to minimize the sum of the objective function and the control energy over future time steps. Minimizing the objective ensures fast convergence, while minimizing the control energy guarantees the algorithm's stability. By applying a Taylor expansion for linearization, the algorithm is further refined into an iterative form, avoiding the complexity of solving nonlinear forward-backward difference equations. The new algorithm is rigorously shown to achieve superlinear convergence comparable to Newton's iteration, together with the stability characteristic of gradient descent. Moreover, the algorithm recovers gradient descent, Newton's iteration, accelerated gradient descent, and regularized Newton methods as special cases, providing the first theoretical foundation for the scientific validity of both gradient descent and Newton's iteration.
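A minimal sketch of the kind of optimal control formulation described above, written in LaTeX; the notation (iterate x_k as the state, update u_k as the control input, horizon N, and energy weight R > 0) is assumed here for illustration and may differ from the authors' formulation:

\[
x_{k+1} = x_k + u_k, \qquad
\min_{u_k,\ldots,u_{N-1}} \ \sum_{i=k+1}^{N} f(x_i) \;+\; \frac{1}{2}\sum_{i=k}^{N-1} u_i^{\top} R\, u_i ,
\]

where f is the function being minimized; the first term drives convergence toward the minimizer, and the second penalizes large, destabilizing update steps.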
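To make the connection to classical methods concrete, the Python sketch below implements a regularized Newton update, x <- x - (H + lam*I)^{-1} g, one of the methods the abstract says the new algorithm recovers: lam = 0 gives Newton's iteration, while a large lam approaches gradient descent with step size 1/lam, mirroring the fast-yet-stable trade-off discussed above. The test function and parameter values are assumptions for demonstration only, not taken from the talk.

import numpy as np

def regularized_newton(grad, hess, x0, lam=1.0, tol=1e-10, max_iter=200):
    # Regularized Newton iteration: x <- x - (H + lam*I)^{-1} g.
    # lam = 0 recovers the classical Newton iteration; a large lam
    # approaches gradient descent with step size 1/lam.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x) + lam * np.eye(len(x)), g)
    return x

# Demonstration on the Rosenbrock function, a standard test problem
# with minimizer (1, 1).
def rosen_grad(x):
    return np.array([
        -400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
        200 * (x[1] - x[0] ** 2),
    ])

def rosen_hess(x):
    return np.array([
        [1200 * x[0] ** 2 - 400 * x[1] + 2, -400 * x[0]],
        [-400 * x[0], 200.0],
    ])

print(regularized_newton(rosen_grad, rosen_hess, x0=[-1.2, 1.0]))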