Special Session 182: Recent developments on mathematical finance, stochastic control and related topics

Policy Iteration Achieves Regularized Equilibrium under Time Inconsistency
Xiang Yu
The Hong Kong Polytechnic University
Hong Kong
Co-authors: Yu-Jui Huang, Keyu Zhang
Abstract:
For a general entropy-regularized time-inconsistent stochastic control problem, we propose a policy iteration algorithm (PIA) and establish its convergence to an equilibrium policy at an exponential rate. The design of the PIA is based on a coupled system of non-local partial differential equations, called the exploratory equilibrium Hamilton--Jacobi--Bellman (EEHJB) equation. In contrast to the standard time-consistent case, the policy improvement step fails in general, and the target value function (now an equilibrium value function) is not even known to exist a priori. To overcome these difficulties, we prove that the value functions generated by the PIA form a Cauchy sequence in a specialized Banach space and hence admit a limit; the exponential convergence rate follows from a stochastic representation via the Bismut--Elworthy--Li formula. The limiting value function is shown to satisfy the EEHJB equation, which induces an equilibrium policy in Gibbs form. Convergence in value additionally implies uniform convergence of the generated policies to the equilibrium policy, again at an exponential rate. As a byproduct, the PIA gives a constructive proof of the global existence and uniqueness of a classical solution to our general EEHJB equation, whose well-posedness has not been explored in the literature.
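The iteration described in the abstract can be illustrated, very loosely, in a discrete-time, finite-state analogue: alternate entropy-regularized policy evaluation with a Gibbs-form (softmax) improvement step, and observe that successive policies converge geometrically. This is only a hedged toy sketch; the transition tensor `P`, reward matrix `R`, temperature `lam`, and all other names are illustrative assumptions, and the snippet does not implement the continuous-time EEHJB system of the abstract.

```python
import numpy as np

# Toy finite MDP analogue of entropy-regularized policy iteration.
# All problem data below is randomly generated for illustration only.
n_states, n_actions = 4, 3
gamma, lam = 0.9, 0.5  # discount factor, entropy temperature (assumptions)
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state distribution
R = rng.standard_normal((n_states, n_actions))                    # R[s, a] = reward

def evaluate(pi, iters=500):
    """Entropy-regularized policy evaluation by fixed-point iteration."""
    v = np.zeros(n_states)
    for _ in range(iters):
        ent = -np.sum(pi * np.log(pi + 1e-12), axis=1)  # Shannon entropy of pi(.|s)
        q = R + gamma * P @ v                           # Q[s, a]
        v = np.sum(pi * q, axis=1) + lam * ent
    return v

def improve(v):
    """Gibbs-form (softmax) policy improvement step."""
    q = R + gamma * P @ v
    z = np.exp((q - q.max(axis=1, keepdims=True)) / lam)  # shift for stability
    return z / z.sum(axis=1, keepdims=True)

pi = np.full((n_states, n_actions), 1.0 / n_actions)  # start from the uniform policy
gaps = []
for _ in range(200):
    v = evaluate(pi)
    pi_new = improve(v)
    gaps.append(np.max(np.abs(pi_new - pi)))  # sup-norm gap between successive policies
    pi = pi_new

print(gaps[-1])  # gap shrinks geometrically across iterations
```

In this discounted toy setting the softmax improvement map is a contraction, which mirrors (but does not prove) the exponential convergence asserted for the PIA in the abstract.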