Abstract:
This talk concerns an optimal control problem for a diffusion whose drift and running cost are merely measurable in the state variable. Such low regularity rules out the use of Pontryagin's maximum principle and also invalidates the standard proof of the Bellman principle of optimality. We address these difficulties by analyzing the associated Hamilton--Jacobi--Bellman (HJB) equation. Using PDE techniques together with a policy iteration scheme, we prove that the HJB equation admits a unique strong solution and that this solution coincides with the value function of the control problem. Based on this identification, we establish a verification theorem and recover the Bellman optimality principle without imposing any additional smoothness assumptions.
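To fix ideas, a common form of the HJB equation in this type of setting is sketched below for an infinite-horizon discounted problem with discount rate $\lambda > 0$, diffusion coefficient $\sigma$, drift $b$, running cost $f$, and control set $A$; the precise formulation treated in the talk may differ:
$$\lambda V(x) \;=\; \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(x)\,D^{2}V(x)\big) \;+\; \sup_{a \in A}\Big\{ b(x,a)\cdot DV(x) + f(x,a) \Big\}.$$
In this notation, one standard policy iteration scheme alternates between solving the linear equation obtained by freezing a feedback control $a_{n}$ and updating the control pointwise,
$$a_{n+1}(x) \in \operatorname*{arg\,max}_{a \in A}\Big\{ b(x,a)\cdot DV_{n}(x) + f(x,a) \Big\},$$
where $V_{n}$ solves the frozen-control equation; this is offered only as a generic illustration of the scheme named above.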
We further investigate a mollification scheme depending on a parameter $\varepsilon > 0$. It turns out that the smoothed value functions $V_{\varepsilon}$ may fail to converge to the original value function $V$ as $\varepsilon \to 0$, and we provide an explicit counterexample. To resolve this, we identify a structural condition on the control set: when the control set is countable, the convergence $V_{\varepsilon} \to V$ holds locally uniformly.
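As an illustration of one natural regularization of this kind (the abstract does not specify the exact scheme, so the following is an assumption), one may mollify the coefficients in the state variable,
$$b_{\varepsilon}(x,a) = \big(b(\cdot,a) * \rho_{\varepsilon}\big)(x), \qquad f_{\varepsilon}(x,a) = \big(f(\cdot,a) * \rho_{\varepsilon}\big)(x),$$
where $\rho_{\varepsilon}(x) = \varepsilon^{-d}\rho(x/\varepsilon)$ is a standard mollifier, and let $V_{\varepsilon}$ denote the value function of the control problem with the smoothed data; the question is then whether $V_{\varepsilon} \to V$ as $\varepsilon \to 0$.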