Optimal control of conditioned processes
In this talk, we consider an optimal stochastic control problem in which the cost is conditioned on the event that the process does not exit a domain. This model was first introduced by P.-L. Lions in his lectures at the Collège de France. When the optimization is done over controls of feedback type (i.e., depending only on the state of the process), the optimal solution can be characterized by a system of two partial differential equations of mean field type: a forward (Fokker-Planck-Kolmogorov) equation and a backward (Hamilton-Jacobi-Bellman) equation, both with Dirichlet boundary conditions. They describe, respectively, the evolution of the distribution and of the value function. We also consider a problem arising in the long-time asymptotics: a control problem driven by the principal eigenvalue problem associated with a Fokker-Planck equation with Dirichlet condition. We study particular aspects of the theory in detail and present numerical results. If time permits, the case of open-loop controls will also be discussed. This is based on joint works with Yves Achdou (Université Paris-Diderot) and René Carmona (Princeton University).
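As a rough illustration (the notation here is generic and not taken from the talk), a forward-backward system of the kind described above, posed on a domain $\Omega$ with homogeneous Dirichlet conditions encoding absorption at the boundary, typically reads:

```latex
% Sketch of a mean-field system with Dirichlet boundary conditions
% (generic notation; H is a Hamiltonian, f a coupling, nu > 0 a diffusion rate)
\begin{aligned}
  -\partial_t u - \nu \Delta u + H(x, Du) &= f(x, m)
    && \text{in } (0,T) \times \Omega, \\
  \partial_t m - \nu \Delta m
    - \operatorname{div}\!\bigl(m\, D_p H(x, Du)\bigr) &= 0
    && \text{in } (0,T) \times \Omega, \\
  u = 0, \qquad m &= 0
    && \text{on } (0,T) \times \partial\Omega, \\
  u(T,\cdot) = u_T, \qquad m(0,\cdot) &= m_0
    && \text{in } \Omega.
\end{aligned}
```

Here the backward (Hamilton-Jacobi-Bellman) equation for the value function $u$ is coupled to the forward (Fokker-Planck-Kolmogorov) equation for the density $m$; because of the Dirichlet condition, $m$ loses mass over time, and the conditioning on non-exit is what distinguishes this setting from standard mean field systems. The precise form of the coupling and the normalization in the conditioned problem are as presented in the talk.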