Abstract: We consider state feedback control of stochastic dynamic systems. Optimal control of such systems is in general acknowledged to be difficult, in particular when there are state constraints. The common approach is to numerically solve the corresponding Hamilton-Jacobi-Bellman (HJB) equation, and the main difficulty then is that the state constraints translate into infinite boundary conditions. In a series of papers we have used transformations of the HJB equation to get around this problem, each with its own limitations. The method that handles the most general problem (Rutquist et al., 2014), however, comes at the cost of having to solve $n^2 + 1$ partial differential equations (PDEs), where $n$ is the number of states. Rather than starting from the HJB equation, we now start from the Fokker-Planck equation and compute the optimal control policy numerically. Then only one PDE needs to be solved, and infinite boundary conditions are still avoided. Preliminary testing indicates that this method is not only much faster but also more robust.
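For concreteness, a minimal sketch of the two equations the abstract contrasts, written in standard form for a controlled diffusion $dx = f(x,u)\,dt + \sigma(x)\,dw$ with running cost $\ell(x,u)$; the dynamics $f$, diffusion $\sigma$, cost $\ell$, value function $V$, and density $\rho$ below are assumed notation, and the paper's exact formulation (e.g. discounted or average-cost) may differ. The stationary HJB equation is solved backward for the value function,
\[
0 = \min_{u}\Big[\,\ell(x,u) + f(x,u)^{\top}\nabla V(x) + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma(x)\sigma(x)^{\top}\nabla^{2}V(x)\big)\Big],
\]
whereas the Fokker-Planck (forward Kolmogorov) equation propagates the state probability density $\rho$ under a given feedback policy $u(x)$,
\[
\frac{\partial \rho}{\partial t} = -\nabla\cdot\big(f(x,u(x))\,\rho\big) + \tfrac{1}{2}\sum_{i,j}\frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\Big[\big(\sigma(x)\sigma(x)^{\top}\big)_{ij}\,\rho\Big].
\]
One plausible reading of the claimed advantage, consistent with the abstract: a hard state constraint forces $V \to \infty$ on the boundary of the admissible region in the HJB formulation, while in the Fokker-Planck formulation the same constraint corresponds to a finite, homogeneous boundary condition on $\rho$ (zero density or zero probability flux).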