Abstract
A new algorithm for finding numerical solutions of optimal feedback control based on dynamic programming is developed. The algorithm rests on two observations: (1) the value function of the optimal control problem considered is the viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, and (2) the gradient of the value function appears in the HJB equation only in the form of a directional derivative. The algorithm discretizes the HJB equation and the state equation with a finite-difference scheme in time, and seeks optimal control-trajectory pairs by solving the two equations together. We apply the algorithm to a simple optimal control problem that can be solved analytically. The agreement of the numerical solution with its analytical counterpart indicates the effectiveness of the algorithm.
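The backward dynamic-programming idea the abstract describes can be illustrated on a toy problem. The sketch below is my own minimal example, not the paper's scheme: for the 1-D problem of minimizing J = ∫ (x² + u²) ds subject to dx/ds = u, the HJB equation is -V_t = min_u [x² + u² + u·V_x] with V(T, x) = 0. The term u·V_x is exactly the directional derivative of V along the dynamics, so one backward time step can be written V(t, x) = min_u [dt·(x² + u²) + V(t + dt, x + dt·u)], evaluated on a grid with interpolation. The grid sizes and the specific problem are assumptions for illustration.

```python
import numpy as np

# Toy 1-D problem (illustrative only, not the paper's example):
#   minimize J = integral_t^T (x^2 + u^2) ds,  dx/ds = u,
# with HJB equation -V_t = min_u [x^2 + u^2 + u*V_x], V(T, x) = 0.
# The term u*V_x is the directional derivative of V along the dynamics,
# so a backward Euler step in time reads
#   V(t, x) = min_u [ dt*(x^2 + u^2) + V(t + dt, x + dt*u) ].

T, nt = 1.0, 200
dt = T / nt
xs = np.linspace(-3.0, 3.0, 601)   # state grid
us = np.linspace(-3.0, 3.0, 241)   # control grid

V = np.zeros_like(xs)              # terminal condition V(T, x) = 0
for _ in range(nt):
    best = np.full_like(V, np.inf)
    for u in us:
        # step x -> x + dt*u along the dynamics, add running cost for this u
        cand = dt * (xs**2 + u**2) + np.interp(xs + dt * u, xs, V)
        best = np.minimum(best, cand)
    V = best

# Analytic check via the Riccati equation: V(0, x) = tanh(T) * x^2
print(np.interp(1.0, xs, V), np.tanh(1.0))
```

For this problem the value function is known in closed form, V(t, x) = tanh(T - t)·x², so the computed V(0, 1) can be compared directly with tanh(1) ≈ 0.7616, mirroring the paper's strategy of validating the numerical scheme against an analytically solvable case.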
Original language | English |
---|---|
Pages (from-to) | 95-104 |
Number of pages | 10 |
Journal | IMA Journal of Mathematical Control and Information |
Volume | 26 |
Issue number | 1 |
DOI | |
Publication status | Published - 2009 |
Published externally | Yes |