Abstract
A new algorithm for computing numerical solutions of optimal feedback control problems based on dynamic programming is developed. The algorithm rests on two observations: (1) the value function of the optimal control problem considered is the viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, and (2) the gradient of the value function appears in the HJB equation only in the form of a directional derivative. The algorithm discretizes in time via a finite-difference scheme and seeks optimal control-trajectory pairs by solving the HJB equation together with the state equation. We apply the algorithm to a simple optimal control problem that can be solved analytically. The agreement of the numerical solution with its analytical counterpart indicates the effectiveness of the algorithm.
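The approach described above, solving the HJB equation backward in time with a finite-difference scheme and checking the result against an analytically solvable problem, can be illustrated with a minimal sketch. The example below is not the paper's algorithm; it is a generic semi-Lagrangian backward-induction solve, with an illustrative toy problem (minimize the integral of x² + u² subject to dx/dt = u), whose exact value function is V(x, t) = tanh(T − t)·x². All grids and parameters are assumptions chosen for the sketch.

```python
import numpy as np

# Toy optimal control problem (illustrative, not from the paper):
#   minimize  integral_0^T (x^2 + u^2) dt,   subject to  dx/dt = u
# Analytical value function: V(x, t) = tanh(T - t) * x^2.

T, n_t = 1.0, 100                      # horizon and number of time steps
dt = T / n_t
xs = np.linspace(-2.0, 2.0, 401)       # state grid
us = np.linspace(-3.0, 3.0, 121)       # candidate control values

V = np.zeros_like(xs)                  # terminal condition V(x, T) = 0
for _ in range(n_t):                   # backward induction in time
    # For each candidate u: running cost over one step plus the
    # interpolated value at the Euler-advanced state x + u*dt.
    costs = [(xs**2 + u**2) * dt + np.interp(xs + u * dt, xs, V) for u in us]
    V = np.min(costs, axis=0)          # pointwise minimization over u

x0 = 1.0
numerical = np.interp(x0, xs, V)       # V(x0, 0), numerically
analytical = np.tanh(T) * x0**2        # V(x0, 0), exact
print(numerical, analytical)
```

On this toy problem the numerical value at x = 1, t = 0 closely matches the analytical value tanh(1) ≈ 0.762, mirroring the consistency check the abstract describes; the minimizing control at each grid point also recovers the optimal feedback law u = −tanh(T − t)·x.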
Original language | English |
---|---|
Pages (from-to) | 95-104 |
Number of pages | 10 |
Journal | IMA Journal of Mathematical Control and Information |
Volume | 26 |
Issue number | 1 |
DOIs | |
Publication status | Published - 2009 |
Externally published | Yes |
Keywords
- Dynamic programming
- Exponential stability
- Numerical solution
- Optimal feedback control
- Viscosity solution