Computational Science, Engineering & Technology Series
ISSN 1759-3158
Edited by: B.H.V. Topping
Chapter 9

Stochastic Optimal Open-Loop Feedback Control

K. Marti

Aerospace Engineering and Technology, Federal Armed Forces University Munich, Neubiberg/Munich, Germany

Full Bibliographic Reference for this chapter
K. Marti, "Stochastic Optimal Open-Loop Feedback Control", in B.H.V. Topping, (Editor), "Computational Methods for Engineering Science", Saxe-Coburg Publications, Stirlingshire, UK, Chapter 9, pp 211-235, 2012. doi:10.4203/csets.30.9
Keywords: optimal regulators under stochastic uncertainty, stochastic Hamiltonian, H-minimal control, stochastic optimal open-loop control, stochastic optimal open-loop feedback control, two-point boundary value problem with random parameters, numerical solution of the two-point boundary value problem, reduction of the boundary value problem to a fixed point condition, discretization of the fixed point condition.

The aim of this chapter is to determine (approximate) feedback control laws for control systems under stochastic uncertainty. Stochastic parameter variations are always present due to the following types of stochastic uncertainty: physical uncertainty (variability of physical quantities such as material properties, loads, dimensions, etc.); manufacturing uncertainty (manufacturing errors, tolerances, etc.); economic uncertainty (costs, trade, demand, etc.); statistical uncertainty (e.g. estimation errors due to limited a priori and/or sample data); and model and environmental uncertainty (e.g. initial and terminal conditions, model errors or model inaccuracies).

In order to obtain optimal controls that are insensitive to random parameter variations, and hence robust, cf. [1], the problem is modeled in the framework of optimal control under stochastic uncertainty: minimize the expected total costs arising along the trajectory, at the terminal time point and from the control input, subject to the dynamic equation and possible control constraints. As is well known, e.g. from model predictive control [2], optimal feedback controls can be approximated very efficiently by optimal open-loop feedback controls, which are based on a certain family of optimal open-loop controls. Hence, for practical purposes it is sufficient to determine optimal open-loop controls only. Extending this construction, stochastic optimal open-loop feedback control laws are constructed by taking the random parameter variations of the control system into account: stochastic optimal open-loop controls are first computed on the "remaining" time intervals, and a stochastic optimal open-loop feedback control law is then obtained by evaluating each of these controls at its intermediate starting time point only.
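The open-loop feedback construction described above can be sketched in a few lines: at each intermediate starting time, an open-loop control is computed on the remaining interval, but only its value at that starting time is applied before moving on. The sketch below is illustrative only; the names solve_open_loop and dynamics are hypothetical placeholders standing in for the chapter's stochastic open-loop solver and system dynamics, and a simple explicit Euler step replaces the true plant.

```python
import numpy as np

def open_loop_feedback(x0, t_grid, solve_open_loop, dynamics):
    """Sketch of the open-loop feedback construction.

    solve_open_loop(t, x) is assumed to return the optimal open-loop
    control on the remaining interval [t, T] as a callable s -> u*(s; t, x);
    dynamics(t, x, u) is the (deterministic surrogate of the) right-hand
    side of the control system.  Both are placeholders for this sketch.
    """
    x = np.asarray(x0, dtype=float)
    applied = []
    for i, t in enumerate(t_grid[:-1]):
        # Optimal open-loop control on the remaining interval [t, T] ...
        u_open = solve_open_loop(t, x)
        # ... but only its value at the current starting time is applied.
        u = u_open(t)
        applied.append(u)
        # Propagate the state to the next intermediate starting time
        # (explicit Euler step, standing in for the real plant).
        dt = t_grid[i + 1] - t
        x = x + dt * dynamics(t, x, u)
    return np.array(applied)
```

With a proportional open-loop law u*(s) = -x (a toy stand-in for a solver), the applied controls decay along the trajectory, as expected for a stabilizing feedback.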

For the computation of stochastic optimal open-loop controls at each intermediate starting time point, the stochastic Hamiltonian H of the optimal control problem under stochastic uncertainty is introduced. An H-minimal control law can then be determined by solving a finite-dimensional stochastic optimization problem [3]: minimize the conditional expectation of the stochastic Hamiltonian subject to the remaining deterministic control constraints at the current time point. Given an H-minimal control, the related two-point boundary value problem with random parameters is formulated for the computation of the stochastic optimal state and adjoint state trajectories. In the case of a linear-quadratic control problem, which arises often in engineering practice, the state and adjoint state trajectories can be determined analytically to a large extent. Inserting these trajectories into the H-minimal control then yields the stochastic optimal open-loop controls.
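The construction can be summarized formally as follows. The notation (cost rate L, dynamics f, terminal cost G, feasible control set D(t), and information sigma-algebra at the starting time t_b) is an assumed, standard Pontryagin-type formulation and is not taken verbatim from the chapter:

```latex
% Stochastic Hamiltonian (assumed standard form):
\[
  H(t, x, y, u, \omega) := L(t, x, u, \omega) + y^{\top} f(t, x, u, \omega).
\]
% H-minimal control: minimize the conditional expectation of H subject to
% the remaining deterministic control constraints at the current time point:
\[
  u^{e}(t, x, y) \in \operatorname*{arg\,min}_{u \in D(t)}
    \mathbb{E}\bigl[\, H(t, x, y, u, \omega) \mid \mathcal{A}_{t_b} \bigr].
\]
% Related two-point boundary value problem with random parameters:
\[
  \dot{x} = f\bigl(t, x, u^{e}, \omega\bigr), \quad x(t_b) = x_b, \qquad
  \dot{y} = -\nabla_x H\bigl(t, x, y, u^{e}, \omega\bigr), \quad
  y(t_f) = \nabla_x G\bigl(x(t_f), \omega\bigr).
\]
```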

For solving the two-point boundary value problem (BVP) with random parameters, instead of using the matrix Riccati differential equation, the BVP is reduced to a fixed-point equation for the conditional mean adjoint trajectory. Moreover, by approximating the integrals occurring in this fixed-point condition, approximate systems of linear equations are obtained for the values of the conditional mean adjoint trajectory needed at the intermediate starting time points. Thus, the calculations can be done in real time.
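The discretization step can be illustrated on a fixed-point condition of the generic form ybar(t) = g(t) + integral from t to T of K(t, s) ybar(s) ds. The scalar setting, the kernel K, and the inhomogeneity g below are assumptions for this sketch; in the chapter the fixed-point condition arises from the stochastic linear-quadratic BVP. Approximating the integral by the trapezoidal rule on a time grid turns the condition into a linear system for the grid values of ybar, which is what makes real-time evaluation feasible.

```python
import numpy as np

def solve_fixed_point(K, g, t_grid):
    """Discretize ybar(t) = g(t) + int_t^T K(t, s) ybar(s) ds.

    Trapezoidal weights on [t_i, T] (zero weight for s < t_i) turn the
    fixed-point condition into the linear system (I - A) ybar = g for the
    grid values of ybar.  K and g are illustrative placeholders.
    """
    n = len(t_grid)
    h = np.diff(t_grid)
    A = np.zeros((n, n))
    for i in range(n):
        # Trapezoidal quadrature weights for the integral over [t_i, T].
        w = np.zeros(n)
        for j in range(i, n - 1):
            w[j] += 0.5 * h[j]
            w[j + 1] += 0.5 * h[j]
        A[i, :] = w * np.array([K(t_grid[i], s) for s in t_grid])
    gv = np.array([g(t) for t in t_grid])
    # Approximate linear system replacing the fixed-point condition.
    return np.linalg.solve(np.eye(n) - A, gv)
```

For the constant kernel K = 1 and g = 1 on [0, 1], the exact solution of the fixed-point condition is ybar(t) = exp(1 - t), which the discretized linear system reproduces up to quadrature error.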

[1] G.E. Dullerud, F. Paganini, "A Course in Robust Control Theory: A Convex Approach", Springer-Verlag, New York, 2000.
[2] J. Richalet, A. Rault, J.L. Testud, J. Papon, "Model Predictive Heuristic Control: Applications to Industrial Processes", Automatica, 14, 413-428, 1978.
[3] K. Marti, "Stochastic Optimization Methods", 2nd edition, Springer-Verlag, Berlin-Heidelberg-New York, 2008.
