Optimal control
Mathematical way of attaining a desired output from a dynamic system
Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized.[1] It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure.[2] Or the dynamical system could be a nation's economy, with the objective of minimizing unemployment; the controls in this case could be fiscal and monetary policy.[3] A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.[4][5]
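Formally, a typical optimal control problem asks for a control that minimizes a cost functional subject to the system's dynamics. A standard continuous-time formulation, written here for concreteness (the notation is illustrative and not taken from the sources cited above), is:

```latex
% A common statement of the optimal control problem (Bolza form).
% Notation is illustrative: x(t) is the state, u(t) the control,
% L the running cost, and \Phi the terminal cost.
\min_{u(\cdot)} \quad J = \Phi\bigl(x(T)\bigr)
    + \int_{0}^{T} L\bigl(x(t),\, u(t),\, t\bigr)\, dt
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t),\, u(t),\, t\bigr), \qquad x(0) = x_{0}.
```

In the spacecraft example above, x(t) would collect the craft's position and velocity, u(t) the thruster commands, and J the fuel expended en route to the Moon.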
![Illustration of an optimal control problem](http://upload.wikimedia.org/wikipedia/commons/thumb/e/e5/Optimal_Control_Luus.png/640px-Optimal_Control_Luus.png)
Optimal control is an extension of the calculus of variations and is a mathematical optimization method for deriving control policies.[6] The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, following earlier contributions to the calculus of variations by Edward J. McShane.[7] Optimal control can be seen as a control strategy in control theory.[1]