Optimal control
Mathematical way of attaining a desired output from a dynamic system
Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized.[1] It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure.[2] Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy.[3] A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.[4][5]
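A typical continuous-time problem of this kind can be written in the following form (a representative sketch; the symbols x for the state, u for the control, L for the running cost, and Φ for the terminal cost are conventional illustrative choices, not notation fixed by this article):

```latex
% Representative optimal control problem in Bolza form.
% x(t): state (e.g., spacecraft position and velocity)
% u(t): control (e.g., thruster commands)
% L:    running cost (e.g., rate of fuel expenditure)
% \Phi: terminal cost (e.g., penalty on missing the target state)
\min_{u(\cdot)} \; J = \Phi\bigl(x(T)\bigr) + \int_{0}^{T} L\bigl(x(t),\,u(t),\,t\bigr)\,dt
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t),\,u(t),\,t\bigr), \qquad x(0) = x_{0}.
```

In the spacecraft example, f encodes the vehicle's dynamics under thrust, and minimizing J with L equal to the fuel flow rate yields the minimum-fuel trajectory to the Moon.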
Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies.[6] The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to the calculus of variations by Edward J. McShane.[7] Within control theory, optimal control can be seen as one strategy for designing controllers.[1]
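The two lines of work mentioned above correspond to distinct solution techniques: Pontryagin's maximum principle gives first-order necessary conditions for an optimal control via a Hamiltonian, while Bellman's dynamic programming leads to the Hamilton–Jacobi–Bellman equation. A sketch of the Pontryagin conditions, stated for the illustrative problem above (λ denotes the costate; the notation is conventional, not taken from this article):

```latex
% Pontryagin's principle: define the Hamiltonian from the running cost
% and the dynamics, with costate (adjoint) variable \lambda.
H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\mathsf{T}} f(x, u, t)

% Necessary conditions along an optimal trajectory x^{*}, u^{*}:
\dot{\lambda}(t) = -\frac{\partial H}{\partial x}, \qquad
\lambda(T) = \frac{\partial \Phi}{\partial x}\bigg|_{t=T}, \qquad
u^{*}(t) = \arg\min_{u}\, H\bigl(x^{*}(t), u, \lambda(t), t\bigr).
```

Solving the state and costate equations together with the pointwise minimization of H is the basis of so-called indirect numerical methods for optimal control.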