# Pontryagin's Minimum Principle

Equivalently, "Pontryagin's Maximum Principle".
## Motivation
How do we solve an optimal control problem? Let's begin by identifying the conditions that determine whether a candidate solution is optimal. These conditions are Pontryagin's Minimum Principle.
This principle provides a set of first-order necessary conditions for deterministic optimal control problems. In discrete time, they are a special case of the KKT conditions.
## Theory
Given our objective
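For concreteness, a standard discrete-time formulation (the symbols $\ell$, $\ell_N$, and $f$ for the stage cost, terminal cost, and dynamics are assumed names):

$$
\min_{x_{1:N},\, u_{1:N-1}} \; \sum_{k=1}^{N-1} \ell(x_k, u_k) + \ell_N(x_N)
\quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k)
$$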
We can form the lagrangian
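In the standard way, introducing a multiplier $\lambda_{k+1}$ for each dynamics constraint:

$$
L = \sum_{k=1}^{N-1} \Big[ \ell(x_k, u_k) + \lambda_{k+1}^\top \big( f(x_k, u_k) - x_{k+1} \big) \Big] + \ell_N(x_N)
$$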
Introducing the “Hamiltonian” (\(H\)) we have
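With the usual definition

$$
H(x, u, \lambda) = \ell(x, u) + \lambda^\top f(x, u)
$$

the Lagrangian can be regrouped as

$$
L = H(x_1, u_1, \lambda_2) + \sum_{k=2}^{N-1} \Big[ H(x_k, u_k, \lambda_{k+1}) - \lambda_k^\top x_k \Big] + \ell_N(x_N) - \lambda_N^\top x_N
$$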
Taking the derivatives with respect to \(x\) and \(\lambda\); and an explicit minimisation for \(u\)
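This gives the standard stationarity conditions (the explicit $\arg\min$ for $u$ handles control constraints $u \in \mathcal{U}$):

$$
\begin{aligned}
\frac{\partial L}{\partial \lambda_{k+1}} &= f(x_k, u_k) - x_{k+1} = 0 \\
\frac{\partial L}{\partial x_k} &= \nabla_x \ell(x_k, u_k) + \left(\frac{\partial f}{\partial x}\right)^{\!\top} \lambda_{k+1} - \lambda_k = 0 \\
\frac{\partial L}{\partial x_N} &= \nabla \ell_N(x_N) - \lambda_N = 0 \\
u_k &= \arg\min_{\tilde{u} \in \mathcal{U}} H(x_k, \tilde{u}, \lambda_{k+1})
\end{aligned}
$$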
In concise notation
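$$
\begin{aligned}
x_{k+1} &= \nabla_\lambda H(x_k, u_k, \lambda_{k+1}) \\
\lambda_k &= \nabla_x H(x_k, u_k, \lambda_{k+1}) \\
u_k &= \arg\min_{\tilde{u} \in \mathcal{U}} H(x_k, \tilde{u}, \lambda_{k+1}) \\
\lambda_N &= \nabla \ell_N(x_N)
\end{aligned}
$$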
They are almost identical in continuous time
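The difference equations simply become differential equations (with $\ell_T$ the terminal cost at time $T$):

$$
\begin{aligned}
\dot{x} &= \nabla_\lambda H(x, u, \lambda) \\
-\dot{\lambda} &= \nabla_x H(x, u, \lambda) \\
u(t) &= \arg\min_{\tilde{u} \in \mathcal{U}} H(x(t), \tilde{u}, \lambda(t)) \\
\lambda(T) &= \nabla \ell_T(x(T))
\end{aligned}
$$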
## Practical
Historically, many algorithms were based on integrating the continuous-time ODEs forward/backward to do gradient descent on \(u(t)\).
These are called “indirect” or “shooting” methods.
In continuous time \(\lambda(t)\) is called the costate trajectory.
These methods have largely fallen out of favour as computers have gotten faster.
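To make the shooting idea concrete, here is a minimal single-shooting sketch in discrete time on a hypothetical problem (a double integrator with quadratic costs; the names `A`, `B`, `Q`, `R`, `QN` and all numerical values are illustrative assumptions, not from the text). It rolls the dynamics forward, integrates the costates backward, and uses the Hamiltonian gradient to descend on the control sequence:

```python
# Single-shooting gradient descent via the discrete-time minimum principle.
# Problem data below (double integrator, quadratic costs) is an assumed example.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # discretized double-integrator dynamics
B = np.array([[0.005],
              [0.1]])
Q  = np.eye(2)               # stage state cost
R  = 0.1 * np.eye(1)         # stage control cost
QN = 10.0 * np.eye(2)        # terminal state cost
N  = 20                      # horizon length
x0 = np.array([1.0, 0.0])

def rollout(u):
    """Forward pass: integrate x_{k+1} = A x_k + B u_k from x0."""
    x = np.zeros((N, 2))
    x[0] = x0
    for k in range(N - 1):
        x[k + 1] = A @ x[k] + B @ u[k]
    return x

def cost(x, u):
    """Quadratic objective J = sum of stage costs plus terminal cost."""
    J = 0.5 * x[-1] @ QN @ x[-1]
    for k in range(N - 1):
        J += 0.5 * x[k] @ Q @ x[k] + 0.5 * u[k] @ R @ u[k]
    return J

def gradient(x, u):
    """Backward pass: costates lam_N = QN x_N, lam_k = Q x_k + A^T lam_{k+1}.
    The gradient of J w.r.t. u_k is the Hamiltonian gradient R u_k + B^T lam_{k+1}."""
    lam = QN @ x[-1]
    g = np.zeros_like(u)
    for k in reversed(range(N - 1)):
        g[k] = R @ u[k] + B.T @ lam
        lam = Q @ x[k] + A.T @ lam
    return g

u = np.zeros((N - 1, 1))     # initial guess: zero controls
for _ in range(300):         # plain gradient descent on the control sequence
    u = u - 0.05 * gradient(rollout(u), u)

print(cost(rollout(u), u))
```

The backward costate recursion here is exactly the discrete $\lambda_k = \nabla_x H$ condition, and the descent direction is $\nabla_u H$; a real indirect method would integrate continuous ODEs and use a more careful line search.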
## Intuition
Great, now we have some recursive conditions that describe the solution to an optimal control problem. But how do we leverage them to actually solve the problem?
The next chapter will demonstrate the solution process on an LQR problem formulation.