It’s an elegant collection of mathematics whose application is behind everything from artificial pancreases to cloud infrastructure management to landing rockets on barges.

PID control, as described in the article, is the main result of ‘Classical’ control theory and is probably the most widely used control algorithm in every field of engineering.
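
For readers who want to see it concretely, here’s a minimal sketch of a discrete PID loop (the gains and the toy first-order plant are invented for illustration, not taken from the article):

```python
# Minimal discrete PID controller driving a toy first-order plant.
# Gains and plant are made up for illustration.
def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(setpoint, measurement):
        err = setpoint - measurement
        state["integral"] += err * dt          # I term accumulates error
        deriv = (err - state["prev_err"]) / dt  # D term: rate of change
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

dt = 0.1
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=dt)
x = 0.0                       # plant state, starts at rest
for _ in range(400):
    u = pid(1.0, x)           # drive the plant towards setpoint 1.0
    x += dt * (-x + u)        # toy plant: x' = -x + u (Euler step)
print(round(x, 2))
```

The integral term is what removes the steady-state error here; with only P and D the plant would settle short of the setpoint.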

However, control theory is a busy field and there are a lot of interesting branches of it that are now being applied to real problems.

One branch is robust control. You’ve designed your PID controller in the article, but that assumes you know exactly what to put in each of the boxes - i.e. that you know what the dynamics of your system are, or what the disturbances are likely to be. But what if you’re uncertain about any of these things? Robust control lets you design controllers that are guaranteed to be stable (up to some limit) given some (correspondingly bounded) uncertainty. So for example autopilots on the most modern aircraft (like the F-35) will try to keep the rudder doing the correct thing even when the wing in front of it has stalled and the airflow over it is turbulent. It’s used in modern space missions too - maybe your satellite responds differently to thruster firings when the solar panels are oriented one way vs another. You don’t want your satellite to start tumbling under any circumstances, so you use robust control to guarantee stability even with the uncertainty of the satellite’s physical response to impulses.
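
You can get a feel for the idea with a brute-force check (a toy sketch, nothing like real H-infinity synthesis): fix a PID, sweep an uncertain plant gain over its assumed bounded range, and confirm the closed-loop poles stay in the left half-plane for every value:

```python
import numpy as np

# Brute-force robustness check: plant G(s) = k/(s+1) with an uncertain
# gain k, PID controller C(s) = kd*s + kp + ki/s. All numbers invented.
kp, ki, kd = 2.0, 0.5, 0.1

def closed_loop_poles(k):
    # Characteristic polynomial of the closed loop:
    #   s(s+1) + k*(kd*s^2 + kp*s + ki) = 0
    return np.roots([1 + k * kd, 1 + k * kp, k * ki])

# Assumed uncertainty: k could be anywhere in [0.5, 2.0].
robust = all(
    np.all(closed_loop_poles(k).real < 0)   # every pole strictly stable
    for k in np.linspace(0.5, 2.0, 50)
)
print(robust)
```

Real robust-control tools (H-infinity, mu-synthesis) give you this guarantee analytically for structured uncertainty rather than by sampling, but the question being asked is the same.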

It’s no coincidence that aerospace likes robust control - usually the costs of instability are very high, whether financially or often in terms of life, so robustness is highly prized (and it’s a lot of mathematics, so there’s a certain threshold before it’s worth considering!).

Another branch is optimal control. How can I design a controller that gets me to the state I want whilst minimising some cost I impose? More concretely, how can my lunar hopper get me from here to that scientifically interesting site over there for the smallest amount of fuel? This often manifests itself as an onboard computer working constantly to observe its state (radars, lidars, computer vision) and doing an onboard, real-time convex optimisation (e.g. via a gradient-descent or interior-point algorithm) to figure out the best way of getting from where it is to where it wants to be. This is obviously computationally demanding, so this field (Model Predictive Control is the term of art) is only just starting to see real applications in fast dynamical systems, but it’s been used for many years in the process industry. And what if your problem can’t be convexified, so you have to do some hugely expensive online optimisation? An active area of research!
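
Here’s a toy sketch of that receding-horizon idea for a 1D double integrator (horizon length, weights and dynamics all invented for illustration). Because there are no constraints, each step’s convex problem even has a closed-form minimiser:

```python
import numpy as np

dt, N = 0.1, 20                         # step size and horizon (assumed)
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])                 # state cost (position, velocity)
r = 0.01                                # control-effort cost

# Stack the linear dynamics over the horizon: X = F x0 + G U.
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()

Qbar = np.kron(np.eye(N), Q)
target = np.tile([1.0, 0.0], N)         # reach position 1, velocity 0

def mpc_step(x):
    # Receding horizon: minimise a convex quadratic in the whole input
    # sequence U, then apply only the first input and re-solve next step.
    H = G.T @ Qbar @ G + r * np.eye(N)
    f = G.T @ Qbar @ (F @ x - target)
    U = np.linalg.solve(H, -f)          # unconstrained convex minimiser
    return U[0]

x = np.array([0.0, 0.0])
for _ in range(100):
    u = mpc_step(x)
    x = A @ x + B.ravel() * u
print(np.round(x, 2))
```

The moment you add constraints (thrust limits, no-fly zones) the closed form disappears and you need a real-time QP solver on board - which is exactly where the computational difficulty mentioned above comes from.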

Another exciting branch is adaptive control - can the controller learn its dynamics in real time? It’s often compared to robust control because it’s another way of approaching the same problem - say our F-35 has a bit of its wing shot off; can the autopilot still fly the aircraft? The robust controller would hopefully still be good enough to keep it stable despite its internal model of the dynamics differing from the new reality, whereas the adaptive controller would re-learn the new dynamics and change itself into the right controller for a broken-wing aircraft.
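
A classic toy version of this is the MIT rule: the plant gain is unknown to the controller, and an adjustable gain is tuned online so the plant output tracks a reference model (all the numbers here are invented for illustration):

```python
# MIT-rule adaptive control sketch. The true plant gain k is hidden
# from the controller; theta is adapted online until the plant output
# y tracks the reference model output ym.
dt, gamma = 0.01, 0.5
k = 2.0              # true plant gain -- the controller never sees this
theta = 0.0          # adaptive feedforward gain, starts knowing nothing
y = ym = 0.0         # plant and reference-model outputs
r = 1.0              # reference command

for _ in range(5000):
    u = theta * r                     # control law with the learned gain
    y += dt * (-y + k * u)            # plant:  y' = -y + k*u
    ym += dt * (-ym + r)              # model:  ym' = -ym + r
    e = y - ym                        # tracking error vs the model
    theta += dt * (-gamma * e * ym)   # MIT rule: theta' = -gamma*e*ym

print(round(theta, 2))
```

At convergence theta approaches 1/k, i.e. the controller has effectively identified the unknown gain without ever being told it - the same trick, scaled up enormously, is what lets an adaptive autopilot cope with suddenly-changed dynamics.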

I used the word learn! Yes, we can play at AI and deep learning in control theory too, but do we want to? If we replace all the analysis that’s gone into the F-35 or the X-ray CT machine with a neural network and just tell it to do its best, it might learn how to operate the system well - but can we guarantee that it will? This is one of the problems with applying NNs: the difficulty of proving that they won’t do something silly.

We could rephrase the problem in control-theory language though. A neural network is just a (very!) non-linear dynamical system, and we want to prove its stability for some (bounded) inputs. Does control theory have any tools to help here? Yes! In fact this is another very active area of research: Lyapunov functions can be used to prove the stability of simple NNs, and extending Lyapunov stability arguments to NNs in general could allow NNs (which make very promising controllers for non-linear control problems) to be applied to safety-critical systems.
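
Here’s what the linear core of such an argument looks like: certify a tiny linear recurrent system (a stand-in for a linearised NN layer; the weights are made up, and real NN results have to handle the nonlinear activations too) by solving the discrete Lyapunov equation and checking that V(x) = xᵀPx strictly decreases:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Tiny linear recurrent system x_{t+1} = W x, a stand-in for a
# linearised NN layer. Weights are invented and contractive.
W = np.array([[0.5, 0.2],
              [-0.1, 0.6]])

# Solve the discrete Lyapunov equation  W^T P W - P = -Q  for P.
Q = np.eye(2)
P = solve_discrete_lyapunov(W.T, Q)

# P positive definite => V(x) = x^T P x is a valid Lyapunov function,
# and V decreasing along every trajectory => asymptotic stability.
assert np.all(np.linalg.eigvalsh(P) > 0)

x = np.array([1.0, -1.0])
V = x @ P @ x
for _ in range(10):
    x = W @ x
    V_next = x @ P @ x
    assert V_next < V     # V strictly decreases at every step
    V = V_next
print("certified stable")
```

The research challenge is doing this when the layers have ReLUs or tanhs in between - the Lyapunov condition then becomes a semidefinite or SMT feasibility problem rather than one linear-algebra solve.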

There are loads of other branches of control theory; these are just a few. It’s a fascinating field that solves problems in all engineering disciplines.

Control Theory is a wonderful field!
