What Is Linearization?
From the series: Trimming and Linearization
Brian Douglas
Why go through the trouble of linearizing a model? To paraphrase Richard Feynman, it’s because we know how to solve linear systems. With a linear model we can more easily design a controller, assess stability, and understand the system dynamics.
This video introduces the concept of linearization and covers some of the topics that will help you understand how linearization is used and why it’s helpful.
This video also describes operating points and the process of trimming your system to make an operating point an equilibrium.
To end, we walk through an example of Jacobian linearization by looking at the first order partial derivatives of a system.
Published: 4 Dec 2018
In this video I am going to introduce the concept of linearization and cover some of the topics that will help you understand how linearization is used and why it’s helpful. Consider a physical system that is modeled as a differential equation of the form xdot equals a function of x and u. What this means is that how the system changes over time depends on the current state of the system and on the external inputs into the system; these could be external forces, torques, energy, and so on. Often this representation is nonlinear, since all real systems are nonlinear in nature. The question is, can we find a suitable linear, time-invariant combination of system states and external inputs that produces results similar to the nonlinear dynamics in some limited sense? That is, can we fit the model that is a nonlinear function of x and u to the linear form xdot = Ax + Bu, and what are the implications of doing so? Well, let’s talk about that. I’m Brian, and welcome to a MATLAB Tech Talk.
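Written out, the question is whether the general nonlinear model can be approximated near a chosen condition by the linear, time-invariant form:

\dot{x} = f(x, u) \quad\longrightarrow\quad \dot{x} \approx A\,x + B\,u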
To begin, let’s look at a MATLAB example of a water-tank system. Water is pumped into the tank through an opening at the top at a rate proportional to the voltage that drives the pump, and there is a drain at the bottom where water can leave the tank. The height of the water in the tank changes and can be affected by controlling the voltage to the pump. If you open a MATLAB command window and type "watertank", the following Simulink model will pop up. This is a classic feedback system: there’s a reference water height, a controller, and a plant that represents the dynamics of the water-tank system; the input is voltage, and the output is water height. Within the plant subsystem we have the details of how it is modeled, and from this we can write out the differential equation for the system. The parameters a, b, and capital A are constants specific to the tank; they relate to the flow rate into and out of the tank and to its cross-sectional area. So what we have is that the change in height of the water equals the height gained from water entering the tank minus the height lost from water leaving, and the amount lost is proportional to the square root of the water height. This square root makes it a nonlinear model. So at this point, we could linearize this model and fit it to the form x dot = Ax + Bu.
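As a reference, here is that tank equation written as a small MATLAB function handle. The numeric values of a, b, and A below are placeholders chosen just for illustration, not necessarily the values used in the shipped watertank model:

% Water-tank dynamics: Hdot = (b*V - a*sqrt(H))/A
a = 2;  b = 5;  Across = 20;                 % example tank constants (assumed for illustration)
Hdot = @(H, V) (b*V - a*sqrt(H))/Across;     % rate of change of the water height
Hdot(1, 1)                                   % evaluate at a height of 1 and a pump voltage of 1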
So the question might be, why go through the trouble of getting a linear model? Why not just stay with the more accurate nonlinear model? There are at least three good reasons. One, we can more easily check local stability and understand the dynamics of a linear system than we can for a nonlinear system. Two, we have a bunch of tools that we can use to design a controller for linear models. And three, we can speed up a simulation by replacing the nonlinear model with a linear one. This might be important if you are running a hardware-in-the-loop test where your controller is running on the target hardware but is interacting with a simulation of the rest of the system dynamics. In some cases, the nonlinear math involved in running that simulation can’t be solved fast enough to keep up with the real-time hardware, so replacing it with a faster linear equivalent is a good option. So let’s get into the process of linearization.
At any moment in time, a dynamical system has a state, that is, a particular configuration of system variables that defines the condition of the system (position, velocity, voltage, and so on). The set of all possible configurations that a system can be in is called the state space. This is every single orientation, motion, and condition that a system can experience. For example, take a pendulum that has two states: angle theta and angular rate theta dot. The state space would stretch from 0 to 2 pi for theta, to cover the entire circle, and from negative to positive infinity for theta dot, to cover all possible rates at which it could spin. Of course, realistically, your pendulum might never exceed something like 10 rad/s, so we can chop the state space down to this much smaller operating state space, or operating envelope.

It would be ideal if we could just get a linear model that worked well over the entire state space. But that usually isn’t the case, because a single linear model might represent the dynamics well at some states but, due to the nonlinearity of the system, be a poor representation at others. So to work around this, we linearize at specific locations called operating points, where we want the lowest error.

An important and very common type of operating point is when the system is at steady state, or equilibrium. This means that if you initialized the system at this state, then the states would not change over time. Or, another way of putting it: x dot for all time equals zero. That might mean that the system states are at an equilibrium on their own with no external inputs, like a pendulum, which has two equilibrium points: one hanging straight down and one perfectly balanced pointing straight up. But we also have inputs into the system, and the combination of inputs and system states can also be an equilibrium. For example, if the input into the pendulum is a torque, then we can find a constant torque that moves the equilibrium point to pi/4 radians. If we initialized the pendulum in this condition, it would not move, since the torque from gravity and the input torque perfectly balance out, and we can linearize around this condition. The act of finding an equilibrium by adjusting the input signals is called trimming. This is a term borrowed from aerospace, like trimming an aircraft to fly straight and level.
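Before getting to the aircraft analogy, here is a small numeric sketch of that pendulum trim. The model form and the values of m, L, and g are assumptions made just for this illustration:

% Simple point-mass pendulum: thetaddot = -(g/L)*sin(theta) + tau/(m*L^2)
% Setting thetaddot = 0 and thetadot = 0 at theta = pi/4 gives the trim torque.
m = 1;  L = 0.5;  g = 9.81;        % assumed mass, length, and gravity
thetaBar = pi/4;                   % desired equilibrium angle
tauTrim = m*g*L*sin(thetaBar)      % constant torque that balances gravity at pi/4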
Imagine an aircraft that is set up to hold a constant altitude when flying at a zero-degree angle of attack at a given airspeed with no control input from the elevator. This is a steady state condition for this system. However, if the aircraft started slowing down, or was flying at an airspeed lower than the steady state condition, the wings would generate less lift and the altitude would start to drop. Altitude could still be held constant at this new speed if the pilot added an input into the system by pulling back on the yoke to deflect the elevator up, raising the angle of attack and generating more lift. But rather than having to constantly pull back on the yoke, the pilot could trim the elevator so that the neutral yoke position produced the needed elevator position that held the altitude constant. This is what it means to trim: Find the combination of the system states and inputs that produce a steady state situation at your chosen operating points.
You can imagine trimming can be a complicated task if your system has dozens of states you’re trying to hold constant and several control inputs that you can use to control them. In the next video, we’ll talk about some of the tools available within Simulink that make trimming easy. For right now, though, it’s enough to understand the concept of trimming and how it relates to producing a steady state operating point.
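As a quick preview of those tools (the details come in the next video), trimming the watertank model programmatically looks roughly like this, assuming Simulink Control Design is installed:

watertank                          % open the example model
opspec = operspec('watertank');    % default spec asks for all states to be at steady state
op = findop('watertank', opspec);  % solve for the trimmed operating point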
Let me go off on a real quick aside. It’s not always possible to trim to steady state at any arbitrary point in the state space. For example, there is no combination of inputs that will cause an aircraft to fly backwards at steady state, even though the state space allows for that motion. So you have to understand your system well enough that you don’t pick a combination of states that you can’t find an equilibrium for.
Okay, back to it. We’ve picked our operating point and trimmed the system so that it’s at equilibrium. The next step is to linearize at that operating point.
To show how we do this, let’s walk through two quick examples. I grabbed this first example from Wikipedia, but I want to retell it here and embellish it a bit because I think it’ll give you a basic idea behind linearization and how the math gets simplified.
We’ll look at the equation f(x) equals the square root of x. Now, this isn’t a differential equation, but it’ll still help to get the points across.
The square root of x is a nonlinear function that looks something like this. And we may ask, what is the value of this function at x = 4? This is pretty easy to solve and we get 2. However, a harder problem would be, what is the value at x=4.001? It would probably take a fair amount of time to solve this by hand. However, we can simplify it by estimating the value using a linear equation. The idea is that as long as we don’t stray too far from the operating point, then the error between the linear estimate and the true value will be small. Okay, so let’s do that. The equation for a line is y = mx + b where m is the slope and b is the offset.
The slope at any point along the function is just the derivative of the function, and the offset is the value of the function at the operating point. So our linearized equation y(x) would be the slope at the operating point x bar, times the distance away from the operating point, plus the offset. This is a linearization around x bar and, in our case, x bar is 4. This gives the linear equation y(x) = 2 + 1/4 * (x - 4).
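In symbols, with f(x) the square root of x and x bar = 4, the derivative supplies the slope:

y(x) = f(\bar{x}) + f'(\bar{x})\,(x - \bar{x}) = \sqrt{4} + \frac{1}{2\sqrt{4}}\,(x - 4) = 2 + \tfrac{1}{4}\,(x - 4)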
Now we can check the value at x = 4.001 and easily do the math to get 2.00025. The real function evaluated at 4.001 is 2.00024998. That’s an error of less than a millionth of a percent. And while it felt like a lot of steps to linearize, we can now use this equation to estimate the square root of 4.1 or 3.9 or any number near the operating point quickly and with very little error.
However, if we try to use our linear equation at x = 2, the error grows to around 6 percent. So if that error is more than we can handle, we may choose to also linearize at the operating point 2 and then hand off between the two linear models as x changes between 2 and 4.
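Here is that comparison as a few lines of MATLAB, just reproducing the numbers quoted above:

f    = @(x) sqrt(x);               % the nonlinear function
ylin = @(x) 2 + (1/4)*(x - 4);     % linearization around xbar = 4

relErr = @(x) abs(ylin(x) - f(x)) ./ f(x) * 100;   % percent error of the linear estimate
relErr(4.001)                      % well under a millionth of a percent
relErr(2)                          % roughly 6 percent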
Even though this example was simple, it ties in well with our tank example that we started with. Remember Hdot, which is a function of state H and input V, was given as this nonlinear equation. So let’s linearize it. First we choose an operating point and I’ll stick with H bar = 4 to make it similar to the last problem. Now we can trim the system so that H dot = 0 by setting H to the operating point and solving for the input. And we get V bar is 2a over b. With these values, the function evaluated at the operating point equals 0.
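The same trim calculation in MATLAB, reusing the assumed tank constants from earlier:

a = 2;  b = 5;  Across = 20;                 % assumed tank constants (illustration only)
Hbar = 4;                                    % chosen operating point for the water height
Vbar = a*sqrt(Hbar)/b;                       % solve (b*V - a*sqrt(H))/A = 0, which gives 2a/b
Hdot = (b*Vbar - a*sqrt(Hbar))/Across        % evaluates to 0: the system is trimmed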
This system only has one state, height, and one input, voltage. So we can expect the final linear equation to be in this form, where both matrices are 1 by 1. All right, now that we have a trimmed system at the operating point, we can linearize it. I want to remind you that what we did before was replace the function square root of x with a line, y = mx + b. We’re going to do something a bit different here, but the result will be very similar. We’re going to expand our differential equation with a Taylor series, but instead of keeping all of the terms out to infinity, we’re going to ignore all of the higher-order terms. In this way we’ll only keep the zeroth- and first-order terms, that is, the terms involving at most the first derivatives.
Now, the math involved in this is more than I want to cover in this video, but my goal is really just to give you an idea of how the derivative of a function can be used to generate a linear approximation. If you work through the Taylor series expansion and remove terms, you’re left with the following function.
And look at this. It bears a striking resemblance to y = mx + b. We have the offset, or the starting point of the function, which is just the value of Hdot at the operating point. And then added to that is the slope of the function as it relates to the changing state multiplied by the distance the state is away from the operating point. So this first addition takes care of the changes due to changing state, and the second addition takes care of the changes due to changing input.
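Written out, the first-order expansion around the trimmed operating point (H bar, V bar) looks like this:

\dot{H} \approx f(\bar{H}, \bar{V}) + \left.\frac{\partial f}{\partial H}\right|_{\bar{H},\bar{V}} (H - \bar{H}) + \left.\frac{\partial f}{\partial V}\right|_{\bar{H},\bar{V}} (V - \bar{V})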
But we can simplify this further, since the function’s value at the operating point is zero. Now we can solve the partial derivatives. To simplify the result even further, we can relabel where we call zero. So instead of the height being measured from the bottom of the tank, for example, we can measure it from the operating point. This condenses H - Hbar down to just H and V - Vbar down to V.
And what we’re left with is the linear equation of our water tank system at the operating point 4. And just like with the previous example, as we stray from this operating point, this linear model will behave less like its nonlinear counterpart. If we can’t handle the error, then we’d linearize a second time at a new operating point.
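Putting numbers to that result, again with the same assumed tank constants, the two partial derivatives give the 1-by-1 A and B, and a spot check against the nonlinear equation shows the approximation holds near the operating point:

a = 2;  b = 5;  Across = 20;                     % assumed tank constants (illustration only)
Hbar = 4;  Vbar = a*sqrt(Hbar)/b;                % trimmed operating point
f = @(H, V) (b*V - a*sqrt(H))/Across;            % nonlinear Hdot

Alin = -a/(2*Across*sqrt(Hbar));                 % d(Hdot)/dH at the operating point
Blin =  b/Across;                                % d(Hdot)/dV at the operating point

% Deviation variables: h = H - Hbar, v = V - Vbar, so hdot is approximately Alin*h + Blin*v
H = 4.2;  V = Vbar + 0.05;
linearEstimate = Alin*(H - Hbar) + Blin*(V - Vbar)
nonlinearValue = f(H, V)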
When you have multiple linear models, you can develop a linear controller for each of them and hand off the control gains from one region to the next. This is the idea behind Gain Scheduling, and I’ve linked a video in the description on this topic if you’re interested in learning more. There is also a link to the watertank model linearization example and to a MathWorks page where you can learn more about linearization of Simulink models, so I hope you check those out as well.
What we did here was Jacobian linearization. We built a linear model by looking at the first order derivatives of the system. This type of linearization requires the function be differentiable at the operating point. But there are other linearization methods as well, and in the next video we will explore some of those other methods, what tools exist for them, and how to understand what they are doing under the hood.
If you don’t want to miss the next Tech Talk video, don’t forget to subscribe to this channel. Also, if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching. I’ll see you next time.