Differential Equations and Linear Algebra, 6.4: The Matrix Exponential, exp(A*t)
From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
The shortest form of the solution uses the matrix exponential: y = e^(At) y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
Published: 27 Jan 2016
OK. We're still solving systems of differential equations with a matrix A in them.
And now I want to create the exponential. It's just natural to produce e to the A, or e to the A t. The exponential of a matrix. So if we have one equation, with a small a, then we know the solution is e to the a t, times the starting value.
Now we have n equations with a matrix A and a vector y. And the solution should be, at time t, e to the A t, times the starting value. It should be a perfect match with this one, where this had a number in the exponent and this has a matrix in the exponent.
OK. No problem. We just use the series for e to the A t. We plug in a matrix instead of a number. So the identity, plus A t, plus 1/2 A t squared, plus 1/6 of A t cubed, forever. It's the same. It's the exponential series. The most important series in mathematics, I think.
And it gives us an answer. And that answer is a matrix. Everything here, every term, is a matrix.
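(As a quick check of that series definition, here is a minimal numerical sketch in Python; the helper name expm_series is my own, and SciPy's expm is used only for comparison.)

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, terms=25):
    """Partial sum I + At + (At)^2/2! + ... of the exponential series."""
    At = A * t
    total = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ At / k       # builds (At)^k / k!
        total = total + term
    return total

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # any small matrix will do
t = 1.5
print(np.allclose(expm_series(A, t), expm(A * t)))   # True: series matches expm
```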
OK. Now, is that the right answer? We check that by putting it into the differential equation. So I want to put that solution into the equation. So I need to take the derivative.
The derivative of this is the derivative of-- that's a constant. The derivative of that is A. The derivative of this is 1/2. I have an A squared, and I have a t squared. The derivative of t squared is 2t, so that'll just be a t. The 2 and the 2 cancel.
OK. Now I have A cubed here. t cubed? The derivative of t cubed is 3 t squared, so I have a t squared. And the 3 cancels into the 6, and leaves 1 over 2 factorial, and so on.
And I look at that. And I say it's very much like the one above. Look. This series is just A times this one. Multiply the top one by A. A times I is A. A times A t is A squared t. Term by term, it just has a factor A.
So the derivative of my matrix exponential is A e to the A t. It brings down an A. Just what we want. Just what we want.
So then if I add a y of 0 in here, that's just a constant vector. I'll have a y of 0. I'll have a y of 0 here. When I put this into the differential equation, it works. It works.
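(A small numerical sketch of that derivative check, using a centered difference; the step size h and the example matrix are arbitrary choices of mine.)

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t, h = 1.0, 1e-6

# Centered difference approximation of d/dt e^{At}
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)

print(np.allclose(deriv, A @ expm(A * t)))   # True: the derivative brings down an A
```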
Now, is it better than what we had before, which was using eigenvalues and eigenvectors? It's better in one way. This exponential, this series, is totally fine whether we have n independent eigenvectors or not. We could have repeated eigenvalues.
I'll do an example. So even with repeated eigenvalues and missing eigenvectors, e to the A t is still the correct answer. Still the correct answer. But if we want to use eigenvalues and eigenvectors to compute e to the A t, because we don't want to add up an infinite series very often, then we would want n independent eigenvectors.
So what am I saying? I'm saying that this e to the A t-- All right, suppose we have n independent eigenvectors. And we know that means, in that case, A is V times lambda times V inverse. And we can write V inverse because the matrix V has the eigenvectors.
This is the eigenvector matrix. If I have n independent eigenvectors, that matrix is invertible. I have that nice formula. And now I can see what e to the A t is.
I'm now going to use the diagonalization, the eigenvectors, and the eigenvalues for A. So I'm doing the good case now, when there is a full set of independent eigenvectors. Then A t is V lambda V inverse, times t. That's right, so e to the A t is I, plus A t, plus 1/2 A t squared. Right?
So I need A squared. So everybody remembers what A squared is. A squared is V lambda V inverse, times V lambda V inverse. And those cancel out to give V lambda squared V inverse, times t squared, and so on.
You remember this A squared, so I'll take that away. And look at what I've got. Yes. Factor V out of the start, and factor V inverse out of the end.
And in here I have V times V inverse is I, so that's fine. V times V inverse, I have a lambda t. V and a V inverse, so I have a 1/2 lambda squared t squared. And so on, times V inverse.
This is all just what we hope for. We expect that a V comes out at the far left, at the front. This V inverse comes out at the far right. And what do you see in the middle?
You see-- so this is now my formula for e to the A t. It's V, then-- what do I have there? I have the exponential series for lambda t. So it's V, e to the lambda t, V inverse.
And what is e to the lambda t? Let's just understand the matrix exponential when the matrix is diagonal, the best possible matrix. So this will be V, then that middle matrix, then V inverse. What does that middle matrix look like? Lambda is diagonal. All these matrices are diagonal, with lambdas. So that'll be e to the lambda 1 t down to e to the lambda n t, on the diagonal.
I'm not doing anything brilliant here. I'm just using the standard diagonalization to produce our exponential from the eigenvector matrix and from the eigenvalues. So I'm just taking the exponentials of the n different eigenvalues.
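(Here is a hedged sketch of that diagonalization route, for an example matrix of my own that does have independent eigenvectors; numpy's eig supplies V and the lambdas.)

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])               # eigenvalues -1 and -2, independent eigenvectors
t = 0.7

lam, V = np.linalg.eig(A)                  # A = V diag(lam) V^{-1}
eAt = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)

print(np.allclose(eAt, expm(A * t)))       # True: e^{At} = V e^{Lambda t} V^{-1}
```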
So e to the A t. This would lead to e to the A t times y of 0, which would be-- y of 0 is some combination. And then there's a C1 e to the lambda 1 t coming from here, times the eigenvector x1, plus C2 e to the lambda 2 t x2, and so on.
That's the solution that we had last time. That's the solution using eigenvalues and eigenvectors.
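(And the same solution can be assembled mode by mode; a short sketch, with the coefficients c coming from solving V c = y(0).)

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
y0 = np.array([1.0, 0.0])
t = 0.7

lam, V = np.linalg.eig(A)
c = np.linalg.solve(V, y0)                          # y(0) = c1 x1 + c2 x2

y = sum(c[i] * np.exp(lam[i] * t) * V[:, i] for i in range(len(lam)))
print(np.allclose(y, expm(A * t) @ y0))             # True: same answer as e^{At} y(0)
```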
Now. Can I get something new here? Something new will be, suppose there is not a full set of n independent eigenvectors. e to the A t is still OK. But this formula is no good. That formula depends on V and V inverse.
So all that is very nice. That's what we expect. But suppose we have an example. We could have a matrix like this one. A equals 0, 1 in the first row, 0, 0 in the second row. Well, here's an extreme case.
What are the eigenvalues of that matrix? It's a triangular matrix, so the eigenvalues are on the diagonal: 0 and 0. The eigenvalue 0 is repeated. It's a double eigenvalue.
And we hope for two eigenvectors, but we don't find them. That has only one line of eigenvectors. It only has x1 equals 1, 0, I think. If I multiply that A times that x1, it gives me 0 times x1. That's an eigenvector.
Well, because the eigenvalue is 0, I'm looking for the null space. That x1 is in the null space, but the null space is only one-dimensional. Only one eigenvector. Missing an eigenvector.
Still, still, I can do e to the A t. That's still completely correct. That series will work.
So to do this series I need to know A squared. I'm actually going to use the series, but you'll see that it cuts off very fast. A squared, if you work that out, is all 0's.
So our e to the A t is just I, plus A t, and then STOP. A squared is all 0's. A cubed is all 0's. So the matrix e to the A t is the identity plus A times t. A is this, and times t is going to put a t there.
There you go. That's a case of the matrix exponential, which would lead us to the solution of the equations. Of course, it's a pretty simple exponential.
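(A quick check of that truncation, taking the matrix on the board to be [[0, 1], [0, 0]], which is my reading of this example.)

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # double eigenvalue 0, only one eigenvector
t = 3.0

print(A @ A)                       # all zeros: the series stops after A t
print(expm(A * t))                 # [[1, t], [0, 1]], the identity plus A t
```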
But it comes from pretty simple equations. The equation dy/dt equals A y, that system of two equations, with that matrix in it. Our system of equations is just dy1/dt equals y2, because I have a 1 there. And dy2/dt equals 0, from the second row.
Well, that's pretty easy to solve. In fact, this tells you how to solve-- you could naturally ask the question, how do we solve differential equations when the matrix doesn't have n eigenvectors?
Here's an example. This matrix has only one eigenvector. But the equation, we just solve it by, you could say, back substitution. The second equation gives y2 equal to a constant. And then the first equation, dy1/dt equal to that constant, gives me y1 equal to t times that constant. That's the t I'm seeing.
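(That back-substitution picture can be checked directly against e^{At} y(0); a minimal sketch with an arbitrary starting vector.)

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
y0 = np.array([5.0, 2.0])          # arbitrary y1(0), y2(0)
t = 3.0

# Back substitution: y2 stays constant, then dy1/dt = y2 makes y1 grow linearly
y2 = y0[1]
y1 = y0[0] + t * y0[1]

print(np.allclose([y1, y2], expm(A * t) @ y0))   # True
```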
Oh. Yeah. Are you surprised to see a t show up here? Normally I don't see a t in matrix exponentials. But in this repeated case, that's the t that we're always seeing when we have repeated solutions.
Everybody remembers that when we have second-order equations and the two exponents are the same, we only get one solution of that form, e to the s t. And we have to look for another one. And that other one is? t e to the s t. It's that same t there.
OK. There is an example of how a matrix with a missing eigenvector, the exponential pops a t in. The exponential pops a t in.
And if I had two missing eigenvectors, then a t squared would show up in the exponential. Shall I just show you an example with two missing eigenvectors?
Let A be, well, here it would be triple 0 on the diagonal, with 1's above, let's say. There's a matrix with three 0 eigenvalues, but only one eigenvector. So it's missing two eigenvectors. And in the end, in e to the A t here, I would see 1, 1, 1 on the diagonal, t, t above, and probably I'll see a 1/2 t squared in the corner.
A little bit like that. But one step worse. Because the triple eigenvalue, well, that's not going to happen very often in reality. But we see what it produces. It produces a t squared as well as the t's.
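(A sketch of that 3 by 3 case, assuming the matrix meant here has zeros on the diagonal and 1's just above it; the exponential then shows the t's and the 1/2 t squared.)

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])    # triple eigenvalue 0, only one eigenvector
t = 2.0

print(expm(A * t))
# [[1, t, t**2/2],
#  [0, 1, t     ],
#  [0, 0, 1     ]]
```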
OK. So, the matrix exponential gives a beautiful, concise, short formula for the solution. And it gives a formula that's correct, even in the case of missing eigenvectors.
Thank you.