Note: 2 lectures, §5.5 in [EP], §7.7 in [BD]

In this section we present a diﬀerent way of ﬁnding a fundamental matrix solution of a system. Suppose that we have the constant coeﬃcient equation
\[ \vec{x}\,' = P\vec{x}, \]
as usual. Now suppose that this was one equation (\(P\) is a number or a \(1 \times 1\) matrix). Then the solution to this would be
\[ \vec{x} = e^{Pt}. \]

The same computation works for matrices when we deﬁne \(e^{Pt}\) properly. First let us write down the Taylor series for \(e^{at}\) for some number \(a\):
\[ e^{at} = 1 + at + \frac{(at)^2}{2} + \frac{(at)^3}{6} + \frac{(at)^4}{24} + \cdots = \sum_{k=0}^{\infty} \frac{(at)^k}{k!} . \]
Recall \(k! = 1 \cdot 2 \cdot 3 \cdots k\) is the factorial, and \(0! = 1\). We diﬀerentiate this series term by term:
\[ \frac{d}{dt}\bigl(e^{at}\bigr) = a + a^2 t + \frac{a^3 t^2}{2} + \frac{a^4 t^3}{6} + \cdots = a \left( 1 + at + \frac{(at)^2}{2} + \frac{(at)^3}{6} + \cdots \right) = a e^{at} . \]

Maybe we can try the same trick with matrices. Suppose that for an \(n \times n\) matrix \(A\) we deﬁne the matrix exponential as
\[ e^{A} \overset{\text{def}}{=} I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 + \cdots + \frac{1}{k!}A^k + \cdots \]
Let us not worry about convergence. The series really does always converge. We usually write \(Pt\) as \(tP\) by convention when \(P\) is a matrix. With this small change and by the exact same calculation as above we have that
\[ \frac{d}{dt}\bigl(e^{tP}\bigr) = P e^{tP} . \]

Now \(P\), and hence \(e^{tP}\), is an \(n \times n\) matrix. What we are looking for is a vector. In the \(1 \times 1\) case we would at this point multiply by an arbitrary constant to get the general solution. In the matrix case we multiply by a column vector \(\vec{c}\).
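For readers who wish to experiment, the series deﬁnition translates directly into a few lines of code. Below is a minimal Python sketch (the helper name `expm_taylor` is ours, not standard) that sums the Taylor series and compares the result against SciPy's built-in matrix exponential `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor(M, n_terms=30):
    """Approximate e^M by summing the Taylor series I + M + M^2/2! + ..."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, n_terms):
        term = term @ M / k            # term is now M^k / k!
        result = result + term
    return result

P = np.array([[1.0, 2.0],
              [2.0, 1.0]])             # an illustrative 2x2 matrix
t = 0.5
approx = expm_taylor(t * P)
exact = expm(t * P)                    # SciPy's matrix exponential
max_err = np.max(np.abs(approx - exact))
```

For matrices of modest norm, a few dozen terms already agree with SciPy to machine precision; truncating the series as an approximation scheme is discussed at the end of this section.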

Theorem 3.8.1. Let \(P\) be an \(n \times n\) matrix. Then the general solution to \(\vec{x}\,' = P\vec{x}\) is
\[ \vec{x} = e^{tP} \vec{c}, \]
where \(\vec{c}\) is an arbitrary constant vector. In fact, \(\vec{x}(0) = \vec{c}\).

Let us check:
\[ \vec{x}\,' = \frac{d}{dt}\bigl(e^{tP}\vec{c}\bigr) = P e^{tP} \vec{c} = P\vec{x} . \]

Hence \(e^{tP}\) is a fundamental matrix solution of the homogeneous system. If we ﬁnd a way to compute the matrix exponential, we will have another method of solving constant coeﬃcient homogeneous systems. It also makes it easy to solve for initial conditions. To solve \(\vec{x}\,' = A\vec{x}\), \(\vec{x}(0) = \vec{b}\), we take the solution
\[ \vec{x} = e^{tA} \vec{b} . \]

This equation follows because \(e^{0A} = I\), so \(\vec{x}(0) = e^{0A}\vec{b} = \vec{b}\).
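The initial-value property is easy to check numerically. In the sketch below (the matrix and initial vector are chosen arbitrarily for illustration), we propagate an initial condition with \(e^{tA}\) and verify both \(\vec{x}(0) = \vec{b}\) and \(\vec{x}\,' = A\vec{x}\) by a ﬁnite difference:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])             # illustrative coefficient matrix
b = np.array([4.0, 2.0])               # initial condition x(0) = b

def x(t):
    """Solution of x' = Ax with x(0) = b."""
    return expm(t * A) @ b

x0 = x(0.0)                            # should reproduce b, since e^{0A} = I

# check x'(t) = A x(t) with a central finite difference
t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
residual = np.max(np.abs(deriv - A @ x(t)))
```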

We mention a drawback of matrix exponentials. In general,
\[ e^{A+B} \neq e^{A} e^{B} . \]
The trouble is that matrices do not commute, that is, in general \(AB \neq BA\). If you try to prove \(e^{A+B} = e^{A}e^{B}\) using the Taylor series, you will see why the lack of commutativity becomes a problem. However, it is still true that if \(AB = BA\), that is, if \(A\) and \(B\) commute, then \(e^{A+B} = e^{A}e^{B}\). We will ﬁnd this fact useful. Let us restate this as a theorem to make a point.

Theorem 3.8.2. If \(AB = BA\), then \(e^{A+B} = e^{A}e^{B}\). Otherwise, \(e^{A+B} \neq e^{A}e^{B}\) in general.
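The commutativity caveat is easy to see numerically. In this sketch (matrices chosen for illustration), \(e^{A+B}\) and \(e^{A}e^{B}\) differ for a non-commuting pair, but agree when one matrix is a scalar multiple of the other, since such matrices always commute:

```python
import numpy as np
from scipy.linalg import expm

# A pair of matrices that do NOT commute (AB != BA):
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])
noncommuting_gap = np.max(np.abs(expm(A + B) - expm(A) @ expm(B)))

# A pair that DOES commute: a matrix and a scalar multiple of itself.
C = np.array([[0.1, 0.2],
              [0.2, 0.1]])
D = 2.0 * C                            # CD = DC
commuting_gap = np.max(np.abs(expm(C + D) - expm(C) @ expm(D)))
```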

In some instances it may work to just plug into the series deﬁnition. Suppose the matrix is diagonal. For example,
\[ D = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} . \]
Then
\[ D^k = \begin{bmatrix} a^k & 0 \\ 0 & b^k \end{bmatrix} , \]
and
\[ e^{D} = I + D + \frac{1}{2}D^2 + \frac{1}{6}D^3 + \cdots = \begin{bmatrix} e^{a} & 0 \\ 0 & e^{b} \end{bmatrix} . \]
So by this rationale we have that
\[ e^{I} = \begin{bmatrix} e & 0 \\ 0 & e \end{bmatrix} \qquad \text{and} \qquad e^{aI} = \begin{bmatrix} e^{a} & 0 \\ 0 & e^{a} \end{bmatrix} . \]

This makes exponentials of certain other matrices easy to compute. Notice for example that the matrix \(A = \begin{bmatrix} 5 & 4 \\ -1 & 1 \end{bmatrix}\) can be written as \(3I + B\), where \(B = \begin{bmatrix} 2 & 4 \\ -1 & -2 \end{bmatrix}\). Notice that \(B^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\). So \(B^k = 0\) for all \(k \geq 2\). Therefore, \(e^{B} = I + B\). Suppose we actually want to compute \(e^{tA}\). The matrices \(3tI\) and \(tB\) commute (exercise: check this) and \(e^{tB} = I + tB\), since \((tB)^2 = t^2 B^2 = 0\). We write
\[ e^{tA} = e^{3tI + tB} = e^{3tI} e^{tB} = \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{3t} \end{bmatrix} (I + tB) = \begin{bmatrix} (1+2t)\,e^{3t} & 4t\,e^{3t} \\ -t\,e^{3t} & (1-2t)\,e^{3t} \end{bmatrix} . \]

Exercise 3.8.1: Suppose that \(A\) is a \(2 \times 2\) matrix and 3 is the only eigenvalue. Show that \((A - 3I)^2 = 0\), and therefore that we can write \(A = 3I + B\), where \(B^2 = 0\). Hint: First write down what it means for the eigenvalue 3 to be of multiplicity 2. You will get an equation for the entries of \(A\). Now compute the square of \(B = A - 3I\).

Matrices \(B\) such that \(B^k = 0\) for some \(k\) are called nilpotent. Computation of the matrix exponential for nilpotent matrices is easy: just write down the ﬁrst \(k\) terms of the Taylor series.
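The nilpotent trick above can be veriﬁed numerically. A sketch, using a \(2 \times 2\) matrix with the double eigenvalue 3 (the particular entries are for illustration): subtracting \(3I\) leaves a nilpotent \(B\), and the closed form \(e^{tA} = e^{3t}(I + tB)\) matches SciPy's answer:

```python
import numpy as np
from scipy.linalg import expm

# An illustrative 2x2 matrix with the double eigenvalue 3:
A = np.array([[5.0, 4.0],
              [-1.0, 1.0]])
B = A - 3.0 * np.eye(2)                # B should be nilpotent: B^2 = 0

B2_norm = np.max(np.abs(B @ B))

# Since 3tI and tB commute, e^{tA} = e^{3t} (I + tB).
t = 0.8
closed_form = np.exp(3 * t) * (np.eye(2) + t * B)
gap = np.max(np.abs(expm(t * A) - closed_form))
```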

In general, the exponential is not as easy to compute as above. We usually cannot write a matrix as a sum of commuting matrices where the exponential is simple for each one. But fear not, it is still not too diﬃcult provided we can ﬁnd enough eigenvectors. First we need the following interesting result about matrix exponentials. For two square matrices \(A\) and \(B\), with \(B\) invertible, we have
\[ e^{BAB^{-1}} = B e^{A} B^{-1} . \]

This can be seen by writing down the Taylor series. First,
\[ (BAB^{-1})^2 = BAB^{-1}BAB^{-1} = BAIAB^{-1} = BA^2B^{-1} . \]

And by the same reasoning \((BAB^{-1})^k = BA^kB^{-1}\). Now write down the Taylor series for \(e^{BAB^{-1}}\):
\[ e^{BAB^{-1}} = I + BAB^{-1} + \frac{1}{2}(BAB^{-1})^2 + \frac{1}{6}(BAB^{-1})^3 + \cdots = BB^{-1} + BAB^{-1} + \frac{1}{2}BA^2B^{-1} + \frac{1}{6}BA^3B^{-1} + \cdots = B \left( I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 + \cdots \right) B^{-1} = B e^{A} B^{-1} . \]

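The identity \(e^{BAB^{-1}} = Be^{A}B^{-1}\) can likewise be spot-checked; in this sketch the matrices are arbitrary, with \(B\) invertible:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])             # arbitrary square matrix
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])             # invertible (det = 1)
Binv = np.linalg.inv(B)

left = expm(B @ A @ Binv)              # e^{B A B^{-1}}
right = B @ expm(A) @ Binv             # B e^{A} B^{-1}
gap = np.max(np.abs(left - right))
```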
Given a square matrix \(A\), we can sometimes write \(A = EDE^{-1}\), where \(D\) is diagonal and \(E\) invertible. This procedure is called diagonalization. If we can do that, the computation of the exponential becomes easy. Adding \(t\) into the mix, we can then easily compute the exponential
\[ e^{tA} = E e^{tD} E^{-1} . \]

To diagonalize \(A\) we will need \(n\) linearly independent eigenvectors of \(A\). Otherwise this method of computing the exponential does not work and we need to be trickier, but we will not get into such details. We let \(E\) be the matrix with the eigenvectors as columns. Let \(\lambda_1\), \(\lambda_2\), …, \(\lambda_n\) be the eigenvalues and let \(\vec{v}_1\), \(\vec{v}_2\), …, \(\vec{v}_n\) be the eigenvectors, then \(E = [\,\vec{v}_1 \;\; \vec{v}_2 \;\; \cdots \;\; \vec{v}_n\,]\). Let \(D\) be the diagonal matrix with the eigenvalues on the main diagonal. That is,
\[ D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} . \]

We compute
\[ AE = A[\,\vec{v}_1 \;\; \vec{v}_2 \;\; \cdots \;\; \vec{v}_n\,] = [\,A\vec{v}_1 \;\; A\vec{v}_2 \;\; \cdots \;\; A\vec{v}_n\,] = [\,\lambda_1\vec{v}_1 \;\; \lambda_2\vec{v}_2 \;\; \cdots \;\; \lambda_n\vec{v}_n\,] = [\,\vec{v}_1 \;\; \vec{v}_2 \;\; \cdots \;\; \vec{v}_n\,]D = ED . \]

The columns of \(E\) are linearly independent as these are linearly independent eigenvectors of \(A\). Hence \(E\) is invertible. Since \(AE = ED\), we multiply on the right by \(E^{-1}\) and we get
\[ A = EDE^{-1} . \]

This means that \(e^{A} = Ee^{D}E^{-1}\). Multiplying the matrix by \(t\) we obtain
\[ e^{tA} = Ee^{tD}E^{-1} = E \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix} E^{-1} . \tag{3.4} \]

Formula (3.4), therefore, gives us a way to compute the fundamental matrix solution \(e^{tA}\) for the system \(\vec{x}\,' = A\vec{x}\), in the case where we have \(n\) linearly independent eigenvectors.
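Formula (3.4) translates directly into a few lines of code. The sketch below diagonalizes with `numpy.linalg.eig` (which returns the eigenvalues and a matrix \(E\) of eigenvector columns) and compares \(Ee^{tD}E^{-1}\) against `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])             # illustrative matrix with 2 independent eigenvectors

eigvals, E = np.linalg.eig(A)          # E has the eigenvectors as columns
t = 0.5
etD = np.diag(np.exp(t * eigvals))     # e^{tD}: exponentiate the diagonal entries
via_diag = E @ etD @ np.linalg.inv(E)  # formula (3.4)
gap = np.max(np.abs(via_diag - expm(t * A)))
```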

Notice that this computation still works when the eigenvalues and eigenvectors are complex, though then you will have to compute with complex numbers. It is clear from the deﬁnition that if \(A\) is real, then \(e^{tA}\) is real. So you will only need complex numbers in the intermediate computation, and you may need to apply Euler's formula to simplify the result. If simpliﬁed properly, the ﬁnal matrix will not have any complex numbers in it.
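To illustrate this point, consider the sketch below: the matrix has eigenvalues \(\pm i\), so the diagonalization route passes through complex arithmetic, yet the result is the real rotation matrix that Euler's formula predicts:

```python
import numpy as np
from scipy.linalg import expm

# Eigenvalues are +i and -i, yet e^{tA} is a real rotation matrix.
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])
t = np.pi / 3
R = expm(t * A)
expected = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
gap = np.max(np.abs(R - expected))

# The diagonalization route goes through complex numbers ...
eigvals, E = np.linalg.eig(A)
via_diag = E @ np.diag(np.exp(t * eigvals)) @ np.linalg.inv(E)
imag_residue = np.max(np.abs(via_diag.imag))   # ... but the imaginary parts cancel
```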

Example 3.8.1: Compute the fundamental matrix solution using the matrix exponential for the system
\[ \begin{bmatrix} x \\ y \end{bmatrix}' = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} . \]

Then compute the particular solution for the initial conditions \(x(0) = 4\) and \(y(0) = 2\).

Let \(A\) be the coeﬃcient matrix \(\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\). We ﬁrst compute (exercise) that the eigenvalues are 3 and \(-1\) and corresponding eigenvectors are \(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\) and \(\begin{bmatrix} 1 \\ -1 \end{bmatrix}\). Hence we write
\[ e^{tA} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \frac{-1}{2} \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} e^{3t}+e^{-t} & e^{3t}-e^{-t} \\ e^{3t}-e^{-t} & e^{3t}+e^{-t} \end{bmatrix} . \]

The initial conditions are \(x(0) = 4\) and \(y(0) = 2\). Hence, by the property that \(e^{0A} = I\), the particular solution we are looking for is \(e^{tA}\vec{b}\), where \(\vec{b} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}\). The particular solution is therefore
\[ \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{2} \begin{bmatrix} e^{3t}+e^{-t} & e^{3t}-e^{-t} \\ e^{3t}-e^{-t} & e^{3t}+e^{-t} \end{bmatrix} \begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 3e^{3t}+e^{-t} \\ 3e^{3t}-e^{-t} \end{bmatrix} . \]

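A sketch to double-check the worked example numerically, assuming the coeﬃcient matrix \(\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\) and initial condition \((4, 2)\): it rebuilds the closed-form \(e^{tA}\) and the particular solution and compares both against SciPy:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([4.0, 2.0])

def etA(t):
    """Closed form (1/2)[[e^{3t}+e^{-t}, e^{3t}-e^{-t}], [e^{3t}-e^{-t}, e^{3t}+e^{-t}]]."""
    p = np.exp(3 * t) + np.exp(-t)
    m = np.exp(3 * t) - np.exp(-t)
    return 0.5 * np.array([[p, m],
                           [m, p]])

t = 0.3
gap = np.max(np.abs(etA(t) - expm(t * A)))     # closed form vs. SciPy
x = etA(t) @ b                                 # particular solution at time t
expected = np.array([3 * np.exp(3 * t) + np.exp(-t),
                     3 * np.exp(3 * t) - np.exp(-t)])
err = np.max(np.abs(x - expected))
```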
We note that if you can compute the fundamental matrix solution in a diﬀerent way, you can use this to ﬁnd the matrix exponential \(e^{tA}\). The fundamental matrix solution of a system of ODEs is not unique. The exponential is the fundamental matrix solution with the property that for \(t = 0\) we get the identity matrix. So we must ﬁnd the right fundamental matrix solution. Let \(X(t)\) be any fundamental matrix solution to \(\vec{x}\,' = A\vec{x}\). Then we claim
\[ e^{tA} = X(t)\,[X(0)]^{-1} . \]

Clearly, if we plug \(t = 0\) into \(X(t)\,[X(0)]^{-1}\), we get the identity. We can multiply a fundamental matrix solution on the right by any constant invertible matrix and we still get a fundamental matrix solution. All we are doing is changing what the arbitrary constants are in the general solution \(\vec{x}(t) = X(t)\,\vec{c}\).
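The claim \(e^{tA} = X(t)\,[X(0)]^{-1}\) is easy to test: build a fundamental matrix whose columns are eigenvector solutions \(e^{\lambda t}\vec{v}\), so that \(X(0)\) is not the identity, then normalize. A sketch with an illustrative matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])   # eigenvalues 3 and -1, eigenvectors [1,1] and [1,-1]

def X(t):
    """Fundamental matrix whose columns are eigenvector solutions e^{lambda t} v."""
    return np.column_stack([np.exp(3 * t) * np.array([1.0, 1.0]),
                            np.exp(-t) * np.array([1.0, -1.0])])

# X(0) is not the identity, but X(t) X(0)^{-1} recovers e^{tA}:
t = 0.4
gap = np.max(np.abs(X(t) @ np.linalg.inv(X(0.0)) - expm(t * A)))
```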

If you think about it, the computation of any fundamental matrix solution using the eigenvalue method is just as diﬃcult as the computation of \(e^{tA}\). So perhaps we did not gain much by this new tool. However, the Taylor series expansion actually gives us a way to approximate solutions, which the eigenvalue method did not.

The simplest thing we can do is to just compute the series up to a certain number of terms. There are better ways to approximate the exponential^{1}. In many cases, however, a few terms of the Taylor series give a reasonable approximation for the exponential and may suﬃce for the application. For example, let us compute the ﬁrst 4 terms of the series for the matrix \(A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\):
\[ e^{tA} \approx I + tA + \frac{t^2}{2}A^2 + \frac{t^3}{6}A^3 = I + t \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} + t^2 \begin{bmatrix} \frac{5}{2} & 2 \\ 2 & \frac{5}{2} \end{bmatrix} + t^3 \begin{bmatrix} \frac{13}{6} & \frac{7}{3} \\ \frac{7}{3} & \frac{13}{6} \end{bmatrix} = \begin{bmatrix} 1+t+\frac{5}{2}t^2+\frac{13}{6}t^3 & 2t+2t^2+\frac{7}{3}t^3 \\ 2t+2t^2+\frac{7}{3}t^3 & 1+t+\frac{5}{2}t^2+\frac{13}{6}t^3 \end{bmatrix} . \]

Plugging \(t = 0.1\) into our approximation we get
\[ \approx \begin{bmatrix} 1.12716667 & 0.22233333 \\ 0.22233333 & 1.12716667 \end{bmatrix} . \]
And plugging \(t = 0.1\) into the real solution (rounded to 8 decimal places) we get
\[ e^{0.1\,A} = \begin{bmatrix} 1.12734811 & 0.22251069 \\ 0.22251069 & 1.12734811 \end{bmatrix} . \]

Not bad at all! Although if we take the same approximation for \(t = 1\) we get
\[ I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 = \begin{bmatrix} 6.66666667 & 6.33333333 \\ 6.33333333 & 6.66666667 \end{bmatrix} , \]

while the real value is (again rounded to 8 decimal places)
\[ e^{A} = \begin{bmatrix} 10.22670818 & 9.85882874 \\ 9.85882874 & 10.22670818 \end{bmatrix} . \]

So the approximation is not very good once we get up to \(t = 1\). To get a good approximation at \(t = 1\) (say up to 2 decimal places) we would need to go up to the 11^{th} power (exercise).
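The number of terms needed for a given accuracy can be found by brute force. The sketch below, using the same matrix as in the approximation above, increases the truncation order until every entry agrees with \(e^{A}\) to 2 decimal places:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

def taylor(M, order):
    """Partial sum of the exponential series up to M^order / order!."""
    total = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, order + 1):
        term = term @ M / k
        total = total + term
    return total

exact = expm(A)                        # the true e^{A}, i.e. t = 1

# smallest power for which every entry matches e^A to 2 decimal places
order = next(n for n in range(1, 40)
             if np.all(np.round(taylor(A, n), 2) == np.round(exact, 2)))
```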

Exercise 3.8.4: Find a fundamental matrix solution for the system \(x_1' = 7x_1 + 4x_2 + 12x_3\), \(x_2' = x_1 + 2x_2 + x_3\), \(x_3' = -3x_1 - 2x_2 - 5x_3\). Then ﬁnd the solution that satisﬁes \(\vec{x}(0) = \begin{bmatrix} 0 \\ 1 \\ -2 \end{bmatrix}\).

Exercise 3.8.7: Use Exercise 3.8.6 to show that \((e^{A})^{-1} = e^{-A}\). In particular this means that \(e^{A}\) is invertible even if \(A\) is not.

Exercise 3.8.8: Suppose \(A\) is a \(2 \times 2\) matrix with eigenvalues \(-1\), \(1\), and corresponding eigenvectors \(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\), \(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\). a) Find the matrix \(A\) with these properties. b) Find the fundamental matrix solution to \(\vec{x}\,' = A\vec{x}\). c) Solve the system with initial conditions \(\vec{x}(0) = \begin{bmatrix} 2 \\ 3 \end{bmatrix}\).

Exercise 3.8.9: Suppose that \(A\) is an \(n \times n\) matrix with a repeated eigenvalue \(\lambda\) of multiplicity \(n\). Suppose that there are \(n\) linearly independent eigenvectors. Show that the matrix \(A\) is diagonal, in particular \(A = \lambda I\). Hint: Use diagonalization and the fact that the identity matrix commutes with every other matrix.

Exercise 3.8.104: Compute the ﬁrst 3 terms (up to the second degree) of the Taylor expansion of \(e^{tA}\), where \(A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\). Write it as a single matrix. Then use it to approximate \(e^{0.1A}\).

^{1}C. Moler and C.F. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review 45 (1), 2003, 3–49