\(\require{cancel}\newcommand{\nicefrac}[2]{{{}^{#1}}\!/\!{{}_{#2}}} \newcommand{\unitfrac}[3][\!\!]{#1 \,\, {{}^{#2}}\!/\!{{}_{#3}}} \newcommand{\unit}[2][\!\!]{#1 \,\, #2} \newcommand{\noalign}[1]{} \newcommand{\qed}{\qquad \Box} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 3.8 Matrix exponentials

2 lectures, §5.5 in [EP], §7.7 in [BD]

Subsection 3.8.1 Definition

In this section we present a different way of finding a fundamental matrix solution of a system. Suppose that we have the constant coefficient equation

\begin{equation*} {\vec{x}\,}' = P \vec{x} , \end{equation*}

as usual. Now suppose for a moment that this is a single equation (\(P\) is a number, or a \(1 \times 1\) matrix). Then a solution would be

\begin{equation*} \vec{x} = e^{Pt} . \end{equation*}

The same computation works for matrices when we define \(e^{Pt}\) properly. First let us write down the Taylor series for \(e^{at}\) for some number \(a\text{:}\)

\begin{equation*} e^{at} = 1 + at + \frac{{(at)}^2}{2} + \frac{{(at)}^3}{6} + \frac{{(at)}^4}{24} + \cdots = \sum_{k=0}^\infty \frac{{(at)}^k}{k!} . \end{equation*}

Recall \(k! = 1 \cdot 2 \cdot 3 \cdots k\) is the factorial, and \(0! = 1\text{.}\) We differentiate this series term by term

\begin{equation*} \frac{d}{dt} \left(e^{at} \right) = a + a^2 t + \frac{a^3t^2}{2} + \frac{a^4t^3}{6} + \cdots = a \left( 1 + a t + \frac{{(at)}^2}{2} + \frac{{(at)}^3}{6} + \cdots \right) = a e^{at}. \end{equation*}

Maybe we can try the same trick with matrices. Suppose that for an \(n \times n\) matrix \(A\) we define the matrix exponential as

\begin{equation*} \boxed{~~ e^A \overset{\text{def}}{=} I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \cdots + \frac{1}{k!} A^k + \cdots ~~} \end{equation*}

Let us not worry about convergence. The series really does always converge. We usually write \(Pt\) as \(tP\) by convention when \(P\) is a matrix. With this small change and by the exact same calculation as above we have that

\begin{equation*} \frac{d}{dt} \left(e^{tP} \right) = P e^{tP} . \end{equation*}

Now \(P\) and hence \(e^{tP}\) is an \(n \times n\) matrix. What we are looking for is a vector. In the \(1 \times 1\) case we would at this point multiply by an arbitrary constant to get the general solution. In the matrix case we multiply by a column vector \(\vec{c}\text{.}\)

Let us check:

\begin{equation*} \frac{d}{dt} \vec{x} = \frac{d}{dt} \left( e^{tP} \vec{c}\, \right) = P e^{tP} \vec{c} = P \vec{x}. \end{equation*}

Hence \(e^{tP}\) is a fundamental matrix solution of the homogeneous system. If we find a way to compute the matrix exponential, we will have another method of solving constant coefficient homogeneous systems. It also makes it easy to solve for initial conditions. To solve \({\vec{x}\,}' = A \vec{x}\text{,}\) \(\vec{x}(0) = \vec{b}\text{,}\) we take the solution

\begin{equation*} \vec{x} = e^{tA} \vec{b} . \end{equation*}

This equation follows because \(e^{0A} = I\text{,}\) so \(\vec{x} (0) = e^{0A} \vec{b} = \vec{b}\text{.}\)
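We can sanity-check this claim numerically. The sketch below uses SciPy's `expm` to evaluate \(e^{tA}\text{;}\) the particular matrix and vector are made up for the check, and the ODE is verified at a sample time by a finite difference.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative data (not from the text): any square A and vector b work.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([1.0, 0.0])

def x(t):
    # The claimed solution x(t) = e^{tA} b.
    return expm(t * A) @ b

# Initial condition: e^{0A} = I, so x(0) = b.
assert np.allclose(x(0.0), b)

# The ODE x' = A x, checked at a sample time by a centered difference.
t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-6)
```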

We mention a drawback of matrix exponentials. In general \(e^{A+B} \not= e^A e^B\text{.}\) The trouble is that matrices do not commute, that is, in general \(AB \not= BA\text{.}\) If you try to prove \(e^{A+B} \not= e^A e^B\) using the Taylor series, you will see why the lack of commutativity becomes a problem. However, it is still true that if \(AB = BA\text{,}\) that is, if \(A\) and \(B\) commute, then \(e^{A+B} = e^Ae^B\text{.}\) We will find this fact useful.
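A quick numerical illustration of this caveat, using SciPy's `expm` (the matrices below are chosen only to exhibit the behavior):

```python
import numpy as np
from scipy.linalg import expm

# These two matrices do not commute ...
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(A @ B, B @ A)
# ... and indeed e^{A+B} differs from e^A e^B.
assert not np.allclose(expm(A + B), expm(A) @ expm(B))

# Any matrix commutes with a multiple of the identity,
# so there the two expressions agree.
C = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(expm(C + 2 * np.eye(2)), expm(C) @ expm(2 * np.eye(2)))
```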

Subsection 3.8.2 Simple cases

In some instances it may work to just plug into the series definition. Suppose the matrix is diagonal. For example, \(D = \left[ \begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix} \right]\text{.}\) Then

\begin{equation*} D^k = \begin{bmatrix} a^k & 0 \\ 0 & b^k \end{bmatrix} , \end{equation*}

and

\begin{equation*} e^D = I + D + \frac{1}{2} D^2 + \frac{1}{6} D^3 + \cdots = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} + \frac{1}{2} \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix} + \frac{1}{6} \begin{bmatrix} a^3 & 0 \\ 0 & b^3 \end{bmatrix} + \cdots = \begin{bmatrix} e^a & 0 \\ 0 & e^b \end{bmatrix} . \end{equation*}

So by this rationale we have that

\begin{equation*} e^I = \begin{bmatrix} e & 0\\ 0 & e \end{bmatrix} \qquad \text{and} \qquad e^{aI} = \begin{bmatrix} e^a & 0\\ 0 & e^a \end{bmatrix}. \end{equation*}

This makes exponentials of certain other matrices easy to compute. For example, the matrix \(A = \left[ \begin{smallmatrix} 5 & 4 \\ -1 & 1 \end{smallmatrix} \right]\) can be written as \(3I + B\text{,}\) where \(B = \left[ \begin{smallmatrix} 2 & 4 \\ -1 & -2 \end{smallmatrix} \right]\text{.}\) Notice that \(B^2 = \left[ \begin{smallmatrix} 0 & 0 \\ 0 & 0 \end{smallmatrix} \right]\text{,}\) so \(B^k = 0\) for all \(k \geq 2\text{.}\) Therefore, \(e^B = I + B\text{.}\) Suppose we actually want to compute \(e^{tA}\text{.}\) The matrices \(3tI\) and \(tB\) commute (exercise: check this) and \(e^{tB} = I + tB\text{,}\) since \({(tB)}^2 = t^2 B^2 = 0\text{.}\) We write

\begin{multline*} e^{tA} = e^{3tI + tB} = e^{3tI} e^{tB} = \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{3t} \end{bmatrix} \left( I + tB \right) = \\ = \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{3t} \end{bmatrix} \begin{bmatrix} 1+2t & 4t \\ -t & 1-2t \end{bmatrix} = \begin{bmatrix} (1+2t)\,e^{3t} & 4te^{3t} \\ -te^{3t} & (1-2t)\,e^{3t} \end{bmatrix} . \end{multline*}

So we have found a fundamental matrix solution for the system \({\vec{x}\,}' = A \vec{x}\text{.}\) Note that this matrix has a repeated eigenvalue with a defect; there is only one eigenvector for the eigenvalue 3. So we have found a perhaps easier way to handle this case. In fact, if a matrix \(A\) is \(2 \times 2\) and has an eigenvalue \(\lambda\) of multiplicity 2, then either \(A\) is diagonal, or \(A = \lambda I + B\) where \(B^2 = 0\text{.}\) This is a good exercise.
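The computation above is easy to confirm numerically; the following sketch checks the closed form \(e^{tA} = e^{3t}(I + tB)\) against SciPy's general-purpose `expm`.

```python
import numpy as np
from scipy.linalg import expm

# The matrix from the text: A = 3I + B with B nilpotent.
A = np.array([[5.0, 4.0], [-1.0, 1.0]])
B = A - 3 * np.eye(2)
assert np.allclose(B @ B, np.zeros((2, 2)))  # B^2 = 0

# e^{tA} = e^{3t} (I + tB), since 3tI and tB commute and (tB)^2 = 0.
for t in [0.0, 0.5, 2.0]:
    closed_form = np.exp(3 * t) * (np.eye(2) + t * B)
    assert np.allclose(closed_form, expm(t * A))
```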

Exercise 3.8.1

Suppose that \(A\) is \(2 \times 2\) and \(\lambda\) is the only eigenvalue. Show that \({(A - \lambda I)}^2 = 0\text{,}\) and therefore that we can write \(A = \lambda I + B\text{,}\) where \(B^2 = 0\text{.}\) Hint: First write down what it means for the eigenvalue to be of multiplicity 2. You will get an equation for the entries. Then compute the square of \(B\text{.}\)

Matrices \(B\) such that \(B^k = 0\) for some \(k\) are called nilpotent. Computation of the matrix exponential for nilpotent matrices is easy by just writing down the first \(k\) terms of the Taylor series.

Subsection 3.8.3 General matrices

In general, the exponential is not as easy to compute as above. We usually cannot write a matrix as a sum of commuting matrices where the exponential is simple for each one. But fear not, it is still not too difficult provided we can find enough eigenvectors. First we need the following interesting result about matrix exponentials. For two square matrices \(A\) and \(B\text{,}\) with \(B\) invertible, we have

\begin{equation*} e^{BAB^{-1}} = B e^A B^{-1} . \end{equation*}

This can be seen by writing down the Taylor series. First

\begin{equation*} {(BAB^{-1})}^2 = BAB^{-1} BAB^{-1} = BAIAB^{-1} = BA^2B^{-1} . \end{equation*}

And by the same reasoning \({(BAB^{-1})}^k = B A^k B^{-1}\text{.}\) Now write down the Taylor series for \(e^{BAB^{-1}}\text{:}\)

\begin{equation*} \begin{split} e^{BAB^{-1}} & = I + {BAB^{-1}} + \frac{1}{2} {(BAB^{-1})}^2 + \frac{1}{6} {(BAB^{-1})}^3 + \cdots \\ & = BB^{-1} + {BAB^{-1}} + \frac{1}{2} BA^2B^{-1} + \frac{1}{6} BA^3B^{-1} + \cdots \\ & = B \bigl( I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \cdots \bigr) B^{-1} \\ & = B e^A B^{-1} . \end{split} \end{equation*}
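This identity is also easy to test numerically with SciPy's `expm` (the matrices below are arbitrary, with \(B\) invertible):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # arbitrary square matrix
B = np.array([[2.0, 1.0], [1.0, 1.0]])    # arbitrary invertible matrix
Binv = np.linalg.inv(B)

# e^{B A B^{-1}} = B e^A B^{-1}
assert np.allclose(expm(B @ A @ Binv), B @ expm(A) @ Binv)
```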

Given a square matrix \(A\text{,}\) we can sometimes write \(A = E D E^{-1}\text{,}\) where \(D\) is diagonal and \(E\) invertible. This procedure is called diagonalization. If we can do that, the computation of the exponential becomes easy. Adding \(t\) into the mix, we can then easily compute the exponential

\begin{equation*} e^{tA} = E e^{tD} E^{-1} . \end{equation*}

To diagonalize \(A\) we will need \(n\) linearly independent eigenvectors of \(A\text{.}\) Otherwise this method of computing the exponential does not work and we need to be trickier, but we will not get into such details. We let \(E\) be the matrix with the eigenvectors as columns. Let \(\lambda_1, \lambda_2, \ldots, \lambda_n\) be the eigenvalues and let \(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\) be the eigenvectors, so \(E = [\, \vec{v}_1 \quad \vec{v}_2 \quad \cdots \quad \vec{v}_n \,]\text{.}\) Let \(D\) be the diagonal matrix with the eigenvalues on the main diagonal. That is

\begin{equation*} D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} . \end{equation*}

We compute

\begin{equation*} \begin{split} AE & = A [\, \vec{v}_1 \quad \vec{v}_2 \quad \cdots \quad \vec{v}_n \,] \\ & = [\, A\vec{v}_1 \quad A\vec{v}_2 \quad \cdots \quad A\vec{v}_n \,] \\ & = [\, \lambda_1 \vec{v}_1 \quad \lambda_2 \vec{v}_2 \quad \cdots \quad \lambda_n \vec{v}_n \,] \\ & = [\, \vec{v}_1 \quad \vec{v}_2 \quad \cdots \quad \vec{v}_n \,] D \\ & = ED . \end{split} \end{equation*}

The columns of \(E\) are linearly independent as these are linearly independent eigenvectors of \(A\text{.}\) Hence \(E\) is invertible. Since \(AE = ED\text{,}\) we multiply on the right by \(E^{-1}\) and we get

\begin{equation*} A = E D E^{-1}. \end{equation*}

This means that \(e^A = E e^D E^{-1}\text{.}\) Multiplying the matrix by \(t\) we obtain

\begin{equation} \boxed{~~ e^{tA} = Ee^{tD}E^{-1} = E \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix} E^{-1} . ~~}\label{matexp_diagfundsol}\tag{4} \end{equation}

Formula (4), therefore, gives a fundamental matrix solution \(e^{tA}\) for the system \({\vec{x}\,}' = A \vec{x}\text{,}\) in the case where we have \(n\) linearly independent eigenvectors.

Notice that this computation still works when the eigenvalues and eigenvectors are complex, though then you will have to compute with complex numbers. It is clear from the definition that if \(A\) is real, then \(e^{tA}\) is real. So you will only need complex numbers in the computation and you may need to apply Euler's formula to simplify the result. If simplified properly the final matrix will not have any complex numbers in it.
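In code, the diagonalization recipe is only a few lines. The sketch below computes \(E\) and \(D\) with NumPy's eigendecomposition and checks formula (4) against SciPy's `expm`; it assumes \(A\) has \(n\) linearly independent eigenvectors.

```python
import numpy as np
from scipy.linalg import expm

def expm_via_diagonalization(A, t):
    # Assumes A has n linearly independent eigenvectors.
    lam, E = np.linalg.eig(A)  # eigenvalues, eigenvectors as columns of E
    # E diag(e^{lambda_k t}) E^{-1}; complex eigenvalues are fine,
    # for real A the imaginary parts cancel in the product.
    return E @ np.diag(np.exp(lam * t)) @ np.linalg.inv(E)

A = np.array([[1.0, 2.0], [2.0, 1.0]])  # the matrix from Example 3.8.1
assert np.allclose(expm_via_diagonalization(A, 0.3), expm(0.3 * A))
```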

Example 3.8.1

Compute a fundamental matrix solution using the matrix exponentials for the system

\begin{equation*} \begin{bmatrix} x \\ y \end{bmatrix} ' = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} . \end{equation*}

Then compute the particular solution for the initial conditions \(x(0) = 4\) and \(y(0) = 2\text{.}\)

Let \(A\) be the coefficient matrix \(\left[ \begin{smallmatrix} 1 & 2 \\ 2 & 1 \end{smallmatrix} \right]\text{.}\) We first compute (exercise) that the eigenvalues are 3 and \(-1\) and corresponding eigenvectors are \(\left[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \right]\) and \(\left[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right]\text{.}\) Hence we write

\begin{equation*} \begin{split} e^{t A} & = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} \\ & = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \frac{-1}{2} \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} \\ & = \frac{-1}{2} \begin{bmatrix} e^{3t} & e^{-t} \\ e^{3t} & -e^{-t} \end{bmatrix} \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} \\ & = \frac{-1}{2} \begin{bmatrix} -e^{3t}-e^{-t} & -e^{3t}+e^{-t} \\ -e^{3t}+e^{-t} & -e^{3t}-e^{-t} \end{bmatrix} = \begin{bmatrix} \frac{e^{3t}+e^{-t}}{2} & \frac{e^{3t}-e^{-t}}{2} \\ \frac{e^{3t}-e^{-t}}{2} & \frac{e^{3t}+e^{-t}}{2} \end{bmatrix} . \end{split} \end{equation*}

The initial conditions are \(x(0) = 4\) and \(y(0) = 2\text{.}\) Hence, by the property that \(e^{0A} = I\text{,}\) the particular solution we are looking for is \(e^{tA} \vec{b}\text{,}\) where \(\vec{b} = \left[ \begin{smallmatrix} 4 \\ 2 \end{smallmatrix} \right]\text{.}\) That is,

\begin{equation*} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{e^{3t}+e^{-t}}{2} & \frac{e^{3t}-e^{-t}}{2} \\ \frac{e^{3t}-e^{-t}}{2} & \frac{e^{3t}+e^{-t}}{2} \end{bmatrix} \begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 2e^{3t}+2e^{-t} + e^{3t}-e^{-t} \\ 2e^{3t}-2e^{-t} + e^{3t}+e^{-t} \end{bmatrix} = \begin{bmatrix} 3e^{3t}+e^{-t} \\ 3e^{3t}-e^{-t} \end{bmatrix} . \end{equation*}
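As a check, the following sketch confirms numerically that \(e^{tA}\left[\begin{smallmatrix}4\\2\end{smallmatrix}\right]\) agrees with the closed form just computed, using SciPy's `expm`.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 2.0])

# The closed-form particular solution [3 e^{3t} + e^{-t}, 3 e^{3t} - e^{-t}].
for t in [0.0, 0.5, 1.3]:
    closed_form = np.array([3 * np.exp(3 * t) + np.exp(-t),
                            3 * np.exp(3 * t) - np.exp(-t)])
    assert np.allclose(expm(t * A) @ b, closed_form)
```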

Subsection 3.8.4 Fundamental matrix solutions

We note that if you can compute a fundamental matrix solution in a different way, you can use this to find the matrix exponential \(e^{tA}\text{.}\) A fundamental matrix solution of a system of ODEs is not unique. The exponential is the fundamental matrix solution with the property that for \(t=0\) we get the identity matrix. So we must find the right fundamental matrix solution. Let \(X\) be any fundamental matrix solution to \({\vec{x}\,}' = A \vec{x}\text{.}\) Then we claim

\begin{equation*} e^{tA} = X(t) \left[ X(0) \right]^{-1} . \end{equation*}

Clearly, if we plug \(t=0\) into \(X(t) \left[ X(0) \right]^{-1}\) we get the identity. We can multiply a fundamental matrix solution on the right by any constant invertible matrix and we still get a fundamental matrix solution. All we are doing is changing what the arbitrary constants are in the general solution \(\vec{x}(t) = X(t)\, \vec{c}\text{.}\)
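Numerically, we can build a fundamental matrix \(X(t)\) by the eigenvalue method and check that \(X(t) {\left[ X(0) \right]}^{-1}\) reproduces \(e^{tA}\text{.}\) The sketch below uses the eigenpairs of Example 3.8.1 and compares against SciPy's `expm`.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [2.0, 1.0]])

def X(t):
    # Fundamental matrix from the eigenvalue method: the columns are
    # the eigenvector solutions e^{3t} [1,1] and e^{-t} [1,-1].
    return np.column_stack([np.exp(3 * t) * np.array([1.0, 1.0]),
                            np.exp(-t) * np.array([1.0, -1.0])])

t = 0.8
assert np.allclose(X(t) @ np.linalg.inv(X(0.0)), expm(t * A))
```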

Subsection 3.8.5 Approximations

If you think about it, the computation of any fundamental matrix solution \(X\) using the eigenvalue method is just as difficult as the computation of \(e^{tA}\text{.}\) So perhaps we did not gain much by this new tool. However, the Taylor series expansion actually gives us a way to approximate solutions, which the eigenvalue method did not.

The simplest thing we can do is to compute the series up to a certain number of terms. There are better ways to approximate the exponential (see C. Moler and C.F. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review 45 (1), 2003, 3–49). In many cases, however, a few terms of the Taylor series give a reasonable approximation for the exponential and may suffice for the application. For example, let us compute the first 4 terms of the series for the matrix \(A = \left[ \begin{smallmatrix} 1 & 2 \\ 2 & 1 \end{smallmatrix} \right]\text{.}\)

\begin{multline*} e^{tA} \approx I + tA + \frac{t^2}{2}A^2 + \frac{t^3}{6}A^3 = I + t \begin{bmatrix} 1 & 2 \\ \noalign{\smallskip} 2 & 1 \end{bmatrix} + t^2 \begin{bmatrix} \frac{5}{2} & 2 \\ \noalign{\smallskip} 2 & \frac{5}{2} \end{bmatrix} + t^3 \begin{bmatrix} \frac{13}{6} & \frac{7}{3} \\ \noalign{\smallskip} \frac{7}{3} & \frac{13}{6} \end{bmatrix} = \\ = \begin{bmatrix} 1 + t + \frac{5}{2}\, t^2 + \frac{13}{6}\, t^3 & 2\,t + 2\, t^2 + \frac{7}{3}\, t^3 \\ \noalign{\smallskip} 2\,t + 2\, t^2 + \frac{7}{3}\, t^3 & 1 + t + \frac{5}{2}\, t^2 + \frac{13}{6}\, t^3 \end{bmatrix} . \end{multline*}

Just like with the scalar Taylor series, the approximation is better for small \(t\) and worse for larger \(t\text{;}\) for larger \(t\text{,}\) we generally have to compute more terms. Let us see how we stack up against the real solution at \(t=0.1\text{.}\) Rounded to 8 decimal places, the approximation is

\begin{equation*} e^{0.1\,A} \approx I + 0.1\,A + \frac{0.1^2}{2}A^2 + \frac{0.1^3}{6}A^3 = \begin{bmatrix} 1.12716667 & 0.22233333 \\ 0.22233333 & 1.12716667 \\ \end{bmatrix} . \end{equation*}

And plugging \(t=0.1\) into the real solution (rounded to 8 decimal places) we get

\begin{equation*} e^{0.1\,A} = \begin{bmatrix} 1.12734811 & 0.22251069 \\ 0.22251069 & 1.12734811 \end{bmatrix} . \end{equation*}

Not bad at all! Although if we take the same approximation for \(t=1\) we get

\begin{equation*} I + A + \frac{1}{2}A^2 + \frac{1}{6}A^3 = \begin{bmatrix} 6.66666667 & 6.33333333 \\ 6.33333333 & 6.66666667 \end{bmatrix} , \end{equation*}

while the real value is (again rounded to 8 decimal places)

\begin{equation*} e^{A} = \begin{bmatrix} 10.22670818 & \phantom{0}9.85882874 \\ \phantom{0}9.85882874 & 10.22670818 \end{bmatrix} . \end{equation*}

So the approximation is not very good once we get up to \(t=1\text{.}\) To get a good approximation at \(t=1\) (say up to 2 decimal places) we would need to go up to the \({11}^{\text{th}}\) power (exercise).
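The partial sums are straightforward to program. The sketch below reproduces both observations: the four-term sum is quite accurate at \(t = 0.1\) but poor at \(t = 1\text{,}\) while taking more terms recovers the true exponential (the series always converges).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [2.0, 1.0]])

def taylor_expm(M, terms):
    # Partial sum I + M + M^2/2! + ... + M^terms/terms!
    S, term = np.eye(2), np.eye(2)
    for k in range(1, terms + 1):
        term = term @ M / k
        S = S + term
    return S

# t = 0.1: four terms (powers up to M^3) are close to the true value.
assert np.max(np.abs(taylor_expm(0.1 * A, 3) - expm(0.1 * A))) < 3e-4

# t = 1: the same four terms are far off ...
assert np.max(np.abs(taylor_expm(A, 3) - expm(A))) > 3
# ... but adding more terms fixes it.
assert np.max(np.abs(taylor_expm(A, 20) - expm(A))) < 1e-6
```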

Subsection 3.8.6 Exercises

Exercise 3.8.2

Using the matrix exponential, find a fundamental matrix solution for the system \(x' = 3x+y\text{,}\) \(y' = x+3y\text{.}\)

Exercise 3.8.3

Find \(e^{tA}\) for the matrix \(A = \left[ \begin{smallmatrix} 2 & 3 \\ 0 & 2 \end{smallmatrix} \right]\text{.}\)

Exercise 3.8.4

Find a fundamental matrix solution for the system \(x_1' = 7x_1+4x_2+ 12x_3\text{,}\) \(x_2' = x_1+2x_2+x_3\text{,}\) \(x_3' = -3x_1-2x_2- 5x_3\text{.}\) Then find the solution that satisfies \(\vec{x}(0) = \left[ \begin{smallmatrix} 0 \\ 1 \\ -2 \end{smallmatrix} \right]\text{.}\)

Exercise 3.8.5

Compute the matrix exponential \(e^A\) for \(A = \left[ \begin{smallmatrix} 1 & 2 \\ 0 & 1 \end{smallmatrix} \right]\text{.}\)

Exercise 3.8.6

(challenging)   Suppose \(AB = BA\text{.}\) Show that under this assumption, \(e^{A+B} = e^A e^B\text{.}\)

Exercise 3.8.7

Use Exercise 3.8.6 to show that \({(e^{A})}^{-1} = e^{-A}\text{.}\) In particular this means that \(e^A\) is invertible even if \(A\) is not.

Exercise 3.8.8

Suppose \(A\) is a matrix with eigenvalues \(-1\text{,}\) \(1\text{,}\) and corresponding eigenvectors \(\left[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \right]\text{,}\) \(\left[ \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right]\text{.}\) a) Find a matrix \(A\) with these properties. b) Find a fundamental matrix solution to \({\vec{x}\,}' = A \vec{x}\text{.}\) c) Solve the system with initial conditions \(\vec{x}(0) = \left[ \begin{smallmatrix} 2 \\ 3 \end{smallmatrix} \right]\text{.}\)

Exercise 3.8.9

Suppose that \(A\) is an \(n \times n\) matrix with a repeated eigenvalue \(\lambda\) of multiplicity \(n\text{.}\) Suppose that there are \(n\) linearly independent eigenvectors. Show that the matrix is diagonal, in particular \(A = \lambda I\text{.}\) Hint: Use diagonalization and the fact that the identity matrix commutes with every other matrix.

Exercise 3.8.10

Let \(A = \left[ \begin{smallmatrix} -1 & -1 \\ 1 & -3 \end{smallmatrix} \right]\text{.}\) a) Find \(e^{tA}\text{.}\) b) Solve \({\vec{x}\,}' = A \vec{x}\text{,}\) \(\vec{x}(0) = \left[ \begin{smallmatrix} 1 \\ -2 \end{smallmatrix} \right]\text{.}\)

Exercise 3.8.11

Let \(A = \left[ \begin{smallmatrix} 1 & 2 \\ 3 & 4 \end{smallmatrix} \right]\text{.}\) Approximate \(e^{tA}\) by expanding the power series up to the third order.

Exercise 3.8.101

Compute \(e^{tA}\) where \(A=\left[ \begin{smallmatrix} 1 & -2 \\ -2 & 1 \end{smallmatrix}\right]\text{.}\)

Answer

\(e^{tA}=\left[ \begin{smallmatrix} \frac{{e}^{3t}+{e}^{-t}}{2} & \quad \frac{{e}^{-t}-{e}^{3t}}{2}\\ \frac{{e}^{-t}-{e}^{3t}}{2} & \quad \frac{{e}^{3t}+{e}^{-t}}{2} \end{smallmatrix}\right]\)

Exercise 3.8.102

Compute \(e^{tA}\) where \(A=\left[ \begin{smallmatrix} 1 & -3 & 2 \\ -2 & 1 & 2 \\ -1 & -3 & 4 \end{smallmatrix}\right]\text{.}\)

Answer

\(e^{tA}=\left[ \begin{smallmatrix} 2{e}^{3t}-4{e}^{2t}+3{e}^{t} & \quad \frac{3{e}^{t}}{2}-\frac{3{e}^{3t}}{2} & \quad -{e}^{3t}+4{e}^{2t}-3{e}^{t} \\ 2{e}^{t}-2{e}^{2t} & \quad {e}^{t} & \quad 2{e}^{2t}-2{e}^{t} \\ 2{e}^{3t}-5{e}^{2t}+3{e}^{t} & \quad \frac{3{e}^{t}}{2}-\frac{3{e}^{3t}}{2} & \quad -{e}^{3t}+5{e}^{2t}-3{e}^{t} \end{smallmatrix}\right]\)

Exercise 3.8.103

a) Compute \(e^{tA}\) where \(A=\left[ \begin{smallmatrix} 3 & -1 \\ 1 & 1 \end{smallmatrix}\right]\text{.}\) b) Solve \(\vec{x}\,' = A \vec{x}\) for \(\vec{x}(0) = \left[ \begin{smallmatrix} 1 \\ 2 \end{smallmatrix}\right]\text{.}\)

Answer

a) \(e^{tA}=\left[ \begin{smallmatrix} ( t+1) \,{e}^{2t} & -t{e}^{2t} \\ t{e}^{2t} & ( 1-t) \,{e}^{2t} \end{smallmatrix}\right]\)     b) \(\vec{x}=\left[ \begin{smallmatrix} (1-t) \,{e}^{2t} \\ (2-t) \,{e}^{2t} \end{smallmatrix}\right]\)

Exercise 3.8.104

Compute the first 3 terms (up to the second degree) of the Taylor expansion of \(e^{tA}\) where \(A=\left[ \begin{smallmatrix} 2 & 3 \\ 2 & 2 \end{smallmatrix}\right]\) (Write as a single matrix). Then use it to approximate \(e^{0.1A}\text{.}\)

Answer

\(\left[ \begin{smallmatrix} 1+2t+5t^2 & 3t+6t^2 \\ 2t+4t^2 & 1+2t+5t^2 \end{smallmatrix}\right]\)     \(e^{0.1A} \approx \left[ \begin{smallmatrix} 1.25 & 0.36 \\ 0.24 & 1.25 \end{smallmatrix}\right]\)
