Subsubsection 3.9.1.1 Integrating factor
Let us first focus on the nonhomogeneous first order equation
\begin{equation*}
{\vec{x}\,}'(t) = A\vec{x}(t) + \vec{f}(t) ,
\end{equation*}
where \(A\) is a constant matrix. The first method we look at is the integrating factor method. For simplicity we rewrite the equation as
\begin{equation*}
{\vec{x}\,}'(t) + P \vec{x}(t) = \vec{f}(t) ,
\end{equation*}
where \(P = -A\text{.}\) We multiply both sides of the equation by \(e^{tP}\) (being mindful that we are dealing with matrices that may not commute) to obtain
\begin{equation*}
e^{tP}{\vec{x}\,}'(t) + e^{tP}P\vec{x}(t) = e^{tP}\vec{f}(t) .
\end{equation*}
We notice that \(P e^{tP} = e^{tP} P\text{.}\) This fact follows by writing down the series definition of \(e^{tP}\text{:}\)
\begin{equation*}
\begin{split}
P e^{tP} & =
P \left(
I + tP + \frac{1}{2} {(tP)}^2 + \cdots \right)
=
P + tP^2 + \frac{1}{2} t^2P^3 + \cdots
\\
& =
\left(
I + tP + \frac{1}{2} {(tP)}^2 + \cdots \right) P
= e^{tP} P .
\end{split}
\end{equation*}
We have already seen that \(\frac{d}{dt} \left( e^{tP} \right)
= P e^{tP}\text{.}\) Hence,
\begin{equation*}
\frac{d}{dt}
\Bigl( e^{tP} \vec{x}(t) \Bigr) = e^{tP}\vec{f}(t) .
\end{equation*}
We can now integrate; that is, we integrate each component of the vector separately:
\begin{equation*}
e^{tP} \vec{x}(t) = \int e^{tP}\vec{f}(t) ~ dt + \vec{c} .
\end{equation*}
Recall from Exercise 3.8.7 that \({(e^{tP})}^{-1} = e^{-tP}\text{.}\) Therefore, we obtain
\begin{equation*}
\vec{x}(t) = e^{-tP} \int e^{tP}\vec{f}(t) ~ dt + e^{-tP} \vec{c} .
\end{equation*}
The formula is perhaps better understood as a definite integral, in which case it is also easy to solve for the initial conditions. Consider the equation with initial conditions
\begin{equation*}
{\vec{x}\,}'(t) + P\vec{x}(t) = \vec{f}(t) ,
\qquad \vec{x}(0) = \vec{b} .
\end{equation*}
The solution can then be written as
\begin{equation}
\boxed{~~
\vec{x}(t) = e^{-tP} \int_0^t e^{sP}\vec{f}(s) ~ ds + e^{-tP} \vec{b} .
~~}\label{nhsys_intfacsoleq}\tag{5}
\end{equation}
Again, the integration means that each component of the vector \(e^{sP}\vec{f}(s)\) is integrated separately. It is not hard to see that (5) really does satisfy the initial condition \(\vec{x}(0) = \vec{b}\text{:}\)
\begin{equation*}
\vec{x}(0) = e^{-0P} \int_0^0 e^{sP}\vec{f}(s) ~ ds + e^{-0P} \vec{b}
= I \vec{b} = \vec{b} .
\end{equation*}
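Both matrix exponential facts used in this derivation, \(P e^{tP} = e^{tP} P\) and \({(e^{tP})}^{-1} = e^{-tP}\text{,}\) are easy to check numerically. The following Python sketch uses a truncated power series for the exponential (a convenience for illustration, not a robust general-purpose method) and the matrix \(P\) from Example 3.9.1:

```python
def matmul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(M, terms=60):
    # e^M for a 2x2 matrix M via the truncated series I + M + M^2/2! + ...
    S = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        T = matmul(T, M)
        T = [[T[i][j] / k for j in range(2)] for i in range(2)]
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

P = [[5.0, -3.0], [3.0, -1.0]]   # an arbitrary test matrix (the one from Example 3.9.1)
t = 0.7
E  = expm2([[ t * P[i][j] for j in range(2)] for i in range(2)])   # e^{tP}
Ei = expm2([[-t * P[i][j] for j in range(2)] for i in range(2)])   # e^{-tP}

# P e^{tP} should equal e^{tP} P, and e^{tP} e^{-tP} should be the identity
comm_gap = max(abs(matmul(P, E)[i][j] - matmul(E, P)[i][j])
               for i in range(2) for j in range(2))
inv_gap = max(abs(matmul(E, Ei)[i][j] - (1.0 if i == j else 0.0))
              for i in range(2) for j in range(2))
```

Both gaps come out at roundoff level, as they should.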
Example 3.9.1
Suppose that we have the system
\begin{equation*}
\begin{aligned}
x_1' + 5x_1 - 3x_2 &= e^t , \\
x_2' + 3x_1 - x_2 &= 0 ,
\end{aligned}
\end{equation*}
with initial conditions \(x_1(0) = 1, x_2(0) = 0\text{.}\)
Let us write the system as
\begin{equation*}
{\vec{x}\,}' +
\begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}
\vec{x} =
\begin{bmatrix} e^t \\ 0 \end{bmatrix} ,
\qquad
\vec{x}(0) =
\begin{bmatrix} 1 \\ 0 \end{bmatrix} .
\end{equation*}
We have previously computed \(e^{tP}\) for \(P = \left[
\begin{smallmatrix} 5 & -3 \\ 3 & -1 \end{smallmatrix} \right]\text{.}\) We find \(e^{-tP}\) simply by negating \(t\text{:}\)
\begin{equation*}
e^{tP} =
\begin{bmatrix}
(1+3t)\,e^{2t} & -3te^{2t} \\
3te^{2t} & (1-3t)\,e^{2t}
\end{bmatrix}
, \qquad
e^{-tP} =
\begin{bmatrix}
(1-3t)\,e^{-2t} & 3te^{-2t} \\
-3te^{-2t} & (1+3t)\,e^{-2t}
\end{bmatrix}
.
\end{equation*}
Instead of computing the whole formula at once, let us do it in stages. First
\begin{equation*}
\begin{split}
\int_0^t e^{sP}\vec{f}(s) ~ ds & =
\int_0^t
\begin{bmatrix}
(1+3s)\,e^{2s} & -3se^{2s} \\
3se^{2s} & (1-3s)\,e^{2s}
\end{bmatrix}
\begin{bmatrix} e^{s} \\ 0 \end{bmatrix}
~ ds
\\
& =
\int_0^t
\begin{bmatrix}
(1+3s)\,e^{3s} \\
3se^{3s}
\end{bmatrix}
~ ds
\\
&=
\begin{bmatrix}
\int_0^t (1+3s)\,e^{3s} ~ds \\
\int_0^t 3se^{3s} ~ds
\end{bmatrix}
\\
& =
\begin{bmatrix}
t e^{3t} \\
\frac{(3t-1) \,e^{3t} + 1}{3}
\end{bmatrix} \qquad \qquad \text{(used integration by parts).}
\end{split}
\end{equation*}
Then
\begin{equation*}
\begin{split}
\vec{x}(t)
& = e^{-tP} \int_0^t e^{sP}\vec{f}(s) ~ ds + e^{-tP} \vec{b} \\
& =
\begin{bmatrix}
(1-3t)\,e^{-2t} & 3te^{-2t} \\
-3te^{-2t} & (1+3t)\,e^{-2t}
\end{bmatrix}
\begin{bmatrix}
t e^{3t} \\
\frac{(3t-1) \,e^{3t} + 1}{3}
\end{bmatrix}
+
\begin{bmatrix}
(1-3t)\,e^{-2t} & 3te^{-2t} \\
-3te^{-2t} & (1+3t)\,e^{-2t}
\end{bmatrix}
\begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
& =
\begin{bmatrix}
te^{-2t} \\
-\frac{e^t}{3}+\left( \frac{1}{3} + t \right) \, e^{-2t}
\end{bmatrix}
+
\begin{bmatrix}
(1-3t)\,e^{-2t} \\
-3te^{-2t}
\end{bmatrix} \\
& =
\begin{bmatrix}
(1-2t)\,e^{-2t} \\
-\frac{e^t}{3}+\left( \frac{1}{3} -2 t \right) \, e^{-2t}
\end{bmatrix} .
\end{split}
\end{equation*}
Phew!
Let us check that this really works.
\begin{equation*}
x_1' + 5 x_1 - 3x_2 = (4te^{-2t} - 4 e^{-2t}) + 5
(1-2t)\,e^{-2t}
+e^t-( 1 -6 t ) \, e^{-2t} = e^t .
\end{equation*}
Similarly (exercise) \(x_2' + 3 x_1 - x_2 = 0\text{.}\) The initial conditions are also satisfied (exercise).
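The boxed formula (5) can also be evaluated numerically, which gives an independent check of the computation above. The sketch below uses a truncated power series for the matrix exponential and the trapezoid rule for the integral (quick-and-dirty choices for illustration, not recommendations) and compares the result at \(t = 1\) with the closed form we just found:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [A[i][0] * v[0] + A[i][1] * v[1] for i in range(2)]

def expm2(M, terms=60):
    # e^M for a 2x2 matrix M via the truncated power series
    S = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        T = matmul(T, M)
        T = [[T[i][j] / k for j in range(2)] for i in range(2)]
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

P = [[5.0, -3.0], [3.0, -1.0]]
f = lambda s: [math.exp(s), 0.0]
b = [1.0, 0.0]

def x(t, n=800):
    # x(t) = e^{-tP} ( integral_0^t e^{sP} f(s) ds + b ), trapezoid rule with n panels
    h = t / n
    I = [0.0, 0.0]
    for i in range(n + 1):
        s = i * h
        w = h / 2 if i in (0, n) else h
        v = matvec(expm2([[s * P[r][c] for c in range(2)] for r in range(2)]), f(s))
        I = [I[0] + w * v[0], I[1] + w * v[1]]
    Em = expm2([[-t * P[r][c] for c in range(2)] for r in range(2)])
    return matvec(Em, [I[0] + b[0], I[1] + b[1]])

t = 1.0
# closed form found above: x1 = (1-2t)e^{-2t}, x2 = -e^t/3 + (1/3 - 2t)e^{-2t}
exact = [(1 - 2 * t) * math.exp(-2 * t),
         -math.exp(t) / 3 + (1 / 3 - 2 * t) * math.exp(-2 * t)]
approx = x(t)
```

The two values agree to within the quadrature error.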
For systems, the integrating factor method only works if \(P\) does not depend on \(t\text{,}\) that is, \(P\) is constant. The problem is that in general
\begin{equation*}
\frac{d}{dt} \left[ e^{\int P(t)\,dt} \right] \not= P(t) \, e^{\int P(t)\,dt} ,
\end{equation*}
because matrix multiplication is not commutative.
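It is instructive to see this failure numerically. In the sketch below we take the made-up non-constant matrix \(P(t) = \left[ \begin{smallmatrix} 0 & 1 \\ t & 0 \end{smallmatrix} \right]\text{,}\) for which \(Q(t) = \int_0^t P(s)\,ds\) does not commute with \(P(t)\text{,}\) and compare a central-difference approximation of \(\frac{d}{dt} \bigl[ e^{Q(t)} \bigr]\) with \(P(t)\,e^{Q(t)}\text{:}\)

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(M, terms=60):
    # e^M for a 2x2 matrix M via the truncated power series
    S = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        T = matmul(T, M)
        T = [[T[i][j] / k for j in range(2)] for i in range(2)]
        S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return S

P = lambda t: [[0.0, 1.0], [t, 0.0]]          # made-up non-constant coefficient matrix
Q = lambda t: [[0.0, t], [t * t / 2, 0.0]]    # Q(t) = integral_0^t P(s) ds

t, h = 1.0, 1e-5
lhs = [[(expm2(Q(t + h))[i][j] - expm2(Q(t - h))[i][j]) / (2 * h)
        for j in range(2)] for i in range(2)]  # d/dt e^{Q(t)}, central difference
rhs = matmul(P(t), expm2(Q(t)))                # what the naive product rule would give
gap = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

The gap comes out well above roundoff, so the two expressions genuinely differ.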
Subsubsection 3.9.1.2 Eigenvector decomposition
For the next method, note that eigenvectors of a matrix give the directions in which the matrix acts like a scalar. If we solve the system along these directions the computations are simpler as we treat the matrix as a scalar. We then put those solutions together to get the general solution for the system.
Take the equation
\begin{equation}
{\vec{x}\,}' (t) = A \vec{x}(t) + \vec{f}(t) .\label{nhsys_ednhsys}\tag{6}
\end{equation}
Assume \(A\) has \(n\) linearly independent eigenvectors \(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\text{.}\) Write
\begin{equation}
\vec{x}(t) =
\vec{v}_1 \, \xi_1(t) +
\vec{v}_2 \, \xi_2(t) + \cdots +
\vec{v}_n \, \xi_n(t) .\label{nhsys_decompx}\tag{7}
\end{equation}
That is, we wish to write our solution as a linear combination of eigenvectors of \(A\text{.}\) If we solve for the scalar functions \(\xi_1\) through \(\xi_n\) we have our solution \(\vec{x}\text{.}\) Let us decompose \(\vec{f}\) in terms of the eigenvectors as well. We wish to write
\begin{equation}
\vec{f}(t) =
\vec{v}_1 \, g_1(t) +
\vec{v}_2 \, g_2(t) + \cdots +
\vec{v}_n \, g_n(t) .\label{nhsys_decompf}\tag{8}
\end{equation}
That is, we wish to find \(g_1\) through \(g_n\) that satisfy (8). Since all the eigenvectors are independent, the matrix \(E = [\, \vec{v}_1 \quad \vec{v}_2 \quad \cdots \quad \vec{v}_n \,]\) is invertible. Write the equation (8) as \(\vec{f} = E \vec{g}\text{,}\) where the components of \(\vec{g}\) are the functions \(g_1\) through \(g_n\text{.}\) Then \(\vec{g} = E^{-1} \vec{f}\text{.}\) Hence it is always possible to find \(\vec{g}\) when there are \(n\) linearly independent eigenvectors.
We plug (7) into (6), and note that \(A \vec{v}_k = \lambda_k \vec{v}_k\text{.}\)
\begin{equation*}
\begin{split}
\overbrace{
\vec{v}_1 \, \xi_1' +
\vec{v}_2 \, \xi_2' + \cdots +
\vec{v}_n \, \xi_n'
}^{{\vec{x}\,}'}
& =
\overbrace{
A \left( \vec{v}_1 \, \xi_1 +
\vec{v}_2 \, \xi_2 + \cdots +
\vec{v}_n \, \xi_n \right)
}^{A\vec{x}}
+
\overbrace{
\vec{v}_1 \, g_1 +
\vec{v}_2 \, g_2 + \cdots +
\vec{v}_n \, g_n
}^{\vec{f}}
\\
& =
A \vec{v}_1 \, \xi_1 +
A \vec{v}_2 \, \xi_2 + \cdots +
A \vec{v}_n \, \xi_n
+
\vec{v}_1 \, g_1 +
\vec{v}_2 \, g_2 + \cdots +
\vec{v}_n \, g_n
\\
& =
\vec{v}_1 \, \lambda_1 \, \xi_1 +
\vec{v}_2 \, \lambda_2 \, \xi_2 + \cdots +
\vec{v}_n \, \lambda_n \, \xi_n
+
\vec{v}_1 \, g_1 +
\vec{v}_2 \, g_2 + \cdots +
\vec{v}_n \, g_n
\\
& =
\vec{v}_1 \, ( \lambda_1 \, \xi_1 + g_1 ) +
\vec{v}_2 \, ( \lambda_2 \, \xi_2 + g_2 ) + \cdots +
\vec{v}_n \, ( \lambda_n \, \xi_n + g_n ) .
\end{split}
\end{equation*}
If we identify the coefficients of the vectors \(\vec{v}_1\) through \(\vec{v}_n\) we get the equations
\begin{equation*}
\begin{aligned}
\xi_1' & = \lambda_1 \, \xi_1 + g_1 , \\
\xi_2' & = \lambda_2 \, \xi_2 + g_2 , \\
& ~~ \vdots \\
\xi_n' & = \lambda_n \, \xi_n + g_n .
\end{aligned}
\end{equation*}
Each one of these equations is independent of the others. They are all linear first order equations and can easily be solved by the standard integrating factor method for single equations. That is, for the \(k^{\text{th}}\) equation we write
\begin{equation*}
\xi_k'(t) - \lambda_k \, \xi_k(t) = g_k(t) .
\end{equation*}
We use the integrating factor \(e^{-\lambda_k t}\) to find that
\begin{equation*}
\frac{d}{dt}\Bigl[ \xi_k(t) \, e^{-\lambda_k t} \Bigr] =
e^{-\lambda_k t} g_k(t) .
\end{equation*}
We integrate and solve for \(\xi_k\) to get
\begin{equation*}
\xi_k(t) = e^{\lambda_k t}
\int e^{-\lambda_k t} g_k(t) ~dt + C_k e^{\lambda_k t} .
\end{equation*}
If we are looking for just any particular solution, we could set \(C_k\) to be zero. If we leave these constants in, we get the general solution. Write \(\vec{x}(t) =
\vec{v}_1 \, \xi_1(t) +
\vec{v}_2 \, \xi_2(t) + \cdots +
\vec{v}_n \, \xi_n(t)\text{,}\) and we are done.
Again, as always, it is perhaps better to write these integrals as definite integrals. Suppose that we have an initial condition \(\vec{x}(0) = \vec{b}\text{.}\) Take \(\vec{a} = E^{-1} \vec{b}\) to find \(\vec{b} = \vec{v}_1 \, a_1 + \vec{v}_2 \, a_2 + \cdots + \vec{v}_n \, a_n\text{,}\) just like before. Then if we write
\begin{equation*}
\boxed{~~
\xi_k(t) = e^{\lambda_k t}
\int_0^t e^{-\lambda_k s} g_k(s) ~ds + a_k e^{\lambda_k t} ,
~~}
\end{equation*}
we actually get the particular solution \(\vec{x}(t) =
\vec{v}_1 \xi_1(t) +
\vec{v}_2 \xi_2(t) + \cdots +
\vec{v}_n \xi_n(t)\) satisfying \(\vec{x}(0) = \vec{b}\text{,}\) because \(\xi_k(0) = a_k\text{.}\)
Example 3.9.2
Let \(A = \left[
\begin{smallmatrix}
1 & 3 \\
3 & 1
\end{smallmatrix} \right]\text{.}\) Solve \({\vec{x}\,}' = A \vec{x} +
\vec{f}\) where \(\vec{f}(t) =
\left[ \begin{smallmatrix}
2e^t \\
2t
\end{smallmatrix} \right]\) for \(\vec{x}(0) =
\left[ \begin{smallmatrix}
3/16 \\
-5/16
\end{smallmatrix} \right]\text{.}\)
The eigenvalues of \(A\) are \(-2\) and 4 and corresponding eigenvectors are \(\left[ \begin{smallmatrix}
1 \\
-1
\end{smallmatrix} \right]\) and \(\left[ \begin{smallmatrix}
1 \\
1
\end{smallmatrix} \right]\) respectively. This calculation is left as an exercise. We write down the matrix \(E\) of the eigenvectors and compute its inverse (using the inverse formula for \(2 \times 2\) matrices)
\begin{equation*}
E = \begin{bmatrix}
1 & 1 \\
-1 & 1
\end{bmatrix} ,
\qquad
E^{-1}
=
\frac{1}{2}
\begin{bmatrix}
1 & -1 \\
1 & 1
\end{bmatrix} .
\end{equation*}
We are looking for a solution of the form \(\vec{x} =
\left[ \begin{smallmatrix}
1 \\
-1
\end{smallmatrix} \right] \xi_1 +
\left[ \begin{smallmatrix}
1 \\
1
\end{smallmatrix} \right] \xi_2\text{.}\) We first need to write \(\vec{f}\) in terms of the eigenvectors. That is we wish to write \(\vec{f} =
\left[ \begin{smallmatrix}
2e^t \\
2t
\end{smallmatrix} \right] =
\left[ \begin{smallmatrix}
1 \\
-1
\end{smallmatrix} \right] g_1 +
\left[ \begin{smallmatrix}
1 \\
1
\end{smallmatrix} \right] g_2\text{.}\) Thus
\begin{equation*}
\begin{bmatrix}
g_1 \\
g_2
\end{bmatrix} =
E^{-1}
\begin{bmatrix}
2e^t \\
2t
\end{bmatrix}
=
\frac{1}{2}
\begin{bmatrix}
1 & -1 \\
1 & 1
\end{bmatrix}
\begin{bmatrix}
2e^t \\
2t
\end{bmatrix}
=
\begin{bmatrix}
e^t-t \\
e^t+t
\end{bmatrix} .
\end{equation*}
So \(g_1 = e^t-t\) and \(g_2 = e^t+t\text{.}\)
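The computation of \(\vec{g} = E^{-1} \vec{f}\) is a small linear solve at each \(t\) and is easy to sanity-check in code (a quick sketch):

```python
import math

Einv = [[0.5, -0.5], [0.5, 0.5]]          # inverse of E = [[1, 1], [-1, 1]]
f = lambda t: [2 * math.exp(t), 2 * t]

def g(t):
    # g = E^{-1} f, componentwise; should give g1 = e^t - t, g2 = e^t + t
    return [Einv[0][0] * f(t)[0] + Einv[0][1] * f(t)[1],
            Einv[1][0] * f(t)[0] + Einv[1][1] * f(t)[1]]
```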
We further need to write \(\vec{x}(0)\) in terms of the eigenvectors. That is, we wish to write \(\vec{x}(0) =
\left[ \begin{smallmatrix}
3/16 \\
-5/16
\end{smallmatrix} \right] =
\left[ \begin{smallmatrix}
1 \\
-1
\end{smallmatrix} \right] a_1 +
\left[ \begin{smallmatrix}
1 \\
1
\end{smallmatrix} \right] a_2\text{.}\) Hence
\begin{equation*}
\begin{bmatrix}
a_1 \\
\noalign{\smallskip}
a_2
\end{bmatrix} =
E^{-1}
\begin{bmatrix}
\nicefrac{3}{16} \\
\noalign{\smallskip}
\nicefrac{-5}{16}
\end{bmatrix}
=
\begin{bmatrix}
\nicefrac{1}{4} \\
\noalign{\smallskip}
\nicefrac{-1}{16}
\end{bmatrix} .
\end{equation*}
So \(a_1 = \nicefrac{1}{4}\) and \(a_2 = \nicefrac{-1}{16}\text{.}\) We plug our \(\vec{x}\) into the equation and get that
\begin{equation*}
\begin{split}
\begin{bmatrix}
1 \\
-1
\end{bmatrix} \xi_1' +
\begin{bmatrix}
1 \\
1
\end{bmatrix} \xi_2'
& =
A
\begin{bmatrix}
1 \\
-1
\end{bmatrix} \xi_1 +
A
\begin{bmatrix}
1 \\
1
\end{bmatrix} \xi_2
+
\begin{bmatrix}
1 \\
-1
\end{bmatrix} g_1 +
\begin{bmatrix}
1 \\
1
\end{bmatrix} g_2
\\
& =
\begin{bmatrix}
1 \\
-1
\end{bmatrix} (-2\xi_1) +
\begin{bmatrix}
1 \\
1
\end{bmatrix} 4\xi_2
+
\begin{bmatrix}
1 \\
-1
\end{bmatrix} (e^t - t)
+
\begin{bmatrix}
1 \\
1
\end{bmatrix} (e^t + t) .
\end{split}
\end{equation*}
We get the two equations
\begin{equation*}
\begin{aligned}
& \xi_1' = -2\xi_1 + e^t -t, & & \text{where } \xi_1(0) = a_1 = \frac{1}{4} , \\
& \xi_2' = 4\xi_2 + e^t + t, & & \text{where } \xi_2(0) = a_2 = \frac{-1}{16} .
\end{aligned}
\end{equation*}
We solve each equation using an integrating factor. Computation of the integrals is left as an exercise to the student; note that you will need integration by parts.
\begin{equation*}
\xi_1 = e^{-2t}\int e^{2t} \, (e^t-t) ~ dt + C_1 e^{-2t} =
\frac{e^t}{3}-\frac{t}{2}+\frac{1}{4}+C_1 e^{-2t} .
\end{equation*}
\(C_1\) is the constant of integration. As \(\xi_1(0) = \nicefrac{1}{4}\text{,}\) then \(\nicefrac{1}{4}= \nicefrac{1}{3}
+ \nicefrac{1}{4} + C_1\) and hence \(C_1 = \nicefrac{-1}{3}\text{.}\) Similarly
\begin{equation*}
\xi_2 = e^{4t}\int e^{-4t} \, (e^t+ t) ~ dt + C_2 e^{4t} =
-\frac{e^t}{3}-\frac{t}{4}-\frac{1}{16} + C_2 e^{4t} .
\end{equation*}
As \(\xi_2(0) = \nicefrac{-1}{16}\) we have \(\nicefrac{-1}{16}= \nicefrac{-1}{3}
-\nicefrac{1}{16} + C_2\) and hence \(C_2 = \nicefrac{1}{3}\text{.}\) The solution is
\begin{equation*}
\vec{x}(t)=
\begin{bmatrix}
1 \\
-1
\end{bmatrix} \left( \frac{e^t-e^{-2t}}{3}+\frac{1-2t}{4} \right) +
\begin{bmatrix}
1 \\
1
\end{bmatrix} \left( \frac{e^{4t}-e^t}{3}-\frac{4t+1}{16} \right)
=
\begin{bmatrix}
\frac{e^{4t}-e^{-2t}}{3}+\frac{3-12t}{16} \\
\frac{e^{-2t}+e^{4t}-2e^t}{3}+\frac{4t-5}{16}
\end{bmatrix} .
\end{equation*}
That is, \(x_1 = \frac{e^{4t}-e^{-2t}}{3}+\frac{3-12t}{16}\) and \(x_2 = \frac{e^{-2t}+e^{4t}-2e^t}{3}+\frac{4t-5}{16}\text{.}\)
Exercise 3.9.1
Check that \(x_1\) and \(x_2\) solve the problem. Check both that they satisfy the differential equation and that they satisfy the initial conditions.
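Here is a quick numerical version of this check (a sketch; the derivatives of the formulas are written out by hand rather than computed symbolically):

```python
import math

x1 = lambda t: (math.exp(4 * t) - math.exp(-2 * t)) / 3 + (3 - 12 * t) / 16
x2 = lambda t: (math.exp(-2 * t) + math.exp(4 * t) - 2 * math.exp(t)) / 3 + (4 * t - 5) / 16

# hand-computed derivatives of the formulas above
dx1 = lambda t: (4 * math.exp(4 * t) + 2 * math.exp(-2 * t)) / 3 - 12 / 16
dx2 = lambda t: (-2 * math.exp(-2 * t) + 4 * math.exp(4 * t) - 2 * math.exp(t)) / 3 + 4 / 16

# residuals of x' = A x + f with A = [[1, 3], [3, 1]] and f = [2e^t, 2t];
# both should vanish identically
r1 = lambda t: dx1(t) - (x1(t) + 3 * x2(t) + 2 * math.exp(t))
r2 = lambda t: dx2(t) - (3 * x1(t) + x2(t) + 2 * t)
```

The residuals vanish to roundoff, and \(x_1(0) = \nicefrac{3}{16}\text{,}\) \(x_2(0) = \nicefrac{-5}{16}\) as required.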
Subsubsection 3.9.1.3 Undetermined coefficients
We also have the method of undetermined coefficients for systems. The only difference here is that we have to use unknown vectors rather than just numbers. The same caveats apply as for single equations: the method does not always work, and if the right-hand side is complicated, we have to solve for lots of variables. Each element of an unknown vector is an unknown number, so in a system of 3 equations with, say, 4 unknown vectors (not uncommon), we already have 12 unknown numbers to solve for. The method can turn into a lot of tedious work. As this method is essentially the same as for single equations, let us just do an example.
Example 3.9.3
Let \(A = \left[
\begin{smallmatrix}
-1 & 0 \\
-2 & 1
\end{smallmatrix} \right]\text{.}\) Find a particular solution of \({\vec{x}\,}' = A \vec{x} +
\vec{f}\) where \(\vec{f}(t) =
\left[ \begin{smallmatrix}
e^t \\
t
\end{smallmatrix} \right]\text{.}\)
Note that we can solve this system in an easier way (can you see how?), but for the purposes of the example, let us use the eigenvalue method plus undetermined coefficients.
The eigenvalues of \(A\) are \(-1\) and 1 and corresponding eigenvectors are \(\left[ \begin{smallmatrix}
1 \\
1
\end{smallmatrix} \right]\) and \(\left[ \begin{smallmatrix}
0 \\
1
\end{smallmatrix} \right]\) respectively. Hence our complementary solution is
\begin{equation*}
\vec{x}_c =
\alpha_1
\begin{bmatrix}
1 \\ 1
\end{bmatrix}
e^{-t}
+
\alpha_2
\begin{bmatrix}
0 \\ 1
\end{bmatrix}
e^{t} ,
\end{equation*}
for some arbitrary constants \(\alpha_1\) and \(\alpha_2\text{.}\)
We would want to guess a particular solution of
\begin{equation*}
\vec{x} =
\vec{a}
e^{t}
+
\vec{b}
t +
\vec{c} .
\end{equation*}
However, something of the form \(\vec{a} e^t\) appears in the complementary solution. Because we do not yet know if the vector \(\vec{a}\) is a multiple of \(\left[ \begin{smallmatrix}
0 \\
1
\end{smallmatrix} \right]\text{,}\) we do not know if a conflict arises. It is possible that there is no conflict, but to be safe we should also try \(\vec{b} t e^t\text{.}\) Here we find the crux of the difference for systems. We try both terms \(\vec{a} e^t\) and \(\vec{b} t e^t\) in the solution, not just the term \(\vec{b} t e^t\text{.}\) Therefore, we try
\begin{equation*}
\vec{x} =
\vec{a}
e^{t}
+
\vec{b}
t
e^{t}
+
\vec{c}
t +
\vec{d}.
\end{equation*}
Thus we have 8 unknowns. We write \(\vec{a} =
\Bigl[ \begin{smallmatrix} a_1 \\ a_2 \end{smallmatrix} \Bigr]\text{,}\) \(\vec{b} =
\Bigl[ \begin{smallmatrix} b_1 \\ b_2 \end{smallmatrix} \Bigr]\text{,}\) \(\vec{c} =
\Bigl[ \begin{smallmatrix} c_1 \\ c_2 \end{smallmatrix} \Bigr]\text{,}\) and \(\vec{d} =
\Bigl[ \begin{smallmatrix} d_1 \\ d_2 \end{smallmatrix} \Bigr]\text{.}\) We plug \(\vec{x}\) into the equation. First let us compute \({\vec{x}\,}'\text{.}\)
\begin{equation*}
{\vec{x}\,}' =
\left( \vec{a} + \vec{b} \right)
e^{t}
+
\vec{b}
t
e^{t}
+
\vec{c} =
\begin{bmatrix}
a_1 + b_1 \\ a_2+b_2
\end{bmatrix}
e^{t}
+
\begin{bmatrix}
b_1 \\ b_2
\end{bmatrix}
t e^{t}
+
\begin{bmatrix}
c_1 \\ c_2
\end{bmatrix} .
\end{equation*}
Now \({\vec{x}\,}'\) must equal \(A\vec{x} + \vec{f}\text{,}\) which is
\begin{multline*}
A \vec{x} + \vec{f} =
A \vec{a}
e^{t}
+
A \vec{b}
t e^{t}
+
A \vec{c}
t
+
A \vec{d}
+ \vec{f}
\\
=
\begin{bmatrix}
-a_1 \\ -2a_1+a_2
\end{bmatrix}
e^{t}
+
\begin{bmatrix}
-b_1 \\ -2b_1+b_2
\end{bmatrix}
t e^{t}
+
\begin{bmatrix}
-c_1 \\ -2c_1+c_2
\end{bmatrix}
t
+
\begin{bmatrix}
-d_1 \\ -2d_1+d_2
\end{bmatrix}
+
\begin{bmatrix}
1 \\ 0
\end{bmatrix}
e^t
+
\begin{bmatrix}
0 \\ 1
\end{bmatrix}
t .
\end{multline*}
We identify the coefficients of \(e^t\text{,}\) \(te^t\text{,}\) \(t\) and any constant vectors.
\begin{equation*}
\begin{aligned}
a_1+b_1 & = -a_1+1 , \\
a_2+b_2 & = -2a_1+a_2 , \\
b_1 & = -b_1 , \\
b_2 & = -2b_1+b_2 , \\
0 & = -c_1 , \\
0 & = -2c_1+c_2 + 1 , \\
c_1 & = -d_1 , \\
c_2 & = -2d_1+d_2 .
\end{aligned}
\end{equation*}
We could write the \(8 \times 9\) augmented matrix and start row reduction, but it is easier to just solve the equations in an ad hoc manner. Immediately we see that \(b_1 = 0\text{,}\) \(c_1 = 0\text{,}\) \(d_1 = 0\text{.}\) Plugging these back in, we get that \(c_2 = -1\) and \(d_2 = -1\text{.}\) The remaining equations that tell us something are
\begin{equation*}
\begin{aligned}
a_1 & = -a_1+1 , \\
a_2+b_2 & = -2a_1+a_2 .
\end{aligned}
\end{equation*}
So \(a_1 = \nicefrac{1}{2}\) and \(b_2 = -1\text{.}\) Finally, \(a_2\) can be arbitrary and still satisfy the equations. We are looking for just a single solution, so we take the simplest one, \(a_2 = 0\text{.}\) Therefore,
\begin{equation*}
\vec{x} =
\vec{a}
e^{t}
+
\vec{b}
t
e^{t}
+
\vec{c}
t +
\vec{d}
=
\begin{bmatrix}
\nicefrac{1}{2} \\ 0
\end{bmatrix}
e^t
+
\begin{bmatrix}
0 \\ -1
\end{bmatrix}
te^t
+
\begin{bmatrix}
0 \\ -1
\end{bmatrix}
t
+
\begin{bmatrix}
0 \\ -1
\end{bmatrix}
=
\begin{bmatrix}
\frac{1}{2}\,e^t \\
-te^t - t - 1
\end{bmatrix} .
\end{equation*}
That is, \(x_1 = \frac{1}{2}\,e^t\text{,}\) \(x_2 =
-te^t - t - 1\text{.}\) We would add this to the complementary solution to get the general solution of the problem. Notice also that both \(\vec{a} e^t\) and \(\vec{b} te^t\) were really needed.
Exercise 3.9.2
Check that \(x_1\) and \(x_2\) solve the problem. Also try setting \(a_2 = 1\) and again check these solutions. What is the difference between the two solutions we can obtain in this way?
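As in the previous example, the first part of this check is easy to script (a sketch, with the derivatives written out by hand):

```python
import math

x1 = lambda t: math.exp(t) / 2
x2 = lambda t: -t * math.exp(t) - t - 1

# hand-computed derivatives of the formulas above
dx1 = lambda t: math.exp(t) / 2
dx2 = lambda t: -(t + 1) * math.exp(t) - 1

# residuals of x' = A x + f with A = [[-1, 0], [-2, 1]] and f = [e^t, t];
# both should vanish identically
r1 = lambda t: dx1(t) - (-x1(t) + math.exp(t))
r2 = lambda t: dx2(t) - (-2 * x1(t) + x2(t) + t)
```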
As you can see, other than the handling of conflicts, undetermined coefficients works exactly the same for systems as it does for single equations. However, the computations can get out of hand quickly, and the equation we considered here was quite simple.