3.2 Matrices and linear systems

Note: 1 and a half lectures, first part of §5.1 in [EP], §7.2 and §7.3 in [BD]

3.2.1 Matrices and vectors

Before we can start talking about linear systems of ODEs, we will need to talk about matrices, so let us review these briefly. A matrix is an m × n array of numbers (m rows and n columns). For example, we denote a 3 × 5 matrix as follows

\[
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35}
\end{bmatrix} .
\]

By a vector we will usually mean a column vector, that is an m × 1 matrix. If we mean a row vector we will explicitly say so (a row vector is a 1 × n matrix). We will usually denote matrices by upper case letters and vectors by lower case letters with an arrow such as ⃗x or ⃗b . By ⃗0 we will mean the vector of all zeros.

It is easy to define some operations on matrices. Note that we will want 1 × 1 matrices to really act like numbers, so our operations will have to be compatible with this viewpoint.

First, we can multiply by a scalar (a number). This means just multiplying each entry by the same number. For example,

\[
2 \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
= \begin{bmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end{bmatrix} .
\]

Matrix addition is also easy. We add matrices element by element. For example,

\[
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
+ \begin{bmatrix} 1 & 1 & -1 \\ 0 & 2 & 4 \end{bmatrix}
= \begin{bmatrix} 2 & 3 & 2 \\ 4 & 7 & 10 \end{bmatrix} .
\]

If the sizes do not match, then addition is not defined.
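These entrywise operations are easy to check numerically. A minimal sketch, assuming NumPy (the library choice is ours, not the text's):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 1, -1],
              [0, 2,  4]])

# Scalar multiplication: every entry is multiplied by the same number.
print(2 * A)   # [[ 2  4  6]
               #  [ 8 10 12]]

# Matrix addition: entries are added element by element.
print(A + B)   # [[ 2  3  2]
               #  [ 4  7 10]]
```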

If we denote by 0 the matrix with all zero entries, by $c$, $d$ scalars, and by $A$, $B$, $C$ matrices, we have the following familiar rules.

\[
\begin{aligned}
A + 0 &= A = 0 + A, \\
A + B &= B + A, \\
(A + B) + C &= A + (B + C), \\
c(A + B) &= cA + cB, \\
(c + d)A &= cA + dA.
\end{aligned}
\]

Another useful operation for matrices is the so-called transpose. This operation just swaps rows and columns of a matrix. The transpose of $A$ is denoted by $A^T$. Example:

\[
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^T
= \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix} .
\]

3.2.2 Matrix multiplication

Let us now define matrix multiplication. First we define the so-called dot product (or inner product) of two vectors. Usually this will be a row vector multiplied with a column vector of the same size. For the dot product we multiply each pair of entries from the first and the second vector and we sum these products. The result is a single number. For example,

\[
\begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} \cdot
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
= a_1 b_1 + a_2 b_2 + a_3 b_3 .
\]

And similarly for larger (or smaller) vectors.
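The dot product is easy to try out numerically; a small sketch, again assuming NumPy:

```python
import numpy as np

# Multiply entries pairwise and sum: 1*4 + 2*5 + 3*6 = 32.
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.dot(a, b))  # 32
```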

Armed with the dot product we define the product of matrices. First let us denote by $\mathrm{row}_i(A)$ the $i$th row of $A$ and by $\mathrm{column}_j(A)$ the $j$th column of $A$. For an $m \times n$ matrix $A$ and an $n \times p$ matrix $B$, we can define the product $AB$. We let $AB$ be an $m \times p$ matrix whose $ij$th entry is the dot product

\[
\mathrm{row}_i(A) \cdot \mathrm{column}_j(B) .
\]

Do note how the sizes match up. Example:

\[
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
\begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{bmatrix}
=
\begin{bmatrix}
1 \cdot 1 + 2 \cdot 1 + 3 \cdot 1 & 1 \cdot 0 + 2 \cdot 1 + 3 \cdot 0 & 1 \cdot (-1) + 2 \cdot 1 + 3 \cdot 0 \\
4 \cdot 1 + 5 \cdot 1 + 6 \cdot 1 & 4 \cdot 0 + 5 \cdot 1 + 6 \cdot 0 & 4 \cdot (-1) + 5 \cdot 1 + 6 \cdot 0
\end{bmatrix}
=
\begin{bmatrix} 6 & 2 & 1 \\ 15 & 5 & 1 \end{bmatrix} .
\]
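The worked product can be reproduced with NumPy's `@` operator; note how the sizes match up: a $2 \times 3$ matrix times a $3 \times 3$ matrix gives a $2 \times 3$ matrix.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0, -1],
              [1, 1,  1],
              [1, 0,  0]])

# Entry (i, j) of A @ B is row_i(A) dotted with column_j(B).
print(A @ B)   # [[ 6  2  1]
               #  [15  5  1]]
```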

For multiplication we want an analogue of a 1. This analogue is the so-called identity matrix. The identity matrix is a square matrix with 1s on the main diagonal and zeros everywhere else. It is usually denoted by $I$. For each size we have a different identity matrix and so we sometimes denote the size as a subscript. For example, $I_3$ is the $3 \times 3$ identity matrix

\[
I = I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} .
\]

We have the following rules for matrix multiplication. Suppose that A , B , C are matrices of the correct sizes so that the following make sense. Let α denote a scalar (number).

\[
\begin{aligned}
A(BC) &= (AB)C, \\
A(B + C) &= AB + AC, \\
(B + C)A &= BA + CA, \\
\alpha(AB) &= (\alpha A)B = A(\alpha B), \\
IA &= A = AI.
\end{aligned}
\]

A few warnings are in order.

(i)
$AB \neq BA$ in general (it may be true by fluke sometimes). That is, matrices do not commute. For example, take $A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$.
(ii)
AB = AC does not necessarily imply B = C , even if A is not 0.
(iii)
$AB = 0$ does not necessarily mean that $A = 0$ or $B = 0$. For example, take $A = B = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$.
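The warnings are easy to verify numerically; a quick sketch with the matrices suggested above (NumPy again assumed):

```python
import numpy as np

# (i) Matrices do not commute: AB and BA differ for these two.
A = np.array([[1, 1],
              [1, 1]])
B = np.array([[1, 0],
              [0, 2]])
print(np.array_equal(A @ B, B @ A))   # False

# (iii) AB can be the zero matrix even though neither factor is zero.
C = np.array([[0, 1],
              [0, 0]])
print(C @ C)   # [[0 0]
               #  [0 0]]
```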

For the last two items to hold we would need to “divide” by a matrix. This is where the matrix inverse comes in. Suppose that A and B are n × n matrices such that

AB = I = BA.

Then we call $B$ the inverse of $A$ and we denote $B$ by $A^{-1}$. If the inverse of $A$ exists, then we call $A$ invertible. If $A$ is not invertible, we sometimes say $A$ is singular.

If $A$ is invertible, then $AB = AC$ does imply that $B = C$ (in particular, the inverse of $A$ is unique). We just multiply both sides by $A^{-1}$ to get $A^{-1}AB = A^{-1}AC$ or $IB = IC$ or $B = C$. It is also not hard to see that $(A^{-1})^{-1} = A$.

3.2.3 The determinant

Let us now discuss determinants of square matrices. We define the determinant of a 1 × 1 matrix as the value of its only entry. For a 2 × 2 matrix we define

\[
\det\left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right)
\overset{\text{def}}{=} ad - bc .
\]

Before trying to compute the determinant for larger matrices, let us first note the meaning of the determinant. Consider an $n \times n$ matrix as a mapping of the $n$-dimensional Euclidean space $\mathbb{R}^n$ to itself, where $\vec{x}$ gets sent to $A\vec{x}$. In particular, a $2 \times 2$ matrix $A$ is a mapping of the plane to itself. The determinant of $A$ is the factor by which the area of objects gets changed. If we take the unit square (square of side 1) in the plane, then $A$ takes the square to a parallelogram of area $|\det(A)|$. The sign of $\det(A)$ denotes a change of orientation (negative if the axes get flipped). For example, let

\[
A = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} .
\]

Then det(A ) = 1 + 1 = 2 . Let us see where the square with vertices (0,0) , (1,0) , (0,1) , and (1,1) gets sent. Clearly (0,0 ) gets sent to (0,0) .

\[
\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad
\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad
\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 2 \\ 0 \end{bmatrix} .
\]

The image of the square is another square with vertices $(0,0)$, $(1,-1)$, $(1,1)$, and $(2,0)$. The image square has a side of length $\sqrt{2}$ and is therefore of area 2.

If you think back to high school geometry, you may have seen a formula for computing the area of a parallelogram with vertices (0,0) , (a,c) , (b,d) and (a + b,c + d ) . And it is precisely

\[
\left| \det\left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) \right| .
\]

The vertical lines above mean absolute value. The matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ carries the unit square to the given parallelogram.

Now we define the determinant for larger matrices. We define $A_{ij}$ as the matrix $A$ with the $i$th row and the $j$th column deleted. To compute the determinant of a matrix, pick one row, say the $i$th row, and compute:

\[
\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij}) .
\]

For the first row we get

\[
\det(A) = a_{11}\det(A_{11}) - a_{12}\det(A_{12}) + a_{13}\det(A_{13}) - \cdots
\begin{cases}
+ a_{1n}\det(A_{1n}) & \text{if } n \text{ is odd,} \\
- a_{1n}\det(A_{1n}) & \text{if } n \text{ is even.}
\end{cases}
\]

We alternately add and subtract the determinants of the submatrices $A_{ij}$ for a fixed $i$ and all $j$. For a $3 \times 3$ matrix, picking the first row, we get $\det(A) = a_{11}\det(A_{11}) - a_{12}\det(A_{12}) + a_{13}\det(A_{13})$. For example,

\[
\begin{aligned}
\det\left( \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \right)
&= 1 \cdot \det\left( \begin{bmatrix} 5 & 6 \\ 8 & 9 \end{bmatrix} \right)
- 2 \cdot \det\left( \begin{bmatrix} 4 & 6 \\ 7 & 9 \end{bmatrix} \right)
+ 3 \cdot \det\left( \begin{bmatrix} 4 & 5 \\ 7 & 8 \end{bmatrix} \right) \\
&= 1(5 \cdot 9 - 6 \cdot 8) - 2(4 \cdot 9 - 6 \cdot 7) + 3(4 \cdot 8 - 5 \cdot 7) = 0 .
\end{aligned}
\]
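The cofactor expansion translates directly into a short recursive function. A sketch in Python (the helper name `det_cofactor` is ours; NumPy is used only to delete rows and columns):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A_{1j}: delete the first row and the jth column.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]]))   # 0.0, matching the worked example
```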

The numbers $(-1)^{i+j}\det(A_{ij})$ are called cofactors of the matrix and this way of computing the determinant is called the cofactor expansion. It is also possible to compute the determinant by expanding along columns (picking a column instead of a row above).

Note that a common notation for the determinant is a pair of vertical lines:

\[
\begin{vmatrix} a & b \\ c & d \end{vmatrix}
= \det\left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) .
\]

I personally find this notation confusing as vertical lines usually mean a positive quantity, while determinants can be negative. I will not use this notation in this book.

One of the most important properties of determinants (in the context of this course) is the following theorem.

Theorem 3.2.1. An $n \times n$ matrix $A$ is invertible if and only if $\det(A) \neq 0$.

In fact, there is a formula for the inverse of a 2 × 2 matrix

\[
\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1}
= \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} .
\]

Notice the determinant of the matrix in the denominator of the fraction. The formula only works if the determinant is nonzero, otherwise we are dividing by zero.
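The formula is easy to sanity-check numerically; a sketch using the matrix from the area example above, with NumPy assumed:

```python
import numpy as np

a, b, c, d = 1.0, 1.0, -1.0, 1.0
det = a * d - b * c                      # = 2, nonzero, so the inverse exists
A_inv = (1 / det) * np.array([[ d, -b],
                              [-c,  a]])

A = np.array([[a, b],
              [c, d]])
# A times its inverse should give the identity matrix.
print(np.allclose(A @ A_inv, np.eye(2)))   # True
```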

3.2.4 Solving linear systems

One application of matrices we will need is to solve systems of linear equations. This is best shown by example. Suppose that we have the following system of linear equations

\[
\begin{aligned}
2x_1 + 2x_2 + 2x_3 &= 2, \\
x_1 + x_2 + 3x_3 &= 5, \\
x_1 + 4x_2 + x_3 &= 10.
\end{aligned}
\]

Without changing the solution, we could swap equations in this system, we could multiply any of the equations by a nonzero number, and we could add a multiple of one equation to another equation. It turns out these operations always suffice to find a solution.

It is easier to write the system as a matrix equation. The system above can be written as

\[
\begin{bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 2 \\ 5 \\ 10 \end{bmatrix} .
\]

To solve the system we put the coefficient matrix (the matrix on the left hand side of the equation) together with the vector on the right hand side and get the so-called augmented matrix

\[
\left[ \begin{array}{ccc|c} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array} \right] .
\]

We apply the following three elementary operations.

(i)
Swap two rows.
(ii)
Multiply a row by a nonzero number.
(iii)
Add a multiple of one row to another row.

We keep doing these operations until we get into a state where it is easy to read off the answer, or until we get into a contradiction indicating no solution, for example if we come up with an equation such as 0 = 1 .

Let us work through the example. First multiply the first row by 1∕2 to obtain

\[
\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array} \right] .
\]

Now subtract the first row from the second and third row.

\[
\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 3 & 0 & 9 \end{array} \right] .
\]

Multiply the last row by 1∕3 and the second row by 1∕2 .

\[
\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 3 \end{array} \right] .
\]

Swap rows 2 and 3.

\[
\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array} \right] .
\]

Subtract the last row from the first, then subtract the second row from the first.

\[
\left[ \begin{array}{ccc|c} 1 & 0 & 0 & -4 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array} \right] .
\]

If we think about what equations this augmented matrix represents, we see that $x_1 = -4$, $x_2 = 3$, and $x_3 = 2$. We try this solution in the original system and, voilà, it works!
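The elimination we just did by hand is essentially what a linear algebra library automates. A sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2, 2, 2],
              [1, 1, 3],
              [1, 4, 1]], dtype=float)
b = np.array([2, 5, 10], dtype=float)

# Solve A x = b; internally this is (a variant of) row reduction.
x = np.linalg.solve(A, b)
print(x)   # [-4.  3.  2.]
```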

Exercise 3.2.1: Check that the solution above really solves the given equations.

We write the system in matrix notation as
\[
A\vec{x} = \vec{b},
\]
where $A$ is the matrix $\begin{bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end{bmatrix}$ and $\vec{b}$ is the vector $\begin{bmatrix} 2 \\ 5 \\ 10 \end{bmatrix}$. The solution can also be computed via the inverse,
\[
\vec{x} = A^{-1}A\vec{x} = A^{-1}\vec{b} .
\]

One last note to make about linear systems of equations is that it is possible that the solution is not unique (or that no solution exists). It is easy to tell if a solution does not exist. If during the row reduction you come up with a row where all the entries except the last one are zero (the last entry in a row corresponds to the right hand side of the equation) the system is inconsistent and has no solution. For example if for a system of 3 equations and 3 unknowns you find a row such as [0 0 0 | 1] in the augmented matrix, you know the system is inconsistent.

You generally try to use row operations until the following conditions are satisfied. The first nonzero entry in each row is called the leading entry.

(i)
There is only one leading entry in each column.
(ii)
All the entries above and below a leading entry are zero.
(iii)
All leading entries are 1.

Such a matrix is said to be in reduced row echelon form. The variables corresponding to columns with no leading entries are said to be free variables. Free variables mean that we can pick those variables to be anything we want and then solve for the rest of the unknowns.

Example 3.2.1: The following augmented matrix is in reduced row echelon form.

\[
\left[ \begin{array}{ccc|c} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right] .
\]

Suppose the variables are $x_1$, $x_2$, and $x_3$. Then $x_2$ is the free variable, $x_1 = 3 - 2x_2$, and $x_3 = 1$.
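Reduced row echelon form can also be computed symbolically; a sketch assuming SymPy, whose `Matrix.rref` method returns the reduced form together with the indices of the pivot (leading-entry) columns:

```python
from sympy import Matrix

# The augmented matrix from the example; it is already in
# reduced row echelon form, so rref() leaves it unchanged.
M = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 1],
            [0, 0, 0, 0]])
R, pivots = M.rref()
print(pivots)   # (0, 2): zero-based pivot columns; the middle column
                # has no leading entry, so x2 is a free variable
```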

On the other hand if during the row reduction process you come up with the matrix

\[
\left[ \begin{array}{ccc|c} 1 & 2 & 13 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 3 \end{array} \right],
\]

there is no need to go further. The last row corresponds to the equation $0x_1 + 0x_2 + 0x_3 = 3$, which is preposterous. Hence, no solution exists.

3.2.5 Computing the inverse

If the coefficient matrix is square and there exists a unique solution $\vec{x}$ to $A\vec{x} = \vec{b}$ for any $\vec{b}$, then $A$ is invertible. In fact, by multiplying both sides by $A^{-1}$ you can see that $\vec{x} = A^{-1}\vec{b}$. So it is useful to compute the inverse if you want to solve the equation for many different right hand sides $\vec{b}$.

We have a formula for the $2 \times 2$ inverse, but it is also not hard to compute inverses of larger matrices. While we will not have too much occasion to compute inverses of matrices larger than $2 \times 2$ by hand, let us touch on how to do it. Finding the inverse of $A$ is actually just solving a bunch of linear equations. If we can solve $A\vec{x}_k = \vec{e}_k$, where $\vec{e}_k$ is the vector with all zeros except a 1 at the $k$th position, then the inverse is the matrix with the columns $\vec{x}_k$ for $k = 1, \ldots, n$ (exercise: why?). Therefore, to find the inverse we write a larger $n \times 2n$ augmented matrix $[A \mid I]$, where $I$ is the identity matrix. We then perform row reduction. The reduced row echelon form of $[A \mid I]$ will be of the form $[I \mid A^{-1}]$ if and only if $A$ is invertible. We then just read off the inverse $A^{-1}$.
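The $[A \mid I]$ procedure is short to code up. A sketch of Gauss-Jordan elimination with naive partial pivoting (the function name is ours), assuming NumPy:

```python
import numpy as np

def inverse_via_row_reduction(A):
    """Row reduce [A | I]; if A is invertible the result is [I | A^-1]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Pick the row with the largest pivot entry and swap it into place.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if abs(aug[pivot, col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                 # make the leading entry 1
        for r in range(n):
            if r != col:
                aug[r] -= aug[r, col] * aug[col]  # zero out the rest of the column
    return aug[:, n:]

A = np.array([[2, 2, 2],
              [1, 1, 3],
              [1, 4, 1]])
A_inv = inverse_via_row_reduction(A)
print(np.allclose(A_inv @ A, np.eye(3)))   # True
```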

3.2.6 Exercises

Exercise 3.2.2: Solve $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \vec{x} = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$ by using matrix inverse.

Exercise 3.2.3: Compute determinant of $\begin{bmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{bmatrix}$.

Exercise 3.2.4: Compute determinant of $\begin{bmatrix} 1 & 2 & 3 & 1 \\ 4 & 0 & 5 & 0 \\ 6 & 0 & 7 & 0 \\ 8 & 0 & 10 & 1 \end{bmatrix}$. Hint: Expand along the proper row or column to make the calculations simpler.

Exercise 3.2.5: Compute inverse of $\begin{bmatrix} 1 & 2 & 3 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}$.

Exercise 3.2.6: For which $h$ is $\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & h \end{bmatrix}$ not invertible? Is there only one such $h$? Are there several? Infinitely many?

Exercise 3.2.7: For which $h$ is $\begin{bmatrix} h & 1 & 1 \\ 0 & h & 0 \\ 1 & 1 & h \end{bmatrix}$ not invertible? Find all such $h$.

Exercise 3.2.8: Solve $\begin{bmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{bmatrix} \vec{x} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$.

Exercise 3.2.9: Solve $\begin{bmatrix} 5 & 3 & 7 \\ 8 & 4 & 4 \\ 6 & 3 & 3 \end{bmatrix} \vec{x} = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}$.

Exercise 3.2.10: Solve $\begin{bmatrix} 3 & 2 & 3 & 0 \\ 3 & 3 & 3 & 3 \\ 0 & 2 & 4 & 2 \\ 2 & 3 & 4 & 3 \end{bmatrix} \vec{x} = \begin{bmatrix} 2 \\ 0 \\ 4 \\ 1 \end{bmatrix}$.

Exercise 3.2.11: Find 3 nonzero $2 \times 2$ matrices $A$, $B$, and $C$ such that $AB = AC$ but $B \neq C$.

Exercise 3.2.101: Compute determinant of $\begin{bmatrix} 1 & 1 & 1 \\ 2 & 3 & -5 \\ 1 & -1 & 0 \end{bmatrix}$.

Exercise 3.2.102: Find $t$ such that $\begin{bmatrix} 1 & t \\ -1 & 2 \end{bmatrix}$ is not invertible.

Exercise 3.2.103: Solve $\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \vec{x} = \begin{bmatrix} 10 \\ 20 \end{bmatrix}$.

Exercise 3.2.104: Suppose $a, b, c$ are nonzero numbers. Let $M = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$ and $N = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}$. a) Compute $M^{-1}$. b) Compute $N^{-1}$.