Notes on Diffy Qs


Note: 2 lectures, similar to §3.8 in [EP], §10.1 and §11.1 in [BD]

Before we tackle the Fourier series, we need to study the so-called boundary value problems (or endpoint problems). For example, suppose we have

\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0, \]

for some constant $\lambda$, where $x(t)$ is defined for $t$ in the interval $[a,b]$. Unlike before, when we specified the value of the solution and its derivative at a single point, we now specify the value of the solution at two different points. Note that $x = 0$ is a solution to this equation, so existence of solutions is not a problem here. Uniqueness of solutions is another issue. The general solution to $x'' + \lambda x = 0$ has two arbitrary constants present. It is, therefore, natural (but wrong) to believe that requiring two conditions guarantees a unique solution.

Example 4.1.1: Take $\lambda = 1$, $a = 0$, $b = \pi$. That is,

\[ x'' + x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]

Then $x = \sin t$ is another solution (besides $x = 0$) satisfying both boundary conditions. There are more. Write down the general solution of the differential equation, which is $x = A \cos t + B \sin t$. The condition $x(0) = 0$ forces $A = 0$. Letting $x(\pi) = 0$ does not give us any more information as $x = B \sin t$ already satisfies both boundary conditions. Hence, there are infinitely many solutions of the form $x = B \sin t$, where $B$ is an arbitrary constant.
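This non-uniqueness is easy to confirm symbolically. Here is a small sketch using SymPy (the tool is an assumption, not something the text uses) checking that $x = B \sin t$ solves the problem for every constant $B$:

```python
import sympy as sp

t, B = sp.symbols('t B')
x = B * sp.sin(t)  # candidate solution with an arbitrary constant B

# The ODE x'' + x = 0 holds identically in t and B ...
assert sp.simplify(sp.diff(x, t, 2) + x) == 0
# ... and so do both boundary conditions x(0) = 0 and x(pi) = 0.
assert x.subs(t, 0) == 0
assert x.subs(t, sp.pi) == 0
print("x = B*sin(t) satisfies the ODE and both boundary conditions")
```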

Example 4.1.2: On the other hand, consider $\lambda = 1$, $a = 0$, $b = 2$. That is,

\[ x'' + x = 0, \quad x(0) = 0, \quad x(2) = 0. \]

Then the general solution is $x = A \cos t + B \sin t$. Letting $x(0) = 0$ still forces $A = 0$. We apply the second condition to find $0 = x(2) = B \sin 2$. As $\sin 2 \neq 0$ we obtain $B = 0$. Therefore $x = 0$ is the unique solution to this problem.

What is going on? We will be interested in finding which constants $\lambda$ allow a nonzero solution, and we will be interested in finding those solutions. This problem is an analogue of finding eigenvalues and eigenvectors of matrices.

For basic Fourier series theory we will need the following three eigenvalue problems. We will consider more general equations, but we will postpone this until chapter 5.

\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0, \tag{4.1} \]

\[ x'' + \lambda x = 0, \quad x'(a) = 0, \quad x'(b) = 0, \tag{4.2} \]

and

\[ x'' + \lambda x = 0, \quad x(a) = x(b), \quad x'(a) = x'(b). \tag{4.3} \]

A number $\lambda$ is called an eigenvalue of (4.1) (resp. (4.2) or (4.3)) if and only if there exists a nonzero (not identically zero) solution to (4.1) (resp. (4.2) or (4.3)) given that specific $\lambda$. A nonzero solution is called a corresponding eigenfunction.

Note the similarity to eigenvalues and eigenvectors of matrices. The similarity is not just coincidental. If we think of the equations as differential operators, then we are doing the same exact thing. For example, let $L = -\frac{d^2}{dt^2}$. We are looking for nonzero functions $x$ satisfying certain endpoint conditions that solve $Lx = \lambda x$. A lot of the formalism from linear algebra can still apply here, though we will not pursue this line of reasoning too far.

Example 4.1.3: Let us find the eigenvalues and eigenfunctions of

\[ x'' + \lambda x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]

We have to handle the cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$ separately. First suppose that $\lambda > 0$. Then the general solution to $x'' + \lambda x = 0$ is

\[ x = A \cos(\sqrt{\lambda}\, t) + B \sin(\sqrt{\lambda}\, t). \]

The condition $x(0) = 0$ implies immediately $A = 0$. Next

\[ 0 = x(\pi) = B \sin(\sqrt{\lambda}\, \pi). \]

If $B$ is zero, then $x$ is not a nonzero solution. So to get a nonzero solution we must have that $\sin(\sqrt{\lambda}\, \pi) = 0$. Hence, $\sqrt{\lambda}\, \pi$ must be an integer multiple of $\pi$. In other words, $\sqrt{\lambda} = k$ for a positive integer $k$. Hence the positive eigenvalues are $k^2$ for all integers $k \geq 1$. Corresponding eigenfunctions can be taken as $x = \sin(kt)$. Just like for eigenvectors, we get all the multiples of an eigenfunction, so we only need to pick one.

Now suppose that $\lambda = 0$. In this case the equation is $x'' = 0$, and its general solution is $x = At + B$. The condition $x(0) = 0$ implies that $B = 0$, and $x(\pi) = 0$ implies that $A = 0$. This means that $\lambda = 0$ is not an eigenvalue.

Finally, suppose that $\lambda < 0$. In this case we have the general solution

\[ x = A \cosh(\sqrt{-\lambda}\, t) + B \sinh(\sqrt{-\lambda}\, t). \]

Letting $x(0) = 0$ implies that $A = 0$ (recall $\cosh 0 = 1$ and $\sinh 0 = 0$). So our solution must be $x = B \sinh(\sqrt{-\lambda}\, t)$ and satisfy $x(\pi) = 0$. This is only possible if $B$ is zero. Why? Because $\sinh \xi$ is only zero when $\xi = 0$. You should plot sinh to see this fact. We can also see this from the definition of sinh. We get $\sinh \xi = \frac{e^{\xi} - e^{-\xi}}{2}$. Hence $\sinh \xi = 0$ implies $e^{\xi} = e^{-\xi}$, which implies $e^{2\xi} = 1$ and that is only true if $\xi = 0$. So there are no negative eigenvalues.

In summary, the eigenvalues and corresponding eigenfunctions are

\[ \lambda_k = k^2, \qquad x_k = \sin(kt), \qquad \text{for all integers } k \geq 1. \]
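The summary can be double-checked symbolically. A short SymPy sketch (SymPy itself is an assumption, not part of the text) verifying that $\sin(kt)$ solves $x'' + k^2 x = 0$ with $x(0) = x(\pi) = 0$ for every positive integer $k$:

```python
import sympy as sp

t = sp.symbols('t')
k = sp.symbols('k', integer=True, positive=True)
x = sp.sin(k * t)  # claimed eigenfunction for the eigenvalue lambda = k^2

# The ODE x'' + k^2 x = 0 holds identically in t and k.
assert sp.simplify(sp.diff(x, t, 2) + k**2 * x) == 0
# Both Dirichlet conditions hold: sin(0) = 0, and sin(k*pi) = 0 for integer k.
assert x.subs(t, 0) == 0
assert x.subs(t, sp.pi) == 0
print("sin(k*t) is an eigenfunction for lambda = k**2")
```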

Example 4.1.4: Let us compute the eigenvalues and eigenfunctions of

\[ x'' + \lambda x = 0, \quad x'(0) = 0, \quad x'(\pi) = 0. \]

Again we have to handle the cases $\lambda > 0$, $\lambda = 0$, $\lambda < 0$ separately. First suppose that $\lambda > 0$. The general solution to $x'' + \lambda x = 0$ is $x = A \cos(\sqrt{\lambda}\, t) + B \sin(\sqrt{\lambda}\, t)$. So

\[ x' = -A\sqrt{\lambda}\, \sin(\sqrt{\lambda}\, t) + B\sqrt{\lambda}\, \cos(\sqrt{\lambda}\, t). \]

The condition $x'(0) = 0$ implies immediately $B = 0$. Next

\[ 0 = x'(\pi) = -A\sqrt{\lambda}\, \sin(\sqrt{\lambda}\, \pi). \]

Again $A$ cannot be zero if $\lambda$ is to be an eigenvalue, and $\sin(\sqrt{\lambda}\, \pi)$ is only zero if $\sqrt{\lambda} = k$ for a positive integer $k$. Hence the positive eigenvalues are again $k^2$ for all integers $k \geq 1$. And the corresponding eigenfunctions can be taken as $x = \cos(kt)$.

Now suppose that $\lambda = 0$. In this case the equation is $x'' = 0$ and the general solution is $x = At + B$, so $x' = A$. The condition $x'(0) = 0$ implies that $A = 0$. The condition $x'(\pi) = 0$ also implies $A = 0$. Hence $B$ could be anything (let us take it to be 1). So $\lambda = 0$ is an eigenvalue and $x = 1$ is a corresponding eigenfunction.

Finally, let $\lambda < 0$. In this case the general solution is $x = A \cosh(\sqrt{-\lambda}\, t) + B \sinh(\sqrt{-\lambda}\, t)$ and hence

\[ x' = A\sqrt{-\lambda}\, \sinh(\sqrt{-\lambda}\, t) + B\sqrt{-\lambda}\, \cosh(\sqrt{-\lambda}\, t). \]

We have already seen (with roles of $A$ and $B$ switched) that for this expression to be zero at $t = 0$ and $t = \pi$, we must have $A = B = 0$. Hence there are no negative eigenvalues.

In summary, the eigenvalues and corresponding eigenfunctions are

\[ \lambda_k = k^2, \qquad x_k = \cos(kt), \qquad \text{for all integers } k \geq 1, \]

and there is another eigenvalue

\[ \lambda_0 = 0 \qquad \text{with corresponding eigenfunction} \qquad x_0 = 1. \]

The following problem is the one that leads to the general Fourier series.

Example 4.1.5: Let us compute the eigenvalues and eigenfunctions of

\[ x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi). \]

We have not specified the values or the derivatives at the endpoints, but rather that they are the same at the beginning and at the end of the interval.

Let us skip $\lambda < 0$. The computations are the same as before, and again we find that there are no negative eigenvalues.

For $\lambda = 0$, the general solution is $x = At + B$. The condition $x(-\pi) = x(\pi)$ implies that $A = 0$ ($-A\pi + B = A\pi + B$ implies $A = 0$). The second condition $x'(-\pi) = x'(\pi)$ says nothing about $B$ and hence $\lambda = 0$ is an eigenvalue with a corresponding eigenfunction $x = 1$.

For $\lambda > 0$ we get that $x = A \cos(\sqrt{\lambda}\, t) + B \sin(\sqrt{\lambda}\, t)$. Now

\[ A \cos(-\sqrt{\lambda}\, \pi) + B \sin(-\sqrt{\lambda}\, \pi) = A \cos(\sqrt{\lambda}\, \pi) + B \sin(\sqrt{\lambda}\, \pi). \]

We remember that $\cos(-\theta) = \cos(\theta)$ and $\sin(-\theta) = -\sin(\theta)$. Therefore,

\[ A \cos(\sqrt{\lambda}\, \pi) - B \sin(\sqrt{\lambda}\, \pi) = A \cos(\sqrt{\lambda}\, \pi) + B \sin(\sqrt{\lambda}\, \pi). \]

Hence either $B = 0$ or $\sin(\sqrt{\lambda}\, \pi) = 0$. Similarly (exercise) if we differentiate $x$ and plug in the second condition we find that $A = 0$ or $\sin(\sqrt{\lambda}\, \pi) = 0$. Therefore, unless we want $A$ and $B$ to both be zero (which we do not) we must have $\sin(\sqrt{\lambda}\, \pi) = 0$. Hence, $\sqrt{\lambda}$ is an integer and the eigenvalues are yet again $\lambda = k^2$ for an integer $k \geq 1$. In this case, however, $x = A \cos(kt) + B \sin(kt)$ is an eigenfunction for any $A$ and any $B$. So we have two linearly independent eigenfunctions $\sin(kt)$ and $\cos(kt)$. Remember that for a matrix we can also have two eigenvectors corresponding to a single eigenvalue if the eigenvalue is repeated.

In summary, the eigenvalues and corresponding eigenfunctions are

\[ \lambda_k = k^2, \qquad x_k = \sin(kt) \ \text{and} \ x_k = \cos(kt), \qquad \text{for all integers } k \geq 1, \]
\[ \lambda_0 = 0, \qquad x_0 = 1. \]
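Both families of eigenfunctions can be checked against the periodic conditions at once. A brief SymPy sketch (the tooling is an assumption of this sketch):

```python
import sympy as sp

t = sp.symbols('t')
k = sp.symbols('k', integer=True, positive=True)

for x in (sp.cos(k * t), sp.sin(k * t)):
    # Each solves x'' + k^2 x = 0 ...
    assert sp.simplify(sp.diff(x, t, 2) + k**2 * x) == 0
    # ... and matches values and derivatives at the two endpoints.
    assert sp.simplify(x.subs(t, -sp.pi) - x.subs(t, sp.pi)) == 0
    dx = sp.diff(x, t)
    assert sp.simplify(dx.subs(t, -sp.pi) - dx.subs(t, sp.pi)) == 0
print("cos(k*t) and sin(k*t) satisfy the periodic conditions")
```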

Something that will be very useful in the next section is the orthogonality property of the eigenfunctions. This is an analogue of the following fact about eigenvectors of a matrix. A matrix $A$ is called symmetric if $A = A^T$. Eigenvectors for two distinct eigenvalues of a symmetric matrix are orthogonal. The differential operators we are dealing with act much like a symmetric matrix. We, therefore, get the following theorem.

Theorem 4.1.1. Suppose that $x_1(t)$ and $x_2(t)$ are two eigenfunctions of the problem (4.1), (4.2) or (4.3) for two different eigenvalues $\lambda_1$ and $\lambda_2$. Then they are orthogonal in the sense that

\[ \int_a^b x_1(t)\, x_2(t)\, dt = 0. \]

The terminology comes from the fact that the integral is a type of inner product. We will expand on this in the next section. The theorem has a very short, elegant, and illuminating proof so let us give it here. First, we have the following two equations.

\[ x_1'' + \lambda_1 x_1 = 0 \qquad \text{and} \qquad x_2'' + \lambda_2 x_2 = 0. \]

Multiply the first by $x_2$ and the second by $x_1$ and subtract to get

\[ (\lambda_1 - \lambda_2)\, x_1 x_2 = x_2'' x_1 - x_2 x_1''. \]

Now integrate both sides of the equation:

\[ (\lambda_1 - \lambda_2) \int_a^b x_1 x_2 \, dt = \int_a^b \left( x_2'' x_1 - x_2 x_1'' \right) dt = \int_a^b \frac{d}{dt} \left( x_2' x_1 - x_2 x_1' \right) dt = \Bigl[ x_2' x_1 - x_2 x_1' \Bigr]_{t=a}^{b} = 0. \]

The last equality holds because of the boundary conditions. For example, if we consider (4.1) we have $x_1(a) = x_1(b) = x_2(a) = x_2(b) = 0$, and so $x_2' x_1 - x_2 x_1'$ is zero at both $a$ and $b$. As $\lambda_1 \neq \lambda_2$, the theorem follows.

Exercise 4.1.1 (easy): Finish the proof of the theorem (check the last equality in the proof) for the cases (4.2) and (4.3).

The function $\sin(nt)$ is an eigenfunction for the problem $x'' + \lambda x = 0$, $x(0) = 0$, $x(\pi) = 0$. Hence for positive integers $n$ and $m$ we have the integrals

\[ \int_0^{\pi} \sin(mt)\, \sin(nt)\, dt = 0, \qquad \text{when } m \neq n. \]

Similarly

\[ \int_0^{\pi} \cos(mt)\, \cos(nt)\, dt = 0, \qquad \text{when } m \neq n. \]

And finally we also get

\[ \int_{-\pi}^{\pi} \sin(mt)\, \sin(nt)\, dt = 0, \qquad \text{when } m \neq n, \]

\[ \int_{-\pi}^{\pi} \cos(mt)\, \cos(nt)\, dt = 0, \qquad \text{when } m \neq n, \]

and

\[ \int_{-\pi}^{\pi} \cos(mt)\, \sin(nt)\, dt = 0 \qquad \text{(even if } m = n\text{)}. \]
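These integrals are straightforward to verify directly. A quick SymPy check with two particular indices (the values 2 and 5 are purely illustrative; any distinct positive integers work):

```python
import sympy as sp

t = sp.symbols('t')
m, n = 2, 5  # any two distinct positive integers

# Orthogonality on [0, pi]:
assert sp.integrate(sp.sin(m*t) * sp.sin(n*t), (t, 0, sp.pi)) == 0
assert sp.integrate(sp.cos(m*t) * sp.cos(n*t), (t, 0, sp.pi)) == 0

# Orthogonality on [-pi, pi]; the mixed sine-cosine integral
# vanishes even when the indices are equal.
assert sp.integrate(sp.sin(m*t) * sp.sin(n*t), (t, -sp.pi, sp.pi)) == 0
assert sp.integrate(sp.cos(m*t) * sp.cos(n*t), (t, -sp.pi, sp.pi)) == 0
assert sp.integrate(sp.cos(m*t) * sp.sin(m*t), (t, -sp.pi, sp.pi)) == 0
print("all orthogonality integrals are zero")
```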

We now touch on a very useful theorem in the theory of differential equations. The theorem holds in a more general setting than we are going to state it, but for our purposes the following statement is sufficient. We will give a slightly more general version in chapter 5.

Theorem 4.1.2 (Fredholm alternative^{1}). Exactly one of the following statements holds. Either

\[ x'' + \lambda x = 0, \quad x(0) = 0, \quad x(\pi) = 0 \tag{4.4} \]

has a nonzero solution, or

\[ x'' + \lambda x = f(t), \quad x(0) = 0, \quad x(\pi) = 0 \tag{4.5} \]

has a unique solution for every function $f$ continuous on $[0, \pi]$.

The theorem is also true for the other types of boundary conditions we considered. The theorem means that if $\lambda$ is not an eigenvalue, the nonhomogeneous equation (4.5) has a unique solution for every right-hand side. On the other hand if $\lambda$ is an eigenvalue, then (4.5) need not have a solution for every $f$, and furthermore, even if it happens to have a solution, the solution is not unique.

We also want to reinforce the idea here that linear differential operators have much in common with matrices. So it is no surprise that there is a finite-dimensional version of the Fredholm alternative for matrices as well. Let $A$ be an $n \times n$ matrix. The Fredholm alternative then states that either $(A - \lambda I)\vec{x} = \vec{0}$ has a nontrivial solution, or $(A - \lambda I)\vec{x} = \vec{b}$ has a unique solution for every $\vec{b}$.
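The matrix statement is easy to see in action. A small NumPy sketch (the matrix and the values of $\lambda$ are illustrative assumptions, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric matrix with eigenvalues 1 and 3
b = np.array([1.0, 0.0])
I = np.eye(2)

# lam = 3 is an eigenvalue: A - 3I is singular, so (A - 3I)x = 0
# has nontrivial solutions.
print(np.linalg.det(A - 3.0 * I))  # 0 up to rounding: singular

# mu = 2 is not an eigenvalue: (A - 2I)x = b has a unique solution.
x = np.linalg.solve(A - 2.0 * I, b)
print(np.allclose((A - 2.0 * I) @ x, b))  # True
```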

A lot of intuition from linear algebra can be applied to linear differential operators, but one must be careful of course. For example, one difference we have already seen is that in general a differential operator will have infinitely many eigenvalues, while a matrix has only finitely many.

Let us consider a physical application of an endpoint problem. Suppose we have a tightly stretched quickly spinning elastic string or rope of uniform linear density $\rho$, for example in $\mathrm{kg/m}$. Let us put this problem into the $xy$-plane; both $x$ and $y$ are in meters. The $x$-axis represents the position on the string. The string rotates at angular velocity $\omega$, in $\mathrm{radians/second}$. Imagine that the whole $xy$-plane rotates at angular velocity $\omega$. This way, the string stays in this $xy$-plane and $y$ measures its deflection from the equilibrium position, $y = 0$, on the $x$-axis. Hence the graph of $y$ gives the shape of the string. We consider an ideal string with no volume, just a mathematical curve. We suppose the tension on the string is a constant $T$ in Newtons. Assuming that the deflection is small, we can use Newton's second law (let us skip the derivation) to get the equation

\[ T y'' + \rho\, \omega^2 y = 0. \]

To check the units notice that the units of $y''$ are $\mathrm{m/m^2}$, as the derivative is in terms of $x$.

Let $L$ be the length of the string (in meters) and the string is fixed at the beginning and end points. Hence, $y(0) = 0$ and $y(L) = 0$. See Figure 4.1.

We rewrite the equation as $y'' + \frac{\rho \omega^2}{T} y = 0$. The setup is similar to Example 4.1.3, except for the interval length being $L$ instead of $\pi$. We are looking for eigenvalues of $y'' + \lambda y = 0$, $y(0) = 0$, $y(L) = 0$, where $\lambda = \frac{\rho \omega^2}{T}$. As before there are no nonpositive eigenvalues. With $\lambda > 0$, the general solution to the equation is $y = A \cos(\sqrt{\lambda}\, x) + B \sin(\sqrt{\lambda}\, x)$. The condition $y(0) = 0$ implies that $A = 0$ as before. The condition $y(L) = 0$ implies that $\sin(\sqrt{\lambda}\, L) = 0$ and hence $\sqrt{\lambda}\, L = k\pi$ for some integer $k > 0$, so

\[ \frac{\rho \omega^2}{T} = \lambda = \frac{k^2 \pi^2}{L^2}. \]

What does this say about the shape of the string? It says that for all parameters $\rho$, $\omega$, $T$ not satisfying the above equation, the string is in the equilibrium position, $y = 0$. When $\frac{\rho \omega^2}{T} = \frac{k^2 \pi^2}{L^2}$, then the string will "pop out" some distance $B$. We cannot compute $B$ with the information we have.

Let us assume that $\rho$ and $T$ are fixed and we are changing $\omega$. For most values of $\omega$ the string is in the equilibrium state. When the angular velocity $\omega$ hits a value $\omega = \frac{k\pi}{L}\sqrt{\frac{T}{\rho}}$, then the string pops out and has the shape of a sin wave crossing the $x$-axis $k - 1$ times between the end points. When $\omega$ changes again, the string returns to the equilibrium position. The higher the angular velocity, the more times it crosses the $x$-axis when it is popped out.
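Solving $\frac{\rho \omega^2}{T} = \frac{k^2 \pi^2}{L^2}$ for $\omega$ gives the critical angular velocities $\omega_k = \frac{k\pi}{L}\sqrt{T/\rho}$. A short sketch tabulating the first few, with illustrative values of $L$, $T$, and $\rho$ (not taken from the text):

```python
import math

L = 1.5    # length of the string in meters (illustrative)
T = 10.0   # tension in newtons (illustrative)
rho = 0.2  # linear density in kg/m (illustrative)

# The string pops out exactly when rho*omega**2/T == (k*pi/L)**2,
# i.e. at omega_k = (k*pi/L)*sqrt(T/rho); omega_k grows linearly in k.
for k in range(1, 4):
    omega_k = (k * math.pi / L) * math.sqrt(T / rho)
    print(f"k = {k}: omega = {omega_k:.2f} rad/s")
```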

For another example, if you have a spinning jump rope (then $k = 1$ as it is completely "popped out") and you pull on the ends to increase the tension, then the velocity also increases for the rope to stay "popped out".

Hint for the following exercises: Note that when $\lambda > 0$, then $\cos(\sqrt{\lambda}\, t)$ and $\sin(\sqrt{\lambda}\, t)$ are also solutions of the homogeneous equation.

Exercise 4.1.6: We skipped the case of $\lambda < 0$ for the boundary value problem $x'' + \lambda x = 0$, $x(-\pi) = x(\pi)$, $x'(-\pi) = x'(\pi)$. Finish the calculation and show that there are no negative eigenvalues.

Exercise 4.1.101: Consider a spinning string of length 2, linear density 0.1, and tension 3. Find the smallest angular velocity for which the string pops out.

Exercise 4.1.102: Suppose $x'' + \lambda x = 0$ and $x(0) = 1$, $x(\pi) = 1$. Find all $\lambda$ for which there is more than one solution. Also find the corresponding solutions (only for the eigenvalues).

Exercise 4.1.104: Consider $x' + \lambda x = 0$ and $x(0) = 0$, $x(1) = 0$. Why does it not have any eigenvalues? Why does any first order equation with two endpoint conditions such as above have no eigenvalues?