
\(\require{cancel}\newcommand{\nicefrac}[2]{{{}^{#1}}\!/\!{{}_{#2}}}
\newcommand{\unitfrac}[3][\!\!]{#1 \,\, {{}^{#2}}\!/\!{{}_{#3}}}
\newcommand{\unit}[2][\!\!]{#1 \,\, #2}
\newcommand{\noalign}[1]{}
\newcommand{\qed}{\qquad \Box}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
\)

We have encountered several different eigenvalue problems such as:

\begin{equation*}
X''(x) + \lambda X(x) = 0
\end{equation*}

with different boundary conditions

\begin{equation*}
\begin{array}{rrl}
X(0) = 0 & ~~X(L) = 0 & ~~\text{(Dirichlet) or}, \\
X'(0) = 0 & ~~X'(L) = 0 & ~~\text{(Neumann) or}, \\
X'(0) = 0 & ~~X(L) = 0 & ~~\text{(Mixed) or}, \\
X(0) = 0 & ~~X'(L) = 0 & ~~\text{(Mixed)}, \ldots
\end{array}
\end{equation*}

For example, for the insulated wire, Dirichlet conditions correspond to applying a zero temperature at the ends, Neumann conditions mean insulating the ends, etc. Other types of endpoint conditions also arise naturally, such as the *Robin boundary conditions*

\begin{equation*}
hX(0) - X'(0) = 0 \qquad hX(L) + X'(L) = 0 ,
\end{equation*}

for some constant \(h\text{.}\) These conditions come up when the ends are immersed in some medium.

Boundary value problems came up in the study of the heat equation \(u_t =
k u_{xx}\) when we were trying to solve the equation by the method of separation of variables. In the computation we encountered a certain eigenvalue problem and found the eigenfunctions \(X_n(x)\text{.}\) We then found the *eigenfunction decomposition* of the initial temperature \(f(x) = u(x,0)\) in terms of the eigenfunctions

\begin{equation*}
f(x) = \sum_{n=1}^\infty c_n X_n(x) .
\end{equation*}

Once we had this decomposition and found suitable \(T_n(t)\) such that \(T_n(0) = 1\) and such that \(T_n(t)X_n(x)\) were solutions of the heat equation, the solution to the original problem, including the initial condition, could be written as

\begin{equation*}
u(x,t) = \sum_{n=1}^\infty c_n T_n(t) X_n(x) .
\end{equation*}

We will try to solve more general problems using this method. First, we will study second order linear equations of the form

\begin{equation}
\frac{d}{dx} \left( p(x) \frac{dy}{dx} \right)
- q(x) y + \lambda r(x) y = 0 .\label{SL_eq}\tag{1}
\end{equation}

Essentially any second order linear equation of the form \(a(x) y'' + b(x) y' + c(x) y + \lambda d(x) y = 0\) can be written as (1) after multiplying by a proper factor.

###### Example 5.1.1

[Bessel] Put the following equation into the form (1):

\begin{equation*}
x^2 y'' + xy' + \left(\lambda x^2 - n^2\right)y = 0 .
\end{equation*}

Multiply both sides by \(\frac{1}{x}\) to obtain

\begin{equation*}
\frac{1}{x} \left( x^2 y'' + xy' + \left(\lambda x^2 - n^2\right)y \right)
=
x y'' + y' + \left(\lambda x - \frac{n^2}{x}\right)y
=
\frac{d}{dx} \left( x \frac{dy}{dx} \right)
- \frac{n^2}{x} y + \lambda x y = 0.
\end{equation*}
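A quick numerical spot check of this computation (our own sketch, not part of the text): pick a test function whose derivatives are known exactly and verify at a sample point that the Sturm-Liouville form equals the original equation multiplied by \(\frac{1}{x}\text{.}\)

```python
import math

lam, n = 3.0, 2   # arbitrary sample values for lambda and n (ours)
x = 1.7           # arbitrary sample point away from x = 0

# Test function with exactly known derivatives: y = sin(2x)
y   = math.sin(2 * x)
yp  = 2 * math.cos(2 * x)     # y'
ypp = -4 * math.sin(2 * x)    # y''

# Original Bessel form, multiplied through by 1/x:
original = (x**2 * ypp + x * yp + (lam * x**2 - n**2) * y) / x

# Sturm-Liouville form, expanding d/dx(x y') = y' + x y'' by the product rule:
sl_form = (yp + x * ypp) - (n**2 / x) * y + lam * x * y

print(abs(original - sl_form))  # zero up to rounding error
```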

The so-called *Sturm-Liouville problem*^{ 1 }Named after the French mathematicians Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882). is to seek nontrivial solutions to

\begin{equation}
\boxed{~~
\begin{aligned}
&\frac{d}{dx} \left( p(x) \frac{dy}{dx} \right)
- q(x) y + \lambda r(x) y = 0, \qquad a < x < b, \\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0, \\
&\beta_1 y(b) + \beta_2 y'(b) = 0.
\end{aligned}
~~}\label{sl_slprob}\tag{2}
\end{equation}

In particular, we seek \(\lambda\)s that allow for nontrivial solutions. The \(\lambda\)s that admit nontrivial solutions are called the *eigenvalues* and the corresponding nontrivial solutions are called *eigenfunctions*. The constants \(\alpha_1\) and \(\alpha_2\) should not both be zero, and the same goes for \(\beta_1\) and \(\beta_2\text{.}\)

###### Theorem 5.1.1

Suppose \(p(x)\text{,}\) \(p'(x)\text{,}\) \(q(x)\) and \(r(x)\) are continuous on \([a,b]\) and suppose \(p(x) > 0\) and \(r(x) > 0\) for all \(x\) in \([a,b]\text{.}\) Then the Sturm-Liouville problem (2) has an increasing sequence of eigenvalues

\begin{equation*}
\lambda_1 < \lambda_2 < \lambda_3 < \cdots
\end{equation*}

such that

\begin{equation*}
\lim_{n \to \infty} \lambda_n = +\infty
\end{equation*}

and such that to each \(\lambda_n\) there is (up to a constant multiple) a single eigenfunction \(y_n(x)\text{.}\)

Moreover, if \(q(x) \geq 0\) and \(\alpha_1, \alpha_2, \beta_1, \beta_2 \geq 0\text{,}\) then \(\lambda_n \geq 0\) for all \(n\text{.}\)

Problems satisfying the hypothesis of the theorem are called *regular Sturm-Liouville problems* and we will only consider such problems here. That is, a regular problem is one where \(p(x)\text{,}\) \(p'(x)\text{,}\) \(q(x)\) and \(r(x)\) are continuous, \(p(x) > 0\text{,}\) \(r(x) > 0\text{,}\) \(q(x) \geq 0\text{,}\) and \(\alpha_1, \alpha_2, \beta_1, \beta_2 \geq 0\text{.}\) Note: Be careful about the signs. Also be careful about the inequalities for \(r\) and \(p\text{;}\) they must be strict for all \(x\text{!}\)

When zero is an eigenvalue, we usually start labeling the eigenvalues at 0 rather than at 1 for convenience.

The problem \(y''+\lambda y = 0\text{,}\) \(0 < x < L\text{,}\) \(y(0) = 0\text{,}\) and \(y(L) = 0\) is a regular Sturm-Liouville problem. Here \(p(x) = 1\text{,}\) \(q(x) = 0\text{,}\) \(r(x) = 1\text{,}\) and we have \(p(x) = 1 > 0\) and \(r(x) = 1 > 0\text{.}\) The eigenvalues are \(\lambda_n = \frac{n^2 \pi^2}{L^2}\) and the eigenfunctions are \(y_n(x) = \sin\bigl(\frac{n\pi}{L} x\bigr)\text{.}\) All eigenvalues are nonnegative as predicted by the theorem.

Find eigenvalues and eigenfunctions for

\begin{equation*}
y'' + \lambda y = 0, \quad y'(0) = 0, \quad y'(1) = 0.
\end{equation*}

Identify the \(p, q, r, \alpha_j, \beta_j\text{.}\) Can you use the theorem to make the search for eigenvalues easier? (Hint: Consider the condition \(-y'(0)=0\))

Find eigenvalues and eigenfunctions of the problem

\begin{equation*}
\begin{aligned}
& y''+\lambda y = 0, \quad 0 < x < 1 , \\
& hy(0)- y'(0) = 0, \quad y'(1) = 0, \quad h > 0.
\end{aligned}
\end{equation*}

These equations give a regular Sturm-Liouville problem.

Identify \(p, q, r, \alpha_j, \beta_j\) in the example above.

First note that \(\lambda \geq 0\) by Theorem 5.1.1. Therefore, the general solution (without boundary conditions) is

\begin{equation*}
\begin{aligned}
& y(x) = A \cos (\! \sqrt{\lambda}\, x) + B \sin (\!
\sqrt{\lambda}\, x) & & \qquad \text{if } \; \lambda > 0 , \\
& y(x) = A x + B & & \qquad \text{if } \; \lambda = 0 .
\end{aligned}
\end{equation*}

Let us see if \(\lambda = 0\) is an eigenvalue: We must satisfy \(0 = hB - A\) and \(A = 0\text{,}\) hence \(B=0\) (as \(h > 0\)). Therefore, 0 is not an eigenvalue (no nonzero solution, so no eigenfunction).

Now let us try \(\lambda > 0\text{.}\) We plug in the boundary conditions.

\begin{equation*}
\begin{aligned}
& 0 = h A - \sqrt{\lambda}\, B , \\
& 0 = -A \sqrt{\lambda}\, \sin (\!\sqrt{\lambda}) +B \sqrt{\lambda}\,
\cos (\!\sqrt{\lambda}) .
\end{aligned}
\end{equation*}

If \(A=0\text{,}\) then \(B=0\) and vice versa; hence for a nontrivial solution both must be nonzero. So \(B = \frac{hA}{\sqrt{\lambda}}\text{,}\) and \(0 = -A \sqrt{\lambda}\, \sin (\! \sqrt{\lambda}) + \frac{hA}{\sqrt{\lambda}} \sqrt{\lambda}\, \cos (\! \sqrt{\lambda})\text{.}\) As \(A \not= 0\) we get

\begin{equation*}
0 =
- \sqrt{\lambda}\, \sin (\! \sqrt{\lambda}) + h \cos (\! \sqrt{\lambda}) ,
\end{equation*}

or

\begin{equation*}
\frac{h}{\sqrt{\lambda}} = \tan \sqrt{\lambda} .
\end{equation*}

Now use a computer to find \(\lambda_n\text{.}\) There are tables available, though using a computer or a graphing calculator is far more convenient nowadays. The easiest method is to plot the functions \(\frac{h}{x}\) and \(\tan x\) and see for which \(x\) they intersect. There are infinitely many intersections. Denote the first intersection by \(\sqrt{\lambda_1}\text{,}\) the second intersection by \(\sqrt{\lambda_2}\text{,}\) etc. For example, when \(h=1\text{,}\) we get \(\sqrt{\lambda_1} \approx 0.86\text{,}\) \(\sqrt{\lambda_2} \approx 3.43\text{,}\) and so on. That is, \(\lambda_1 \approx 0.74\text{,}\) \(\lambda_2 \approx 11.73\text{,}\) and so on. A plot for \(h=1\) is given in Figure 6.1.7. The appropriate eigenfunction (let \(A = 1\) for convenience, then \(B=\frac{h}{\sqrt{\lambda}}\)) is

\begin{equation*}
y_n(x) = \cos (\! \sqrt{\lambda_n}\, x ) + \frac{h}{\sqrt{\lambda_n}} \,
\sin (\!\sqrt{\lambda_n} \, x ) .
\end{equation*}

When \(h=1\) we get (approximately)

\begin{equation*}
y_1(x) \approx \cos (0.86\, x ) + \frac{1}{0.86} \,
\sin (0.86 \, x ) , \qquad
y_2(x) \approx \cos (3.43\, x ) + \frac{1}{3.43} \,
\sin (3.43 \, x ) , \qquad \ldots .
\end{equation*}
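The root search described above can be sketched in Python (our illustration; the function name `sl_roots` is ours). Instead of working with \(\tan\) directly, we bisect \(g(x) = x\sin x - h\cos x\text{,}\) which has the same positive roots as \(\tan x = \frac{h}{x}\) but no poles:

```python
import math

def sl_roots(h, count):
    # Positive solutions of tan(x) = h/x, i.e. sqrt(lambda_n).  We bisect
    # g(x) = x*sin(x) - h*cos(x), which has the same roots but no poles;
    # the k-th root lies in (k*pi, k*pi + pi/2), where g changes sign.
    g = lambda x: x * math.sin(x) - h * math.cos(x)
    roots = []
    for k in range(count):
        a, b = k * math.pi, k * math.pi + math.pi / 2
        for _ in range(100):            # plain bisection
            m = (a + b) / 2
            a, b = (a, m) if g(a) * g(m) <= 0 else (m, b)
        roots.append((a + b) / 2)
    return roots

r = sl_roots(h=1, count=2)
print(r)                   # approx [0.860, 3.426]
print([x * x for x in r])  # the eigenvalues: approx [0.740, 11.735]
```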

We have seen the notion of orthogonality before. For example, we have shown that \(\sin (nx)\) are orthogonal for distinct \(n\) on \([0,\pi]\text{.}\) For general Sturm-Liouville problems we will need a more general setup. Let \(r(x)\) be a *weight function* (any function, though generally we will assume it is positive) on \([a,b]\text{.}\) Two functions \(f(x)\text{,}\) \(g(x)\) are said to be *orthogonal* with respect to the weight function \(r(x)\) when

\begin{equation*}
\int_a^b f(x) \, g(x) \, r(x) ~dx = 0 .
\end{equation*}

In this setting, we define the *inner product* as

\begin{equation*}
\langle f , g \rangle \overset{\text{def}}{=} \int_a^b f(x) \, g(x) \, r(x) ~dx ,
\end{equation*}

and then say \(f\) and \(g\) are orthogonal whenever \(\langle f , g \rangle = 0\text{.}\) The results and concepts are again analogous to finite dimensional linear algebra.

The idea of the given inner product is that those \(x\) where \(r(x)\) is greater have more weight. Nontrivial (nonconstant) \(r(x)\) arise naturally, for example from a change of variables. Hence, you could think of a change of variables such that \(d\xi = r(x)~ dx\text{.}\)

We have the following orthogonality property of eigenfunctions of a regular Sturm-Liouville problem.

Suppose we have a regular Sturm-Liouville problem

\begin{equation*}
\begin{aligned}
&\frac{d}{dx} \left( p(x) \frac{dy}{dx} \right)
- q(x) y + \lambda r(x) y = 0 , \\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0 , \\
&\beta_1 y(b) + \beta_2 y'(b) = 0 .
\end{aligned}
\end{equation*}

Let \(y_j\) and \(y_k\) be two distinct eigenfunctions for two distinct eigenvalues \(\lambda_j\) and \(\lambda_k\text{.}\) Then

\begin{equation*}
\int_a^b y_j(x) \, y_k(x) \, r(x) ~dx = 0,
\end{equation*}

that is, \(y_j\) and \(y_k\) are orthogonal with respect to the weight function \(r\text{.}\)

The proof is very similar to that of the analogous theorem from Section 4.1. It can also be found in many books, including Edwards and Penney [EP].
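As a sanity check (ours, not from the text), the theorem can be verified numerically for the Robin example above with \(h = 1\) and weight \(r(x) = 1\text{:}\) the inner product of the first two eigenfunctions should come out (numerically) zero.

```python
import math

h = 1.0

def sqrt_lam(k):
    # k-th positive root of tan(x) = h/x, found by bisecting
    # g(x) = x*sin(x) - h*cos(x) (same roots, but no poles) on
    # the interval (k*pi, k*pi + pi/2), k = 0, 1, ...
    g = lambda x: x * math.sin(x) - h * math.cos(x)
    a, b = k * math.pi, k * math.pi + math.pi / 2
    for _ in range(200):
        m = (a + b) / 2
        a, b = (a, m) if g(a) * g(m) <= 0 else (m, b)
    return (a + b) / 2

s1, s2 = sqrt_lam(0), sqrt_lam(1)   # sqrt(lambda_1), sqrt(lambda_2)

def y(s, x):
    # Eigenfunction with A = 1, B = h/sqrt(lambda)
    return math.cos(s * x) + (h / s) * math.sin(s * x)

# Trapezoid rule for <y_1, y_2> = integral_0^1 y_1(x) y_2(x) dx  (r = 1)
N = 4000
inner = sum((0.5 if i in (0, N) else 1.0) * y(s1, i / N) * y(s2, i / N)
            for i in range(N + 1)) / N
print(inner)  # near zero, as the theorem predicts
```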

We also have the *Fredholm alternative* theorem we talked about before for all regular Sturm-Liouville problems. We state it here for completeness.

Suppose that we have a regular Sturm-Liouville problem. Then either

\begin{equation*}
\begin{aligned}
&\frac{d}{dx} \left( p(x) \frac{dy}{dx} \right)
- q(x) y + \lambda r(x) y = 0 , \\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0 , \\
&\beta_1 y(b) + \beta_2 y'(b) = 0 ,
\end{aligned}
\end{equation*}

has a nonzero solution, or

\begin{equation*}
\begin{aligned}
&\frac{d}{dx} \left( p(x) \frac{dy}{dx} \right)
- q(x) y + \lambda r(x) y = f(x) , \\
&\alpha_1 y(a) - \alpha_2 y'(a) = 0 , \\
&\beta_1 y(b) + \beta_2 y'(b) = 0 ,
\end{aligned}
\end{equation*}

has a unique solution for any \(f(x)\) continuous on \([a,b]\text{.}\)

This theorem is used in much the same way as we did before in Section 4.4. It is used when solving more general nonhomogeneous boundary value problems. The theorem does not help us solve the problem, but it tells us when a unique solution exists, so that we know when to spend time looking for it. To solve the problem we decompose \(f(x)\) and \(y(x)\) in terms of eigenfunctions of the homogeneous problem, and then solve for the coefficients of the series for \(y(x)\text{.}\)

What we want to do with the eigenfunctions once we have them is to compute the *eigenfunction decomposition* of an arbitrary function \(f(x)\text{.}\) That is, we wish to write

\begin{equation}
f(x) = \sum_{n=1}^\infty c_n y_n(x) ,\label{sl_fdecomp}\tag{3}
\end{equation}

where \(y_n(x)\) are eigenfunctions. We wish to find out if we can represent any function \(f(x)\) in this way, and if so, we wish to calculate \(c_n\) (and of course we would want to know if the sum converges). OK, so imagine we could write \(f(x)\) as (3). We will assume convergence and the ability to integrate the series term by term. Because of orthogonality we have

\begin{equation*}
\begin{split}
\langle f , y_m \rangle & =
\int_a^b f(x) \, y_m (x) \, r(x) ~ dx\\
&= \sum_{n=1}^\infty c_n \int_a^b y_n(x) \, y_m (x) \, r(x) ~ dx\\
&= c_m \int_a^b y_m(x) \, y_m (x) \, r(x) ~ dx = c_m \langle y_m , y_m \rangle
.
\end{split}
\end{equation*}

Hence,

\begin{equation}
\boxed{~~
c_m = \frac{\langle f , y_m \rangle}{\langle y_m , y_m \rangle}
=
\frac{\int_a^b f(x) \, y_m (x)\, r(x) ~ dx}{\int_a^b {\bigl(y_m(x)\bigr)}^2 \, r(x) ~dx} .
~~}\label{sl_cm}\tag{4}
\end{equation}

Note that \(y_m\) are known up to a constant multiple, so we could have picked a scalar multiple of an eigenfunction such that \(\langle y_m , y_m \rangle = 1\) (if we had an arbitrary eigenfunction \(\tilde{y}_m\text{,}\) divide it by \(\sqrt{\langle \tilde{y}_m , \tilde{y}_m \rangle}\)). When \(\langle y_m , y_m \rangle = 1\) we have the simpler form \(c_m = \langle f, y_m \rangle\) as we did for the Fourier series. The following theorem holds more generally, but the statement given is enough for our purposes.
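A small numerical sketch (ours) of this normalization, using the Dirichlet eigenfunctions \(\sin(\frac{n\pi}{L}x)\) from the earlier example, where \(\langle y_n , y_n \rangle = \frac{L}{2}\text{:}\)

```python
import math

L, n, N = 3.0, 2, 2000   # interval length, mode number, quadrature points

def inner(f, g):
    # Trapezoid approximation of integral_0^L f(x) g(x) r(x) dx with r = 1
    total = 0.5 * (f(0.0) * g(0.0) + f(L) * g(L))
    total += sum(f(L * i / N) * g(L * i / N) for i in range(1, N))
    return total * L / N

yn = lambda x: math.sin(n * math.pi * x / L)      # eigenfunction
norm2 = inner(yn, yn)                              # <y_n, y_n>, approx L/2
yhat = lambda x: yn(x) / math.sqrt(norm2)          # normalized eigenfunction

print(norm2)              # approx L/2 = 1.5
print(inner(yhat, yhat))  # approx 1
```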

Suppose \(f\) is a piecewise smooth continuous function on \([a,b]\text{.}\) If \(y_1, y_2, \ldots\) are eigenfunctions of a regular Sturm-Liouville problem, one for each eigenvalue, then there exist real constants \(c_1, c_2, \ldots\) given by (4) such that (3) converges and holds for \(a < x < b\text{.}\)

Take the simple Sturm-Liouville problem

\begin{equation*}
\begin{aligned}
& y'' + \lambda y = 0, \quad 0 < x < \frac{\pi}{2} , \\
& y(0) =0, \quad y'\left(\frac{\pi}{2}\right) = 0 .
\end{aligned}
\end{equation*}

The above is a regular problem and furthermore we know by Theorem 5.1.1 that \(\lambda \geq 0\text{.}\)

Suppose \(\lambda = 0\text{.}\) Then the general solution is \(y(x) = Ax + B\text{.}\) We plug in the boundary conditions to get \(0=y(0) = B\) and \(0 = y'(\frac{\pi}{2}) = A\text{;}\) hence \(\lambda = 0\) is not an eigenvalue. For \(\lambda > 0\text{,}\) the general solution is

\begin{equation*}
y(x) = A \cos (\! \sqrt{\lambda} \, x ) + B \sin (\! \sqrt{\lambda} \, x) .
\end{equation*}

Plugging in the boundary conditions we get \(0 = y(0) = A\) and \(0 = y'\bigl(\frac{\pi}{2}\bigr) = \sqrt{\lambda} \, B \cos \bigl(\!\sqrt{\lambda} \, \frac{\pi}{2}\bigr)\text{.}\) \(B\) cannot be zero and hence \(\cos \bigl(\! \sqrt{\lambda} \, \frac{\pi}{2}\bigr) = 0\text{.}\) This means that \(\sqrt{\lambda} \,\frac{\pi}{2}\) must be an odd integral multiple of \(\frac{\pi}{2}\text{,}\) i.e. \((2n-1)\frac{\pi}{2} = \sqrt{\lambda_n} \,\frac{\pi}{2}\text{.}\) Hence

\begin{equation*}
\lambda_n = {(2n-1)}^2 .
\end{equation*}

We can take \(B = 1\text{.}\) Hence our eigenfunctions are

\begin{equation*}
y_n(x) = \sin \bigl( (2n-1)x \bigr) .
\end{equation*}

Finally we compute

\begin{equation*}
\int_0^{\frac{\pi}{2}} {\Bigl( \sin \bigl( (2n-1)x \bigr) \Bigr)}^2 ~ dx
= \frac{\pi}{4} .
\end{equation*}

So any piecewise smooth function on \([0,\frac{\pi}{2}]\) can be written as

\begin{equation*}
f(x) = \sum_{n=1}^\infty c_n \sin \bigl( (2n-1)x \bigr) ,
\end{equation*}

where

\begin{equation*}
c_n = \frac{\langle f , y_n \rangle}{\langle y_n , y_n \rangle}
= \frac{\int_0^{\frac{\pi}{2}} f(x) \, \sin \bigl( (2n-1)x \bigr) ~ dx
}{\int_0^{\frac{\pi}{2}} {\Bigl(\sin \bigl((2n-1)x\bigr)\Bigr)}^2 ~ dx}
= \frac{4}{\pi} \int_0^{\frac{\pi}{2}} f(x) \,\sin \bigl( (2n-1)x \bigr) ~ dx .
\end{equation*}
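These coefficients can be checked numerically (our sketch, taking \(f(x) = x\) as a concrete test function): compute the \(c_n\) by quadrature and evaluate a partial sum of the series at an interior point.

```python
import math

M = 2000              # quadrature points
b = math.pi / 2       # right endpoint of the interval

def integrate(g):
    # Composite trapezoid rule for integral_0^{pi/2} g(x) dx
    step = b / M
    return step * (0.5 * (g(0.0) + g(b)) + sum(g(i * step) for i in range(1, M)))

# c_n = (4/pi) * integral_0^{pi/2} f(x) sin((2n-1)x) dx, here with f(x) = x
c = [(4 / math.pi) * integrate(lambda x, k=n: x * math.sin((2 * k - 1) * x))
     for n in range(1, 201)]

# Partial sum of the eigenfunction series at a sample interior point:
x0 = 0.7
approx = sum(cn * math.sin((2 * k - 1) * x0) for k, cn in enumerate(c, start=1))
print(approx)  # close to f(x0) = 0.7
```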

Note that the series converges to an odd \(2\pi\)-periodic (not \(\pi\)-periodic!) extension of \(f(x)\text{.}\)

*(challenging)* In the above example, the function is defined on \(0 < x < \frac{\pi}{2}\text{,}\) yet the series converges to an odd \(2\pi\)-periodic extension of \(f(x)\text{.}\) Find out how the extension is defined for \(\frac{\pi}{2} < x < \pi\text{.}\)

Find eigenvalues and eigenfunctions of

\begin{equation*}
y''+\lambda y = 0, \quad y(0)- y'(0) = 0, \quad y(1) = 0 .
\end{equation*}

Expand the function \(f(x) = x\) on \(0 \leq x \leq 1\) using eigenfunctions of the system

\begin{equation*}
y'' + \lambda y = 0, \quad y'(0) = 0, \quad y(1) = 0 .
\end{equation*}

Suppose that you had a Sturm-Liouville problem on the interval \([0,1]\) and came up with \(y_n(x) = \sin (\gamma n x)\text{,}\) where \(\gamma > 0\) is some constant. Decompose \(f(x) = x\text{,}\) \(0 < x < 1\) in terms of these eigenfunctions.

Find eigenvalues and eigenfunctions of

\begin{equation*}
y^{(4)}+\lambda y = 0, \quad y(0) = 0, \quad y'(0) = 0, \quad y(1) = 0, \quad
y'(1) = 0 .
\end{equation*}

This problem is not a Sturm-Liouville problem, but the idea is the same.

*(more challenging)* Find eigenvalues and eigenfunctions for

\begin{equation*}
\frac{d}{dx} (e^x y') + \lambda e^x y = 0, \quad y(0) = 0, \quad y(1) = 0 .
\end{equation*}

Hint: First write the equation as a constant coefficient equation to find the general solutions. Do note that Theorem 5.1.1 guarantees \(\lambda \geq 0\text{.}\)

Find eigenvalues and eigenfunctions of

\begin{equation*}
y'' + \lambda y = 0, \quad y(-1) = 0, \quad y(1) = 0 .
\end{equation*}

Answer

\(\lambda_n = \frac{n^2\pi^2}{4}\text{,}\) \(n=1,2,3,\ldots\text{;}\) the eigenfunctions are \(y_n = \cos\left(\frac{n\pi}{2} x\right)\) for odd \(n\) and \(y_n = \sin\left(\frac{n\pi}{2} x\right)\) for even \(n\)

Put the following problems into the standard form for Sturm-Liouville problems, that is, find \(p(x)\text{,}\) \(q(x)\text{,}\) \(r(x)\text{,}\) \(\alpha_1\text{,}\) \(\alpha_2\text{,}\) \(\beta_1\text{,}\) and \(\beta_2\text{,}\) and decide if the problems are regular or not.

a) \(x y'' + \lambda y = 0\) for \(0 < x < 1\text{,}\) \(y(0) = 0\text{,}\) \(y(1) = 0\text{,}\)

b) \((1+x^2) y'' + 2xy' + (\lambda-x^2) y = 0\) for \(-1 < x < 1\text{,}\) \(y(-1) = 0\text{,}\) \(y(1)+y'(1) = 0\text{.}\)^{ 2 }In a previous version of the book, a typo rendered the equation as \((1+x^2) y'' - 2xy' + (\lambda-x^2) y = 0\text{,}\) ending up with something harder than intended. Try this equation for a further challenge.

Answer

a) \(p(x) = 1\text{,}\) \(q(x) = 0\text{,}\) \(r(x) = \frac{1}{x}\text{,}\) \(\alpha_1 = 1\text{,}\) \(\alpha_2 = 0\text{,}\) \(\beta_1 = 1\text{,}\) \(\beta_2 = 0\text{.}\) The problem is not regular. b) \(p(x) = 1+x^2\text{,}\) \(q(x) = x^2\text{,}\) \(r(x) = 1\text{,}\) \(\alpha_1 = 1\text{,}\) \(\alpha_2 = 0\text{,}\) \(\beta_1 = 1\text{,}\) \(\beta_2 = 1\text{.}\) The problem is regular.