1 or 1.5 lectures, §8.4 and §8.5 in [EP], §5.4–§5.7 in [BD]

While the behavior of ODEs at singular points is more complicated, certain singular points are not especially difficult to handle. Let us look at some examples before giving a general method. We may be lucky and obtain a power series solution using the method of the previous section, but in general we may have to try other things.

Subsection 7.3.1 Examples

Example 7.3.1

Let us first look at a simple first order equation

\begin{equation*}
2 x y' - y = 0 .
\end{equation*}

Note that \(x=0\) is a singular point. If we only try to plug in

\begin{equation*}
y = \sum_{k=0}^\infty a_k x^k ,
\end{equation*}

we obtain

\begin{equation*}
0 = 2 x y' - y = \sum_{k=1}^\infty 2 k a_k x^{k} \, - \, \sum_{k=0}^\infty a_k x^{k} = - a_0 + \sum_{k=1}^\infty (2k-1) \, a_k x^{k} .
\end{equation*}

First, \(a_0 = 0\text{.}\) Next, the only way to solve \(0 = 2 k a_k - a_k = (2k-1) \, a_k\) for \(k = 1,2,3,\dots\) is for \(a_k = 0\) for all \(k\text{.}\) Therefore we only get the trivial solution \(y=0\text{.}\) We need a nonzero solution to get the general solution.

Let us try \(y=x^r\) for some real number \(r\text{.}\) Since \(x^r\) need not be defined for negative \(x\text{,}\) our solution, if we can find one, may only make sense for positive \(x\text{.}\) Then \(y' = r x^{r-1}\text{.}\) So

\begin{equation*}
0 = 2 x y' - y = 2 x r x^{r-1} - x^r = (2r-1) x^r .
\end{equation*}

Therefore \(r= \nicefrac{1}{2}\text{,}\) or in other words \(y = x^{1/2}\text{.}\) Multiplying by a constant, the general solution for positive \(x\) is

\begin{equation*}
y = C x^{1/2} .
\end{equation*}

If \(C \not= 0\) then the derivative of the solution “blows up” at \(x=0\) (the singular point). There is only one solution that is differentiable at \(x=0\) and that's the trivial solution \(y=0\text{.}\)
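As a sanity check, here is a small standard-library Python sketch that verifies numerically that \(y = C x^{1/2}\) satisfies \(2 x y' - y = 0\) for positive \(x\text{.}\) The constant \(C\text{,}\) the sample points, and the tolerance are arbitrary choices for illustration:

```python
import math

def y(x, C=3.0):
    # candidate solution y = C * x^(1/2), valid for x > 0
    return C * math.sqrt(x)

def residual(x, C=3.0, h=1e-6):
    # central-difference approximation of y', then the ODE residual 2*x*y' - y
    dy = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return 2 * x * dy - y(x, C)

# the residual should vanish (up to discretization and rounding error)
for x in [0.5, 1.0, 2.0, 10.0]:
    assert abs(residual(x)) < 1e-6, (x, residual(x))
```

The central difference only approximates \(y'\text{,}\) so the residual is zero up to discretization and rounding error, not exactly.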

Not every problem with a singular point has a solution of the form \(y=x^r\text{,}\) of course. But perhaps we can combine the methods. What we will do is to try a solution of the form

\begin{equation*}
y = x^r \sum_{k=0}^\infty a_k x^k = \sum_{k=0}^\infty a_k x^{k+r} ,
\end{equation*}

where \(r\) is a real number, not necessarily an integer. Again, if such a solution exists, it may only exist for positive \(x\text{.}\) First let us find the derivatives:

\begin{equation*}
y' = \sum_{k=0}^\infty (k+r) \, a_k x^{k+r-1} , \qquad
y'' = \sum_{k=0}^\infty (k+r)\,(k+r-1) \, a_k x^{k+r-2} .
\end{equation*}

Consider the equation

\begin{equation*}
4 x^2 y'' - 4 x^2 y' + (1-2x) y = 0 .
\end{equation*}

Plugging our series and its derivatives into this equation and setting the coefficient of the lowest power, \(x^r\text{,}\) to zero, we obtain

\begin{equation*}
4r(r-1) + 1 = 4r^2 - 4r + 1 = {(2r-1)}^2 = 0 .
\end{equation*}

This equation is called the indicial equation. This particular indicial equation has a double root at \(r = \nicefrac{1}{2}\text{.}\)

OK, so we know what \(r\) has to be. That knowledge we obtained simply by looking at the coefficient of \(x^r\text{.}\) All other coefficients of \(x^{k+r}\) also have to be zero. With \(r = \nicefrac{1}{2}\text{,}\) the coefficient of \(x^{k+r}\) for \(k \geq 1\) gives \(4 k^2 a_k - 4 k a_{k-1} = 0\text{,}\) or

\begin{equation*}
a_k = \frac{a_{k-1}}{k} .
\end{equation*}

Setting \(a_0 = 1\text{,}\) we find \(a_k = \nicefrac{1}{k!}\) and so

\begin{equation*}
y = x^{1/2} \sum_{k=0}^\infty \frac{x^k}{k!} = x^{1/2} e^x .
\end{equation*}

That was lucky! In general, we will not be able to write the series in terms of elementary functions.

We have one solution, let us call it \(y_1 = x^{1/2} e^x\text{.}\) But what about a second solution? If we want a general solution, we need two linearly independent solutions. Picking \(a_0\) to be a different constant only gets us a constant multiple of \(y_1\text{,}\) and we do not have any other \(r\) to try; we only have one solution to the indicial equation. Well, there are powers of \(x\) floating around and we are taking derivatives, so perhaps the logarithm (the antiderivative of \(x^{-1}\)) is around as well. It turns out we want to try for another solution of the form

\begin{equation*}
y_2 = \sum_{k=0}^\infty b_k x^{k+1/2} + (\ln x) \, y_1 .
\end{equation*}

We now differentiate, substitute into the differential equation, and solve for \(b_k\text{.}\) A long computation ensues and we obtain a recursion relation for \(b_k\text{.}\) The reader can (and should) try this computation and work out the first few terms.

Recall that for the equation \(y'' + p(x)\, y' + q(x)\, y = 0\text{,}\) the point \(x=0\) is a regular singular point if it is a singular point and both limits \(\lim_{x\to 0} x\, p(x)\) and \(\lim_{x\to 0} x^2 q(x)\) exist. If one of these limits does not exist (DNE), then \(0\) is a singular point, but not a regular singular point.

Subsection 7.3.2 The Method of Frobenius

Let us now discuss the general Method of Frobenius^{ 1 }Named after the German mathematician Ferdinand Georg Frobenius (1849–1917).. For simplicity, we only consider the method at the point \(x=0\text{.}\) The main idea is the following theorem.

If the equation

\begin{equation}
y'' + p(x)\, y' + q(x)\, y = 0 \tag{3}
\end{equation}

has a regular singular point at \(x=0\text{,}\) then there exists at least one solution of the form

\begin{equation*}
y = x^r \sum_{k=0}^\infty a_k x^k .
\end{equation*}

A solution of this form is called a Frobenius-type solution.

The method usually breaks down like this.

We seek a Frobenius-type solution of the form

\begin{equation*}
y = \sum_{k=0}^\infty a_k x^{k+r} .
\end{equation*}

We plug this \(y\) into equation (3). We collect terms and write everything as a single series.

The obtained series must be zero. Setting the first coefficient (usually the coefficient of \(x^r\)) in the series to zero we obtain the indicial equation, which is a quadratic polynomial in \(r\text{.}\)

If the indicial equation has two real roots \(r_1\) and \(r_2\) such that \(r_1 - r_2\) is not an integer, then we have two linearly independent Frobenius-type solutions. Using the first root, we plug in

\begin{equation*}
y_1 = x^{r_1} \sum_{k=0}^\infty a_k x^{k} ,
\end{equation*}

and we solve for all \(a_k\) to obtain the first solution. Then using the second root \(r_2\text{,}\) we plug in

\begin{equation*}
y_2 = x^{r_2} \sum_{k=0}^\infty b_k x^{k} ,
\end{equation*}

and we solve for all \(b_k\) to obtain the second solution.

If the indicial equation has a doubled root \(r\text{,}\) or two real roots \(r_1\) and \(r_2\) with \(r_1 - r_2\) an integer, then we find one solution \(y_1\) as above using the larger root, and we look for a second solution of the form

\begin{equation*}
y_2 = C (\ln x) \, y_1 + x^{r_2} \sum_{k=0}^\infty b_k x^{k} ,
\end{equation*}

where we plug \(y_2\) into (3) and solve for the constants \(b_k\) and \(C\text{.}\)

Finally, if the indicial equation has complex roots, then solving for \(a_k\) in the solution

\begin{equation*}
y = x^{r_1} \sum_{k=0}^\infty a_k x^{k}
\end{equation*}

results in a complex-valued function, as all the \(a_k\) are complex numbers. We obtain our two linearly independent solutions^{ 2 }See Joseph L. Neuringer, The Frobenius method for complex roots of the indicial equation, International Journal of Mathematical Education in Science and Technology, Volume 9, Issue 1, 1978, 71–77. by taking the real and imaginary parts of \(y\text{.}\)
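To see concretely what taking real and imaginary parts produces, recall the standard expansion of a complex power of a positive \(x\) (using \(x^{ib} = e^{ib \ln x}\) and Euler's formula): for \(r_1 = a + ib\) with \(a\) and \(b\) real,

\begin{equation*}
x^{a+ib} = x^a e^{ib \ln x} = x^a \bigl( \cos( b \ln x ) + i \sin( b \ln x ) \bigr) .
\end{equation*}

Hence the two real solutions are built from terms of the form \(x^{a+k} \cos(b \ln x)\) and \(x^{a+k} \sin(b \ln x)\text{.}\)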

The main idea is to find at least one Frobenius-type solution. If we are lucky and find two, we are done. If we only get one, we either use the ideas above or even a different method such as reduction of order (Exercise 2.1.8) to obtain a second solution.
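To illustrate the step of finding the roots of the indicial equation, here is a small Python sketch. It relies on the standard fact (not derived in this section) that for \(y'' + p(x)\, y' + q(x)\, y = 0\) with a regular singular point at \(0\text{,}\) the indicial equation is \(r(r-1) + p_0 r + q_0 = 0\text{,}\) where \(p_0 = \lim_{x\to 0} x\, p(x)\) and \(q_0 = \lim_{x \to 0} x^2 q(x)\text{:}\)

```python
import cmath

def indicial_roots(p0, q0):
    # roots of r(r-1) + p0*r + q0 = 0, i.e. r^2 + (p0-1)*r + q0 = 0;
    # cmath handles the complex-root case as well
    b, c = p0 - 1.0, q0
    disc = cmath.sqrt(b * b - 4 * c)
    return ((-b + disc) / 2, (-b - disc) / 2)

# Bessel's equation of order p (discussed below) has p(x) = 1/x and
# q(x) = (x^2 - p^2)/x^2, so p0 = 1 and q0 = -p^2; the roots are p and -p.
r1, r2 = indicial_roots(1.0, -4.0)   # order p = 2
assert abs(r1 - 2) < 1e-12 and abs(r2 + 2) < 1e-12
```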

Subsection 7.3.3 Bessel functions

An important class of functions arising commonly in physics is the class of Bessel functions^{ 3 }Named after the German astronomer and mathematician Friedrich Wilhelm Bessel (1784–1846).. For example, these functions appear when solving the wave equation in two and three dimensions. First we have Bessel's equation of order \(p\text{:}\)

\begin{equation*}
x^2 y'' + x y' + \left( x^2 - p^2 \right) y = 0 .
\end{equation*}

We plug \(y = \sum_{k=0}^\infty a_k x^{k+r}\) into Bessel's equation; the coefficient of the lowest power \(x^r\) gives the indicial equation \(r(r-1) + r - p^2 = (r-p)(r+p) = 0\text{.}\) Therefore we obtain two roots \(r_1 = p\) and \(r_2 = -p\text{.}\) If \(p\) is not an integer, then following the method of Frobenius and setting \(a_0 = 1\text{,}\) we obtain linearly independent solutions of the form

\begin{equation*}
\begin{aligned}
y_1 &= x^p \sum_{k=0}^\infty \frac{{(-1)}^k x^{2k}}{2^{2k} \, k! \, (k+p)(k-1+p) \cdots (2+p)(1+p)} , \\
y_2 &= x^{-p} \sum_{k=0}^\infty \frac{{(-1)}^k x^{2k}}{2^{2k} \, k! \, (k-p)(k-1-p) \cdots (2-p)(1-p)} .
\end{aligned}
\end{equation*}

a) Verify that the indicial equation of Bessel's equation of order \(p\) is \((r-p)(r+p)=0\text{.}\) b) Suppose that \(p\) is not an integer. Carry out the computation to obtain the solutions \(y_1\) and \(y_2\) above.

Bessel functions will be convenient constant multiples of \(y_1\) and \(y_2\text{.}\) First we must define the gamma function

\begin{equation*}
\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt .
\end{equation*}

Notice that \(\Gamma(1) = 1\text{.}\) The gamma function also has a wonderful property

\begin{equation*}
\Gamma(x+1) = x \Gamma(x) .
\end{equation*}

From this property and \(\Gamma(1) = 1\text{,}\) one can show that \(\Gamma(n) = (n-1)!\) when \(n\) is a positive integer:

\begin{equation*}
\Gamma(n) = (n-1)\,\Gamma(n-1) = (n-1)(n-2)\,\Gamma(n-2) = \cdots = (n-1)! \; \Gamma(1) = (n-1)! \, .
\end{equation*}

So the gamma function is a continuous version of the factorial. We compute:

\begin{equation*}
\begin{aligned}
J_p(x) &= \frac{y_1}{2^p \, \Gamma(1+p)} = \sum_{k=0}^\infty \frac{{(-1)}^k}{k! \, \Gamma(k+p+1)} {\left( \frac{x}{2} \right)}^{2k+p} , \\
J_{-p}(x) &= \frac{y_2}{2^{-p} \, \Gamma(1-p)} = \sum_{k=0}^\infty \frac{{(-1)}^k}{k! \, \Gamma(k-p+1)} {\left( \frac{x}{2} \right)}^{2k-p} .
\end{aligned}
\end{equation*}

As these are constant multiples of the solutions we found above, these are both solutions to Bessel's equation of order \(p\text{.}\) The constants are picked for convenience.

When \(p\) is not an integer, \(J_p\) and \(J_{-p}\) are linearly independent. When \(n\) is an integer, we obtain

\begin{equation*}
J_{-n}(x) = {(-1)}^n J_n(x) ,
\end{equation*}

and so we do not obtain a second linearly independent solution. The other solution is the so-called Bessel function of the second kind. These make sense only for integer orders \(n\) and are defined as limits of linear combinations of \(J_p(x)\) and \(J_{-p}(x)\text{,}\) as \(p\) approaches \(n\text{,}\) in the following way:

\begin{equation*}
Y_n(x) = \lim_{p \to n} \frac{\cos(p \pi) \, J_p(x) - J_{-p}(x)}{\sin(p \pi)} .
\end{equation*}

Each linear combination of \(J_p(x)\) and \(J_{-p}(x)\) is a solution to Bessel's equation of order \(p\text{,}\) so taking the limit as \(p\) goes to \(n\text{,}\) we see that \(Y_n(x)\) is a solution to Bessel's equation of order \(n\text{.}\) It also turns out that \(Y_n(x)\) and \(J_n(x)\) are linearly independent. Therefore when \(n\) is an integer, we have the general solution to Bessel's equation of order \(n\text{:}\)

\begin{equation*}
y = A J_n(x) + B Y_n(x) ,
\end{equation*}

for arbitrary constants \(A\) and \(B\text{.}\) Note that \(Y_n(x)\) goes to negative infinity at \(x=0\text{.}\) Many mathematical software packages have these functions \(J_n(x)\) and \(Y_n(x)\) defined, so they can be used just like, say, \(\sin(x)\) and \(\cos(x)\text{.}\) In fact, they have some similar properties. For example, \(-J_1(x)\) is the derivative of \(J_0(x)\text{,}\) and in general the derivative of \(J_n(x)\) can be written as a linear combination of \(J_{n-1}(x)\) and \(J_{n+1}(x)\text{.}\) Furthermore, these functions oscillate, although they are not periodic. See Figure 8.3.7 for graphs of Bessel functions.
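Since the series defining \(J_p\) converges quickly for moderate \(x\text{,}\) we can sketch it with only the Python standard library and check the derivative identity above numerically. The truncation at 40 terms and the tolerances are arbitrary choices:

```python
import math

def J(p, x, terms=40):
    # Bessel function of the first kind via its power series:
    # J_p(x) = sum_{k>=0} (-1)^k / (k! * Gamma(k+p+1)) * (x/2)^(2k+p)
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (math.factorial(k) * math.gamma(k + p + 1)) \
                 * (x / 2) ** (2 * k + p)
    return total

# J_0(0) = 1, and the derivative of J_0 is -J_1 (checked by central difference)
assert abs(J(0, 0.0) - 1.0) < 1e-12
h = 1e-6
for x in [0.5, 1.0, 3.0]:
    dJ0 = (J(0, x + h) - J(0, x - h)) / (2 * h)
    assert abs(dJ0 + J(1, x)) < 1e-6
```

A truncated series like this is only accurate for moderate \(x\text{;}\) production code should use a library routine instead.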

Example 7.3.4

Other equations can sometimes be solved in terms of the Bessel functions. For example, given a positive constant \(\lambda\text{,}\)

\begin{equation*}
x y'' + y' + \lambda^2 x y = 0 ,
\end{equation*}

can be changed to \(x^2 y'' + x y' + \lambda^2 x^2 y = 0\) by multiplying through by \(x\text{.}\) Then changing variables to \(t = \lambda x\text{,}\) we obtain via the chain rule the equation in \(y\) and \(t\text{:}\)

\begin{equation*}
t^2 y'' + t y' + t^2 y = 0 ,
\end{equation*}

which can be recognized as Bessel's equation of order 0. Therefore the general solution is \(y(t) = A J_0(t) + B Y_0(t)\text{,}\) or in terms of \(x\text{:}\)

\begin{equation*}
y = A J_0(\lambda x) + B Y_0(\lambda x) .
\end{equation*}

This equation comes up for example when finding fundamental modes of vibration of a circular drum, but we digress.
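A standard-library sketch that checks this conclusion numerically: we build \(J_0\) from its truncated power series, approximate derivatives by central differences, and confirm that \(y = J_0(\lambda x)\) makes the residual \(x y'' + y' + \lambda^2 x y\) small. The value of \(\lambda\) and the sample points are arbitrary:

```python
import math

def J0(x, terms=40):
    # truncated power series for the Bessel function J_0
    return sum((-1) ** k / (math.factorial(k) ** 2) * (x / 2) ** (2 * k)
               for k in range(terms))

def residual(x, lam, h=1e-5):
    # ODE residual x*y'' + y' + lam^2 * x * y for y = J0(lam * x),
    # with y' and y'' approximated by central differences
    y = lambda t: J0(lam * t)
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return x * d2 + d1 + lam ** 2 * x * y(x)

# the residual is small (limited by the finite-difference approximation)
for x in [0.3, 1.0, 2.0]:
    assert abs(residual(x, lam=2.0)) < 1e-3
```

The same check with \(B \ne 0\) would need \(Y_0\text{,}\) which is harder to build from scratch; a library routine is the practical choice there.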

Subsection 7.3.4 Exercises

Exercise 7.3.3

Find a particular (Frobenius-type) solution of \(x^2 y'' + x y' + (1+x) y = 0\text{.}\)

Exercise 7.3.4

Find a particular (Frobenius-type) solution of \(x y'' - y = 0\text{.}\)

Exercise 7.3.5

Find a particular (Frobenius-type) solution of \(y'' +\frac{1}{x}y' - xy = 0\text{.}\)

Exercise 7.3.6

Find the general solution of \(2 x y'' + y' - x^2 y = 0\text{.}\)

Exercise 7.3.7

Find the general solution of \(x^2 y'' - x y' -y = 0\text{.}\)

Exercise 7.3.8

In the following equations classify the point \(x=0\) as ordinary, regular singular, or singular but not regular singular.

a) \(x^2(1+x^2)y''+xy=0\)

b) \(x^2y''+y'+y=0\)

c) \(xy''+x^3y'+y=0\)

d) \(xy''+xy'-e^xy=0\)

e) \(x^2y''+x^2y'+x^2y=0\)

Exercise 7.3.101

In the following equations classify the point \(x=0\) as ordinary, regular singular, or singular but not regular singular.