
\(\require{cancel}\newcommand{\nicefrac}[2]{{{}^{#1}}\!/\!{{}_{#2}}}
\newcommand{\unitfrac}[3][\!\!]{#1 \,\, {{}^{#2}}\!/\!{{}_{#3}}}
\newcommand{\unit}[2][\!\!]{#1 \,\, #2}
\newcommand{\noalign}[1]{}
\newcommand{\qed}{\qquad \Box}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
\)

As motivation for studying Fourier series, suppose we have the problem

\begin{equation}
x'' + \omega_0^2 x = f(t) ,\label{ts_deq}\tag{6}
\end{equation}

for some periodic function \(f(t)\text{.}\) We already solved

\begin{equation}
x'' + \omega_0^2 x = F_0 \cos ( \omega t) .\label{ts_deqcos}\tag{7}
\end{equation}

One way to solve (6) is to decompose \(f(t)\) as a sum of cosines (and sines) and then solve many problems of the form (7). We then use the principle of superposition to add up all the solutions we got and obtain a solution to (6).

Before we proceed, let us talk a little bit more in detail about periodic functions. A function is said to be *periodic* with period \(P\) if \(f(t) = f(t+P)\) for all \(t\text{.}\) For brevity we say \(f(t)\) is \(P\)-periodic. Note that a \(P\)-periodic function is also \(2P\)-periodic, \(3P\)-periodic and so on. For example, \(\cos (t)\) and \(\sin (t)\) are \(2\pi\)-periodic. So are \(\cos (kt)\) and \(\sin (kt)\) for all integers \(k\text{.}\) The constant functions are an extreme example. They are periodic for any period (exercise).

Normally we start with a function \(f(t)\) defined on some interval \([-L,L]\text{,}\) and we want to *extend \(f(t)\) periodically* to make it a \(2L\)-periodic function. We do this extension by defining a new function \(F(t)\) such that for \(t\) in \([-L,L]\text{,}\) \(F(t) = f(t)\text{.}\) For \(t\) in \([L,3L]\text{,}\) we define \(F(t) = f(t-2L)\text{,}\) for \(t\) in \([-3L,-L]\text{,}\) \(F(t) = f(t+2L)\text{,}\) and so on. To make that work we needed \(f(-L) = f(L)\text{.}\) We could have also started with \(f\) defined only on the half-open interval \((-L,L]\) and then define \(f(-L) = f(L)\text{.}\)
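The extension recipe above amounts to reducing \(t\) modulo the period \(2L\text{.}\) Here is a minimal Python sketch (the helper name `periodic_extension` is my own), using \(f(t) = 1-t^2\) on \([-1,1]\) as a test case:

```python
import math  # not strictly needed; kept for clarity that only stdlib is used

def periodic_extension(f, L):
    """Return the 2L-periodic extension F of a function f defined on [-L, L]."""
    def F(t):
        # Shift t into [-L, L) by reducing modulo the period 2L.
        return f(((t + L) % (2 * L)) - L)
    return F

f = lambda t: 1 - t**2          # example function on [-1, 1]
F = periodic_extension(f, 1)

# 2-periodicity: F(2.5) and F(-1.5) both equal f(0.5) = 0.75.
print(F(0.5), F(2.5), F(-1.5))  # 0.75 0.75 0.75
```

Note that the wrap-around formula uses the values of \(f\) on \([-L,L)\text{,}\) so it implicitly relies on \(f(-L) = f(L)\text{,}\) just as in the text.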

###### Example 4.2.1

Define \(f(t) = 1-t^2\) on \([-1,1]\text{.}\) Now extend \(f(t)\) periodically to a 2-periodic function. See Figure 5.2.2.

You should be careful to distinguish between \(f(t)\) and its extension. A common mistake is to assume that a formula for \(f(t)\) holds for its extension. It can be especially confusing when the formula for \(f(t)\) is itself periodic, but with a different period.

Define \(f(t) = \cos t\) on \([\nicefrac{-\pi}{2},\nicefrac{\pi}{2}]\text{.}\) Take the \(\pi\)-periodic extension and sketch its graph. How does it compare to the graph of \(\cos t\text{?}\)

Suppose we have a *symmetric matrix*, that is \(A^T = A\text{.}\) As we remarked before, eigenvectors of \(A\) are then orthogonal. Here the word *orthogonal* means that if \(\vec{v}\) and \(\vec{w}\) are two eigenvectors of \(A\) for distinct eigenvalues, then \(\langle \vec{v} , \vec{w} \rangle = 0\text{.}\) In this case the inner product \(\langle \vec{v} , \vec{w} \rangle\) is the *dot product*, which can be computed as \(\vec{v}^T\vec{w}\text{.}\)

To decompose a vector \(\vec{v}\) in terms of mutually orthogonal vectors \(\vec{w}_1\) and \(\vec{w}_2\) we write

\begin{equation*}
\vec{v} = a_1 \vec{w}_1 + a_2 \vec{w}_2 .
\end{equation*}

Let us find the formula for \(a_1\) and \(a_2\text{.}\) First let us compute

\begin{equation*}
\langle \vec{v} , \vec{w_1} \rangle
=
\langle a_1 \vec{w}_1 + a_2 \vec{w}_2 , \vec{w_1} \rangle
=
a_1 \langle \vec{w}_1 , \vec{w_1} \rangle
+
a_2 \underbrace{\langle \vec{w}_2 , \vec{w_1} \rangle}_{=0}
=
a_1 \langle \vec{w}_1 , \vec{w_1} \rangle .
\end{equation*}

Therefore,

\begin{equation*}
a_1 =
\frac{\langle \vec{v} , \vec{w_1} \rangle}{
\langle \vec{w}_1 , \vec{w_1} \rangle} .
\end{equation*}

Similarly

\begin{equation*}
a_2 =
\frac{\langle \vec{v} , \vec{w_2} \rangle}{
\langle \vec{w}_2 , \vec{w_2} \rangle} .
\end{equation*}

You probably remember this formula from vector calculus.

Write \(\vec{v} = \left[ \begin{smallmatrix} 2 \\ 3 \end{smallmatrix} \right]\) as a linear combination of \(\vec{w_1} = \left[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right]\) and \(\vec{w_2} = \left[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \right]\text{.}\)

First note that \(\vec{w}_1\) and \(\vec{w}_2\) are orthogonal as \(\langle \vec{w}_1 , \vec{w}_2 \rangle = 1(1) + (-1)1 = 0\text{.}\) Then

\begin{equation*}
\begin{aligned}
& a_1 =
\frac{\langle \vec{v} , \vec{w_1} \rangle}{
\langle \vec{w}_1 , \vec{w_1} \rangle}
=
\frac{2(1) + 3(-1)}{1(1) + (-1)(-1)} = \frac{-1}{2} ,
\\
& a_2 =
\frac{\langle \vec{v} , \vec{w_2} \rangle}{
\langle \vec{w}_2 , \vec{w_2} \rangle}
=
\frac{2 + 3}{1 + 1} = \frac{5}{2} .
\end{aligned}
\end{equation*}

Hence

\begin{equation*}
\begin{bmatrix} 2 \\ 3 \end{bmatrix}
=
\frac{-1}{2}
\begin{bmatrix} 1 \\ -1 \end{bmatrix}
+
\frac{5}{2}
\begin{bmatrix} 1 \\ 1 \end{bmatrix} .
\end{equation*}
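The decomposition just computed is easy to verify numerically; a quick sketch with NumPy (assuming NumPy is available):

```python
import numpy as np

v  = np.array([2.0, 3.0])
w1 = np.array([1.0, -1.0])
w2 = np.array([1.0, 1.0])

# Projection coefficients a_i = <v, w_i> / <w_i, w_i>, with <.,.> the dot product.
a1 = (v @ w1) / (w1 @ w1)
a2 = (v @ w2) / (w2 @ w2)

print(a1, a2)              # -0.5 2.5, matching -1/2 and 5/2
print(a1 * w1 + a2 * w2)   # recombining the projections recovers v = [2, 3]
```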

Instead of decomposing a vector in terms of eigenvectors of a matrix, we decompose a function in terms of eigenfunctions of a certain eigenvalue problem. The eigenvalue problem we use for the Fourier series is

\begin{equation*}
x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi) .
\end{equation*}

We previously computed that the eigenfunctions are \(1\text{,}\) \(\cos (k t)\text{,}\) and \(\sin (k t)\) for \(k = 1, 2, 3, \ldots\text{.}\) That is, we want to find a representation of a \(2\pi\)-periodic function \(f(t)\) as

\begin{equation*}
\boxed{~~
f(t) = \frac{a_0}{2} +
\sum_{n=1}^\infty a_n \cos (n t) + b_n \sin (n t) .
~~}
\end{equation*}

This series is called the *Fourier series* or the *trigonometric series* for \(f(t)\text{.}\) (It is named after the French mathematician Jean Baptiste Joseph Fourier, 1768–1830.) We write the coefficient of the eigenfunction 1 as \(\frac{a_0}{2}\) for convenience. We could also think of \(1 = \cos (0t)\text{,}\) so that we only need to look at \(\cos (kt)\) and \(\sin (kt)\text{.}\)

As for matrices we want to find a *projection* of \(f(t)\) onto the subspaces given by the eigenfunctions. So we want to define an *inner product of functions*. For example, to find \(a_n\) we want to compute \(\langle \, f(t) \, , \, \cos (nt) \, \rangle\text{.}\) We define the inner product as

\begin{equation*}
\langle \, f(t)\, , \, g(t) \, \rangle \overset{\text{def}}{=}
\int_{-\pi}^\pi f(t) \, g(t) ~ dt .
\end{equation*}

With this definition of the inner product, we saw in the previous section that the eigenfunctions \(\cos (kt)\) (including the constant eigenfunction), and \(\sin (kt)\) are *orthogonal* in the sense that

\begin{equation*}
\begin{aligned}
\langle \, \cos (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\
\langle \, \sin (mt)\, , \, \sin (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\
\langle \, \sin (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for all } m \text{ and } n .
\end{aligned}
\end{equation*}
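These orthogonality relations, together with the norm values \(\pi\) and \(2\pi\) used in the next paragraph, can be confirmed by numerical integration. A sketch using the trapezoidal rule (the helper name `inner` is my own):

```python
import numpy as np

def inner(f, g, n_pts=20001):
    """Inner product <f, g> = integral of f(t) g(t) over [-pi, pi], trapezoidal rule."""
    t = np.linspace(-np.pi, np.pi, n_pts)
    y = f(t) * g(t)
    h = t[1] - t[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

cos2 = lambda t: np.cos(2 * t)
cos3 = lambda t: np.cos(3 * t)
sin2 = lambda t: np.sin(2 * t)
one  = lambda t: np.ones_like(t)

print(inner(cos2, cos3))   # ≈ 0   (cosines with m ≠ n)
print(inner(sin2, cos2))   # ≈ 0   (sine against cosine)
print(inner(cos2, cos2))   # ≈ π
print(inner(one, one))     # ≈ 2π
```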

By elementary calculus for \(n=1,2,3,\ldots\) we have \(\langle \, \cos (nt) \, , \, \cos (nt) \, \rangle = \pi\) and \(\langle \, \sin (nt) \, , \, \sin (nt) \, \rangle = \pi\text{.}\) For the constant we get that \(\langle \, 1 \, , \, 1 \, \rangle = 2\pi\text{.}\) The coefficients are given by

\begin{equation*}
\boxed{~~
\begin{aligned}
& a_n =
\frac{\langle \, f(t) \, , \, \cos (nt) \, \rangle}{\langle \, \cos (nt) \, , \,
\cos (nt) \, \rangle}
=
\frac{1}{\pi} \int_{-\pi}^\pi f(t) \cos (nt) ~ dt , \\
& b_n =
\frac{\langle \, f(t) \, , \, \sin (nt) \, \rangle}{\langle \, \sin (nt) \, , \,
\sin (nt) \, \rangle}
=
\frac{1}{\pi} \int_{-\pi}^\pi f(t) \sin (nt) ~ dt .
\end{aligned}
~~}
\end{equation*}

Compare these expressions with the finite-dimensional example. For \(a_0\) we get a similar formula

\begin{equation*}
\boxed{~~
a_0 = 2
\frac{\langle \, f(t) \, , \, 1 \, \rangle}{\langle \, 1 \, , \,
1 \, \rangle}
=
\frac{1}{\pi} \int_{-\pi}^\pi f(t) ~ dt .
~~}
\end{equation*}
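The boxed formulas translate directly into a numerical recipe. A minimal sketch (the function name `fourier_coefficients` is my own) approximating the coefficients by the trapezoidal rule, checked on \(f(t) = \sin(5t) + \cos(3t)\text{,}\) whose only nonzero coefficients should be \(a_3 = 1\) and \(b_5 = 1\text{:}\)

```python
import numpy as np

def fourier_coefficients(f, N, n_pts=20001):
    """Approximate a_0 and a_n, b_n (n = 1..N) of a 2π-periodic f, trapezoidal rule."""
    t = np.linspace(-np.pi, np.pi, n_pts)
    w = np.full(n_pts, t[1] - t[0])
    w[0] *= 0.5
    w[-1] *= 0.5                       # trapezoid endpoint weights
    ft = f(t)
    a0 = np.sum(w * ft) / np.pi
    a = [np.sum(w * ft * np.cos(n * t)) / np.pi for n in range(1, N + 1)]
    b = [np.sum(w * ft * np.sin(n * t)) / np.pi for n in range(1, N + 1)]
    return a0, a, b

a0, a, b = fourier_coefficients(lambda t: np.sin(5 * t) + np.cos(3 * t), 6)
print(a0)   # ≈ 0
print(a)    # ≈ [0, 0, 1, 0, 0, 0]   (a_3 = 1)
print(b)    # ≈ [0, 0, 0, 0, 1, 0]   (b_5 = 1)
```

For smooth \(2\pi\)-periodic integrands the equally spaced trapezoidal rule over a full period is extremely accurate, which is why this naive quadrature recovers the coefficients essentially to machine precision here.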

Let us check the formulas using the orthogonality properties. Suppose for a moment that

\begin{equation*}
f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos (n t) + b_n
\sin (n t) .
\end{equation*}

Then for \(m \geq 1\) we have

\begin{equation*}
\begin{split}
\langle \, f(t)\,,\,\cos (mt) \, \rangle
& =
\Bigl\langle \, \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos (n t) + b_n
\sin (n t) \,,\, \cos (mt) \, \Bigr\rangle \\
& =
\frac{a_0}{2}
\langle \, 1 \, , \, \cos (mt) \, \rangle
+ \sum_{n=1}^\infty
a_n \langle \, \cos (nt) \, , \, \cos (mt) \, \rangle +
b_n \langle \, \sin (n t) \, , \, \cos (mt) \, \rangle \\
& =
a_m \langle \, \cos (mt) \, , \, \cos (mt) \, \rangle .
\end{split}
\end{equation*}

And hence \(a_m = \frac{\langle \, f(t) \, , \, \cos (mt) \, \rangle}{\langle \, \cos (mt) \, , \, \cos (mt) \, \rangle}\text{.}\)

Carry out the calculation for \(a_0\) and \(b_m\text{.}\)

Take the function

\begin{equation*}
f(t) = t
\end{equation*}

for \(t\) in \((-\pi,\pi]\text{.}\) Extend \(f(t)\) periodically and write it as a Fourier series. This function is called the *sawtooth*.

The plot of the extended periodic function is given in Figure 5.2.7. Let us compute the coefficients. We start with \(a_0\text{,}\)

\begin{equation*}
a_0 = \frac{1}{\pi} \int_{-\pi}^\pi t ~dt = 0 .
\end{equation*}

We will often use the result from calculus that says that the integral of an odd function over a symmetric interval is zero. Recall that an *odd function* is a function \(\varphi(t)\) such that \(\varphi(-t) = -\varphi(t)\text{.}\) For example, the functions \(t\text{,}\) \(\sin t\text{,}\) and (importantly for us) \(t \cos (nt)\) are all odd functions. Thus

\begin{equation*}
a_n = \frac{1}{\pi} \int_{-\pi}^\pi t \cos (nt) ~dt = 0 .
\end{equation*}

Let us move to \(b_n\text{.}\) Another useful fact from calculus is that the integral of an even function over a symmetric interval is twice the integral of the same function over half the interval. Recall that an *even function* is a function \(\varphi(t)\) such that \(\varphi(-t) = \varphi(t)\text{.}\) For example, \(t \sin (nt)\) is even.

\begin{equation*}
\begin{split}
b_n & = \frac{1}{\pi} \int_{-\pi}^\pi t \sin (nt) ~dt \\
& = \frac{2}{\pi} \int_{0}^\pi t \sin (nt) ~dt \\
& = \frac{2}{\pi} \left(
\,
\left[ \frac{-t \cos (nt)}{n} \right]_{t=0}^{\pi}
+
\frac{1}{n}
\int_{0}^\pi \cos (nt) ~dt
\right)
\\
& = \frac{2}{\pi} \left(
\,
\frac{-\pi \cos (n\pi)}{n}
+
0
\right) \\
& = \frac{-2 \cos (n\pi)}{n}
= \frac{2 \,{(-1)}^{n+1}}{n} .
\end{split}
\end{equation*}

We have used the fact that

\begin{equation*}
\cos (n\pi) = {(-1)}^n =
\begin{cases}
1 & \text{if } n \text{ even} , \\
-1 & \text{if } n \text{ odd} .
\end{cases}
\end{equation*}

The series, therefore, is

\begin{equation*}
\sum_{n=1}^\infty
\frac{2 \,{(-1)}^{n+1}}{n} \,
\sin (n t) .
\end{equation*}

Let us write out the first 3 harmonics of the series for \(f(t)\text{.}\)

\begin{equation*}
2 \, \sin (t)
- \sin (2t)
+\frac{2}{3} \sin (3t)
+ \cdots
\end{equation*}

The plot of these first three terms of the series, along with a plot of the first 20 terms is given in Figure 5.2.8.
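As a sanity check on the computed series, at a point of continuity the partial sums should approach the sawtooth itself. A brief sketch (the helper name is my own):

```python
import numpy as np

def sawtooth_partial_sum(t, N):
    """N-term partial sum of the sawtooth series: sum of 2(-1)^(n+1)/n * sin(nt)."""
    n = np.arange(1, N + 1)
    return np.sum(2.0 * (-1.0) ** (n + 1) / n * np.sin(n * t))

# At the point of continuity t = π/2 the sums creep toward f(π/2) = π/2 ≈ 1.5708.
for N in (3, 20, 200):
    print(N, sawtooth_partial_sum(np.pi / 2, N))
```

The convergence is slow (the coefficients decay only like \(\nicefrac{1}{n}\)), which is typical when the extended function has jump discontinuities.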

Take the function

\begin{equation*}
f(t) =
\begin{cases}
0 & \text{if } \;{-\pi} < t \leq 0 , \\
\pi & \text{if } \;\phantom{-}0 < t \leq \pi .
\end{cases}
\end{equation*}

Extend \(f(t)\) periodically and write it as a Fourier series. This function or its variants appear often in applications and the function is called the *square wave*.

The plot of the extended periodic function is given in Figure 5.2.10. Now we compute the coefficients. Let us start with \(a_0\text{,}\)

\begin{equation*}
a_0 = \frac{1}{\pi} \int_{-\pi}^\pi f(t) ~dt
= \frac{1}{\pi} \int_{0}^\pi \pi ~dt = \pi .
\end{equation*}

Next,

\begin{equation*}
a_n = \frac{1}{\pi} \int_{-\pi}^\pi f(t) \cos (nt) ~dt
= \frac{1}{\pi} \int_{0}^\pi \pi \cos (nt) ~dt = 0 .
\end{equation*}

And finally

\begin{equation*}
\begin{split}
b_n & = \frac{1}{\pi} \int_{-\pi}^\pi f(t) \sin (nt) ~dt \\
& = \frac{1}{\pi} \int_{0}^\pi \pi \sin (nt) ~dt \\
& = \left[ \frac{- \cos (nt)}{n} \right]_{t=0}^\pi \\
& = \frac{1 - \cos (\pi n)}{n}
= \frac{1 - {(-1)}^n}{n}
=
\begin{cases}
\frac{2}{n} & \text{if } n \text{ is odd} , \\
0 & \text{if } n \text{ is even} .
\end{cases}
\end{split}
\end{equation*}

The Fourier series is

\begin{equation*}
\frac{\pi}{2} + \sum_{\substack{n=1\\n \text{ odd}}}^\infty
\frac{2}{n} \,
\sin (n t)
=
\frac{\pi}{2} + \sum_{k=1}^\infty
\frac{2}{2k-1} \,
\sin \bigl( (2k-1)\, t \bigr) .
\end{equation*}

Let us write out the first 3 harmonics of the series for \(f(t)\text{.}\)

\begin{equation*}
\frac{\pi}{2}
+
2 \, \sin (t)
+
\frac{2}{3} \, \sin (3t)
+ \cdots
\end{equation*}

The plot of these first three and also of the first 20 terms of the series is given in Figure 5.2.11.
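The pattern in \(b_n\) (equal to \(\nicefrac{2}{n}\) for odd \(n\text{,}\) zero for even \(n\)) is easy to confirm by direct numerical integration; a brief sketch (the helper name is my own):

```python
import numpy as np

def square_wave_bn(n, n_pts=100001):
    """b_n = (1/π) ∫ f(t) sin(nt) dt for the square wave (f = π on (0, π], 0 elsewhere)."""
    t = np.linspace(0.0, np.pi, n_pts)   # f vanishes on (-π, 0], so integrate on [0, π]
    y = np.pi * np.sin(n * t)
    h = t[1] - t[0]
    return (1 / np.pi) * h * (y.sum() - 0.5 * (y[0] + y[-1]))

print([round(square_wave_bn(n), 4) for n in range(1, 7)])  # ≈ [2, 0, 2/3, 0, 2/5, 0]
```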

We have so far skirted the issue of convergence. For example, if \(f(t)\) is the square wave function, the equation

\begin{equation*}
f(t) =
\frac{\pi}{2} + \sum_{k=1}^\infty
\frac{2}{2k-1} \,
\sin \bigl( (2k-1)\, t \bigr)
\end{equation*}

is an equality only at those \(t\) where \(f(t)\) is continuous. That is, we do not get an equality at \(t=-\pi,0,\pi\) or at any of the other discontinuities of \(f(t)\text{.}\) It is not hard to see that when \(t\) is an integer multiple of \(\pi\) (which includes all the discontinuities), then

\begin{equation*}
\frac{\pi}{2} + \sum_{k=1}^\infty
\frac{2}{2k-1} \,
\sin \bigl( (2k-1)\, t \bigr) = \frac{\pi}{2} .
\end{equation*}
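This collapse to \(\nicefrac{\pi}{2}\) is immediate from the series: \(\sin \bigl( (2k-1)\, t \bigr) = 0\) whenever \(t\) is an integer multiple of \(\pi\text{,}\) so every term of the sum vanishes there. A quick numerical confirmation (the helper name is my own):

```python
import numpy as np

def square_wave_partial_sum(t, K):
    """Partial sum π/2 + Σ_{k=1}^{K} 2/(2k-1) · sin((2k-1)t) of the square wave series."""
    k = np.arange(1, K + 1)
    return np.pi / 2 + np.sum(2.0 / (2 * k - 1) * np.sin((2 * k - 1) * t))

# At integer multiples of π every sine term vanishes, leaving π/2 (up to roundoff).
for t in (-np.pi, 0.0, np.pi, 2 * np.pi):
    print(t, square_wave_partial_sum(t, 100))
```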

We redefine \(f(t)\) on \([-\pi,\pi]\) as

\begin{equation*}
f(t) =
\begin{cases}
0 & \text{if } \; {-\pi} < t < 0 , \\
\pi & \text{if } \; \phantom{-}0 < t < \pi , \\
\nicefrac{\pi}{2} & \text{if } \; \phantom{-}t = -\pi,
t = 0,\text{ or }
t = \pi,
\end{cases}
\end{equation*}

and extend periodically. The series equals this extended \(f(t)\) everywhere, including the discontinuities. We will generally not worry about changing the function values at several (finitely many) points.

We will say more about convergence in the next section. Let us, however, briefly mention an effect of the discontinuity. Zoom in near the discontinuity in the square wave and plot the first 100 harmonics; see Figure 5.2.12. While the series is a very good approximation away from the discontinuities, the error (the overshoot) near the discontinuity at \(t=\pi\) does not seem to be getting any smaller. This behavior is known as the *Gibbs phenomenon*. The region where the error is large does, however, get smaller the more terms of the series we take.
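The size of the overshoot can be measured directly. As the number of terms grows, it tends to roughly \(9\%\) of the jump, which is about \(0.28\) here since the jump has size \(\pi\text{.}\) A sketch measuring it for the 100-term partial sum near the discontinuity at \(t = 0\) (the same behavior occurs at \(t = \pi\)):

```python
import numpy as np

def partial_sum(t, K):
    """K-term square wave partial sum: π/2 + Σ_{k=1}^{K} 2/(2k-1) · sin((2k-1)t)."""
    k = np.arange(1, K + 1).reshape(-1, 1)
    return np.pi / 2 + np.sum(2.0 / (2 * k - 1) * np.sin((2 * k - 1) * t), axis=0)

# Sample densely just to the right of the jump at t = 0 and find the peak height.
t = np.linspace(1e-4, 0.2, 20001)
overshoot = partial_sum(t, 100).max() - np.pi
print(overshoot)   # ≈ 0.28; it does not shrink as more terms are taken
```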

We can think of a periodic function as a “signal” that is a superposition of many signals of pure frequency. For example, we could think of the square wave as a tone of a certain base frequency. This base frequency is called the *fundamental frequency*. The square wave is then a superposition of many different pure tones of frequencies that are multiples of the fundamental frequency. In music, the higher frequencies are called the *overtones*. All the frequencies that appear are called the *spectrum* of the signal. On the other hand, a simple sine wave is only the pure tone (no overtones). The simplest way to make sound using a computer is with a square wave, and the sound is very different from a pure tone. If you ever played video games from the 1980s or so, then you have heard what square waves sound like.

Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(\sin (5t) + \cos (3t)\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)

Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(\lvert t \rvert\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)

Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(\lvert t \rvert^3\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)

Suppose \(f(t)\) is defined on \((-\pi,\pi]\) as

\begin{equation*}
f(t) =
\begin{cases}
-1 & \text{if } \; {-\pi} < t \leq 0 , \\
1 & \text{if } \; \phantom{-}0 < t \leq \pi .
\end{cases}
\end{equation*}

Extend periodically and compute the Fourier series of \(f(t)\text{.}\)

Suppose \(f(t)\) is defined on \((-\pi,\pi]\) as \(t^3\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)

Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(t^2\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)

There is another form of the Fourier series using complex exponentials that is sometimes easier to work with.

Let

\begin{equation*}
f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos (n t)
+ b_n \sin (n t) .
\end{equation*}

Use Euler's formula \(e^{i\theta} = \cos (\theta) + i \sin (\theta)\) to show that there exist complex numbers \(c_m\) such that

\begin{equation*}
f(t) =
\sum_{m=-\infty}^\infty c_m e^{imt} .
\end{equation*}

Note that the sum now ranges over all the integers including negative ones. Do not worry about convergence in this calculation. Hint: It may be better to start from the complex exponential form and write the series as

\begin{equation*}
c_0 + \sum_{m=1}^\infty \Bigl( c_m e^{imt} + c_{-m} e^{-imt} \Bigr).
\end{equation*}

Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(f(t) = \sin(t)\text{.}\) Extend periodically and compute the Fourier series.

Answer

\(\sin(t)\)

Suppose \(f(t)\) is defined on \((-\pi,\pi]\) as \(f(t) = \sin(\pi t)\text{.}\) Extend periodically and compute the Fourier series.

Answer

\(\sum\limits_{n=1}^\infty \frac{(\pi-n) \sin( \pi n+{\pi}^{2}) +(\pi+n)\sin(\pi n-{\pi}^{2}) }{\pi {n}^{2}-{\pi}^{3}} \sin(nt)\)

Suppose \(f(t)\) is defined on \((-\pi,\pi]\) as \(f(t) = \sin^2(t)\text{.}\) Extend periodically and compute the Fourier series.

Answer

\(\frac{1}{2}-\frac{1}{2}\cos(2t)\)

Suppose \(f(t)\) is defined on \((-\pi,\pi]\) as \(f(t) = t^4\text{.}\) Extend periodically and compute the Fourier series.

Answer

\(\frac{\pi^4}{5} + \sum\limits_{n=1}^\infty \frac{{(-1)}^{n} (8{\pi}^{2}{n}^{2}-48) }{{n}^{4}} \cos(nt)\)