
Section 11.8 Fourier series

Note: 3–4 lectures

Fourier series [1] is perhaps the most important (and most difficult to understand) of the series that we cover in this book. We have seen it in a few examples before, but let us start at the beginning.

Subsection 11.8.1 Trigonometric polynomials

A trigonometric polynomial is an expression of the form

\begin{equation*} a_0 + \sum_{n=1}^N \bigl(a_n \cos(nx) + b_n \sin(nx) \bigr), \end{equation*}

or equivalently, thanks to Euler's formula (\(e^{i\theta} = \cos(\theta) + i \sin(\theta)\)):

\begin{equation*} \sum_{n=-N}^N c_n e^{inx} . \end{equation*}

The second form is usually more convenient. If \(z \in \C\) with \(\sabs{z}=1,\) we write \(z = e^{ix}\text{,}\) and so

\begin{equation*} \sum_{n=-N}^N c_n e^{inx} = \sum_{n=-N}^N c_n z^n . \end{equation*}

So a trigonometric polynomial is really a rational function of the complex variable \(z\) (we are allowing negative powers) evaluated on the unit circle. There is a wonderful connection between power series (actually Laurent series because of the negative powers) and Fourier series because of this observation, but we will not investigate this further.
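To see the identification concretely, here is a minimal numerical sketch (Python with NumPy; the snippet and its names are ours, purely illustrative) checking that evaluating \(\sum c_n z^n\) at \(z = e^{ix}\) reproduces the trigonometric polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
# coefficients c_{-N}, ..., c_N of a trigonometric polynomial,
# stored so that c[n + N] holds c_n
c = rng.normal(size=2 * N + 1) + 1j * rng.normal(size=2 * N + 1)

x = 0.7                     # an arbitrary sample point
z = np.exp(1j * x)          # the corresponding point on the unit circle

trig = sum(c[n + N] * np.exp(1j * n * x) for n in range(-N, N + 1))
laurent = sum(c[n + N] * z ** n for n in range(-N, N + 1))

print(abs(trig - laurent))  # ~1e-16: the two forms agree
```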

Another reason why Fourier series are important and come up in so many applications is that the exponentials \(e^{ikx}\) are eigenfunctions [3] of various differential operators. For example,

\begin{equation*} \frac{d}{dx} \bigl[ e^{ikx} \bigr] = (ik) e^{ikx}, \qquad \frac{d^2}{dx^2} \bigl[ e^{ikx} \bigr] = (-k^2) e^{ikx} . \end{equation*}

That is, they are functions whose derivative is a scalar (the eigenvalue) times the function itself. Just as eigenvalues and eigenvectors are important in studying matrices, eigenvalues and eigenfunctions are important when studying linear differential equations.

The functions \(\cos (nx)\text{,}\) \(\sin (nx)\text{,}\) and \(e^{inx}\) are \(2\pi\)-periodic and hence trigonometric polynomials are also \(2\pi\)-periodic. We could rescale \(x\) to make the period different, but the theory is the same, so let us stick with the period of \(2\pi\text{.}\) For \(n \neq 0\text{,}\) the antiderivative of \(e^{inx}\) is \(\frac{e^{inx}}{in}\text{,}\) and so

\begin{equation*} \int_{-\pi}^\pi e^{inx} \, dx = \begin{cases} 2\pi & \text{if } n=0, \\ 0 & \text{otherwise.} \end{cases} \end{equation*}

Consider

\begin{equation*} f(x) := \sum_{n=-N}^N c_n e^{inx} , \end{equation*}

and for \(m=-N,\ldots,N\) compute

\begin{equation*} \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-imx} \, dx = \frac{1}{2\pi} \int_{-\pi}^\pi \left(\sum_{n=-N}^N c_n e^{i(n-m)x}\right) \, dx = \sum_{n=-N}^N c_n \frac{1}{2\pi} \int_{-\pi}^\pi e^{i(n-m)x} \, dx = c_m . \end{equation*}

We just found a way of computing the coefficients \(c_m\) using an integral of \(f\text{.}\) If \(\sabs{m} > N\text{,}\) the integral is just 0: We might as well have included enough zero coefficients to make \(\sabs{m} \leq N\text{.}\)
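For readers who like to experiment, the following Python sketch (illustrative only; the midpoint-rule helper `integrate` is our own) verifies the coefficient-extraction formula, including that the integral vanishes for \(\sabs{m} > N\text{:}\)

```python
import numpy as np

def integrate(h, a, b, npts=100000):
    # simple midpoint rule; very accurate for smooth periodic integrands
    x = a + (np.arange(npts) + 0.5) * (b - a) / npts
    return np.sum(h(x)) * (b - a) / npts

N = 4
rng = np.random.default_rng(1)
c = rng.normal(size=2 * N + 1) + 1j * rng.normal(size=2 * N + 1)

def f(x):  # f(x) = sum of c_n e^{inx} for n = -N, ..., N
    return sum(c[n + N] * np.exp(1j * n * x) for n in range(-N, N + 1))

for m in (-2, 0, 3, 7):  # note 7 > N, so that coefficient should be 0
    cm = integrate(lambda x: f(x) * np.exp(-1j * m * x), -np.pi, np.pi) / (2 * np.pi)
    expected = c[m + N] if abs(m) <= N else 0
    print(m, abs(cm - expected))  # all differences ~1e-15
```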

The coefficients also detect whether \(f\) is real-valued: \(f\) is real-valued if and only if \(c_{-m} = \overline{c_m}\) for all \(m\text{.}\)

Proof.

If \(f(x)\) is real-valued, that is \(\overline{f(x)} = f(x)\text{,}\) then

\begin{equation*} \overline{c_m} = \overline{ \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-imx} \, dx } = \frac{1}{2\pi} \int_{-\pi}^\pi \overline{ f(x) e^{-imx} } \, dx = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{imx} \, dx = c_{-m} . \end{equation*}

The complex conjugate goes inside the integral because the integral is done on real and imaginary parts separately.

On the other hand if \(c_{-m} = \overline{c_m}\text{,}\) then

\begin{equation*} \overline{c_{-m}\, e^{-imx}+ c_{m}\, e^{imx}} = \overline{c_{-m}}\, e^{imx}+ \overline{c_{m}}\, e^{-imx} = c_{m}\, e^{imx}+ c_{-m}\, e^{-imx} , \end{equation*}

which is real valued. Also \(c_0 = \overline{c_0}\text{,}\) so \(c_0\) is real. By pairing up the terms we obtain that \(f\) has to be real-valued.
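Numerically (a small Python sketch of our own, assuming NumPy), conjugate-symmetric coefficients indeed produce a real-valued sum:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
c = {0: rng.normal()}            # c_0 must be real
for m in range(1, N + 1):
    c[m] = rng.normal() + 1j * rng.normal()
    c[-m] = np.conj(c[m])        # impose c_{-m} = conj(c_m)

x = np.linspace(-np.pi, np.pi, 9)
f = sum(c[n] * np.exp(1j * n * x) for n in range(-N, N + 1))
print(np.max(np.abs(f.imag)))    # ~1e-16: f is real-valued
```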

The functions \(e^{inx}\) are also linearly independent.

Proof.

The result follows immediately from the integral formula for \(c_n\text{:}\) if \(\sum_{n=-N}^N c_n e^{inx} = 0\) for all \(x\text{,}\) then integrating against \(e^{-imx}\) shows that \(c_m = 0\) for every \(m\text{.}\)

Subsection 11.8.2 Fourier series

We now take limits. We call the series

\begin{equation*} \sum_{n=-\infty}^\infty c_n \, e^{inx} \end{equation*}

the Fourier series. The numbers \(c_n\) are called Fourier coefficients. Using Euler's formula \(e^{i\theta} = \cos(\theta) + i \sin (\theta)\text{,}\) we could also develop everything with sines and cosines, that is, as the series \(a_0 + \sum_{n=1}^\infty a_n \cos(nx) + b_n \sin(nx)\text{,}\) but it is equivalent and slightly more messy.
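Explicitly, matching the two forms term by term gives \(a_0 = c_0\) and, for \(n \geq 1\text{,}\)

\begin{equation*} a_n = c_n + c_{-n}, \qquad b_n = i \bigl( c_n - c_{-n} \bigr), \qquad c_{\pm n} = \frac{a_n \mp i b_n}{2} . \end{equation*}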

Several questions arise. What functions are expressible as Fourier series? Obviously, they have to be \(2\pi\)-periodic, but not every periodic function is expressible with the series. Furthermore, if we do have a Fourier series, for which \(x\) does it converge, if at all? Does it converge absolutely? Uniformly? Note also that the series is doubly infinite, so there are two limits to take. When talking about Fourier series convergence, we often talk about the following symmetric limit:

\begin{equation*} \lim_{N\to\infty} \sum_{n=-N}^N c_n e^{inx} . \end{equation*}

There are other ways we can sum the series that can get convergence in more situations, but we refrain from discussing those.

Conversely, we start with an integrable function \(f \colon [-\pi,\pi] \to \C\text{,}\) and we call the numbers

\begin{equation*} c_n := \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-inx} \, dx \end{equation*}

its Fourier coefficients. Often these numbers are written as \(\hat{f}(n)\text{.}\) [4] We then formally write down a Fourier series. As you might imagine, such a series might not even converge. We write

\begin{equation*} f(x) \sim \sum_{n=-\infty}^\infty c_n \, e^{inx} , \end{equation*}

although the \(\sim\) doesn't imply anything about the two sides being equal in any way. It is simply that we created a formal series using the formula for the coefficients.

A few sections ago, we proved that the Fourier series

\begin{equation*} \sum_{n=1}^\infty \frac{\sin(nx)}{n^2} \end{equation*}

converges uniformly and hence converges to a continuous function. This example and its proof can be extended to a more general criterion.

The criterion: Suppose there are \(C\) and \(\alpha > 1\) such that \(\sabs{c_n} \leq \frac{C}{\sabs{n}^{\alpha}}\) for all nonzero integers \(n\text{.}\) Then the Fourier series converges uniformly and hence to a continuous function. The proof is to apply the Weierstrass \(M\)-test (Theorem 11.2.4) and the \(p\)-series test, to find that the series converges uniformly and hence to a continuous function (Corollary 11.2.8). We can also take derivatives: if, moreover, \(\alpha > 2\text{,}\) then the series converges to a continuously differentiable function.

The trick is to first notice that the series converges to a continuous function by the criterion just proved, so in particular it converges at some point. Then differentiate the partial sums

\begin{equation*} \sum_{n=-N}^{N} i n c_n \,e^{inx} \end{equation*}

and notice that for all nonzero \(n\)

\begin{equation*} \sabs{i n c_n} \leq \frac{C}{\sabs{n}^{\alpha-1}} . \end{equation*}

The differentiated series converges uniformly by the \(M\)-test again (now with exponent \(\alpha - 1 > 1\)). As the original series converges at a point and the differentiated series converges uniformly, the original series \(\sum c_n\,e^{inx}\) converges to a continuously differentiable function, whose derivative is the differentiated series (see Theorem 11.2.14).

We can iterate the same reasoning. Suppose there is some \(C\) and \(\alpha > k+1\) (\(k \in \N\)) such that

\begin{equation*} \sabs{c_n} \leq \frac{C}{\sabs{n}^\alpha} \end{equation*}

for all nonzero integers \(n\text{.}\) Then the Fourier series converges to a \(k\)-times continuously differentiable function. Therefore, the faster the coefficients go to zero, the more regular the limit is.
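As a numerical illustration (our own sketch, assuming NumPy), take \(c_n = c_{-n} = \nicefrac{1}{\sabs{n}^3}\text{,}\) so \(\alpha = 3 > 2\text{:}\) a centered difference quotient of a high partial sum matches the term-by-term derivative:

```python
import numpy as np

n = np.arange(1, 100001)

def f(x):   # partial sum of sum_{n>=1} 2 cos(nx)/n^3, i.e. c_{+-n} = 1/n^3
    return np.sum(2 * np.cos(n * x) / n ** 3)

def df(x):  # the term-by-term derivative: -sum_{n>=1} 2 sin(nx)/n^2
    return np.sum(-2 * np.sin(n * x) / n ** 2)

x, h = 1.1, 1e-5
numeric = (f(x + h) - f(x - h)) / (2 * h)  # centered difference quotient
print(numeric, df(x))                      # agree to several digits
```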

Subsection 11.8.3 Orthonormal systems

Let us abstract away some of the properties of the exponentials and study a more general series for a function. One fundamental property of the exponentials, the property that makes Fourier series what it is, is that the exponentials form a so-called orthonormal system. Let us fix an interval \([a,b]\text{.}\) We define an inner product for the space of functions. We restrict our attention to Riemann integrable functions since we do not have the Lebesgue integral, which would be the natural choice. Let \(f\) and \(g\) be complex-valued Riemann integrable functions on \([a,b]\) and define the inner product

\begin{equation*} \langle f , g \rangle := \int_a^b f(x) \overline{g(x)} \, dx . \end{equation*}

If you have seen Hermitian inner products in linear algebra, this is precisely such a product. We have to put in the conjugate as we are working with complex numbers. We then have the “size,” that is, the \(L^2\) norm \(\snorm{f}_2\text{,}\) defined via its square:

\begin{equation*} \snorm{f}_2^2 := \langle f , f \rangle = \int_a^b \sabs{f(x)}^2 \, dx . \end{equation*}

Remark 11.8.5.

Notice the similarity to finite dimensions. For \(z = (z_1,z_2,\ldots,z_n) \in \C^n\text{,}\) we define

\begin{equation*} \langle z , w \rangle := \sum_{k=1}^n z_k \overline{w_k} . \end{equation*}

Then the norm is (usually denoted by simply \(\snorm{z}\) in \(\C^n\) rather than \(\snorm{z}_2\))

\begin{equation*} \snorm{z}^2 = \langle z , z \rangle = \sum_{k=1}^n \sabs{z_k}^2 . \end{equation*}

This is just the Euclidean distance to the origin in \(\C^n\) (same as \(\R^{2n}\)).

Let us get back to function spaces. In what follows, we will assume all functions are Riemann integrable.

Definition 11.8.6.

Let \(\{ \varphi_n \}\) be a sequence of integrable complex-valued functions on \([a,b]\text{.}\) We say that this is an orthonormal system if

\begin{equation*} \langle \varphi_n , \varphi_m \rangle = \int_a^b \varphi_n(x) \, \overline{\varphi_m(x)} \, dx = \begin{cases} 1 & \text{if } n=m, \\ 0 & \text{otherwise.} \end{cases} \end{equation*}

In particular, \(\snorm{\varphi_n}_2 = 1\) for all \(n\text{.}\) If we only require that \(\langle \varphi_n , \varphi_m \rangle = 0\) for \(m\not= n\text{,}\) then the system would be called an orthogonal system.

We noticed above that

\begin{equation*} \left\{ \frac{1}{\sqrt{2\pi}} \, e^{inx} \right\} \end{equation*}

is an orthonormal system. The factor in front makes the norm equal to 1.
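A quick numerical confirmation (an illustrative Python sketch; the quadrature helper is ours):

```python
import numpy as np

def inner(u, v, npts=100000):
    # <u, v> = integral over [-pi, pi] of u(x) * conj(v(x)) dx (midpoint rule)
    x = -np.pi + (np.arange(npts) + 0.5) * 2 * np.pi / npts
    return np.sum(u(x) * np.conj(v(x))) * 2 * np.pi / npts

def phi(n):  # the normalized exponentials e^{inx} / sqrt(2 pi)
    return lambda x: np.exp(1j * n * x) / np.sqrt(2 * np.pi)

for n in range(-2, 3):
    for m in range(-2, 3):
        val = inner(phi(n), phi(m))
        assert abs(val - (1 if n == m else 0)) < 1e-10
print("the system is orthonormal on [-pi, pi]")
```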

Having an orthonormal system \(\{ \varphi_n \}\) on \([a,b]\) and an integrable function \(f\) on \([a,b]\text{,}\) we can write a Fourier series relative to \(\{ \varphi_n \}\text{.}\) We let

\begin{equation*} c_n := \langle f , \varphi_n \rangle = \int_a^b f(x) \overline{\varphi_n(x)} \, dx , \end{equation*}

and write

\begin{equation*} f(x) \sim \sum_{n=1}^\infty c_n \varphi_n(x) . \end{equation*}

In other words, the series is

\begin{equation*} \sum_{n=1}^\infty \langle f , \varphi_n \rangle \varphi_n(x) . \end{equation*}

Notice the similarity to the expression for the orthogonal projection of a vector onto a subspace from linear algebra. We are in fact doing just that, but in a space of functions.

Theorem 11.8.7.

Let \(\{ \varphi_n \}\) be an orthonormal system on \([a,b]\) and \(f\) an integrable function. Let \(c_k := \langle f , \varphi_k \rangle\text{,}\) write \(s_n := \sum_{k=1}^n c_k \varphi_k\) for the partial sums of the Fourier series of \(f\text{,}\) and let \(p_n := \sum_{k=1}^n d_k \varphi_k\) be an arbitrary sum with \(d_k \in \C\text{.}\) Then

\begin{equation*} \snorm{f - s_n}_2 \leq \snorm{f - p_n}_2 , \end{equation*}

with equality precisely when \(d_k = c_k\) for \(k=1,\ldots,n\text{.}\) In other words, the partial sums of the Fourier series are the best approximation with respect to the \(L^2\) norm.

Proof.

Let us write

\begin{equation*} \int_a^b \sabs{f-p_n}^2 = \int_a^b \sabs{f}^2 - \int_a^b f \widebar{p_n} - \int_a^b \widebar{f} p_n + \int_a^b \sabs{p_n}^2 . \end{equation*}

Now

\begin{equation*} \int_a^b f \widebar{p_n} = \int_a^b f \sum_{k=1}^n \overline{d_k} \overline{\varphi_k} = \sum_{k=1}^n \overline{d_k} \int_a^b f \, \overline{\varphi_k} = \sum_{k=1}^n \overline{d_k} c_k , \end{equation*}

and

\begin{equation*} \int_a^b \sabs{p_n}^2 = \int_a^b \sum_{k=1}^n d_k \varphi_k \sum_{j=1}^n \overline{d_j} \overline{\varphi_j} = \sum_{k=1}^n \sum_{j=1}^n d_k \overline{d_j} \int_a^b \varphi_k \overline{\varphi_j} = \sum_{k=1}^n \sabs{d_k}^2 . \end{equation*}

So

\begin{equation*} \int_a^b \sabs{f-p_n}^2 = \int_a^b \sabs{f}^2 - \sum_{k=1}^n \overline{d_k} c_k - \sum_{k=1}^n d_k \overline{c_k} + \sum_{k=1}^n \sabs{d_k}^2 = \int_a^b \sabs{f}^2 - \sum_{k=1}^n \sabs{c_k}^2 + \sum_{k=1}^n \sabs{d_k-c_k}^2 . \end{equation*}

This is minimized precisely when \(d_k = c_k\text{.}\)

When we do plug in \(d_k = c_k\text{,}\) then

\begin{equation*} \int_a^b \sabs{f-s_n}^2 = \int_a^b \sabs{f}^2 - \sum_{k=1}^n \sabs{c_k}^2 \end{equation*}

and so

\begin{equation*} \sum_{k=1}^n \sabs{c_k}^2 \leq \int_a^b \sabs{f}^2 \end{equation*}

for all \(n\text{.}\) Note that

\begin{equation*} \sum_{k=1}^n \sabs{c_k}^2 = \snorm{s_n}_2^2 \end{equation*}

by the calculation above. Taking the limit as \(n \to \infty\text{,}\) we obtain the so-called Bessel's inequality [5]:

\begin{equation*} \sum_{k=1}^\infty \sabs{c_k}^2 \leq \int_a^b \sabs{f}^2 . \end{equation*}

In particular, since a Riemann integrable function satisfies \(\int_a^b \sabs{f}^2 < \infty\text{,}\) the series \(\sum_{k=1}^\infty \sabs{c_k}^2\) converges, and hence

\begin{equation*} \lim_{k \to \infty} c_k = 0 . \end{equation*}
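Here is a numerical illustration of Bessel's inequality (a sketch of ours, assuming NumPy), using the orthonormal system \(\{e^{inx}/\sqrt{2\pi}\}\) on \([-\pi,\pi]\) and the test function \(f(x) = e^x\text{:}\)

```python
import numpy as np

def integrate(h, npts=100000):
    # midpoint rule on [-pi, pi]
    x = -np.pi + (np.arange(npts) + 0.5) * 2 * np.pi / npts
    return np.sum(h(x)) * 2 * np.pi / npts

f = np.exp  # any Riemann integrable function works here

def phi(n):
    return lambda x: np.exp(1j * n * x) / np.sqrt(2 * np.pi)

# c_n = <f, phi_n>
cs = [integrate(lambda x: f(x) * np.conj(phi(n)(x))) for n in range(-50, 51)]

print(sum(abs(c) ** 2 for c in cs))         # sum of |c_n|^2 over 101 terms
print(integrate(lambda x: abs(f(x)) ** 2))  # ||f||_2^2, strictly larger
```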

Subsection 11.8.4 The Dirichlet kernel and approximate delta functions

Let us return to the trigonometric Fourier series. Here we note that the system \(\{ e^{inx} \}\) is orthogonal, but not orthonormal if we simply integrate over \([-\pi,\pi]\text{.}\) We can also rescale the integral and hence the inner product to make \(\{ e^{inx} \}\) orthonormal. That is, if we replace

\begin{equation*} \int_a^b \qquad \text{with} \qquad \frac{1}{2\pi} \int_{-\pi}^\pi, \end{equation*}

(we are just rescaling the \(dx\) really) [7], then everything works and we obtain that the system \(\{ e^{inx} \}\) is orthonormal with respect to the inner product

\begin{equation*} \langle f , g \rangle = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) \, \overline{g(x)} \, dx . \end{equation*}

Suppose \(f \colon \R \to \C\) is \(2\pi\)-periodic and integrable on \([-\pi,\pi]\text{.}\) Let

\begin{equation*} c_n := \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-inx} \, dx . \end{equation*}

Write

\begin{equation*} f(x) \sim \sum_{n=-\infty}^\infty c_n \,e^{inx} . \end{equation*}

Define the symmetric partial sums

\begin{equation*} s_N(f;x) := \sum_{n=-N}^N c_n \,e^{inx} . \end{equation*}

The inequality leading up to Bessel now reads:

\begin{equation*} \frac{1}{2\pi} \int_{-\pi}^\pi \sabs{s_N(f;x)}^2 \, dx = \sum_{n=-N}^N \sabs{c_n}^2 \leq \frac{1}{2\pi} \int_{-\pi}^\pi \sabs{f(x)}^2 \, dx . \end{equation*}

The Dirichlet kernel is the sum

\begin{equation*} D_N(x) := \sum_{n=-N}^N e^{inx} . \end{equation*}

We claim that

\begin{equation*} D_N(x) = \sum_{n=-N}^N e^{inx} = \frac{\sin\bigl( (N+\nicefrac{1}{2})x \bigr)}{\sin(\nicefrac{x}{2})} , \end{equation*}

at least for \(x\) such that \(\sin(\nicefrac{x}{2}) \not= 0\text{.}\) We know that the left-hand side is continuous and hence the right-hand side extends continuously to all of \(\R\) as well. To show the claim we use a familiar trick:

\begin{equation*} (e^{ix}-1) D_N(x) = e^{i(N+1)x} - e^{-iNx} . \end{equation*}

Multiply by \(e^{-ix/2}\text{:}\)

\begin{equation*} (e^{ix/2}-e^{-ix/2}) D_N(x) = e^{i(N+\nicefrac{1}{2})x} - e^{-i(N+\nicefrac{1}{2})x} . \end{equation*}

As \(e^{i\theta} - e^{-i\theta} = 2i\sin(\theta)\text{,}\) this says \(2i \sin(\nicefrac{x}{2}) \, D_N(x) = 2i \sin\bigl( (N+\nicefrac{1}{2})x \bigr)\text{,}\) and the claim follows.
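The closed form is easy to confirm numerically (our illustrative sketch, assuming NumPy):

```python
import numpy as np

def D_sum(N, x):     # the defining sum; it is real since terms pair up
    return np.real(sum(np.exp(1j * n * x) for n in range(-N, N + 1)))

def D_closed(N, x):  # the claimed closed form
    return np.sin((N + 0.5) * x) / np.sin(x / 2)

x = np.linspace(0.01, np.pi, 500)  # stay away from sin(x/2) = 0
for N in (5, 20):
    print(N, np.max(np.abs(D_sum(N, x) - D_closed(N, x))))  # ~1e-12
```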

We expand the definition of \(s_N\)

\begin{multline*} s_N(f;x) = \sum_{n=-N}^N \frac{1}{2\pi} \int_{-\pi}^\pi f(t) e^{-int} \, dt ~ e^{inx} \\ = \frac{1}{2\pi} \int_{-\pi}^\pi f(t) \sum_{n=-N}^N e^{in(x-t)} \, dt = \frac{1}{2\pi} \int_{-\pi}^\pi f(t) D_N(x-t) \, dt . \end{multline*}

If you replace \(x-t\) with \(t-x\) (\(D_N\) is even), we see that convolution strikes again! As \(D_N\) and \(f\) are \(2\pi\)-periodic, we may also change variables and write

\begin{equation*} s_N(f;x) = \frac{1}{2\pi} \int_{x-\pi}^{x+\pi} f(x-t) D_N(t) \, dt = \frac{1}{2\pi} \int_{-\pi}^\pi f(x-t) D_N(t) \, dt . \end{equation*}
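To see the convolution formula in action, the following sketch (ours; the smooth test function is chosen so that simple quadrature is accurate) computes \(s_N(f;x)\) both from the coefficients and as a convolution with \(D_N\text{:}\)

```python
import numpy as np

def integrate(h, npts=100000):
    x = -np.pi + (np.arange(npts) + 0.5) * 2 * np.pi / npts
    return np.sum(h(x)) * 2 * np.pi / npts

f = lambda x: np.exp(np.cos(x))  # a smooth 2*pi-periodic test function

def D(N, t):  # Dirichlet kernel
    return np.real(sum(np.exp(1j * n * t) for n in range(-N, N + 1)))

N, x0 = 10, 0.9

# s_N(f; x0) directly from the Fourier coefficients
c = {n: integrate(lambda t: f(t) * np.exp(-1j * n * t)) / (2 * np.pi)
     for n in range(-N, N + 1)}
direct = sum(c[n] * np.exp(1j * n * x0) for n in range(-N, N + 1))

# s_N(f; x0) as a convolution with the Dirichlet kernel
conv = integrate(lambda t: f(x0 - t) * D(N, t)) / (2 * np.pi)

print(abs(direct - conv))  # ~1e-13: the two expressions agree
```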

See Figure 11.10 for a plot of \(D_N\) for \(N=5\) and \(N=20\text{.}\)


Figure 11.10. Plot of \(D_N(x)\) for \(N=5\) (gray) and \(N=20\) (black).

The central peak gets taller and taller as \(N\) gets larger, and the side peaks stay small. We are convolving (again) with approximate delta functions, although these functions have all these oscillations away from zero. The oscillations on the side do not go away but they are eventually so fast that we expect the integral to just sort of cancel itself out there. Overall, we expect that \(s_N(f)\) goes to \(f\text{.}\) Things are not always simple, but under some conditions on \(f\text{,}\) such a conclusion holds. For this reason people write

\begin{equation*} 2\pi \, \delta(x) \sim \sum_{n=-\infty}^\infty e^{inx} , \end{equation*}

where \(\delta\) is the “delta function” (not really a function), an object that behaves like “\(\int_{-\pi}^{\pi} f(x-t) \delta(t) \, dt = f(x)\text{.}\)” We can think of \(D_N(x)\) as converging in some sense to \(2 \pi\, \delta(x)\text{.}\) However, we have not defined (and will not define) what kind of an object the delta function is, nor what it means for it to be a limit of \(D_N\) or to have a Fourier series.

Subsection 11.8.5 Localization

Theorem 11.8.9.

Let \(f \colon \R \to \C\) be a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) and suppose there are \(x \in \R\text{,}\) \(\delta > 0\text{,}\) and \(M\) such that \(\sabs{f(x)-f(x-t)} \leq M \sabs{t}\) for all \(t \in (-\delta,\delta)\text{.}\) Then \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\) That is, if \(f\) satisfies a Lipschitz condition at a point, then the Fourier series converges at that point.

In particular, if \(f\) is continuously differentiable at \(x\text{,}\) then we obtain convergence (exercise). We state an often used version of this corollary. A function \(f \colon [a,b] \to \C\) is continuous piecewise smooth if it is continuous and there exist points \(x_0 = a < x_1 < x_2 < \cdots < x_k = b\) such that \(f\) restricted to \([x_j,x_{j+1}]\) is continuously differentiable (up to the endpoints) for all \(j\text{.}\)

Corollary 11.8.10.

If \(f \colon \R \to \C\) is a \(2\pi\)-periodic function that is continuous piecewise smooth near a point \(x\text{,}\) then \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\)

The proof of the corollary is left as an exercise. Let us prove the theorem.

Proof.

(Proof of Theorem 11.8.9) For all \(N\text{,}\)

\begin{equation*} \frac{1}{2\pi} \int_{-\pi}^\pi D_N = 1 . \end{equation*}

Write

\begin{equation*} \begin{split} s_N(f;x)-f(x) & = \frac{1}{2\pi} \int_{-\pi}^\pi f(x-t) D_N(t) \, dt - f(x) \frac{1}{2\pi} \int_{-\pi}^\pi D_N(t) \, dt \\ & = \frac{1}{2\pi} \int_{-\pi}^\pi \bigl( f(x-t) - f(x) \bigr) D_N(t) \, dt \\ & = \frac{1}{2\pi} \int_{-\pi}^\pi \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \sin\bigl( (N+\nicefrac{1}{2})t \bigr) \, dt . \end{split} \end{equation*}

By the hypotheses, for small nonzero \(t\) we get

\begin{equation*} \abs{ \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} } \leq \frac{M\sabs{t}}{\sabs{\sin(\nicefrac{t}{2})}} . \end{equation*}

As \(\sin(\theta) = \theta + h(\theta)\) where \(\frac{h(\theta)}{\theta} \to 0\) as \(\theta \to 0\text{,}\) the quotient \(\frac{M\sabs{t}}{\sabs{\sin(\nicefrac{t}{2})}}\) extends continuously to the origin (it tends to \(2M\)), and hence \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})}\) is bounded near the origin. As \(t=0\) is the only place on \([-\pi,\pi]\) where the denominator vanishes, it is the only place where there could be a problem, and so this function is Riemann integrable on \([-\pi,\pi]\text{.}\) We use the trigonometric identity

\begin{equation*} \sin\bigl( (N+\nicefrac{1}{2})t \bigr) = \cos(\nicefrac{t}{2}) \sin(Nt) + \sin(\nicefrac{t}{2}) \cos(Nt) , \end{equation*}

so

\begin{multline*} \frac{1}{2\pi} \int_{-\pi}^\pi \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \sin\bigl( (N+\nicefrac{1}{2})t \bigr) \, dt = \\ \frac{1}{2\pi} \int_{-\pi}^\pi \left( \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \cos (\nicefrac{t}{2}) \right) \sin (Nt) \, dt + \frac{1}{2\pi} \int_{-\pi}^\pi \bigl( f(x-t) - f(x) \bigr) \cos (Nt) \, dt . \end{multline*}

Now \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \cos (\nicefrac{t}{2})\) and \(\bigl( f(x-t) - f(x) \bigr)\) are bounded Riemann integrable functions, and so their Fourier coefficients go to zero by Theorem 11.8.8. Hence the two integrals on the right-hand side, which compute Fourier coefficients for the real (sine and cosine) version of the Fourier series, go to 0 as \(N\) goes to infinity; this works because \(\sin(Nt)\) and \(\cos(Nt)\text{,}\) suitably normalized, also form orthonormal systems with respect to the same inner product. Hence \(s_N(f;x)-f(x)\) goes to 0, that is, \(s_N(f;x)\) goes to \(f(x)\text{.}\)

The theorem also says that convergence depends only on local behavior: If \(f\) is zero on some open interval \(J\text{,}\) then \(\lim_{N \to \infty} s_N(f;x) = 0\) for every \(x \in J\text{.}\) In particular, if \(f = g\) on an open interval \(J\text{,}\) then for every \(x \in J\) the sequences \(s_N(f;x)\) and \(s_N(g;x)\) either both converge to the same limit or both diverge.

That is, convergence at \(x\) depends only on the values of the function near \(x\text{.}\) To prove the first claim, take \(M=0\) in the theorem. The “In particular” follows by considering the function \(f-g\text{,}\) which is zero on \(J\text{,}\) and noting that \(s_N(f-g) = s_N(f) - s_N(g)\text{.}\) On the other hand, we have seen that the rate of convergence, that is, how fast \(s_N(f)\) converges to \(f\text{,}\) depends on the global behavior of the function.

There is a subtle difference between the corollary and what can be achieved by the Stone–Weierstrass theorem. Any continuous function on \([-\pi,\pi]\) can be uniformly approximated by trigonometric polynomials, but these trigonometric polynomials need not be the partial sums \(s_N\text{.}\)

Subsection 11.8.6 Parseval's theorem

Finally, convergence always happens in the \(L^2\) sense, and operations on the (infinite) vectors of Fourier coefficients are the same as the operations using the integral inner product. This is Parseval's theorem [8]: Suppose \(f\) and \(g\) are \(2\pi\)-periodic functions, Riemann integrable on \([-\pi,\pi]\text{,}\) with \(f(x) \sim \sum c_n \,e^{inx}\) and \(g(x) \sim \sum d_n \,e^{inx}\text{.}\) Then

\begin{equation*} \lim_{N\to\infty} \snorm{f - s_N(f)}_2 = 0 , \qquad \langle f , g \rangle = \sum_{n=-\infty}^\infty c_n \overline{d_n} , \qquad \text{and} \qquad \snorm{f}_2^2 = \sum_{n=-\infty}^\infty \sabs{c_n}^2 . \end{equation*}

Proof.

Let \(\epsilon > 0\) be arbitrary. There exists (exercise) a continuous \(2\pi\)-periodic function \(h\) such that

\begin{equation*} \snorm{f-h}_2 < \epsilon . \end{equation*}

Via Stone–Weierstrass, approximate \(h\) with a trigonometric polynomial uniformly. That is, there is a trigonometric polynomial \(P(x)\) such that \(\sabs{h(x) - P(x)} < \epsilon\) for all \(x\text{.}\) Hence

\begin{equation*} \snorm{h-P}_2 = \sqrt{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \sabs{h(x)-P(x)}^2 \, dx } \leq \epsilon. \end{equation*}

If \(P\) is of degree \(N_0\text{,}\) then for all \(N \geq N_0\)

\begin{equation*} \snorm{h-s_N(h)}_2 \leq \snorm{h-P}_2 \leq \epsilon , \end{equation*}

as \(s_N(h)\) is the best approximation for \(h\) in \(L^2\) (Theorem 11.8.7). By the inequality leading up to Bessel, we have

\begin{equation*} \snorm{s_N(h)-s_N(f)}_2 = \snorm{s_N(h-f)}_2 \leq \snorm{h-f}_2 \leq \epsilon . \end{equation*}

The \(L^2\) norm satisfies the triangle inequality (exercise). Thus, for all \(N \geq N_0\text{,}\)

\begin{equation*} \snorm{f-s_N(f)}_2 \leq \snorm{f-h}_2 + \snorm{h-s_N(h)}_2 + \snorm{s_N(h)-s_N(f)}_2 \leq 3\epsilon . \end{equation*}

Hence, the first claim follows.

Next,

\begin{equation*} \langle s_N(f) , g \rangle = \frac{1}{2\pi} \int_{-\pi}^\pi s_N(f;x) \overline{g(x)} \, dx = \sum_{k=-N}^N c_k \frac{1}{2\pi} \int_{-\pi}^\pi e^{ikx} \overline{g(x)} \, dx = \sum_{k=-N}^N c_k \overline{d_k} . \end{equation*}

We need the Schwarz (or Cauchy–Schwarz or Cauchy–Bunyakovsky–Schwarz) inequality, that is,

\begin{equation*} {\abs{\int_a^b f\bar{g}}}^2 \leq \left( \int_a^b \sabs{f}^2 \right) \left( \int_a^b \sabs{g}^2 \right) . \end{equation*}

This is left as an exercise. The proof is not really different from the finite-dimensional version. So

\begin{equation*} \begin{split} \abs{\int_{-\pi}^\pi f\bar{g} - \int_{-\pi}^\pi s_N(f)\bar{g}} & = \abs{\int_{-\pi}^\pi (f- s_N(f))\bar{g}} \\ & \leq {\left(\int_{-\pi}^\pi \sabs{f- s_N(f)}^2 \right)}^{1/2} {\left( \int_{-\pi}^\pi \sabs{g}^2 \right)}^{1/2} . \end{split} \end{equation*}

The right-hand side goes to 0 as \(N\) goes to infinity by the first claim of the theorem. That is, as \(N\) goes to infinity, \(\langle s_N(f),g \rangle\) goes to \(\langle f,g \rangle\text{,}\) and the second claim is proved. The last claim in the theorem follows by using \(g=f\text{.}\)
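The identities are easy to test numerically; here is an illustrative sketch (ours, assuming NumPy), with smooth test functions chosen so that truncating at \(\sabs{n} \leq 40\) and simple quadrature suffice:

```python
import numpy as np

def integrate(h, npts=100000):
    x = -np.pi + (np.arange(npts) + 0.5) * 2 * np.pi / npts
    return np.sum(h(x)) * 2 * np.pi / npts

f = lambda x: np.exp(np.cos(x))          # smooth 2*pi-periodic functions,
g = lambda x: np.sin(x) + np.cos(2 * x)  # so the coefficients decay fast

def coeffs(h, N=40):
    return {n: integrate(lambda x: h(x) * np.exp(-1j * n * x)) / (2 * np.pi)
            for n in range(-N, N + 1)}

c, d = coeffs(f), coeffs(g)

inner_fg = integrate(lambda x: f(x) * np.conj(g(x))) / (2 * np.pi)
print(abs(inner_fg - sum(c[n] * np.conj(d[n]) for n in c)))  # ~0

norm_sq = integrate(lambda x: abs(f(x)) ** 2) / (2 * np.pi)
print(abs(norm_sq - sum(abs(c[n]) ** 2 for n in c)))         # ~0
```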

Subsection 11.8.7 Exercises

Exercise 11.8.1.

Consider the Fourier series

\begin{equation*} \sum_{n=1}^\infty \frac{1}{2^n} \sin(2^n x) . \end{equation*}

Show that the series converges uniformly and absolutely to a continuous function. Note: This is another example of a nowhere differentiable function (you do not have to prove that) [10]. See Figure 11.11.


Figure 11.11. Plot of \(\sum_{n=1}^\infty \frac{1}{2^n} \sin(2^n x)\text{.}\)

Exercise 11.8.2.

Suppose \(f\) is a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) that is continuously differentiable on some open interval \((a,b)\text{.}\) Prove that for every \(x \in (a,b)\text{,}\) we have \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\)

Exercise 11.8.3.

Prove Corollary 11.8.10, that is, suppose a \(2\pi\)-periodic function \(f\) is continuous piecewise smooth near a point \(x\text{;}\) then \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\) Hint: See the previous exercise.

Exercise 11.8.4.

Let \(f \colon \R \to \C\) be a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) and let \(\epsilon > 0\text{.}\) Show that there exists a continuous \(2\pi\)-periodic function \(g \colon \R \to \C\) such that \(\snorm{f-g}_2 < \epsilon\text{.}\)

Exercise 11.8.5.

Prove the Cauchy–Bunyakovsky–Schwarz inequality for Riemann integrable functions:

\begin{equation*} {\abs{\int_a^b f\bar{g}}}^2 \leq \left( \int_a^b \sabs{f}^2 \right) \left( \int_a^b \sabs{g}^2 \right) . \end{equation*}

Exercise 11.8.6.

Prove the \(L^2\) triangle inequality for Riemann integrable functions on \([-\pi,\pi]\text{:}\)

\begin{equation*} \snorm{f+g}_2 \leq \snorm{f}_2 + \snorm{g}_2 . \end{equation*}

Exercise 11.8.7.

Suppose for some \(C\) and \(\alpha > 1\text{,}\) we have a real sequence \(\{ a_n \}\) with \(\abs{a_n} \leq \frac{C}{n^\alpha}\) for all \(n\text{.}\) Let

\begin{equation*} g(x) := \sum_{n=1}^\infty a_n \sin(n x) . \end{equation*}
  1. Show that \(g\) is continuous.

  2. Formally (that is, suppose you can differentiate under the sum) find a solution (formal solution, that is, do not yet worry about convergence) to the differential equation

    \begin{equation*} y''+ 2 y = g(x) \end{equation*}

    of the form

    \begin{equation*} y(x) = \sum_{n=1}^\infty b_n \sin(n x) . \end{equation*}
  3. Then show that this solution \(y\) is twice continuously differentiable, and in fact solves the equation.

Exercise 11.8.8.

Let \(f\) be a \(2\pi\)-periodic function such that \(f(x) = x\) for \(0 < x < 2\pi\text{.}\) Use Parseval's theorem to find

\begin{equation*} \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} . \end{equation*}

Exercise 11.8.9.

Suppose that \(c_n = 0\) for all \(n < 0\) and \(\sum_{n=0}^\infty \sabs{c_n}\) converges. Let \(\D := B(0,1) \subset \C\) be the unit disc, and \(\overline{\D} = C(0,1)\) the closed unit disc. Show that there exists a continuous function \(f \colon \overline{\D} \to \C\) that is analytic on \(\D\) and such that on the boundary of \(\D\) we have \(f(e^{i\theta}) = \sum_{n=0}^\infty c_n e^{in\theta}\text{.}\)
Hint: If \(z=re^{i\theta}\text{,}\) then \(z^n = r^n e^{in\theta}\text{.}\)

Exercise 11.8.10.

Show that

\begin{equation*} \sum_{n=1}^\infty e^{-n} \sin(n x) \end{equation*}

converges to an infinitely differentiable function.

Exercise 11.8.11.

Let \(f\) be a \(2\pi\)-periodic function such that \(f(x) = f(0) + \int_0^x g\) for a function \(g\) that is Riemann integrable on every interval. Suppose

\begin{equation*} f(x) \sim \sum_{n=-\infty}^\infty c_n \,e^{inx} . \end{equation*}

Show that there exists a \(C > 0\) such that \(\sabs{c_n} \leq \frac{C}{\sabs{n}}\text{.}\)

Exercise 11.8.12.

  1. Let \(\varphi\) be the \(2\pi\)-periodic function defined by \(\varphi(x) := 0\) if \(x \in (-\pi,0)\text{,}\) and \(\varphi(x) := 1\) if \(x \in (0,\pi)\text{,}\) letting \(\varphi(0)\) and \(\varphi(\pi)\) be arbitrary. Show that \(\lim \, s_N(\varphi;0) = \nicefrac{1}{2}\text{.}\)

  2. Let \(f\) be a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) and suppose \(x \in \R\text{,}\) \(\delta > 0\text{,}\) and there are continuously differentiable \(g \colon [x-\delta,x] \to \C\) and \(h \colon [x,x+\delta] \to \C\) such that \(f(t) = g(t)\) for all \(t \in [x-\delta,x)\) and \(f(t) = h(t)\) for all \(t \in (x,x+\delta]\text{.}\) Prove that \(\lim\limits_{N\to\infty} s_N(f;x) = \frac{g(x)+h(x)}{2}\text{,}\) or in other words,

    \begin{equation*} \lim_{N \to \infty} s_N(f;x) = \frac{1}{2} \left( \lim_{t \to x^-} f(t) + \lim_{t \to x^+} f(t) \right) . \end{equation*}
[1] Named after the French mathematician Jean-Baptiste Joseph Fourier [2] (1768–1830).
[2] https://en.wikipedia.org/wiki/Joseph_Fourier
[3] Eigenfunction is like an eigenvector for a matrix, but for a linear operator on a vector space of functions.
[4] The notation seems similar to the Fourier transform for those readers who have seen it. The similarity is not just coincidental; we are taking a type of Fourier transform here.
[5] Named after the German astronomer, mathematician, physicist, and geodesist Friedrich Wilhelm Bessel [6] (1784–1846).
[6] https://en.wikipedia.org/wiki/Friedrich_Bessel
[7] Mathematicians in this field sometimes simplify matters by making a tongue-in-cheek definition that \(1=2\pi\text{.}\)
[8] Named after the French mathematician Marc-Antoine Parseval [9] (1755–1836).
[9] https://en.wikipedia.org/wiki/Marc-Antoine_Parseval
[10] See G. H. Hardy, Weierstrass's Non-Differentiable Function, Transactions of the American Mathematical Society, 17, No. 3 (Jul., 1916), pp. 301–325.