Section 11.8 Fourier series
Note: 3–4 lectures
Fourier series^{ 1 } is perhaps the most important (and most difficult to understand) of the series that we cover in this book. We have seen it in a few examples before, but let us start at the beginning.
Subsection 11.8.1 Trigonometric polynomials
A trigonometric polynomial is an expression of the form
\begin{equation*}
a_0 + \sum_{n=1}^N \bigl( a_n \cos(nx) + b_n \sin(nx) \bigr) ,
\end{equation*}
or equivalently, thanks to Euler's formula (\(e^{i\theta} = \cos(\theta) + i \sin(\theta)\)):
\begin{equation*}
\sum_{n=-N}^N c_n \, e^{inx} .
\end{equation*}
The second form is usually more convenient. If \(z \in \C\) with \(\sabs{z}=1,\) we write \(z = e^{ix}\text{,}\) and so
\begin{equation*}
\sum_{n=-N}^N c_n \, e^{inx} = \sum_{n=-N}^N c_n \, z^n .
\end{equation*}
So a trigonometric polynomial is really a rational function of the complex variable \(z\) (we are allowing negative powers) evaluated on the unit circle. There is a wonderful connection between power series (actually Laurent series because of the negative powers) and Fourier series because of this observation, but we will not investigate this further.
Another reason why Fourier series are important and come up in so many applications is that the functions \(e^{inx}\) are eigenfunctions^{ 3 } of various differential operators. For example,
\begin{equation*}
\frac{d}{dx} \bigl[ e^{inx} \bigr] = in \, e^{inx} .
\end{equation*}
That is, they are the functions whose derivative is a scalar (the eigenvalue) times itself. Just as eigenvalues and eigenvectors are important in studying matrices, eigenvalues and eigenfunctions are important when studying linear differential equations.
The functions \(\cos (nx)\text{,}\) \(\sin (nx)\text{,}\) and \(e^{inx}\) are \(2\pi\)-periodic, and hence trigonometric polynomials are also \(2\pi\)-periodic. We could rescale \(x\) to make the period different, but the theory is the same, so let us stick with the period \(2\pi\text{.}\) For nonzero \(n\text{,}\) the antiderivative of \(e^{inx}\) is \(\frac{e^{inx}}{in}\text{,}\) and so
\begin{equation*}
\int_{-\pi}^{\pi} e^{inx} \, dx =
\begin{cases}
2\pi & \text{if } n = 0 , \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
Consider
\begin{equation*}
f(x) = \sum_{n=-N}^N c_n \, e^{inx} ,
\end{equation*}
and for \(m=-N,\ldots,N\) compute
\begin{equation*}
\frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-imx} \, dx
= \frac{1}{2\pi} \sum_{n=-N}^N c_n \int_{-\pi}^{\pi} e^{i(n-m)x} \, dx
= c_m .
\end{equation*}
We just found a way of computing the coefficients \(c_m\) using an integral of \(f\text{.}\) If \(\sabs{m} > N\text{,}\) the integral is just 0: We might as well have included enough zero coefficients to make \(\sabs{m} \leq N\text{.}\)
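The coefficient-extraction formula lends itself to a quick numerical check. The following sketch (our own illustration; NumPy, the random coefficients, and the grid size are not part of the text) builds a trigonometric polynomial and recovers each coefficient by a uniform Riemann-sum approximation of \(\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\, e^{-imx}\, dx\text{:}\)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
c = rng.standard_normal(2*N + 1) + 1j*rng.standard_normal(2*N + 1)  # c_{-N}, ..., c_N

M = 4096                                  # quadrature points on [-pi, pi)
x = -np.pi + 2*np.pi*np.arange(M)/M
f = sum(c[n + N]*np.exp(1j*n*x) for n in range(-N, N + 1))

for m in range(-N, N + 1):
    # (1/2pi) * integral of f(x) e^{-imx} dx, as a uniform Riemann sum
    c_m = np.mean(f*np.exp(-1j*m*x))
    assert abs(c_m - c[m + N]) < 1e-10    # recovers the coefficient c_m
```

For a trigonometric polynomial, the uniform Riemann sum is exact up to rounding, since it integrates \(e^{ikx}\) exactly for \(0 < \sabs{k} < M\text{.}\)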
Proposition 11.8.1.
A trigonometric polynomial \(f(x) = \sum_{n=-N}^N c_n\, e^{inx}\) is real-valued for real \(x\) if and only if \(c_{-m} = \overline{c_m}\) for all \(m=-N,\ldots,N\text{.}\)
Proof.
If \(f(x)\) is real-valued, that is, \(\overline{f(x)} = f(x)\text{,}\) then
\begin{equation*}
\overline{c_m}
= \overline{ \left( \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-imx} \, dx \right) }
= \frac{1}{2\pi} \int_{-\pi}^{\pi} \overline{f(x)} \, e^{imx} \, dx
= c_{-m} .
\end{equation*}
The complex conjugate goes inside the integral because the integral is done on real and imaginary parts separately.
On the other hand, if \(c_{-m} = \overline{c_m}\text{,}\) then
\begin{equation*}
c_{-m} e^{-imx} + c_m e^{imx}
= \overline{c_m e^{imx}} + c_m e^{imx}
= 2 \operatorname{Re} \bigl( c_m e^{imx} \bigr) ,
\end{equation*}
which is real-valued. Also \(c_0 = \overline{c_0}\text{,}\) so \(c_0\) is real. By pairing up the terms we obtain that \(f\) has to be real-valued.
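A small numerical sketch of the proposition (our illustration; the particular coefficients are arbitrary): choose \(c_0\) real and \(c_{-m} = \overline{c_m}\text{,}\) and the resulting polynomial has numerically zero imaginary part.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
c_pos = rng.standard_normal(N) + 1j*rng.standard_normal(N)  # c_1, ..., c_N

x = np.linspace(-np.pi, np.pi, 1000)
f = np.full(x.shape, 0.7, dtype=complex)                    # c_0 = 0.7 is real
for m in range(1, N + 1):
    # pair c_m e^{imx} with c_{-m} e^{-imx} = conj(c_m e^{imx})
    f += c_pos[m - 1]*np.exp(1j*m*x) + np.conj(c_pos[m - 1])*np.exp(-1j*m*x)

assert np.max(np.abs(f.imag)) < 1e-12   # each pair sums to 2 Re(c_m e^{imx})
```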
The functions \(e^{inx}\) are also linearly independent.
Proposition 11.8.2.
If
\begin{equation*}
\sum_{n=-N}^N c_n \, e^{inx} = 0
\end{equation*}
for all \(x \in [-\pi,\pi]\text{,}\) then \(c_n = 0\) for all \(n\text{.}\)
Proof.
The result follows immediately from the integral formula for \(c_n\text{.}\)
Subsection 11.8.2 Fourier series
We now take limits. We call the series
\begin{equation*}
\sum_{n=-\infty}^\infty c_n \, e^{inx}
\end{equation*}
the Fourier series. The numbers \(c_n\) are called Fourier coefficients. Using Euler's formula \(e^{i\theta} = \cos(\theta) + i \sin (\theta)\text{,}\) we could also develop everything with sines and cosines, that is, as the series \(a_0 + \sum_{n=1}^\infty a_n \cos(nx) + b_n \sin(nx)\text{,}\) but it is equivalent and slightly messier.
Several questions arise. What functions are expressible as Fourier series? Obviously, they have to be \(2\pi\)-periodic, but not every periodic function is expressible with the series. Furthermore, if we do have a Fourier series, for which \(x\) does it converge (if it converges at all)? Does it converge absolutely? Uniformly? Also note that the series has two limits. When talking about Fourier series convergence, we often talk about the following limit:
\begin{equation*}
\lim_{N\to\infty} \sum_{n=-N}^N c_n \, e^{inx} .
\end{equation*}
There are other ways we can sum the series that can get convergence in more situations, but we refrain from discussing those.
Conversely, we start with an integrable function \(f \colon [-\pi,\pi] \to \C\text{,}\) and we call the numbers
\begin{equation*}
c_n := \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} \, dx
\end{equation*}
its Fourier coefficients. Often these numbers are written as \(\hat{f}(n)\text{.}\)^{ 4 } We then formally write down a Fourier series. As you might imagine, such a series might not even converge. We write
\begin{equation*}
f(x) \sim \sum_{n=-\infty}^\infty c_n \, e^{inx} ,
\end{equation*}
although the \(\sim\) doesn't imply anything about the two sides being equal in any way. It is simply that we created a formal series using the formula for the coefficients.
A few sections ago, we proved that the Fourier series
\begin{equation*}
\sum_{n=1}^\infty \frac{\sin(nx)}{n^2}
\end{equation*}
converges uniformly and hence converges to a continuous function. This example and its proof can be extended to a more general criterion.
Proposition 11.8.3.
Let \(\sum_{n=-\infty}^\infty c_n\, e^{inx}\) be a Fourier series, and \(C\) and \(\alpha > 1\) constants such that
\begin{equation*}
\sabs{c_n} \leq \frac{C}{\sabs{n}^{\alpha}}
\qquad \text{for all nonzero } n .
\end{equation*}
Then the series converges (absolutely and uniformly) to a continuous function on \(\R\text{.}\)
The proof is to apply the Weierstrass \(M\)-test (Theorem 11.2.4) and the \(p\)-series test, to find that the series converges uniformly and hence to a continuous function (Corollary 11.2.8). We can also take derivatives.
Proposition 11.8.4.
Let \(\sum_{n=-\infty}^\infty c_n\, e^{inx}\) be a Fourier series, and \(C\) and \(\alpha > 2\) constants such that
\begin{equation*}
\sabs{c_n} \leq \frac{C}{\sabs{n}^{\alpha}}
\qquad \text{for all nonzero } n .
\end{equation*}
Then the series converges to a continuously differentiable function on \(\R\text{.}\)
The trick is to notice that the series converges to a continuous function by the previous proposition, so in particular it converges at some point. Then differentiate the partial sums:
\begin{equation*}
\frac{d}{dx} \left[ \sum_{n=-N}^{N} c_n \, e^{inx} \right]
= \sum_{n=-N}^{N} i n \, c_n \, e^{inx} ,
\end{equation*}
and notice that for all nonzero \(n\text{,}\)
\begin{equation*}
\sabs{i n \, c_n} = \sabs{n} \sabs{c_n} \leq \frac{C}{\sabs{n}^{\alpha - 1}} .
\end{equation*}
The differentiated series converges uniformly by the \(M\)-test again, as \(\alpha - 1 > 1\text{.}\) Hence the original series \(\sum c_n\,e^{inx}\) converges to a continuously differentiable function, whose derivative is the differentiated series (see Theorem 11.2.14).
We can iterate the same reasoning. Suppose there are \(C\) and \(\alpha > k+1\) (\(k \in \N\)) such that
\begin{equation*}
\sabs{c_n} \leq \frac{C}{\sabs{n}^{\alpha}}
\end{equation*}
for all nonzero integers \(n\text{.}\) Then the Fourier series converges to a \(k\)-times continuously differentiable function. Therefore, the faster the coefficients go to zero, the more regular the limit is.
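A numerical sketch of the differentiation claim (our own example, not from the text): the coefficients \(c_n = \nicefrac{1}{n^4}\) satisfy the hypothesis with \(\alpha = 4 > 2\text{,}\) and the termwise derivative of the truncated sum \(\sum \cos(nx)/n^4\) agrees with a finite-difference derivative.

```python
import numpy as np

n = np.arange(1.0, 2000.0)                 # float to avoid integer overflow in n**4

def g(x):                                  # truncated series with c_n = 1/n^4
    return np.sum(np.cos(n*x)/n**4)

def gp(x):                                 # termwise derivative; |n c_n| <= 1/n^3
    return np.sum(-n*np.sin(n*x)/n**4)

x0, h = 0.7, 1e-6
fd = (g(x0 + h) - g(x0 - h))/(2*h)         # central finite difference
assert abs(fd - gp(x0)) < 1e-3             # matches the termwise derivative
```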
Subsection 11.8.3 Orthonormal systems
Let us abstract away some of the properties of the exponentials and study a more general series for a function. One fundamental property of the exponentials that makes Fourier series what it is, is that the exponentials form a so-called orthonormal system. Let us fix an interval \([a,b]\text{.}\) We define an inner product for the space of functions. We restrict our attention to Riemann integrable functions since we do not have the Lebesgue integral, which would be the natural choice. Let \(f\) and \(g\) be complex-valued Riemann integrable functions on \([a,b]\) and define the inner product
\begin{equation*}
\langle f , g \rangle := \int_a^b f(x) \overline{g(x)} \, dx .
\end{equation*}
If you have seen Hermitian inner products in linear algebra, this is precisely such a product. We have to put in the conjugate as we are working with complex numbers. We then have the “size,” that is, the \(L^2\) norm \(\snorm{f}_2\text{,}\) defined by
\begin{equation*}
\snorm{f}_2^2 := \langle f , f \rangle = \int_a^b \sabs{f(x)}^2 \, dx .
\end{equation*}
Remark 11.8.5.
Notice the similarity to finite dimensions. For \(z = (z_1,z_2,\ldots,z_n)\) and \(w = (w_1,w_2,\ldots,w_n)\) in \(\C^n\text{,}\) we define
\begin{equation*}
\langle z , w \rangle := \sum_{k=1}^n z_k \overline{w_k} .
\end{equation*}
Then the norm is (usually denoted by simply \(\snorm{z}\) in \(\C^n\) rather than \(\snorm{z}_2\))
\begin{equation*}
\snorm{z} = \sqrt{ \langle z , z \rangle } = \sqrt{ \sum_{k=1}^n \sabs{z_k}^2 } .
\end{equation*}
This is just the Euclidean distance to the origin in \(\C^n\) (same as \(\R^{2n}\)).
Let us get back to function spaces. In what follows, we will assume all functions are Riemann integrable.
Definition 11.8.6.
Let \(\{ \varphi_n \}\) be a sequence of integrable complex-valued functions on \([a,b]\text{.}\) We say that this is an orthonormal system if
\begin{equation*}
\langle \varphi_n , \varphi_m \rangle =
\begin{cases}
1 & \text{if } n = m , \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
In particular, \(\snorm{\varphi_n}_2 = 1\) for all \(n\text{.}\) If we only require that \(\langle \varphi_n , \varphi_m \rangle = 0\) for \(m\not= n\text{,}\) then the system would be called an orthogonal system.
We noticed above that
\begin{equation*}
\left\{ \frac{1}{\sqrt{2\pi}} \, e^{inx} \right\}
\end{equation*}
is an orthonormal system on \([-\pi,\pi]\text{.}\) The factor out in front is to make the norm be 1.
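Orthonormality of this system can be verified numerically as well (a sketch; the grid size and range of \(n\) are our own choices):

```python
import numpy as np

M = 4096
x = -np.pi + 2*np.pi*np.arange(M)/M        # uniform grid on [-pi, pi)
w = 2*np.pi/M                              # quadrature weight

def inner(f, g):                           # <f,g> = int_{-pi}^{pi} f conj(g) dx
    return np.sum(f*np.conj(g))*w

phi = {n: np.exp(1j*n*x)/np.sqrt(2*np.pi) for n in range(-3, 4)}
for n in phi:
    for m in phi:
        expected = 1.0 if n == m else 0.0
        assert abs(inner(phi[n], phi[m]) - expected) < 1e-10
```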
Having an orthonormal system \(\{ \varphi_n \}\) on \([a,b]\) and an integrable function \(f\) on \([a,b]\text{,}\) we can write a Fourier series relative to \(\{ \varphi_n \}\text{.}\) We let
\begin{equation*}
c_n := \langle f , \varphi_n \rangle = \int_a^b f(x) \overline{\varphi_n(x)} \, dx ,
\end{equation*}
and write
\begin{equation*}
f(x) \sim \sum_{n=1}^\infty c_n \, \varphi_n(x) .
\end{equation*}
In other words, the series is
\begin{equation*}
\sum_{n=1}^\infty \langle f , \varphi_n \rangle \, \varphi_n(x) .
\end{equation*}
Notice the similarity to the expression for the orthogonal projection of a vector onto a subspace from linear algebra. We are in fact doing just that, but in a space of functions.
Theorem 11.8.7.
Suppose \(f\) is a Riemann integrable function on \([a,b]\text{.}\) Let \(\{ \varphi_n \}\) be an orthonormal system on \([a,b]\) and suppose
\begin{equation*}
f(x) \sim \sum_{k=1}^\infty c_k \, \varphi_k(x) ,
\end{equation*}
with partial sums \(s_n(f;x) := \sum_{k=1}^n c_k \, \varphi_k(x)\text{.}\) If
\begin{equation*}
t_n(x) := \sum_{k=1}^{n} d_k \, \varphi_k(x)
\end{equation*}
for some other sequence \(\{ d_k \}\text{,}\) then
\begin{equation*}
\snorm{ f - s_n(f;\cdot) }_2 \leq \snorm{ f - t_n }_2 ,
\end{equation*}
with equality only if \(d_k = c_k\) for all \(k=1,2,\ldots,n\text{.}\)
In other words, the partial sums of the Fourier series are the best approximation with respect to the \(L^2\) norm.
Proof.
Let us expand
\begin{equation*}
\int_a^b \sabs{f - t_n}^2
= \int_a^b \sabs{f}^2
- \int_a^b f \, \overline{t_n}
- \int_a^b \overline{f} \, t_n
+ \int_a^b \sabs{t_n}^2 .
\end{equation*}
Now
\begin{equation*}
\int_a^b f \, \overline{t_n}
= \sum_{k=1}^n \overline{d_k} \int_a^b f \, \overline{\varphi_k}
= \sum_{k=1}^n c_k \overline{d_k} ,
\end{equation*}
and
\begin{equation*}
\int_a^b \sabs{t_n}^2
= \sum_{k=1}^n \sum_{m=1}^n d_k \overline{d_m} \int_a^b \varphi_k \, \overline{\varphi_m}
= \sum_{k=1}^n \sabs{d_k}^2 .
\end{equation*}
So
\begin{equation*}
\int_a^b \sabs{f - t_n}^2
= \int_a^b \sabs{f}^2
- \sum_{k=1}^n c_k \overline{d_k}
- \sum_{k=1}^n \overline{c_k} d_k
+ \sum_{k=1}^n \sabs{d_k}^2
= \int_a^b \sabs{f}^2
- \sum_{k=1}^n \sabs{c_k}^2
+ \sum_{k=1}^n \sabs{d_k - c_k}^2 .
\end{equation*}
This is minimized precisely when \(d_k = c_k\text{.}\)
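The minimization can be observed numerically. In this sketch (our own choices: \(f(x) = \sabs{x}\text{,}\) the system \(\varphi_n = e^{inx}/\sqrt{2\pi}\) for \(\sabs{n} \leq 3\text{,}\) and random perturbations of the coefficients), any other choice of coefficients gives a larger \(L^2\) distance:

```python
import numpy as np

M = 20000
x = -np.pi + 2*np.pi*(np.arange(M) + 0.5)/M      # midpoint quadrature nodes
w = 2*np.pi/M
f = np.abs(x)                                    # a sample integrable function

def l2_dist(d):                                  # || f - sum_n d_n phi_n ||_2
    approx = sum(d[n + 3]*np.exp(1j*n*x)/np.sqrt(2*np.pi) for n in range(-3, 4))
    return np.sqrt(np.sum(np.abs(f - approx)**2)*w)

# Fourier coefficients c_n = <f, phi_n> with phi_n = e^{inx}/sqrt(2 pi)
c = np.array([np.sum(f*np.conj(np.exp(1j*n*x))/np.sqrt(2*np.pi))*w
              for n in range(-3, 4)])

rng = np.random.default_rng(4)
for _ in range(5):
    d = c + 0.1*(rng.standard_normal(7) + 1j*rng.standard_normal(7))
    assert l2_dist(c) <= l2_dist(d)              # c gives the best approximation
```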
When we do plug in \(d_k = c_k\text{,}\) then
\begin{equation*}
\int_a^b \sabs{f - s_n(f;\cdot)}^2 = \int_a^b \sabs{f}^2 - \sum_{k=1}^n \sabs{c_k}^2 ,
\end{equation*}
and so
\begin{equation*}
\sum_{k=1}^n \sabs{c_k}^2 \leq \int_a^b \sabs{f}^2
\end{equation*}
for all \(n\text{.}\) Note that
\begin{equation*}
\snorm{ s_n(f;\cdot) }_2^2 = \sum_{k=1}^n \sabs{c_k}^2
\end{equation*}
by the calculation above. We take a limit to obtain the so-called Bessel's inequality.
Theorem 11.8.8. Bessel's inequality.
^{ 5 } Suppose \(f\) is a Riemann integrable function on \([a,b]\text{.}\) Let \(\{ \varphi_n \}\) be an orthonormal system on \([a,b]\) and suppose
\begin{equation*}
f(x) \sim \sum_{k=1}^\infty c_k \, \varphi_k(x) .
\end{equation*}
Then
\begin{equation*}
\sum_{k=1}^\infty \sabs{c_k}^2 \leq \int_a^b \sabs{f}^2 .
\end{equation*}
In particular (given that a Riemann integrable function satisfies \(\int_a^b \sabs{f}^2 < \infty\)), we get that the series converges and hence
\begin{equation*}
\lim_{k\to\infty} c_k = 0 .
\end{equation*}
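Here is a numerical sketch of Bessel's inequality (our own example, not from the text: \(f(x) = x\) on \([-\pi,\pi]\) with the orthonormal system \(\varphi_n(x) = e^{inx}/\sqrt{2\pi}\)):

```python
import numpy as np

M = 20000
x = -np.pi + 2*np.pi*(np.arange(M) + 0.5)/M      # midpoint quadrature nodes
w = 2*np.pi/M
f = x                                            # f(x) = x on [-pi, pi]

norm_sq = np.sum(np.abs(f)**2)*w                 # int |f|^2 = 2 pi^3 / 3
bessel_sum = 0.0
for n in range(-20, 21):
    phi_n = np.exp(1j*n*x)/np.sqrt(2*np.pi)
    c_n = np.sum(f*np.conj(phi_n))*w             # c_n = <f, phi_n>
    bessel_sum += abs(c_n)**2

assert bessel_sum <= norm_sq                     # Bessel's inequality
assert abs(norm_sq - 2*np.pi**3/3) < 1e-3
```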
Subsection 11.8.4 The Dirichlet kernel and approximate delta functions
Let us return to the trigonometric Fourier series. Here we note that the system \(\{ e^{inx} \}\) is orthogonal, but not orthonormal, if we simply integrate over \([-\pi,\pi]\text{.}\) We can also rescale the integral and hence the inner product to make \(\{ e^{inx} \}\) orthonormal. That is, if we replace
\begin{equation*}
\int_{-\pi}^{\pi} f(x) \, dx
\qquad \text{with} \qquad
\frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) \, dx
\end{equation*}
(we are just rescaling the \(dx\) really)^{ 7 }, then everything works and we obtain that the system \(\{ e^{inx} \}\) is orthonormal with respect to the inner product
\begin{equation*}
\langle f , g \rangle := \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) \overline{g(x)} \, dx .
\end{equation*}
Suppose \(f \colon \R \to \C\) is \(2\pi\)-periodic and integrable on \([-\pi,\pi]\text{.}\) Let
\begin{equation*}
c_n := \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} \, dx .
\end{equation*}
Write
\begin{equation*}
f(x) \sim \sum_{n=-\infty}^\infty c_n \, e^{inx} .
\end{equation*}
Define the symmetric partial sums
\begin{equation*}
s_N(f;x) := \sum_{n=-N}^N c_n \, e^{inx} .
\end{equation*}
The inequality leading up to Bessel now reads:
\begin{equation*}
\frac{1}{2\pi} \int_{-\pi}^{\pi} \sabs{s_N(f;x)}^2 \, dx
= \sum_{n=-N}^N \sabs{c_n}^2
\leq \frac{1}{2\pi} \int_{-\pi}^{\pi} \sabs{f(x)}^2 \, dx .
\end{equation*}
The Dirichlet kernel is the sum
\begin{equation*}
D_N(x) := \sum_{n=-N}^N e^{inx} .
\end{equation*}
We claim that
\begin{equation*}
D_N(x) = \frac{ \sin\bigl( (N + \nicefrac{1}{2}) x \bigr) }{ \sin( \nicefrac{x}{2} ) } ,
\end{equation*}
at least for \(x\) such that \(\sin(\nicefrac{x}{2}) \not= 0\text{.}\) We know that the left-hand side is continuous and hence the right-hand side extends continuously to all of \(\R\) as well. To show the claim we use a familiar trick:
\begin{equation*}
( e^{ix} - 1 ) D_N(x)
= \sum_{n=-N}^{N} e^{i(n+1)x} - \sum_{n=-N}^{N} e^{inx}
= e^{i(N+1)x} - e^{-iNx} .
\end{equation*}
Multiply by \(e^{-ix/2}\text{:}\)
\begin{equation*}
\bigl( e^{ix/2} - e^{-ix/2} \bigr) D_N(x) = e^{i(N+\nicefrac{1}{2})x} - e^{-i(N+\nicefrac{1}{2})x} ,
\end{equation*}
that is, \(2i \sin( \nicefrac{x}{2} ) \, D_N(x) = 2i \sin\bigl( (N + \nicefrac{1}{2}) x \bigr)\text{.}\) The claim follows.
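The closed form is easy to sanity-check numerically (our sketch; the grid and the value of \(N\) are arbitrary):

```python
import numpy as np

N = 20
x = np.linspace(-np.pi, np.pi, 1001)
x = x[np.abs(np.sin(x/2)) > 1e-6]               # avoid the zeros of sin(x/2)

D_sum = sum(np.exp(1j*n*x) for n in range(-N, N + 1)).real   # the defining sum
D_closed = np.sin((N + 0.5)*x)/np.sin(x/2)                   # the claimed formula
assert np.max(np.abs(D_sum - D_closed)) < 1e-9
```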
We expand the definition of \(s_N\text{:}\)
\begin{equation*}
s_N(f;x)
= \sum_{n=-N}^N \left( \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) e^{-int} \, dt \right) e^{inx}
= \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) \sum_{n=-N}^N e^{in(x-t)} \, dt
= \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) D_N(x-t) \, dt .
\end{equation*}
If you replace \(x-t\) with \(t-x\) (\(D_N\) is even), we see that convolution strikes again! As \(D_N\) and \(f\) are \(2\pi\)-periodic, we may also change variables and write
\begin{equation*}
s_N(f;x) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x-t) D_N(t) \, dt .
\end{equation*}
See Figure 11.10 for a plot of \(D_N\) for \(N=5\) and \(N=20\text{.}\)
The central peak gets taller and taller as \(N\) gets larger, and the side peaks stay small. We are convolving (again) with approximate delta functions, although these functions have all these oscillations away from zero. The oscillations on the side do not go away, but they are eventually so fast that we expect the integral to just about cancel itself out there. Overall, we expect that \(s_N(f)\) goes to \(f\text{.}\) Things are not always simple, but under some conditions on \(f\text{,}\) such a conclusion holds. For this reason, people write
\begin{equation*}
\sum_{n=-\infty}^{\infty} e^{inx} = 2\pi \, \delta(x) ,
\end{equation*}
where \(\delta\) is the “delta function” (not really a function), which is an object that will give something like “\(\int_{-\pi}^{\pi} f(x-t) \delta(t) \, dt = f(x)\text{.}\)” We can think of \(D_N(x)\) converging in some sense to \(2 \pi\, \delta(x)\text{.}\) However, we have not defined (and will not define) what kind of an object the delta function is, nor what it means for it to be a limit of \(D_N\) or to have a Fourier series.
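The convolution formula for \(s_N\) can be checked numerically. In this sketch (our own choices: a random trigonometric polynomial \(f\) of degree 8 and \(N = 5\)), convolving with \(D_N\) keeps exactly the frequencies \(\sabs{n} \leq N\text{:}\)

```python
import numpy as np

rng = np.random.default_rng(3)
deg, N = 8, 5
c = rng.standard_normal(2*deg + 1) + 1j*rng.standard_normal(2*deg + 1)

def f(x):                                        # trig polynomial of degree 8
    x = np.asarray(x)
    return sum(c[n + deg]*np.exp(1j*n*x) for n in range(-deg, deg + 1))

def D(t):                                        # Dirichlet kernel D_N
    return sum(np.exp(1j*n*t) for n in range(-N, N + 1)).real

M = 512
t = -np.pi + 2*np.pi*np.arange(M)/M
x0 = 0.3
conv = np.mean(f(x0 - t)*D(t))                   # (1/2pi) int f(x0 - t) D_N(t) dt
partial = sum(c[n + deg]*np.exp(1j*n*x0) for n in range(-N, N + 1))
assert abs(conv - partial) < 1e-10               # equals s_N(f; x0)
```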
Subsection 11.8.5 Localization
If \(f\) satisfies a Lipschitz condition at a point, then the Fourier series converges at that point.
Theorem 11.8.9.
Let \(x\) be fixed and let \(f\) be a \(2\pi\)-periodic function Riemann integrable on \([-\pi,\pi]\text{.}\) Suppose there exist \(\delta > 0\) and \(M\) such that
\begin{equation*}
\sabs{f(x-t) - f(x)} \leq M \sabs{t}
\end{equation*}
for all \(t \in (-\delta,\delta)\text{.}\) Then
\begin{equation*}
\lim_{N\to\infty} s_N(f;x) = f(x) .
\end{equation*}
In particular, if \(f\) is continuously differentiable at \(x\text{,}\) then we obtain convergence (exercise). We state an often used version of this corollary. A function \(f \colon [a,b] \to \C\) is continuous piecewise smooth if it is continuous and there exist points \(x_0 = a < x_1 < x_2 < \cdots < x_k = b\) such that \(f\) restricted to \([x_j,x_{j+1}]\) is continuously differentiable (up to the endpoints) for all \(j\text{.}\)
Corollary 11.8.10.
Let \(f\) be a \(2\pi\)-periodic function Riemann integrable on \([-\pi,\pi]\text{.}\) Suppose there exist \(x\in \R\) and \(\delta > 0\) such that \(f\) is continuous piecewise smooth on \([x-\delta,x+\delta]\text{.}\) Then
\begin{equation*}
\lim_{N\to\infty} s_N(f;x) = f(x) .
\end{equation*}
The proof of the corollary is left as an exercise. Let us prove the theorem.
Proof.
(Proof of Theorem 11.8.9) For all \(N\text{,}\)
\begin{equation*}
\frac{1}{2\pi} \int_{-\pi}^{\pi} D_N(t) \, dt = 1 .
\end{equation*}
Write
\begin{equation*}
s_N(f;x) - f(x)
= \frac{1}{2\pi} \int_{-\pi}^{\pi} \bigl( f(x-t) - f(x) \bigr) D_N(t) \, dt
= \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{f(x-t) - f(x)}{\sin( \nicefrac{t}{2} )} \, \sin\bigl( (N + \nicefrac{1}{2}) t \bigr) \, dt .
\end{equation*}
By the hypotheses, for small nonzero \(t\) we get
\begin{equation*}
\abs{ \frac{ f(x-t) - f(x) }{ \sin( \nicefrac{t}{2} ) } }
\leq \frac{ M \sabs{t} }{ \sabs{ \sin( \nicefrac{t}{2} ) } } .
\end{equation*}
As \(\sin(\theta) = \theta + h(\theta)\) where \(\frac{h(\theta)}{\theta} \to 0\) as \(\theta \to 0\text{,}\) we notice that \(\frac{M\sabs{t}}{\sabs{\sin(\nicefrac{t}{2})}}\) extends continuously to the origin, and hence \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})}\) must be bounded near the origin. As \(t=0\) is the only place on \([-\pi,\pi]\) where the denominator vanishes, it is the only place where there could be a problem. The function is also Riemann integrable. We use the trigonometric identity
\begin{equation*}
\sin\bigl( (N + \nicefrac{1}{2}) t \bigr) = \sin(Nt) \cos( \nicefrac{t}{2} ) + \cos(Nt) \sin( \nicefrac{t}{2} ) ,
\end{equation*}
so
\begin{equation*}
s_N(f;x) - f(x)
= \frac{1}{2\pi} \int_{-\pi}^{\pi} \left( \frac{f(x-t) - f(x)}{\sin( \nicefrac{t}{2} )} \cos( \nicefrac{t}{2} ) \right) \sin(Nt) \, dt
+ \frac{1}{2\pi} \int_{-\pi}^{\pi} \bigl( f(x-t) - f(x) \bigr) \cos(Nt) \, dt .
\end{equation*}
Now \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \cos (\nicefrac{t}{2})\) and \(f(x-t) - f(x)\) are bounded Riemann integrable functions of \(t\text{,}\) and so their Fourier coefficients go to zero by Theorem 11.8.8. So the two integrals on the right-hand side, which compute the Fourier coefficients for the real version of the Fourier series, go to 0 as \(N\) goes to infinity. This is because \(\bigl\{ \sqrt{2}\sin(nt) \bigr\}\) and \(\bigl\{ \sqrt{2}\cos(nt) \bigr\}\) are also orthonormal systems with respect to the same inner product. Hence \(s_N(f;x)-f(x)\) goes to 0, that is, \(s_N(f;x)\) goes to \(f(x)\text{.}\)
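As a numerical illustration of the theorem (our own example, not from the text): the sawtooth \(f(x) = x\) on \((-\pi,\pi)\) has the well-known Fourier series \(\sum_{n=1}^\infty \frac{2 {(-1)}^{n+1}}{n} \sin(nx)\text{,}\) and \(f\) satisfies a Lipschitz condition at \(x = 1\text{,}\) so the symmetric partial sums there should approach \(f(1) = 1\text{:}\)

```python
import numpy as np

x0 = 1.0                                          # a Lipschitz point of the sawtooth
n = np.arange(1, 2001)
s_N = np.sum(2*(-1.0)**(n + 1)*np.sin(n*x0)/n)    # symmetric partial sum at x0
assert abs(s_N - x0) < 0.01                       # converges (slowly) to f(1) = 1
```

The convergence is only about \(\nicefrac{1}{N}\) here, a reminder that the rate depends on the global behavior of \(f\) (the jump at \(\pi\)), even though convergence itself is local.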
The theorem also says that convergence depends only on local behavior.
Corollary 11.8.11.
Suppose \(f\) is a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{.}\) If \(J\) is an open interval and \(f(x) = 0\) for all \(x \in J\text{,}\) then \(\lim\, s_N(f;x) = 0\) for all \(x \in J\text{.}\)
In particular, if \(f\) and \(g\) are \(2\pi\)-periodic functions, Riemann integrable on \([-\pi,\pi]\text{,}\) \(J\) an open interval, and \(f(x) = g(x)\) for all \(x \in J\text{,}\) then for all \(x \in J\text{,}\) the sequence \(\bigl\{ s_N(f;x) \bigr\}\) converges if and only if \(\bigl\{ s_N(g;x) \bigr\}\) converges.
That is, convergence at \(x\) depends only on the values of the function near \(x\text{.}\) To prove the first claim, take \(M=0\) in the theorem. The “in particular” follows by considering the function \(f-g\text{,}\) which is zero on \(J\) and satisfies \(s_N(f-g) = s_N(f) - s_N(g)\text{.}\) On the other hand, we have seen that the rate of convergence, that is, how fast \(s_N(f)\) converges to \(f\text{,}\) depends on the global behavior of the function.
There is a subtle difference between the corollary and what can be achieved by the Stone–Weierstrass theorem. Any continuous \(2\pi\)-periodic function can be uniformly approximated by trigonometric polynomials, but these trigonometric polynomials need not be the partial sums \(s_N\text{.}\)
Subsection 11.8.6 Parseval's theorem
Finally, convergence always happens in the \(L^2\) sense and operations on the (infinite) vectors of Fourier coefficients are the same as the operations using the integral inner product.
Theorem 11.8.12. Parseval.
^{ 8 } Let \(f\) and \(g\) be \(2\pi\)-periodic functions, Riemann integrable on \([-\pi,\pi]\text{,}\) with
\begin{equation*}
f(x) \sim \sum_{n=-\infty}^\infty c_n \, e^{inx}
\qquad \text{and} \qquad
g(x) \sim \sum_{n=-\infty}^\infty d_n \, e^{inx} .
\end{equation*}
Then
\begin{equation*}
\lim_{N\to\infty} \frac{1}{2\pi} \int_{-\pi}^{\pi} \sabs{ f(x) - s_N(f;x) }^2 \, dx = 0 .
\end{equation*}
Also
\begin{equation*}
\frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) \overline{g(x)} \, dx = \sum_{n=-\infty}^\infty c_n \overline{d_n} ,
\end{equation*}
and
\begin{equation*}
\frac{1}{2\pi} \int_{-\pi}^{\pi} \sabs{f(x)}^2 \, dx = \sum_{n=-\infty}^\infty \sabs{c_n}^2 .
\end{equation*}
Proof.
There exists (exercise) a continuous \(2\pi\)-periodic function \(h\) such that
\begin{equation*}
\snorm{f - h}_2 < \epsilon .
\end{equation*}
Via Stone–Weierstrass, approximate \(h\) with a trigonometric polynomial uniformly. That is, there is a trigonometric polynomial \(P(x)\) such that \(\sabs{h(x) - P(x)} < \epsilon\) for all \(x\text{.}\) Hence
\begin{equation*}
\snorm{h - P}_2 = \sqrt{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \sabs{h(x) - P(x)}^2 \, dx } < \epsilon .
\end{equation*}
If \(P\) is of degree \(N_0\text{,}\) then for all \(N \geq N_0\text{,}\)
\begin{equation*}
\snorm{h - s_N(h)}_2 \leq \snorm{h - P}_2 < \epsilon
\end{equation*}
as \(s_N(h)\) is the best approximation for \(h\) in \(L^2\) (Theorem 11.8.7). By the inequality leading up to Bessel, we have
\begin{equation*}
\snorm{s_N(h) - s_N(f)}_2 = \snorm{s_N(h-f)}_2 \leq \snorm{h - f}_2 < \epsilon .
\end{equation*}
The \(L^2\) norm satisfies the triangle inequality (exercise). Thus, for all \(N \geq N_0\text{,}\)
\begin{equation*}
\snorm{f - s_N(f)}_2
\leq \snorm{f - h}_2 + \snorm{h - s_N(h)}_2 + \snorm{s_N(h) - s_N(f)}_2
< 3 \epsilon .
\end{equation*}
Hence, the first claim follows.
Next,
\begin{equation*}
\langle s_N(f) , g \rangle
= \sum_{n=-N}^{N} c_n \, \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{inx} \, \overline{g(x)} \, dx
= \sum_{n=-N}^{N} c_n \overline{d_n} .
\end{equation*}
We need the Schwarz (or Cauchy–Schwarz or Cauchy–Bunyakovsky–Schwarz) inequality, that is,
\begin{equation*}
\sabs{ \langle f , g \rangle } \leq \snorm{f}_2 \, \snorm{g}_2 .
\end{equation*}
This is left as an exercise. The proof is not really different from the finite-dimensional version. So
\begin{equation*}
\sabs{ \langle f , g \rangle - \langle s_N(f) , g \rangle }
= \sabs{ \langle f - s_N(f) , g \rangle }
\leq \snorm{ f - s_N(f) }_2 \, \snorm{g}_2 .
\end{equation*}
The right-hand side goes to 0 as \(N\) goes to infinity by the first claim of the theorem. That is, as \(N\) goes to infinity, \(\langle s_N(f),g \rangle\) goes to \(\langle f,g \rangle\text{,}\) and the second claim is proved. The last claim in the theorem follows by using \(g=f\text{.}\)
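For a trigonometric polynomial, Parseval's identity holds exactly and can be confirmed directly (a sketch; the random coefficients and the grid are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
c = rng.standard_normal(2*N + 1) + 1j*rng.standard_normal(2*N + 1)

M = 4096
x = -np.pi + 2*np.pi*np.arange(M)/M
f = sum(c[n + N]*np.exp(1j*n*x) for n in range(-N, N + 1))

lhs = np.mean(np.abs(f)**2)                 # (1/2pi) int |f|^2 dx
rhs = np.sum(np.abs(c)**2)                  # sum |c_n|^2
assert abs(lhs - rhs) < 1e-10               # Parseval's identity
```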
Subsection 11.8.7 Exercises
Exercise 11.8.1.
Consider the Fourier series
\begin{equation*}
\sum_{n=1}^\infty \frac{1}{2^n} \sin( 2^n x ) .
\end{equation*}
Show that the series converges uniformly and absolutely to a continuous function. Note: This is another example of a nowhere differentiable function (you do not have to prove that)^{ 10 }. See Figure 11.11.
Exercise 11.8.2.
Suppose \(f\) is a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) such that \(f\) is continuously differentiable on some open interval \((a,b)\text{.}\) Prove that for every \(x \in (a,b)\text{,}\) we have \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\)
Exercise 11.8.3.
Prove Corollary 11.8.10. That is, suppose \(f\) is a \(2\pi\)-periodic function that is continuous piecewise smooth near a point \(x\text{;}\) show that \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\) Hint: See the previous exercise.
Exercise 11.8.4.
Given a \(2\pi\)-periodic function \(f \colon \R \to \C\) Riemann integrable on \([-\pi,\pi]\) and \(\epsilon > 0\text{,}\) show that there exists a continuous \(2\pi\)-periodic function \(g \colon \R \to \C\) such that \(\snorm{f-g}_2 < \epsilon\text{.}\)
Exercise 11.8.5.
Prove the Cauchy–Bunyakovsky–Schwarz inequality for Riemann integrable functions:
\begin{equation*}
\sabs{ \langle f , g \rangle } \leq \snorm{f}_2 \, \snorm{g}_2 .
\end{equation*}
Exercise 11.8.6.
Prove the \(L^2\) triangle inequality for Riemann integrable functions on \([-\pi,\pi]\text{:}\)
\begin{equation*}
\snorm{f + g}_2 \leq \snorm{f}_2 + \snorm{g}_2 .
\end{equation*}
Exercise 11.8.7.
Suppose for some \(C\) and \(\alpha > 1\text{,}\) we have a real sequence \(\{ a_n \}\) with \(\abs{a_n} \leq \frac{C}{n^\alpha}\) for all \(n\text{.}\) Let
\begin{equation*}
g(x) := \sum_{n=1}^\infty a_n \sin(nx) .
\end{equation*}
Show that \(g\) is continuous.

Formally (that is, suppose you can differentiate under the sum) find a solution (formal solution, that is, do not yet worry about convergence) to the differential equation
\begin{equation*} y''+ 2 y = g(x) \end{equation*}of the form
\begin{equation*} y(x) = \sum_{n=1}^\infty b_n \sin(n x) . \end{equation*} Then show that this solution \(y\) is twice continuously differentiable, and in fact solves the equation.
Exercise 11.8.8.
Let \(f\) be a \(2\pi\)-periodic function such that \(f(x) = x\) for \(0 < x < 2\pi\text{.}\) Use Parseval's theorem to find
\begin{equation*}
\sum_{n=1}^\infty \frac{1}{n^2} .
\end{equation*}
Exercise 11.8.9.
Suppose that \(c_n = 0\) for all \(n < 0\) and \(\sum_{n=0}^\infty \sabs{c_n}\) converges. Let \(\D := B(0,1) \subset \C\) be the unit disc, and \(\overline{\D} = C(0,1)\) be the closed unit disc. Show that there exists a continuous function \(f \colon \overline{\D} \to \C\) that is analytic on \(\D\) and such that on the boundary of \(\D\) we have \(f(e^{i\theta}) = \sum_{n=0}^\infty c_n e^{in\theta}\text{.}\)
Hint: If \(z=re^{i\theta}\text{,}\) then \(z^n = r^n e^{in\theta}\text{.}\)
Exercise 11.8.10.
Show that
converges to an infinitely differentiable function.
Exercise 11.8.11.
Let \(f\) be a \(2\pi\)-periodic function such that \(f(x) = f(0) + \int_0^x g\) for a function \(g\) that is Riemann integrable on every interval. Suppose
\begin{equation*}
f(x) \sim \sum_{n=-\infty}^\infty c_n \, e^{inx} .
\end{equation*}
Show that there exists a \(C > 0\) such that \(\sabs{c_n} \leq \frac{C}{\sabs{n}}\text{.}\)
Exercise 11.8.12.
Let \(\varphi\) be the \(2\pi\)-periodic function defined by \(\varphi(x) := 0\) if \(x \in (-\pi,0)\text{,}\) and \(\varphi(x) := 1\) if \(x \in (0,\pi)\text{,}\) letting \(\varphi(0)\) and \(\varphi(\pi)\) be arbitrary. Show that \(\lim \, s_N(\varphi;0) = \nicefrac{1}{2}\text{.}\)

Let \(f\) be a \(2\pi\)-periodic function Riemann integrable on \([-\pi,\pi]\text{,}\) \(x \in \R\text{,}\) \(\delta > 0\text{,}\) and suppose there are continuously differentiable \(g \colon [x-\delta,x] \to \C\) and \(h \colon [x,x+\delta] \to \C\) where \(f(t) = g(t)\) for all \(t \in [x-\delta,x)\) and where \(f(t) = h(t)\) for all \(t \in (x,x+\delta]\text{.}\) Then \(\lim\, s_N(f;x) = \frac{g(x)+h(x)}{2}\text{,}\) or in other words,
\begin{equation*} \lim_{N \to \infty} s_N(f;x) = \frac{1}{2} \left( \lim_{t \to x^-} f(t) + \lim_{t \to x^+} f(t) \right) . \end{equation*}
https://en.wikipedia.org/wiki/Joseph_Fourier
https://en.wikipedia.org/wiki/Friedrich_Bessel
https://en.wikipedia.org/wiki/Marc-Antoine_Parseval