Fourier series^{ 1 } is perhaps the most important (and the most difficult) of the series that we cover in this book. We saw a few examples already, but let us start at the beginning.
Subsection 11.8.1 Trigonometric polynomials
A trigonometric polynomial is an expression of the form
So a trigonometric polynomial is really a rational function of the complex variable \(z\) (we are allowing negative powers) evaluated on the unit circle. There is a wonderful connection between power series (actually Laurent series because of the negative powers) and Fourier series because of this observation, but we will not investigate this further.
Another reason why Fourier series is important and comes up in so many applications is that the functions \(e^{inx}\) are eigenfunctions^{ 2 } of various differential operators. For example,
That is, they are functions whose derivative is a scalar (the eigenvalue) times the function itself. Just as eigenvalues and eigenvectors are important in studying matrices, eigenvalues and eigenfunctions are important when studying linear differential equations.
The functions \(\cos (nx)\text{,}\)\(\sin (nx)\text{,}\) and \(e^{inx}\) are \(2\pi\)-periodic and hence trigonometric polynomials are also \(2\pi\)-periodic. We could rescale \(x\) to make the period different, but the theory is the same, so we stick with the period \(2\pi\text{.}\) The antiderivative of \(e^{inx}\) is \(\frac{e^{inx}}{in}\) and so
We just found a way of computing the coefficients \(c_m\) using an integral of \(f\text{.}\) If \(\sabs{m} > N\text{,}\) the integral is 0, so we might as well have included enough zero coefficients to make \(\sabs{m} \leq N\text{.}\)
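As a quick numerical sanity check (our own sketch in NumPy, not part of the text; all names and parameters are our choices), we can recover the coefficients of a trigonometric polynomial from the integral formula above:

```python
import numpy as np

# Sketch: for f(x) = sum_{n=-N}^{N} c_n e^{inx}, recover
# c_m = (1/2pi) int_{-pi}^{pi} f(x) e^{-imx} dx via a uniform Riemann
# sum over one full period (exact up to roundoff for trigonometric
# polynomials of degree much smaller than the grid size M).
rng = np.random.default_rng(0)
N = 3
c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)

M = 4096
x = -np.pi + 2 * np.pi * np.arange(M) / M
f = sum(c[n + N] * np.exp(1j * n * x) for n in range(-N, N + 1))

# np.mean over the uniform grid approximates (1/2pi) int_{-pi}^{pi} ... dx
recovered = np.array([np.mean(f * np.exp(-1j * m * x)) for m in range(-N, N + 1)])
assert np.allclose(recovered, c, atol=1e-10)
```

The uniform Riemann sum over a full period is exact (up to roundoff) for trigonometric polynomials of degree below the grid size, which is why the tolerance can be so tight.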
Proposition 11.8.1.
A trigonometric polynomial \(f(x) = \sum_{n=-N}^N c_n\, e^{inx}\) is real-valued for real \(x\) if and only if \(c_{-m} = \overline{c_m}\) for all \(m=-N,\ldots,N\text{.}\)
Proof.
If \(f(x)\) is real-valued, that is \(\overline{f(x)} = f(x)\text{,}\) then
is called the Fourier series and the numbers \(c_n\) the Fourier coefficients. Using Euler’s formula \(e^{i\theta} = \cos(\theta) + i \sin (\theta)\text{,}\) we could also develop everything with sines and cosines, that is, as the series \(a_0 + \sum_{n=1}^\infty a_n \cos(nx) + b_n \sin(nx)\text{.}\) It is equivalent, but slightly messier.
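For reference (a side remark of ours, not part of the text), the translation between the two forms is the standard one. Matching \(c_n e^{inx} + c_{-n} e^{-inx} = (c_n + c_{-n}) \cos(nx) + i (c_n - c_{-n}) \sin(nx)\) against \(a_n \cos(nx) + b_n \sin(nx)\) gives
\begin{equation*}
a_0 = c_0 , \qquad
a_n = c_n + c_{-n} , \qquad
b_n = i \, ( c_n - c_{-n} )
\qquad \text{for } n \geq 1 ,
\end{equation*}
and conversely \(c_n = \frac{a_n - i b_n}{2}\) and \(c_{-n} = \frac{a_n + i b_n}{2}\text{.}\)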
Several questions arise. Which functions are expressible as Fourier series? Obviously, such a function must be \(2\pi\)-periodic, but not every periodic function is expressible as such a series. Furthermore, if we do have a Fourier series, where does it converge, if it converges at all? Does it converge absolutely? Uniformly? Also note that the series has two limits. When talking about Fourier series convergence, we often talk about the following limit:
There are other ways we can sum the series to get convergence in more situations, but we refrain from discussing those. In light of this, define the symmetric partial sums
its Fourier coefficients. To emphasize the function the coefficients belong to, we write \(\hat{f}(n)\text{.}\)^{ 3 } We then formally write down a Fourier series:
As you might imagine, such a series might not even converge. The \(\sim\) does not imply that the two sides are equal in any way; we simply created a formal series using the formula for the coefficients. We will see that when the functions are “nice enough,” we do get convergence.
Example 11.8.3.
Consider the step function \(h(x)\) so that \(h(x) \coloneqq 1\) on \([0,\pi]\) and \(h(x) \coloneqq -1\) on \((-\pi,0)\text{,}\) extended periodically to a \(2\pi\)-periodic function. With a little bit of calculus, we compute the coefficients:
See the left-hand graph in Figure 11.11 for a graph of \(h\) and several symmetric partial sums.
For a second example, consider the function \(g(x) \coloneqq \sabs{x}\) on \([-\pi,\pi]\) and then extended to a \(2\pi\)-periodic function. Computing the coefficients, we find
Note that for both \(h\) and \(g\text{,}\) the even coefficients (except \(\hat{g}(0)\)) happen to vanish, but that is not really important. What is important is convergence. First, at the discontinuity at \(x=0\text{,}\) we find \(s_N(h;0) = 0\) for all \(N\text{,}\) so \(s_N(h;0)\) converges to a different number from \(h(0)\) (at a nice enough jump discontinuity, the limit is the average of the two one-sided limits, see the exercises). That should not be surprising; the coefficients are computed by an integral, and integration does not notice if the value of a function changes at a single point. We should remark, however, that in general we are not guaranteed that the Fourier series converges to the function even at a point where the function is continuous. We will prove convergence if the function is at least Lipschitz.
What is really important is how fast the coefficients go to zero. For the discontinuous \(h\text{,}\) the coefficients \(\hat{h}(n)\) go to zero approximately like \(\nicefrac{1}{n}\text{.}\) On the other hand, for the continuous \(g\text{,}\) the coefficients \(\hat{g}(n)\) go to zero approximately like \(\nicefrac{1}{n^2}\text{.}\) The Fourier coefficients “see” the discontinuity in some sense.
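We can observe these decay rates numerically (a sketch of our own in NumPy; the grid size, range of indices, and tolerances are our choices): computing \(\hat{h}(n)\) and \(\hat{g}(n)\) by Riemann sums, the quantities \(n\,\sabs{\hat{h}(n)}\) and \(n^2\,\sabs{\hat{g}(n)}\) both stay bounded.

```python
import numpy as np

# Sketch: Fourier coefficients of the step function h and of g(x) = |x|
# from the example, computed by uniform Riemann sums on [-pi, pi).
M = 1 << 16
x = -np.pi + 2 * np.pi * np.arange(M) / M
h = np.where(x >= 0, 1.0, -1.0)   # h = 1 on [0, pi), h = -1 on (-pi, 0)
g = np.abs(x)

def fourier_coeff(f, n):
    # approximates (1/2pi) int_{-pi}^{pi} f(x) e^{-inx} dx
    return np.mean(f * np.exp(-1j * n * x))

ns = np.arange(1, 50)
h_hat = np.array([fourier_coeff(h, n) for n in ns])
g_hat = np.array([fourier_coeff(g, n) for n in ns])

# hat h(1) works out (our computation) to 2/(i*pi)
assert abs(h_hat[0] - 2 / (1j * np.pi)) < 1e-3
# hat h(n) decays like 1/n: n * |hat h(n)| stays bounded
assert np.all(ns * np.abs(h_hat) < 1.0)
# hat g(n) decays like 1/n^2: n^2 * |hat g(n)| stays bounded
assert np.all(ns**2 * np.abs(g_hat) < 1.0)
```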
Do note that continuity in this setting is the continuity of the periodic extension, that is, we include the endpoints \(\pm \pi\text{.}\) So the function \(f(x) = x\) defined on \((-\pi,\pi]\) and extended periodically would be discontinuous at the endpoints \(\pm\pi\text{.}\)
In general, the relationship between regularity of the function and the rate of decay of the coefficients is somewhat more complicated than the example above might make it seem, but there are some quick conclusions we can make. We forget about finding a series for a function for a moment, and we consider simply the limit of some given series. A few sections ago, we proved that the Fourier series
converges uniformly and hence converges to a continuous function. This example and its proof can be extended to a more general criterion.
Proposition 11.8.4.
Let \(\sum_{n=-\infty}^\infty c_n\, e^{inx}\) be a Fourier series, and let \(C\) and \(\alpha > 1\) be constants such that
\begin{equation*}
\sabs{c_n} \leq \frac{C}{\sabs{n}^\alpha}
\qquad \text{for all } n \in \Z \setminus \{ 0 \}.
\end{equation*}
Then the series converges (absolutely and uniformly) to a continuous function on \(\R\text{.}\)
The proof is to apply the Weierstrass \(M\)-test (Theorem 11.2.4) and the \(p\)-series test to find that the series converges uniformly and hence to a continuous function (Corollary 11.2.8). We can also take derivatives.
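To see the \(M\)-test bound in action (a numerical sketch of our own; the choice \(c_n = \nicefrac{1}{n^2}\text{,}\) so \(\alpha = 2\text{,}\) is ours), the sup-norm distance between symmetric partial sums is controlled by a tail of the \(p\)-series:

```python
import numpy as np

# Sketch: with c_n = 1/n^2 for n != 0 (alpha = 2 > 1), the M-test gives
#   sup_x |s_{2N}(x) - s_N(x)| <= sum_{N < |n| <= 2N} |c_n|,
# a uniform Cauchy-type bound independent of x.
N = 100
x = np.linspace(-np.pi, np.pi, 2001)

def s(K):
    # symmetric partial sum; real-valued since c_{-n} = c_n here
    return np.real(sum((np.exp(1j * n * x) + np.exp(-1j * n * x)) / n**2
                       for n in range(1, K + 1)))

tail_bound = 2 * sum(1 / n**2 for n in range(N + 1, 2 * N + 1))
assert np.max(np.abs(s(2 * N) - s(N))) <= tail_bound + 1e-12
```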
Proposition 11.8.5.
Let \(\sum_{n=-\infty}^\infty c_n\, e^{inx}\) be a Fourier series, and let \(C\) and \(\alpha > 2\) be constants such that
\begin{equation*}
\sabs{c_n} \leq \frac{C}{\sabs{n}^\alpha}
\qquad \text{for all } n \in \Z \setminus \{ 0 \}.
\end{equation*}
Then the series converges to a continuously differentiable function on \(\R\text{.}\)
The proof is to note that the series converges to a continuous function by the previous proposition. In particular, it converges at some point. Then differentiate the partial sums
\begin{equation*}
\sum_{n=-N}^{N}
i n c_n \,e^{inx}
\end{equation*}
and notice that for all nonzero \(n\)
\begin{equation*}
\sabs{i n c_n} \leq \frac{C}{\sabs{n}^{\alpha-1}} .
\end{equation*}
The differentiated series converges uniformly by the \(M\)-test again. Since the differentiated series converges uniformly, we find that the original series \(\sum_{n=-\infty}^\infty c_n\,e^{inx}\) converges to a continuously differentiable function, whose derivative is the differentiated series (see Theorem 11.2.14).
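As an illustrative numerical check (our own sketch; the choice \(c_n = \nicefrac{1}{n^3}\) with \(c_{-n} = c_n\text{,}\) so \(\alpha = 3 > 2\text{,}\) is ours), a difference quotient of the truncated series matches the term-by-term differentiated series:

```python
import numpy as np

# Sketch: c_n = 1/n^3 and c_{-n} = c_n, so the series is
#   2 * sum_{n>=1} cos(nx)/n^3,
# which should be C^1 with derivative the differentiated series
#   -2 * sum_{n>=1} sin(nx)/n^2.
N = 2000
n = np.arange(1, N + 1, dtype=np.float64)

def f(x):        # truncated series
    return 2 * np.sum(np.cos(n * x) / n**3)

def fprime(x):   # truncated term-by-term differentiated series
    return -2 * np.sum(np.sin(n * x) / n**2)

x0, step = 1.0, 1e-5
central_diff = (f(x0 + step) - f(x0 - step)) / (2 * step)
assert abs(central_diff - fprime(x0)) < 1e-4
```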
We can iterate this reasoning. Suppose there is some \(C\) and \(\alpha > k+1\) (\(k \in \N\)) such that for all nonzero integers \(n\text{,}\)
Then the Fourier series converges to a \(k\)-times continuously differentiable function. Therefore, the faster the coefficients go to zero, the more regular the limit is.
Subsection 11.8.3 Orthonormal systems
Let us abstract away the exponentials, and study a more general series for a function. One fundamental property of the exponentials that makes Fourier series work is that the exponentials are a so-called orthonormal system. Fix an interval \([a,b]\text{.}\) We define an inner product for the space of functions. We restrict our attention to Riemann integrable functions as we do not have the Lebesgue integral, which would be the natural choice. Let \(f\) and \(g\) be complex-valued Riemann integrable functions on \([a,b]\) and define the inner product
\begin{equation*}
\langle f , g \rangle \coloneqq
\int_a^b f(x) \overline{g(x)} \, dx .
\end{equation*}
If you have seen Hermitian inner products in linear algebra, this is precisely such a product. We must include the conjugate as we are working with complex numbers. We then obtain the “size” of \(f\text{,}\) that is, the \(L^2\) norm \(\snorm{f}_2\text{,}\) by defining its square:
\begin{equation*}
\snorm{f}_2^2 \coloneqq
\langle f , f \rangle =
\int_a^b \sabs{f(x)}^2 \, dx .
\end{equation*}
Remark 11.8.6.
Note the similarity to finite dimensions. For \(z = (z_1,z_2,\ldots,z_d) \in \C^d\text{,}\) one defines
\begin{equation*}
\langle z , w \rangle \coloneqq
\sum_{n=1}^d z_n \overline{w_n} .
\end{equation*}
Then the norm is (usually denoted simply by \(\snorm{z}\) in \(\C^d\) rather than by \(\snorm{z}_2\))
\begin{equation*}
\snorm{z}^2 =
\langle z , z \rangle =
\sum_{n=1}^d \sabs{z_n}^2 .
\end{equation*}
This is just the Euclidean distance to the origin in \(\C^d\) (same as in \(\R^{2d}\)).
In what follows, we will assume all functions are Riemann integrable.
Definition 11.8.7.
Let \(\{ \varphi_n \}_{n=1}^\infty\) be a sequence of integrable complex-valued functions on \([a,b]\text{.}\) We say that this is an orthonormal system if
In particular, \(\snorm{\varphi_n}_2 = 1\) for all \(n\text{.}\) If we only require that \(\langle \varphi_n , \varphi_m \rangle = 0\) for \(m\not= n\text{,}\) then the system would be called an orthogonal system.
is an orthonormal system on \([-\pi,\pi]\text{.}\) The factor in front is there to make the norm equal 1.
Having an orthonormal system \(\{ \varphi_n \}_{n=1}^\infty\) on \([a,b]\) and an integrable function \(f\) on \([a,b]\text{,}\) we can write a Fourier series relative to \(\{ \varphi_n \}_{n=1}^\infty\text{.}\) Let
\begin{equation*}
\sum_{n=1}^\infty \langle f , \varphi_n \rangle \varphi_n(x) .
\end{equation*}
Notice the similarity to the expression for the orthogonal projection of a vector onto a subspace from linear algebra. We are in fact doing just that, but in a space of functions.
Theorem 11.8.8.
Suppose \(f\) is a Riemann integrable function on \([a,b]\text{.}\) Let \(\{ \varphi_n \}_{n=1}^\infty\) be an orthonormal system on \([a,b]\) and suppose
by the calculation above. We take a limit to obtain the so-called Bessel’s inequality.
Theorem 11.8.9. Bessel’s inequality.
^{ 4 } Suppose \(f\) is a Riemann integrable function on \([a,b]\text{.}\) Let \(\{ \varphi_n \}_{n=1}^\infty\) be an orthonormal system on \([a,b]\) and suppose
Subsection 11.8.4 The Dirichlet kernel and approximate delta functions
We return to the trigonometric Fourier series. The system \(\{ e^{inx} \}_{n=-\infty}^\infty\) is orthogonal, but not orthonormal if we simply integrate over \([-\pi,\pi]\text{.}\) We can rescale the integral and hence the inner product to make \(\{ e^{inx} \}_{n=-\infty}^\infty\) orthonormal. That is, if we replace
(we are just rescaling the \(dx\) really)^{ 5 }, then everything works and we obtain that the system \(\{ e^{inx} \}_{n=-\infty}^\infty\) is orthonormal with respect to the inner product
\begin{equation*}
\langle f , g \rangle =
\frac{1}{2\pi} \int_{-\pi}^\pi f(x) \, \overline{g(x)} \, dx .
\end{equation*}
Suppose \(f \colon \R \to \C\) is \(2\pi\)-periodic and integrable on \([-\pi,\pi]\text{.}\) Write
Recall the notation for the symmetric partial sums, \(s_N(f;x) \coloneqq
\sum_{n=-N}^N c_n \,e^{inx}\text{.}\) The inequality leading up to Bessel now reads:
for \(x\) such that \(\sin(\nicefrac{x}{2}) \not= 0\text{.}\) The left-hand side is continuous on \(\R\text{,}\) and hence the right-hand side extends continuously to all of \(\R\text{.}\) To show the claim, we use a familiar trick:
See Figure 11.12 for a plot of \(D_N\) for \(N=5\) and \(N=20\text{.}\)
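The closed form and the normalization can be confirmed numerically (our own sketch; grid size and the value of \(N\) are arbitrary choices): the exponential sum agrees with \(\frac{\sin\bigl((N+\nicefrac{1}{2})x\bigr)}{\sin(\nicefrac{x}{2})}\text{,}\) and \(\frac{1}{2\pi}\int_{-\pi}^\pi D_N = 1\text{.}\)

```python
import numpy as np

# Sketch: the Dirichlet kernel D_N(x) = sum_{n=-N}^N e^{inx}
# equals sin((N + 1/2) x) / sin(x/2) away from x = 0, and
# (1/2pi) int_{-pi}^{pi} D_N = 1 (only the n = 0 term survives).
N = 20
M = 1 << 14
# midpoint grid avoids x = 0, where the closed form is 0/0
x = -np.pi + 2 * np.pi * (np.arange(M) + 0.5) / M

D_sum = np.real(sum(np.exp(1j * n * x) for n in range(-N, N + 1)))
D_closed = np.sin((N + 0.5) * x) / np.sin(x / 2)
assert np.allclose(D_sum, D_closed, atol=1e-8)

integral = np.mean(D_sum)    # approximates (1/2pi) int_{-pi}^{pi} D_N
assert abs(integral - 1.0) < 1e-10
```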
The central peak gets taller and taller as \(N\) gets larger, while the side peaks stay small. We are convolving (again) with approximate delta functions, although these kernels oscillate away from zero. The oscillations do not go away, but they eventually become so fast that we expect the integral to essentially cancel itself out there. Overall, we expect that \(s_N(f)\) goes to \(f\text{.}\) Things are not always so simple, but under some conditions on \(f\text{,}\) such a conclusion holds. For this reason, people write
where \(\delta\) is the “delta function” (not really a function), an object that gives something like “\(\int_{-\pi}^{\pi} f(x-t) \delta(t) \, dt = f(x)\text{.}\)” We can think of \(D_N(x)\) as converging in some sense to \(2 \pi\, \delta(x)\text{.}\) However, we have not defined (and will not define) what the delta function is, nor what it means for it to be a limit of \(D_N\) or to have a Fourier series.
Subsection 11.8.5 Localization
If \(f\) satisfies a Lipschitz condition at a point, then the Fourier series converges at that point.
Theorem 11.8.10.
Let \(x\) be fixed and let \(f\) be a \(2\pi\)-periodic function Riemann integrable on \([-\pi,\pi]\text{.}\) Suppose there exist \(\delta > 0\) and \(M\) such that
\begin{equation*}
\sabs{f(x+t)-f(x)} \leq M \sabs{t}
\end{equation*}
In particular, if \(f\) is continuously differentiable at \(x\text{,}\) then we obtain convergence at \(x\) (exercise). A function \(f \colon [a,b] \to \C\) is continuous piecewise smooth if it is continuous and there exist points \(x_0 = a < x_1 < x_2 < \cdots < x_k = b\) such that for every \(j\text{,}\)\(f\) restricted to \([x_j,x_{j+1}]\) is continuously differentiable (up to the endpoints).
Corollary 11.8.11.
Let \(f\) be a \(2\pi\)-periodic function Riemann integrable on \([-\pi,\pi]\text{.}\) If there exist \(x\in \R\) and \(\delta > 0\) such that \(f\) is continuous piecewise smooth on \([x-\delta,x+\delta]\text{,}\) then
As \(\sin(\theta) = \theta + h(\theta)\) where \(\frac{h(\theta)}{\theta} \to 0\) as \(\theta \to 0\text{,}\) the quotient \(\frac{M\sabs{t}}{\sabs{\sin(\nicefrac{t}{2})}}\) is bounded near the origin (it tends to \(2M\) as \(t \to 0\)). Hence, \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})}\text{,}\) as a function of \(t\text{,}\) is bounded near the origin. As \(t=0\) is the only place on \([-\pi,\pi]\) where the denominator vanishes, it is the only place where there could be a problem. So the function is bounded near \(t=0\) and clearly Riemann integrable on any interval not including \(0\text{,}\) and thus it is Riemann integrable on \([-\pi,\pi]\text{.}\) We use the trigonometric identity
As functions of \(t\text{,}\) both \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \cos (\nicefrac{t}{2})\) and \(\bigl( f(x-t) - f(x) \bigr)\) are bounded Riemann integrable functions, and so their Fourier coefficients go to zero by Theorem 11.8.9. So the two integrals on the right-hand side, which compute the Fourier coefficients for the real version of the Fourier series, go to 0 as \(N\) goes to infinity. This is because \(\sin(Nt)\) and \(\cos(Nt)\) (suitably normalized) also form orthonormal systems with respect to the same inner product. Hence \(s_N(f;x)-f(x)\) goes to 0, that is, \(s_N(f;x)\) goes to \(f(x)\text{.}\)
The theorem also says that convergence depends only on local behavior. That is, to understand convergence of \(s_N(f;x)\) we only need to know \(f\) in some neighborhood of \(x\text{.}\)
Corollary 11.8.12.
Suppose \(f\) is a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{.}\) If \(J\) is an open interval and \(f(x) = 0\) for all \(x \in J\text{,}\) then \(\lim\limits_{N\to\infty} s_N(f;x) = 0\) for all \(x \in J\text{.}\)
In particular, if \(f\) and \(g\) are \(2\pi\)-periodic functions, Riemann integrable on \([-\pi,\pi]\text{,}\)\(J\) an open interval, and \(f(x) = g(x)\) for all \(x \in J\text{,}\) then for all \(x \in J\text{,}\) the sequence \(\bigl\{ s_N(f;x) \bigr\}_{N=1}^\infty\) converges if and only if \(\bigl\{ s_N(g;x) \bigr\}_{N=1}^\infty\) converges.
The first claim follows by taking \(M=0\) in the theorem. The “in particular” part follows by considering \(f-g\text{,}\) which is zero on \(J\text{,}\) and noting \(s_N(f-g) = s_N(f) - s_N(g)\text{.}\) So convergence at \(x\) depends only on the values of the function near \(x\text{.}\) However, we saw that the rate of convergence, that is, how fast \(s_N(f)\) converges to \(f\text{,}\) depends on the global behavior of \(f\text{.}\)
Note a subtle difference between the results above and what the Stone–Weierstrass theorem gives. Any continuous function on \([-\pi,\pi]\) can be uniformly approximated by trigonometric polynomials, but these trigonometric polynomials need not be the partial sums \(s_N\text{.}\)
Subsection 11.8.6 Parseval’s theorem
Finally, convergence always happens in the \(L^2\) sense and operations on the (infinite) vectors of Fourier coefficients are the same as the operations using the integral inner product.
Theorem 11.8.13. Parseval.
^{ 6 } Let \(f\) and \(g\) be \(2\pi\)-periodic functions, Riemann integrable on \([-\pi,\pi]\) with
Via Stone–Weierstrass, approximate \(h\) with a trigonometric polynomial uniformly. That is, there is a trigonometric polynomial \(P(x)\) such that \(\sabs{h(x) - P(x)} < \epsilon\) for all \(x\text{.}\) Hence
The right-hand side goes to 0 as \(N\) goes to infinity by the first claim of the theorem. That is, as \(N\) goes to infinity, \(\langle s_N(f),g \rangle\) goes to \(\langle f,g \rangle\text{,}\) and the second claim is proved. The last claim in the theorem follows by using \(g=f\text{.}\)
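A numerical illustration of the last claim (our own sketch) for the step function \(h\) of Example 11.8.3: with \(\hat{h}(n) = \frac{2}{i\pi n}\) for odd \(n\) and \(\hat{h}(n) = 0\) otherwise (our computation), \(\sum \sabs{\hat{h}(n)}^2\) should equal \(\frac{1}{2\pi}\int_{-\pi}^{\pi} \sabs{h}^2 = 1\text{.}\)

```python
import numpy as np

# Sketch of Parseval for the step function h:
# sum_{n in Z} |hat h(n)|^2 = (1/2pi) int_{-pi}^{pi} |h(x)|^2 dx = 1.
# |hat h(n)| = 2/(pi |n|) for odd n, 0 otherwise (our computation).
n = np.arange(1, 200001, 2, dtype=np.float64)   # positive odd indices
partial = 2 * np.sum((2 / (np.pi * n))**2)      # factor 2 for +n and -n
assert abs(partial - 1.0) < 1e-4
```

The slow \(\nicefrac{1}{n}\) decay of the coefficients is why so many terms are needed for even modest accuracy here.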
Show that the series converges uniformly and absolutely to a continuous function. Remark: This is another example of a nowhere differentiable function (you do not have to prove that)^{ 7 }. See Figure 11.13.
Exercise 11.8.2.
Suppose \(f\) is a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) such that \(f\) is continuously differentiable on some open interval \((a,b)\text{.}\) Prove that for every \(x \in (a,b)\text{,}\) we have \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\)
Exercise 11.8.3.
Prove Corollary 11.8.11. That is, suppose \(f\) is a \(2\pi\)-periodic function that is continuous piecewise smooth near a point \(x\text{;}\) prove that \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\) Hint: See the previous exercise.
Exercise 11.8.4.
Given a \(2\pi\)-periodic function \(f \colon \R \to \C\text{,}\) Riemann integrable on \([-\pi,\pi]\text{,}\) and \(\epsilon > 0\text{,}\) show that there exists a continuous \(2\pi\)-periodic function \(g \colon \R
\to \C\) such that \(\snorm{f-g}_2 < \epsilon\text{.}\)
Exercise 11.8.5.
Prove the Cauchy–Bunyakovsky–Schwarz inequality for Riemann integrable functions:
Suppose for some \(C\) and \(\alpha > 1\text{,}\) we have a real sequence \(\{ a_n \}_{n=1}^\infty\) with \(\abs{a_n} \leq \frac{C}{n^\alpha}\) for all \(n\text{.}\) Let
Formally (that is, supposing you can differentiate under the sum, and without yet worrying about convergence) find a solution to the differential equation
Suppose that \(c_n = 0\) for all \(n < 0\) and \(\sum_{n=0}^\infty \sabs{c_n}\) converges. Let \(\D \coloneqq B(0,1) \subset \C\) be the unit disc, and \(\overline{\D} = C(0,1)\) be the closed unit disc. Show that there exists a continuous function \(f \colon \overline{\D} \to \C\) that is analytic on \(\D\) and such that on the boundary of \(\D\) we have \(f(e^{i\theta}) = \sum_{n=0}^\infty c_n e^{in\theta}\text{.}\) Hint: If \(z=re^{i\theta}\text{,}\) then \(z^n = r^n e^{in\theta}\text{.}\)
converges to an infinitely differentiable function.
Exercise 11.8.11.
Let \(f\) be a \(2\pi\)-periodic function such that \(f(x) = f(0) + \int_0^x g\) for a function \(g\) that is Riemann integrable on every interval. Suppose
Show that there exists a \(C > 0\) such that \(\sabs{c_n} \leq \frac{C}{\sabs{n}}\) for all nonzero \(n\text{.}\)
Exercise 11.8.12.
Let \(\varphi\) be the \(2\pi\)-periodic function defined by \(\varphi(x) \coloneqq 0\) if \(x \in (-\pi,0)\text{,}\) and \(\varphi(x) \coloneqq 1\) if \(x \in (0,\pi)\text{,}\) letting \(\varphi(0)\) and \(\varphi(\pi)\) be arbitrary. Show that \(\lim\limits_{N \to \infty} s_N(\varphi;0) = \nicefrac{1}{2}\text{.}\)
Let \(f\) be a \(2\pi\)-periodic function Riemann integrable on \([-\pi,\pi]\text{,}\)\(x \in \R\text{,}\)\(\delta > 0\text{,}\) and there are continuously differentiable \(g \colon [x-\delta,x] \to \C\) and \(h \colon [x,x+\delta] \to \C\) where \(f(t) = g(t)\) for all \(t \in [x-\delta,x)\) and where \(f(t) = h(t)\) for all \(t \in (x,x+\delta]\text{.}\) Then \(\lim\limits_{N\to\infty} s_N(f;x) = \frac{g(x)+h(x)}{2}\text{,}\) or in other words,
Let \(\{ a_n \}_{n=1}^\infty\) be such that \(\lim_{n\to \infty} a_n = 0\text{.}\) Show that there is a continuous \(2\pi\)-periodic function \(f\) whose Fourier coefficients \(c_{n}\) satisfy that for each \(N\) there is a \(k \geq N\) where \(\sabs{c_k} \geq
a_k\text{.}\) Remark: The exercise says that if \(f\) is only continuous, there is no “minimum rate of decay” of the coefficients. Compare with Exercise 11.8.11. Hint: Look at Exercise 11.8.1 for inspiration.
An eigenfunction is like an eigenvector for a matrix, but for a linear operator on a vector space of functions.
The notation should seem similar to the Fourier transform to those readers who have seen it. The similarity is not just coincidental; we are taking a type of Fourier transform here.
Named after the German astronomer, mathematician, physicist, and geodesist Friedrich Wilhelm Bessel (1784–1846).
Mathematicians in this field sometimes simplify matters with a tongue-in-cheek definition that \(1=2\pi\text{.}\)
See G. H. Hardy, Weierstrass’s Non-Differentiable Function, Transactions of the American Mathematical Society, 17, No. 3 (Jul., 1916), pp. 301–325. A thing to notice here is that the \(n\)th Fourier coefficient is \(\nicefrac{1}{n}\) if \(n=2^k\) and zero otherwise, so the coefficients go to zero like \(\nicefrac{1}{n}\text{.}\)