# Basic Analysis I & II: Introduction to Real Analysis, Volumes I & II

## Section 7.6 Fixed point theorem and Picard’s theorem again

Note: 1 lecture (optional, does not require Section 6.3)
In this section we prove the fixed point theorem for contraction mappings. As an application we prove Picard’s theorem, which we proved without metric spaces in Section 6.3. The proof presented here is similar, but it goes much more smoothly with metric spaces and the fixed point theorem at our disposal.

### Subsection 7.6.1 Fixed point theorem

#### Definition 7.6.1.

Let $$(X,d_X)$$ and $$(Y,d_Y)$$ be metric spaces. A map $$\varphi \colon X \to Y$$ is a contraction (or a contractive map) if it is a $$k$$-Lipschitz map for some $$k < 1\text{,}$$ i.e. if there exists a $$k < 1$$ such that
\begin{equation*} d_Y\bigl(\varphi(p),\varphi(q)\bigr) \leq k\, d_X(p,q) \qquad \text{for all } p,q \in X. \end{equation*}
Given a map $$\varphi \colon X \to X\text{,}$$ a point $$x \in X$$ is called a fixed point if $$\varphi(x)=x\text{.}$$
#### Theorem 7.6.2. Contraction mapping principle (Banach fixed point theorem).

Let $$(X,d)$$ be a nonempty complete metric space and let $$\varphi \colon X \to X$$ be a contraction. Then $$\varphi$$ has a unique fixed point.
The words complete and contraction are necessary. See Exercise 7.6.6.

#### Proof.

Pick $$x_0 \in X\text{.}$$ Define a sequence $$\{ x_n \}_{n=0}^\infty$$ by $$x_{n+1} \coloneqq \varphi(x_n)\text{.}$$ Then
\begin{equation*} d(x_{n+1},x_n) = d\bigl(\varphi(x_n),\varphi(x_{n-1})\bigr) \leq k d(x_n,x_{n-1}) . \end{equation*}
Repeating $$n$$ times, we get $$d(x_{n+1},x_n) \leq k^n d(x_1,x_0)\text{.}$$ For $$m > n\text{,}$$
\begin{equation*} \begin{split} d(x_m,x_n) & \leq \sum_{i=n}^{m-1} d(x_{i+1},x_i) \\ & \leq \sum_{i=n}^{m-1} k^i d(x_1,x_0) \\ & = k^n d(x_1,x_0) \sum_{i=0}^{m-n-1} k^i \\ & \leq k^n d(x_1,x_0) \sum_{i=0}^{\infty} k^i = k^n d(x_1,x_0) \frac{1}{1-k} . \end{split} \end{equation*}
In particular, the sequence is Cauchy (why?). Since $$X$$ is complete, we let $$x \coloneqq \lim_{n\to\infty} x_n\text{,}$$ and we claim that $$x$$ is our unique fixed point.
Fixed point? The function $$\varphi$$ is a contraction, so it is Lipschitz continuous:
\begin{equation*} \varphi(x) = \varphi\Bigl( \lim_{n\to\infty} x_n\Bigr) = \lim_{n\to\infty} \varphi(x_n) = \lim_{n\to\infty} x_{n+1} = x . \end{equation*}
Unique? Let $$x$$ and $$y$$ be fixed points.
\begin{equation*} d(x,y) = d\bigl(\varphi(x),\varphi(y)\bigr) \leq k\, d(x,y) . \end{equation*}
As $$k < 1\text{,}$$ the inequality means that $$d(x,y) = 0\text{,}$$ and hence $$x=y\text{.}$$ The theorem is proved.
The proof is constructive. Not only do we know that a unique fixed point exists, we also know how to find it. Start with any point $$x_0 \in X\text{,}$$ then iterate $$\varphi(x_0)\text{,}$$ $$\varphi(\varphi(x_0))\text{,}$$ $$\varphi(\varphi(\varphi(x_0)))\text{,}$$ etc. to find better and better approximations. We can even estimate how far away from the fixed point we are; see the exercises. The idea of the proof is therefore useful in real-world applications.
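The iteration procedure from the proof is easy to carry out on a computer. Here is a minimal Python sketch (illustrative, not part of the text); the function name `fixed_point` and the choice of $$\varphi = \cos$$ on $$[0,1]$$ are our own assumptions for the example.

```python
import math

def fixed_point(phi, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = phi(x_n) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0,1]: its derivative is bounded by sin(1) < 1 there,
# and cos maps [0,1] into [cos 1, 1], so the iteration converges to the unique
# fixed point of cos in [0,1].
x = fixed_point(math.cos, 0.5)
print(x)  # approximately 0.739085...
```

Since the contraction constant here is $$k = \sin(1) \approx 0.84\text{,}$$ each iterate shrinks the distance to the fixed point by that factor, so a couple hundred iterations suffice for machine precision.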

### Subsection 7.6.2 Picard’s theorem

We start with the metric space where we will apply the fixed point theorem: the space $$C\bigl([a,b],\R\bigr)$$ of Example 7.1.8, the space of continuous functions $$f \colon [a,b] \to \R$$ with the metric
\begin{equation*} d(f,g) \coloneqq \snorm{f-g}_{[a,b]} = \sup_{x \in [a,b]} \abs{f(x)-g(x)} . \end{equation*}
Convergence in this metric is convergence in uniform norm, or in other words, uniform convergence. Therefore, $$C\bigl([a,b],\R\bigr)$$ is a complete metric space, see Proposition 7.4.5.
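To get a concrete feel for this metric, one can approximate the supremum on a fine grid. The following Python sketch is illustrative (the name `sup_metric` and the sample functions are our own assumptions): for $$f(x)=x$$ and $$g(x)=x^2$$ on $$[0,1]$$ the supremum of $$\abs{f-g}$$ is $$\nicefrac{1}{4}\text{,}$$ attained at $$x=\nicefrac{1}{2}\text{.}$$

```python
def sup_metric(f, g, a, b, n=100_000):
    """Approximate d(f,g) = sup_{x in [a,b]} |f(x) - g(x)| on a uniform grid."""
    return max(abs(f(a + (b - a) * i / n) - g(a + (b - a) * i / n))
               for i in range(n + 1))

# d(f,g) for f(x) = x and g(x) = x^2 on [0,1]; the exact value is 1/4.
d = sup_metric(lambda x: x, lambda x: x * x, 0.0, 1.0)
print(d)  # approximately 0.25
```

For continuous functions the grid maximum converges to the supremum as the grid is refined; here the grid contains the maximizer $$x = \nicefrac{1}{2}$$ exactly.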
Consider now the ordinary differential equation
\begin{equation*} \frac{dy}{dx} = F(x,y) . \end{equation*}
Given some $$x_0, y_0\text{,}$$ we desire a function $$y=f(x)$$ such that $$f(x_0) = y_0$$ and such that
\begin{equation*} f'(x) = F\bigl(x,f(x)\bigr) . \end{equation*}
To avoid having to come up with many names, we often simply write $$y' = F(x,y)$$ for the equation and $$y(x)$$ for the solution.
The simplest example is the equation $$y' = y\text{,}$$ $$y(0) = 1\text{.}$$ The solution is the exponential $$y(x) = e^x\text{.}$$ A somewhat more complicated example is $$y' = -2xy\text{,}$$ $$y(0) = 1\text{,}$$ whose solution is the Gaussian $$y(x) = e^{-x^2}\text{.}$$
A subtle issue is how long the solution exists. Consider the equation $$y' = y^2\text{,}$$ $$y(0)=1\text{.}$$ Then $$y(x) = \frac{1}{1-x}$$ is a solution. While $$F$$ is a reasonably “nice” function defined for all $$x$$ and $$y\text{,}$$ the solution “blows up” at $$x=1\text{.}$$ For more examples related to Picard’s theorem, see Section 6.3.
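Indeed, a direct computation verifies that $$y(x) = \frac{1}{1-x}$$ solves the problem on $$(-\infty,1)\text{:}$$
\begin{equation*} y'(x) = \frac{d}{dx} \, {(1-x)}^{-1} = {(1-x)}^{-2} = {\left( \frac{1}{1-x} \right)}^{2} = {\bigl(y(x)\bigr)}^{2} , \qquad y(0) = \frac{1}{1-0} = 1 . \end{equation*}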
We will look for the solution in $$C\bigl([a,b],\R\bigr)\text{,}$$ which may feel strange at first as we are searching for a differentiable function. The explanation is that we consider the corresponding integral equation
\begin{equation*} f(x) = y_0 + \int_{x_0}^x F\bigl(t,f(t)\bigr)\,dt . \end{equation*}
To solve this integral equation we only need a continuous function, and in some sense our task should be easier—we have more candidate functions to try. This way of thinking is quite typical when solving differential equations.
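To see the integral-equation viewpoint in action, here is a small Python sketch (illustrative, not from the text) of the iterates $$f_{n+1}(x) = 1 + \int_0^x f_n(t)\,dt$$ for $$y'=y\text{,}$$ $$y(0)=1\text{.}$$ The function names `picard_step` and `evaluate` are our own; polynomials are stored as coefficient lists, and the iterates turn out to be the Taylor polynomials of $$e^x\text{.}$$

```python
import math

def picard_step(coeffs):
    """Given coefficients c[k] of a polynomial f_n, return the coefficients of
    f_{n+1}(x) = 1 + integral_0^x f_n(t) dt, for the problem y' = y, y(0) = 1."""
    return [1.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

def evaluate(coeffs, x):
    """Evaluate a polynomial given by its coefficient list."""
    return sum(c * x**k for k, c in enumerate(coeffs))

f = [1.0]                 # f_0(x) = 1
for _ in range(15):
    f = picard_step(f)    # f_n is the degree-n Taylor polynomial of e^x

print(evaluate(f, 1.0))   # approximately e = 2.71828...
```

Each integration step is exact on polynomials, so the scheme here reproduces the partial sums $$\sum_{k=0}^{n} \frac{x^k}{k!}$$ precisely.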

#### Theorem 7.6.3. Picard’s theorem on existence and uniqueness.

Let $$I, J \subset \R$$ be closed bounded intervals, let $$I^\circ$$ and $$J^\circ$$ denote their interiors, and suppose $$(x_0,y_0) \in I^\circ \times J^\circ\text{.}$$ Suppose $$F \colon I \times J \to \R$$ is continuous and Lipschitz in the second variable, that is, there exists an $$L \in \R$$ such that
\begin{equation*} \abs{F(x,y) - F(x,z)} \leq L \abs{y-z} \qquad \text{for all } y,z \in J, \ x \in I . \end{equation*}
Then there exists an $$h > 0$$ and a unique differentiable function $$f \colon [x_0 - h, x_0 + h] \to J$$ such that $$f'(x) = F\bigl(x,f(x)\bigr)$$ and $$f(x_0) = y_0\text{.}$$

#### Proof.

Without loss of generality, assume $$x_0 =0$$ (exercise). As $$I \times J$$ is compact and $$F$$ is continuous, $$F$$ is bounded. So find an $$M > 0$$ such that $$\abs{F(x,y)} \leq M$$ for all $$(x,y) \in I\times J\text{.}$$ Pick $$\alpha > 0$$ such that $$[-\alpha,\alpha] \subset I$$ and $$[y_0-\alpha, y_0 + \alpha] \subset J\text{.}$$ Let
\begin{equation*} h \coloneqq \min \left\{ \alpha, \frac{\alpha}{M+L\alpha} \right\} . \end{equation*}
Note $$[-h,h] \subset I\text{.}$$ Let
\begin{equation*} Y \coloneqq \bigl\{ f \in C\bigl([-h,h],\R\bigr) : f\bigl([-h,h]\bigr) \subset J \bigr\} . \end{equation*}
That is, $$Y$$ is the set of continuous functions on $$[-h,h]$$ with values in $$J\text{,}$$ in other words, exactly those functions where $$F\bigl(x,f(x)\bigr)$$ makes sense. It is left as an exercise to show that $$Y$$ is a closed subset of $$C\bigl([-h,h],\R\bigr)$$ (because $$J$$ is closed). The space $$C\bigl([-h,h],\R\bigr)$$ is complete, and a closed subset of a complete metric space is a complete metric space with the subspace metric, see Proposition 7.4.6. So $$Y$$ with the subspace metric is a complete metric space. We will write $$d(f,g) = \snorm{f-g}_{[-h,h]}$$ for this metric.
Define a mapping $$T \colon Y \to C\bigl([-h,h],\R\bigr)$$ by
\begin{equation*} T(f)(x) \coloneqq y_0 + \int_0^x F\bigl(t,f(t)\bigr)\,dt . \end{equation*}
It is an exercise to check that $$T$$ is well-defined, and that for $$f \in Y\text{,}$$ $$T(f)$$ really is in $$C\bigl([-h,h],\R\bigr)\text{.}$$ Let $$f \in Y$$ and $$\abs{x} \leq h\text{.}$$ As $$F$$ is bounded by $$M\text{,}$$ we have
\begin{equation*} \begin{split} \abs{T(f)(x) - y_0} &= \abs{\int_0^x F\bigl(t,f(t)\bigr)\,dt} \\ & \leq \abs{x}M \leq hM \leq \frac{\alpha M}{M+ L\alpha} \leq \alpha . \end{split} \end{equation*}
So $$T(f)\bigl([-h,h]\bigr) \subset [y_0-\alpha,y_0+\alpha] \subset J\text{,}$$ and $$T(f) \in Y\text{.}$$ In other words, $$T(Y) \subset Y\text{.}$$ From now on, we consider $$T$$ as a mapping of $$Y$$ to $$Y\text{.}$$
We claim $$T \colon Y \to Y$$ is a contraction. First, for $$x \in [-h,h]$$ and $$f,g \in Y\text{,}$$ we have
\begin{equation*} \abs{F\bigl(x,f(x)\bigr) - F\bigl(x,g(x)\bigr)} \leq L\abs{f(x)- g(x)} \leq L \, d(f,g) . \end{equation*}
Therefore,
\begin{equation*} \begin{split} \abs{T(f)(x) - T(g)(x)} &= \abs{\int_0^x \Bigl( F\bigl(t,f(t)\bigr) - F\bigl(t,g(t)\bigr)\Bigr)\,dt} \\ & \leq \abs{x} L \, d(f,g) \leq h L\, d(f,g) \leq \frac{L\alpha}{M+L\alpha} \, d(f,g) . \end{split} \end{equation*}
We chose $$M > 0$$ and so $$\frac{L\alpha}{M+L\alpha} < 1\text{.}$$ Take supremum over $$x \in [-h,h]$$ of the left-hand side above to obtain $$d\bigl(T(f),T(g)\bigr) \leq \frac{L\alpha}{M+L\alpha} \, d(f,g)\text{,}$$ that is, $$T$$ is a contraction.
The fixed point theorem (Theorem 7.6.2) gives a unique $$f \in Y$$ such that $$T(f) = f\text{.}$$ In other words,
\begin{equation*} f(x) = y_0 + \int_0^x F\bigl(t,f(t)\bigr)\,dt . \end{equation*}
Clearly, $$f(0) = y_0\text{.}$$ By the fundamental theorem of calculus (Theorem 5.3.3), $$f$$ is differentiable and its derivative is $$F\bigl(x,f(x)\bigr)\text{.}$$ Differentiable functions are continuous, so $$f$$ is the unique differentiable function $$f \colon [-h,h] \to J$$ such that $$f'(x) = F\bigl(x,f(x)\bigr)$$ and $$f(0) = y_0\text{.}$$
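The operator $$T$$ from the proof can also be iterated numerically. Below is a Python sketch (our own illustration, with hypothetical names `iterate_T` and grid parameters) that represents functions by their values on a uniform grid over $$[0,h]$$ and approximates the integral by the trapezoidal rule, applied to $$y'=-2xy\text{,}$$ $$y(0)=1$$ with exact solution $$e^{-x^2}\text{.}$$

```python
import math

def iterate_T(F, y0, h, n=1000, iterations=60):
    """Iterate the Picard operator T(f)(x) = y0 + integral_0^x F(t, f(t)) dt
    on a uniform grid over [0, h], using the cumulative trapezoidal rule."""
    xs = [h * i / n for i in range(n + 1)]
    dx = h / n
    f = [y0] * (n + 1)                      # start from the constant function y0
    for _ in range(iterations):
        g = [F(x, fx) for x, fx in zip(xs, f)]
        new = [y0]
        for i in range(1, n + 1):
            new.append(new[-1] + 0.5 * (g[i - 1] + g[i]) * dx)
        f = new
    return xs, f

# y' = -2xy, y(0) = 1 has the solution y(x) = exp(-x^2).
xs, f = iterate_T(lambda x, y: -2.0 * x * y, 1.0, 0.5)
err = max(abs(fx - math.exp(-x * x)) for x, fx in zip(xs, f))
print(err)  # small: the iteration converges; only the grid error remains
```

On $$[0, 0.5]$$ the Lipschitz constant in $$y$$ is at most $$1\text{,}$$ so the contraction factor $$hL$$ is at most $$\nicefrac{1}{2}$$ and the iterates converge geometrically, exactly as the proof predicts; what remains is the $$O(dx^2)$$ quadrature error.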

### Subsection 7.6.3 Exercises

For more exercises related to Picard’s theorem see Section 6.3.

#### Exercise 7.6.1.

Let $$J$$ be a closed and bounded interval and $$Y \coloneqq \bigl\{ f \in C\bigl([-h,h],\R\bigr) : f\bigl([-h,h]\bigr) \subset J \bigr\}\text{.}$$ Show that $$Y \subset C\bigl([-h,h],\R\bigr)$$ is closed. Hint: $$J$$ is closed.

#### Exercise 7.6.2.

In the proof of Picard’s theorem, show that if $$f \colon [-h,h] \to J$$ is continuous, then $$F\bigl(t,f(t)\bigr)$$ is continuous on $$[-h,h]$$ as a function of $$t\text{.}$$ Use this to show that
\begin{equation*} T(f)(x) \coloneqq y_0 + \int_0^x F\bigl(t,f(t)\bigr)\,dt \end{equation*}
is well-defined and that $$T(f) \in C\bigl([-h,h],\R\bigr)\text{.}$$

#### Exercise 7.6.3.

Prove that in the proof of Picard’s theorem, the statement “Without loss of generality assume $$x_0 = 0$$” is justified. That is, prove that if we know the theorem with $$x_0 = 0\text{,}$$ the theorem is true as stated.

#### Exercise 7.6.4.

Let $$F \colon \R \to \R$$ be defined by $$F(x) \coloneqq kx + b$$ where $$0 < k < 1\text{,}$$ $$b \in \R\text{.}$$
1. Show that $$F$$ is a contraction.
2. Find the fixed point and show directly that it is unique.

#### Exercise 7.6.5.

Let $$f \colon [0,\nicefrac{1}{4}] \to [0,\nicefrac{1}{4}]$$ be defined by $$f(x) \coloneqq x^2\text{.}$$
1. Show that $$f$$ is a contraction, and find the best (smallest) $$k$$ from the definition that works.
2. Find the fixed point and show directly that it is unique.

#### Exercise 7.6.6.

1. Find an example of a contraction $$f \colon X \to X$$ of a non-complete metric space $$X$$ with no fixed point.
2. Find a 1-Lipschitz map $$f \colon X \to X$$ of a complete metric space $$X$$ with no fixed point.

#### Exercise 7.6.7.

Consider $$y' =y^2\text{,}$$ $$y(0)=1\text{.}$$ Use the iteration scheme from the proof of the contraction mapping principle. Start with $$f_0(x) = 1\text{.}$$ Find a few iterates (at least up to $$f_2$$). Prove that the pointwise limit of $$f_n$$ is $$\frac{1}{1-x}\text{,}$$ that is, for some $$h > 0$$ and every $$x$$ with $$\abs{x} < h\text{,}$$ prove that $$\lim\limits_{n\to\infty}f_n(x) = \frac{1}{1-x}\text{.}$$

#### Exercise 7.6.8.

Suppose $$f \colon X \to X$$ is a contraction with contraction constant $$k < 1\text{.}$$ Suppose you use the iteration procedure with $$x_{n+1} \coloneqq f(x_n)$$ as in the proof of the fixed point theorem. Suppose $$x$$ is the fixed point of $$f\text{.}$$
1. Show that $$d(x,x_n) \leq k^n d(x_1,x_0) \frac{1}{1-k}$$ for all $$n \in \N\text{.}$$
2. Suppose $$d(y_1,y_2) \leq 16$$ for all $$y_1,y_2 \in X\text{,}$$ and $$k= \nicefrac{1}{2}\text{.}$$ Find an $$N$$ such that starting at any given point $$x_0 \in X\text{,}$$ $$d(x,x_n) \leq 2^{-16}$$ for all $$n \geq N\text{.}$$

#### Exercise 7.6.9.

Let $$f(x) \coloneqq x-\frac{x^2-2}{2x}$$ (you may recognize Newton’s method for $$\sqrt{2}$$).
1. Prove $$f\bigl([1,\infty)\bigr) \subset [1,\infty)\text{.}$$
2. Prove that $$f \colon [1,\infty) \to [1,\infty)$$ is a contraction.
3. Show that the fixed point theorem applies, find the unique $$x \geq 1$$ such that $$f(x) = x\text{,}$$ and show that $$x = \sqrt{2}\text{.}$$ Note: In particular, the technique from the proof of the theorem can be used to approximate $$\sqrt{2}\text{.}$$

#### Exercise 7.6.10.

Suppose $$f \colon X \to X$$ is a contraction, and $$(X,d)$$ is a metric space with the discrete metric, that is, $$d(x,y) = 1$$ whenever $$x \not= y\text{.}$$ Show that $$f$$ is constant, that is, there exists a $$c \in X$$ such that $$f(x) = c$$ for all $$x \in X\text{.}$$

#### Exercise 7.6.11.

Suppose $$(X,d)$$ is a nonempty complete metric space, $$f \colon X \to X$$ is a mapping, and denote by $$f^n$$ the $$n$$th iterate of $$f\text{.}$$ Suppose for every $$n$$ there exists a $$k_n > 0$$ such that $$d\bigl(f^n(x),f^n(y)\bigr) \leq k_n \, d(x,y)$$ for all $$x,y \in X\text{,}$$ where $$\sum_{n=1}^\infty k_n < \infty\text{.}$$ Prove that $$f$$ has a unique fixed point in $$X\text{.}$$
Named after the Polish mathematician Stefan Banach (1892–1945) who first stated the theorem in 1922.