
Section 3.3 Extreme and intermediate value theorems

Note: 1.5 lectures
Continuous functions on closed and bounded intervals are quite well behaved.

Subsection 3.3.1 Min-max or extreme value theorem

Recall that \(f \colon [a,b] \to \R\) is bounded if there exists a \(B \in \R\) such that \(\abs{f(x)} \leq B\) for all \(x \in [a,b]\text{.}\) For a continuous function on a closed and bounded interval, we have the following lemma.

Lemma 3.3.1.

A continuous function \(f \colon [a,b] \to \R\) is bounded.

Proof.

We prove the claim by contrapositive. Suppose \(f\) is not bounded. Then for each \(n \in \N\text{,}\) there is an \(x_n \in [a,b]\text{,}\) such that
\begin{equation*} \abs{f(x_n)} \geq n . \end{equation*}
The sequence \(\{ x_n \}_{n=1}^\infty\) is bounded as \(a \leq x_n \leq b\text{.}\) By the Bolzano–Weierstrass theorem, there is a convergent subsequence \(\{ x_{n_i} \}_{i=1}^\infty\text{.}\) Let \(x \coloneqq \lim_{i\to\infty} x_{n_i}\text{.}\) Since \(a \leq x_{n_i} \leq b\) for all \(i\text{,}\) then \(a \leq x \leq b\text{.}\) The sequence \(\bigl\{ f(x_{n_i}) \bigr\}_{i=1}^\infty\) is not bounded as \(\abs{f(x_{n_i})} \geq n_i \geq i\text{.}\) Thus \(f\) is not continuous at \(x\) as
\begin{equation*} f(x) = f\Bigl( \lim_{i\to\infty} x_{n_i} \Bigr) , \qquad \text{but} \qquad \lim_{i\to\infty} f(x_{n_i}) \enspace \text{does not exist.} \qedhere \end{equation*}
Notice a key point of the proof. Boundedness of \([a,b]\) allows us to use Bolzano–Weierstrass, while the fact that it is closed guarantees that the limit lies back in \([a,b]\text{.}\) The technique is a common one: Find a sequence with a certain property, then use Bolzano–Weierstrass to extract a subsequence with that property that also converges.
Recall from calculus that \(f \colon S \to \R\) achieves an absolute minimum at \(c \in S\) if
\begin{equation*} f(x) \geq f(c) \qquad \text{for all } x \in S. \end{equation*}
On the other hand, \(f\) achieves an absolute maximum at \(c \in S\) if
\begin{equation*} f(x) \leq f(c) \qquad \text{for all } x \in S. \end{equation*}
If such a \(c \in S\) exists, then we say \(f\) achieves an absolute minimum (resp. absolute maximum) on \(S\), and we call \(f(c)\) the absolute minimum (resp. absolute maximum).

Figure 3.6. \(f \colon [a,b] \to \R\) achieves an absolute maximum \(f(c)\) at \(c\text{,}\) and an absolute minimum \(f(d)\) at \(d\text{.}\)

If \(S\) is a closed and bounded interval, then a continuous \(f\) is not just bounded, it must achieve an absolute minimum and an absolute maximum on \(S\text{.}\)
Again, we remark that it is important that the domain of \(f\) is a closed and bounded interval \([a,b]\text{.}\)

Theorem 3.3.2 (Minimum-maximum theorem).

Let \(f \colon [a,b] \to \R\) be a continuous function. Then \(f\) achieves both an absolute minimum and an absolute maximum on \([a,b]\text{.}\)

Proof.

The lemma says that \(f\) is bounded, so the set \(f\bigl([a,b]\bigr) = \bigl\{ f(x) : x \in [a,b] \bigr\}\) has a supremum and an infimum. There exist sequences in the set \(f\bigl([a,b]\bigr)\) that approach its supremum and its infimum. That is, there are sequences \(\bigl\{ f(x_n) \bigr\}_{n=1}^\infty\) and \(\bigl\{ f(y_n) \bigr\}_{n=1}^\infty\text{,}\) where \(x_n\) and \(y_n\) are in \([a,b]\text{,}\) such that
\begin{equation*} \lim_{n\to\infty} f(x_n) = \inf f\bigl([a,b]\bigr) \qquad \text{and} \qquad \lim_{n\to\infty} f(y_n) = \sup f\bigl([a,b]\bigr). \end{equation*}
We are not done yet; we need to find where the minimum and the maximum are achieved. The problem is that the sequences \(\{ x_n \}_{n=1}^\infty\) and \(\{ y_n \}_{n=1}^\infty\) need not converge. We know \(\{ x_n \}_{n=1}^\infty\) and \(\{ y_n \}_{n=1}^\infty\) are bounded (their elements belong to the bounded interval \([a,b]\)). Apply the Bolzano–Weierstrass theorem to find convergent subsequences \(\{ x_{n_i} \}_{i=1}^\infty\) and \(\{ y_{m_i} \}_{i=1}^\infty\text{.}\) Let
\begin{equation*} x \coloneqq \lim_{i\to\infty} x_{n_i} \qquad \text{and} \qquad y \coloneqq \lim_{i\to\infty} y_{m_i}. \end{equation*}
As \(a \leq x_{n_i} \leq b\) for all \(i\text{,}\) we have \(a \leq x \leq b\text{.}\) Similarly, \(a \leq y \leq b\text{.}\) So \(x\) and \(y\) are in \([a,b]\text{.}\) A limit of a subsequence is the same as the limit of the sequence, and we can take a limit past the continuous function \(f\text{:}\)
\begin{equation*} \inf f\bigl([a,b]\bigr) = \lim_{n\to\infty} f(x_n) = \lim_{i\to\infty} f(x_{n_i}) = f \Bigl( \lim_{i\to\infty} x_{n_i} \Bigr) = f(x) . \end{equation*}
Similarly,
\begin{equation*} \sup f\bigl([a,b]\bigr) = \lim_{n\to\infty} f(y_n) = \lim_{i\to\infty} f(y_{m_i}) = f \Bigl( \lim_{i\to\infty} y_{m_i} \Bigr) = f(y) . \end{equation*}
Hence, \(f\) achieves an absolute minimum at \(x\) and an absolute maximum at \(y\text{.}\)

Example 3.3.3.

The function \(f(x) \coloneqq x^2+1\) defined on the interval \([-1,2]\) achieves a minimum at \(x=0\text{,}\) where \(f(0) = 1\text{.}\) It achieves a maximum at \(x=2\text{,}\) where \(f(2) = 5\text{.}\) Do note that the domain of definition matters. If we instead took the domain to be \([-10,10]\text{,}\) then \(f\) would no longer have a maximum at \(x=2\text{.}\) Instead, the maximum would be achieved at both \(x=-10\) and \(x=10\text{.}\)
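As a quick numerical illustration (a minimal Python sketch of our own; the grid step of 0.01 is an arbitrary choice), one can sample \(f\) on both domains and see where the extrema occur:

```python
# Sample f(x) = x^2 + 1 on a fine grid of each domain and report where the
# sampled extrema occur; the grid step of 0.01 is an arbitrary choice.
def f(x):
    return x**2 + 1

for a, b in [(-1, 2), (-10, 10)]:
    n = int((b - a) * 100)                        # number of subintervals
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    values = [f(x) for x in xs]
    x_min = xs[values.index(min(values))]
    x_max = xs[values.index(max(values))]
    print(f"on [{a},{b}]: min {min(values)} at x = {x_min}, "
          f"max {max(values)} at x = {x_max}")
```

On \([-1,2]\) this reports the minimum \(1\) at \(x=0\) and the maximum \(5\) at \(x=2\text{;}\) on \([-10,10]\) the maximum moves to the endpoints.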
We show by examples that the different hypotheses of the theorem are truly needed.

Example 3.3.4.

The function \(f \colon \R \to \R\) defined by \(f(x) \coloneqq x\) achieves neither a minimum, nor a maximum. So it is important that we are looking at a bounded interval.

Example 3.3.5.

The function \(f \colon (0,1) \to \R\) defined by \(f(x) \coloneqq \nicefrac{1}{x}\) achieves neither a minimum, nor a maximum. It is continuous, but \((0,1)\) is not closed. The values of the function are unbounded as we approach \(0\text{.}\) Also as we approach \(x=1\text{,}\) the values of the function approach \(1\text{,}\) but \(f(x) > 1\) for all \(x \in (0,1)\text{.}\) There is no \(x \in (0,1)\) such that \(f(x) = 1\text{.}\) So it is important that we are looking at a closed interval.

Example 3.3.6.

Continuity is important. Define \(f \colon [0,1] \to \R\) by \(f(x) \coloneqq \nicefrac{1}{x}\) for \(x > 0\) and let \(f(0) \coloneqq 0\text{.}\) The function does not achieve a maximum. The domain \([0,1]\) is closed and bounded, but the problem is that the function is not continuous at 0.

Subsection 3.3.2 Bolzano’s intermediate value theorem

Bolzano’s intermediate value theorem is one of the cornerstones of analysis. It is sometimes only called the intermediate value theorem, or just Bolzano’s theorem. To prove Bolzano’s theorem we prove the following simpler lemma.

Lemma 3.3.7.

Let \(f \colon [a,b] \to \R\) be a continuous function. Suppose \(f(a) < 0\) and \(f(b) > 0\text{.}\) Then there exists a number \(c \in (a,b)\) such that \(f(c) = 0\text{.}\)

Proof.

We define two sequences \(\{ a_n \}_{n=1}^\infty\) and \(\{ b_n \}_{n=1}^\infty\) inductively:
  1. Let \(a_1 \coloneqq a\) and \(b_1 \coloneqq b\text{.}\)
  2. If \(f\left(\frac{a_n+b_n}{2}\right) \geq 0\text{,}\) let \(a_{n+1} \coloneqq a_n\) and \(b_{n+1} \coloneqq \frac{a_n+b_n}{2}\text{.}\)
  3. If \(f\left(\frac{a_n+b_n}{2}\right) < 0\text{,}\) let \(a_{n+1} \coloneqq \frac{a_n+b_n}{2}\) and \(b_{n+1} \coloneqq b_n\text{.}\)

Figure 3.7. Finding roots (bisection method).

See Figure 3.7 for an example defining the first five steps. If \(a_n < b_n\text{,}\) then \(a_n < \frac{a_n+b_n}{2} < b_n\text{.}\) So \(a_{n+1} < b_{n+1}\text{.}\) Thus by induction \(a_n < b_n\) for all \(n\text{.}\) Furthermore, \(a_n \leq a_{n+1}\) and \(b_n \geq b_{n+1}\) for all \(n\text{,}\) that is, the sequences are monotone. As \(a_n < b_n \leq b_1 = b\) and \(b_n > a_n \geq a_1 = a\) for all \(n\text{,}\) the sequences are also bounded. Therefore, the sequences converge. Let \(c \coloneqq \lim_{n\to\infty} a_n\) and \(d \coloneqq \lim_{n\to\infty} b_n\text{,}\) where also \(a \leq c \leq d \leq b\text{.}\) We need to show that \(c=d\text{.}\) Notice
\begin{equation*} b_{n+1} - a_{n+1} = \frac{b_n-a_n}{2}, \end{equation*}
and so by induction,
\begin{equation*} b_n - a_n = \frac{b_1-a_1}{2^{n-1}} = 2^{1-n} (b-a) . \end{equation*}
As \(2^{1-n}(b-a)\) converges to zero, we take the limit as \(n\) goes to infinity to get
\begin{equation*} d-c = \lim_{n\to\infty} (b_n - a_n) = \lim_{n\to\infty} 2^{1-n} (b-a) = 0. \end{equation*}
In other words, \(d=c\text{.}\)
By construction, for all \(n\text{,}\)
\begin{equation*} f(a_n) < 0 \qquad \text{and} \qquad f(b_n) \geq 0 . \end{equation*}
Since \(\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = c\) and \(f\) is continuous at \(c\text{,}\) we may take limits in those inequalities:
\begin{equation*} f(c) = \lim_{n\to\infty} f(a_n) \leq 0 \qquad \text{and} \qquad f(c) = \lim_{n\to\infty} f(b_n) \geq 0 . \end{equation*}
As \(f(c) \geq 0\) and \(f(c) \leq 0\text{,}\) we conclude \(f(c) = 0\text{.}\) Since \(f(a) < 0\) and \(f(b) > 0\text{,}\) we also get \(c \not=a\) and \(c \not= b\text{,}\) so \(a < c < b\text{.}\)
The theorem says that a continuous function on a closed interval achieves all the values between the values at the endpoints.

Theorem 3.3.8 (Bolzano’s intermediate value theorem).

Let \(f \colon [a,b] \to \R\) be a continuous function. Suppose \(y \in \R\) is such that \(f(a) < y < f(b)\) or \(f(a) > y > f(b)\text{.}\) Then there exists a \(c \in (a,b)\) such that \(f(c) = y\text{.}\)

Proof.

If \(f(a) < y < f(b)\text{,}\) define \(g(x) \coloneqq f(x)-y\text{.}\) Then \(g(a) < 0\) and \(g(b) > 0\text{,}\) and we apply Lemma 3.3.7 to \(g\) to find \(c\text{.}\) Since \(g(c) = 0\text{,}\) we get \(f(c) = y\text{.}\)
Similarly, if \(f(a) > y > f(b)\text{,}\) define \(g(x) \coloneqq y-f(x)\text{.}\) Again, \(g(a) < 0\) and \(g(b) > 0\text{,}\) and we apply Lemma 3.3.7 to find \(c\text{.}\) As before, \(g(c) = 0\) gives \(f(c) = y\text{.}\)
If a function is continuous, then the restriction to a subset is continuous; if \(f \colon S \to \R\) is continuous and \([a,b] \subset S\text{,}\) then \(f|_{[a,b]}\) is also continuous. We generally apply the theorem to a function continuous on some large set \(S\text{,}\) but we restrict our attention to an interval.
The proof of the lemma tells us how to find the root \(c\text{.}\) The idea is useful not only to pure mathematicians; it is also a practical tool in applied mathematics, where it is known as the bisection method.

Example 3.3.9.

(Bisection method)   The polynomial \(f(x) \coloneqq x^3-2x^2+x-1\) has a real root in \((1,2)\text{.}\) We simply notice that \(f(1) = -1\) and \(f(2) = 1\text{.}\) Hence there must exist a point \(c \in (1,2)\) such that \(f(c) = 0\text{.}\) To find a better approximation of the root we follow the proof of Lemma 3.3.7. We look at 1.5 and find that \(f(1.5) = -0.625\text{.}\) Therefore, there is a root of the polynomial in \((1.5,2)\text{.}\) Next we look at 1.75 and note that \(f(1.75) \approx -0.016\text{.}\) Hence there is a root of \(f\) in \((1.75,2)\text{.}\) Next we look at 1.875 and find that \(f(1.875) \approx 0.44\text{,}\) thus there is a root in \((1.75,1.875)\text{.}\) We follow this procedure until we gain sufficient precision. In fact, the root is at \(c \approx 1.7549\text{.}\)
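The procedure is straightforward to automate. The following is a minimal Python sketch of the bisection procedure from the proof, applied to the example above; the function name bisect_root and the tolerance are our own choices, not part of the text.

```python
def bisect_root(f, a, b, tol=1e-6):
    """Approximate a root of f in [a, b] by bisection.

    Assumes f is continuous with f(a) < 0 and f(b) >= 0, as in Lemma 3.3.7.
    """
    while b - a > tol:
        mid = (a + b) / 2
        if f(mid) >= 0:
            b = mid            # as in the proof: a root remains in [a, mid]
        else:
            a = mid            # a root remains in [mid, b]
    return (a + b) / 2

# Example 3.3.9: x^3 - 2x^2 + x - 1 has a root in (1, 2).
print(bisect_root(lambda x: x**3 - 2*x**2 + x - 1, 1, 2))  # approximately 1.7549
```

Since the interval is halved at each step, reaching the tolerance tol starting from \([1,2]\) takes about \(\log_2(1/\texttt{tol}) \approx 20\) iterations.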
The technique is the simplest method of finding roots of polynomials, a common problem in applied mathematics. In general, finding roots is hard to do quickly, precisely, and automatically. There are other, faster methods of finding roots of polynomials, such as Newton’s method. One advantage of the method above is its simplicity. The moment we find an interval where the intermediate value theorem applies, we are guaranteed to find a root up to a desired precision in finitely many steps. Furthermore, the bisection method finds roots of any continuous function, not just a polynomial.
The theorem guarantees one \(c\) such that \(f(c) = y\text{,}\) but there may be other roots of the equation \(f(c) = y\text{.}\) If we follow the procedure of the proof, we are guaranteed to find approximations to one such root. We need to work harder to find any other roots.
Polynomials of even degree may not have any real roots. For example, there is no real number \(x\) such that \(x^2+1 = 0\text{.}\) Polynomials of odd degree, on the other hand, always have at least one real root.

Theorem 3.3.10.

Let \(f(x)\) be a polynomial of odd degree. Then \(f\) has a real root.

Proof.

Suppose \(f\) is a polynomial of odd degree \(d\text{.}\) We write
\begin{equation*} f(x) = a_d x^d + a_{d-1} x^{d-1} + \cdots + a_1 x + a_0 , \end{equation*}
where \(a_d \not= 0\text{.}\) We divide by \(a_d\) to obtain a monic polynomial, that is, one whose coefficient of \(x^d\) is 1:
\begin{equation*} g(x) \coloneqq x^d + b_{d-1} x^{d-1} + \cdots + b_1 x + b_0 , \end{equation*}
where \(b_k = \nicefrac{a_k}{a_d}\text{.}\) Let us show that \(g(n)\) is positive for some large \(n \in \N\text{.}\) We first compare the highest order term with the rest:
\begin{equation*} \begin{split} \abs{\frac{b_{d-1} n^{d-1} + \cdots + b_1 n + b_0}{n^d}} & = \frac{\abs{b_{d-1} n^{d-1} + \cdots + b_1 n + b_0}}{n^d} \\ & \leq \frac{\abs{b_{d-1}} n^{d-1} + \cdots + \abs{b_1} n + \abs{b_0}}{n^d} \\ & \leq \frac{\abs{b_{d-1}} n^{d-1} + \cdots + \abs{b_1} n^{d-1} + \abs{b_0} n^{d-1}}{n^d} \\ & = \frac{n^{d-1}\bigl(\abs{b_{d-1}} + \cdots + \abs{b_1} + \abs{b_0}\bigr)}{n^d} \\ & = \frac{1}{n} \bigl(\abs{b_{d-1}} + \cdots + \abs{b_1} + \abs{b_0}\bigr) . \end{split} \end{equation*}
Therefore,
\begin{equation*} \lim_{n\to\infty} \frac{b_{d-1} n^{d-1} + \cdots + b_1 n + b_0}{n^d} = 0 . \end{equation*}
Thus there exists an \(M \in \N\) such that
\begin{equation*} \abs{\frac{b_{d-1} M^{d-1} + \cdots + b_1 M + b_0}{M^d}} < 1 , \end{equation*}
which implies
\begin{equation*} -(b_{d-1} M^{d-1} + \cdots + b_1 M + b_0) < M^d . \end{equation*}
Therefore, \(g(M) > 0\text{.}\)
Next, consider \(g(-n)\) for \(n \in \N\text{.}\) By a similar argument, there exists a \(K \in \N\) such that \(b_{d-1} {(-K)}^{d-1} + \cdots + b_1 (-K) + b_0 < K^d\) and therefore \(g(-K) < 0\) (see Exercise 3.3.5). In the proof, make sure you use the fact that \(d\) is odd. In particular, if \(d\) is odd, then \({(-n)}^d = -(n^d)\text{.}\)
We appeal to the intermediate value theorem to find a \(c \in (-K,M)\text{,}\) such that \(g(c) = 0\text{.}\) As \(g(x) = \frac{f(x)}{a_d}\text{,}\) then \(f(c) = 0\text{,}\) and the proof is done.
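For a concrete check of the argument (a small Python sketch with an example monic cubic of our own choosing, not one from the text), one can compute the bound \(\abs{b_{d-1}} + \cdots + \abs{b_1} + \abs{b_0}\) from the proof and confirm the sign change:

```python
# Example monic polynomial of odd degree: g(x) = x^3 - 4x^2 - 7x + 10
# (coefficients chosen for illustration only).
def g(x):
    return x**3 - 4*x**2 - 7*x + 10

bound = abs(-4) + abs(-7) + abs(10)   # |b_2| + |b_1| + |b_0| = 21
M = bound + 1                         # for n > bound, the lower-order terms
                                      # are smaller than n^3 in absolute value
print(g(M) > 0, g(-M) < 0)            # True True: a root lies in (-M, M)
```

Bisection on \([-M, M]\) would then locate the root, exactly as in Example 3.3.9.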

Example 3.3.11.

You may recall how hard we worked in Example 1.2.3 to show that \(\sqrt{2}\) exists. With Bolzano’s theorem, we can prove the existence of the \(k\)th root of any positive number \(y > 0\) without any effort. We claim that for every \(k \in \N\) and every \(y > 0\text{,}\) there exists a number \(x > 0\) such that \(x^k = y\text{.}\)
Proof: If \(y=1\text{,}\) then it is clear, so assume \(y\not= 1\text{.}\) Let \(f(x) \coloneqq x^k - y\text{.}\) We notice \(f(0) = -y < 0\text{.}\) If \(y < 1\text{,}\) then \(f(1) = 1^k -y > 0\text{.}\) If \(y > 1\text{,}\) then \(f(y) = y^k-y = y(y^{k-1}-1) > 0\text{.}\) In either case, apply Bolzano’s theorem to find an \(x > 0\) such that \(f(x) = 0\text{,}\) or in other words \(x^k = y\text{.}\)
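As a computational sketch of this argument (the name kth_root and the tolerance are our own; we simply reuse the bisection idea from Example 3.3.9 on the interval \([0,1]\) or \([0,y]\) from the proof):

```python
def kth_root(y, k, tol=1e-12):
    """Approximate the k-th root of y > 0 by bisecting f(x) = x**k - y."""
    def f(x):
        return x**k - y

    a, b = (0.0, 1.0) if y < 1 else (0.0, float(y))   # f(a) < 0 <= f(b)
    while b - a > tol:
        mid = (a + b) / 2
        if f(mid) >= 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2

print(kth_root(2, 2))   # approximately 1.41421356, the square root of 2
```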

Example 3.3.12.

Interestingly, there exist discontinuous functions with the intermediate value property. The function
\begin{equation*} f(x) \coloneqq \begin{cases} \sin(\nicefrac{1}{x}) & \text{if } x \not= 0, \\ 0 & \text{if } x=0, \end{cases} \end{equation*}
is not continuous at \(0\text{;}\) however, \(f\) has the intermediate value property: Whenever \(a < b\) and \(y\) is such that \(f(a) < y < f(b)\) or \(f(a) > y > f(b)\text{,}\) there exists a \(c \in (a,b)\) such that \(f(c) = y\text{.}\) See Figure 3.2 for a graph of \(\sin(\nicefrac{1}{x})\text{.}\) The proof is left as Exercise 3.3.4.
The intermediate value theorem says that if \(f \colon [a,b] \to \R\) is continuous, then \(f\bigl([a,b]\bigr)\) contains all the values between \(f(a)\) and \(f(b)\text{.}\) In fact, more is true. Combining all the results of this section, one can prove the following useful corollary, whose proof is left as an exercise. Hint: See Figure 3.8 and notice what the endpoints of the image interval are.

Corollary 3.3.13.

Suppose \(f \colon [a,b] \to \R\) is a continuous function. Then the direct image \(f\bigl([a,b]\bigr)\) is a closed and bounded interval or a single number.

Figure 3.8. The image of a continuous \(f \colon [a,b] \to \R\text{.}\)

Subsection 3.3.3 Exercises

Exercise 3.3.1.

Find an example of a discontinuous function \(f \colon [0,1] \to \R\) where the conclusion of the intermediate value theorem fails.

Exercise 3.3.2.

Find an example of a bounded discontinuous function \(f \colon [0,1] \to \R\) that has neither an absolute minimum nor an absolute maximum.

Exercise 3.3.3.

Let \(f \colon (0,1) \to \R\) be a continuous function such that \(\displaystyle \lim_{x\to 0} f(x) = \displaystyle \lim_{x\to 1} f(x) = 0\text{.}\) Show that \(f\) achieves either an absolute minimum or an absolute maximum on \((0,1)\) (but perhaps not both).

Exercise 3.3.4.

Let
\begin{equation*} f(x) \coloneqq \begin{cases} \sin(\nicefrac{1}{x}) & \text{if } x \not= 0, \\ 0 & \text{if } x=0. \end{cases} \end{equation*}
Show that \(f\) has the intermediate value property. That is, whenever \(a < b\) and \(y\) is such that \(f(a) < y < f(b)\) or \(f(a) > y > f(b)\text{,}\) show that there exists a \(c \in (a,b)\) such that \(f(c) = y\text{.}\)

Exercise 3.3.5.

Suppose \(g(x)\) is a monic polynomial of odd degree \(d\text{,}\) that is,
\begin{equation*} g(x) = x^d + b_{d-1} x^{d-1} + \cdots + b_1 x + b_0 , \end{equation*}
for some real numbers \(b_{0}, b_1, \ldots, b_{d-1}\text{.}\) Show that there exists a \(K \in \N\) such that \(g(-K) < 0\text{.}\) Hint: Make sure to use the fact that \(d\) is odd. You will have to use that \({(-n)}^d = -(n^d)\text{.}\)

Exercise 3.3.6.

Suppose \(g(x)\) is a monic polynomial of positive even degree \(d\text{,}\) that is,
\begin{equation*} g(x) = x^d + b_{d-1} x^{d-1} + \cdots + b_1 x + b_0 , \end{equation*}
for some real numbers \(b_{0}, b_1, \ldots, b_{d-1}\text{.}\) Suppose \(g(0) < 0\text{.}\) Show that \(g\) has at least two distinct real roots.

Exercise 3.3.7.

Prove Corollary 3.3.13: Suppose \(f \colon [a,b] \to \R\) is a continuous function. Prove that the direct image \(f\bigl([a,b]\bigr)\) is a closed and bounded interval or a single number.

Exercise 3.3.8.

Suppose \(f \colon \R \to \R\) is continuous and periodic with period \(P > 0\text{.}\) That is, \(f(x+P) = f(x)\) for all \(x \in \R\text{.}\) Show that \(f\) achieves an absolute minimum and an absolute maximum.

Exercise 3.3.9.

(Challenging)   Suppose \(f(x)\) is a bounded polynomial, in other words, there is an \(M\) such that \(\abs{f(x)} \leq M\) for all \(x \in \R\text{.}\) Prove that \(f\) must be a constant.

Exercise 3.3.10.

Suppose \(f \colon [0,1] \to [0,1]\) is continuous. Show that \(f\) has a fixed point, in other words, show that there exists an \(x \in [0,1]\) such that \(f(x) = x\text{.}\)

Exercise 3.3.11.

Find an example of a continuous bounded function \(f \colon \R \to \R\) that does not achieve an absolute minimum nor an absolute maximum on \(\R\text{.}\)

Exercise 3.3.12.

Suppose \(f \colon \R \to \R\) is continuous such that \(x \leq f(x) \leq x+1\) for all \(x \in \R\text{.}\) Find \(f(\R)\text{.}\)

Exercise 3.3.13.

True/False, prove or find a counterexample. If \(f \colon \R \to \R\) is a continuous function such that \(f|_{\Z}\) is bounded, then \(f\) is bounded.

Exercise 3.3.14.

Suppose \(f \colon [0,1] \to (0,1)\) is a bijection. Prove that \(f\) is not continuous.

Exercise 3.3.15.

Suppose \(f \colon \R \to \R\) is continuous.
  1. Prove that if there is a \(c\) such that \(f(c)f(-c) < 0\text{,}\) then there is a \(d \in \R\) such that \(f(d) = 0\text{.}\)
  2. Find a continuous function \(f\) such that \(f(\R) = \R\text{,}\) but \(f(x)f(-x) \geq 0\) for all \(x \in \R\text{.}\)

Exercise 3.3.16.

Suppose \(g(x)\) is a monic polynomial of even degree \(d\text{,}\) that is,
\begin{equation*} g(x) = x^d + b_{d-1} x^{d-1} + \cdots + b_1 x + b_0 , \end{equation*}
for some real numbers \(b_{0}, b_1, \ldots, b_{d-1}\text{.}\) Show that \(g\) achieves an absolute minimum on \(\R\text{.}\)

Exercise 3.3.17.

Suppose \(f(x)\) is a polynomial of degree \(d\) and \(f(\R) = \R\text{.}\) Show that \(d\) is odd.