Section 11.5 Maximum principle and the fundamental theorem of algebra

Note: half a lecture, optional
In this section we study the local behavior of polynomials, and analytic functions in general, and the growth of polynomials as \(z\) goes to infinity. As an application we prove the fundamental theorem of algebra: Any nonconstant polynomial has a complex root.

Lemma 11.5.1.

Let \(p(z)\) be a nonconstant polynomial, or more generally a nonconstant power series converging in \(B(z_0,\epsilon)\text{,}\) and suppose \(p(z_0) \not= 0\text{.}\) Then for every \(\delta > 0\text{,}\) there exists a \(w \in B(z_0,\delta)\) such that \(\sabs{p(w)} < \sabs{p(z_0)}\text{.}\)

Proof.

We prove this lemma for a polynomial and leave the general case as Exercise 11.5.1. Without loss of generality assume that \(z_0 = 0\) and \(p(0) = 1\text{.}\) Write
\begin{equation*} p(z) = 1+a_kz^k + a_{k+1}z^{k+1} + \cdots + a_d z^d , \end{equation*}
where \(a_k \not= 0\text{.}\) Pick \(t\) such that \(a_k e^{ikt} = -\sabs{a_k}\text{,}\) which we can do by the discussion on trigonometric functions. Suppose \(r > 0\) is small enough such that \(1-r^k \sabs{a_k} > 0\text{.}\) We have
\begin{equation*} p(r e^{it}) = 1-r^k \sabs{a_k} + r^{k+1}a_{k+1}e^{i(k+1)t} + \cdots + r^{d}a_{d}e^{idt} . \end{equation*}
So
\begin{equation*} \begin{split} \abs{ p(r e^{it}) } - \abs{ r^{k+1}a_{k+1}e^{i(k+1)t} + \cdots + r^{d}a_{d}e^{idt} } & \leq \abs{ p(r e^{it}) - r^{k+1}a_{k+1}e^{i(k+1)t} - \cdots - r^{d}a_{d}e^{idt} } \\ & = \abs{ 1-r^k \sabs{a_k} } = 1-r^k \sabs{a_k} . \end{split} \end{equation*}
In other words,
\begin{equation*} \abs{ p(r e^{it}) } \leq 1-r^k \left( \sabs{a_k} - r \abs{ a_{k+1}e^{i(k+1)t} + \cdots + r^{d-k-1}a_{d}e^{idt} } \right) . \end{equation*}
For small enough \(r\text{,}\) the expression in the parentheses is positive as \(\sabs{a_k} > 0\text{.}\) Hence, \(\abs{p(re^{it})} < 1 = p(0)\text{.}\)
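The construction in the proof is easy to test numerically. Here is a minimal Python sketch (the polynomial and its coefficients are arbitrary illustrative choices): it picks \(t\) with \(a_k e^{ikt} = -\sabs{a_k}\) and confirms \(\abs{p(re^{it})} < 1 = p(0)\) for a few small values of \(r\text{.}\)

import cmath

# Arbitrary example polynomial with p(0) = 1; the lowest nonzero
# coefficient past the constant term is a_2, so k = 2 here.
a = {2: 1 - 2j, 3: 0.5 + 1j}
k = 2

def p(z):
    return 1 + sum(c * z**n for n, c in a.items())

# Choose t with a_k e^{ikt} = -|a_k|, that is, k*t + arg(a_k) = pi.
t = (cmath.pi - cmath.phase(a[k])) / k

for r in [0.5, 0.1, 0.01]:
    print(r, abs(p(r * cmath.exp(1j * t))))  # each modulus is < 1 = p(0)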
What the lemma says is that the only minima of the modulus of an analytic function occur at its zeros. The lemma is sometimes called the minimum modulus principle. If \(f\) is analytic and nonzero at a point, then \(\nicefrac{1}{f}\) is analytic near that point. Applying the lemma and the identity theorem, one obtains the maximum modulus principle, or sometimes just the maximum principle.

Theorem 11.5.2. Maximum principle.

Let \(U \subset \C\) be open and connected and let \(f \colon U \to \C\) be analytic. If \(\sabs{f}\) attains a maximum at some point of \(U\text{,}\) then \(f\) is constant.

The details of the proof are left as Exercise 11.5.2.

Remark 11.5.3.

The lemma (and the maximum principle) does not hold if we restrict to the real numbers. For example, \(x^2+1\) has a minimum at \(x=0\text{,}\) but no zero there. There is a \(w\) arbitrarily close to \(0\) such that \(\sabs{w^2+1} < 1\text{,}\) but this \(w\) is necessarily not real. Letting \(w = i\epsilon\) for small \(\epsilon > 0\) works: \(\sabs{(i\epsilon)^2+1} = 1-\epsilon^2 < 1\text{.}\)
The moral of the story is that if \(p(0) = 1\text{,}\) then very close to 0, the series (or polynomial) looks like \(1+az^k\text{,}\) and \(1+az^k\) has no minimum at the origin. All the higher powers of \(z\) are too small to make a difference. For polynomials, we find similar behavior at infinity.

Lemma 11.5.4.

Let \(p(z)\) be a polynomial of degree \(d \geq 1\text{.}\) Then for every \(M > 0\text{,}\) there exists an \(R > 0\) such that \(\sabs{p(z)} \geq M\) whenever \(\sabs{z} \geq R\text{.}\) That is, \(\lim_{z \to \infty} \sabs{p(z)} = \infty\text{.}\)

Proof.

Write \(p(z) = a_0 + a_1 z + \cdots + a_d z^d\) and suppose that \(d \geq 1\) and \(a_d \not= 0\text{.}\) Let \(R > 0\) be a constant to be chosen, and consider \(z\) with \(\sabs{z} \geq R\) (so also \(\sabs{z}^{-1} \leq R^{-1}\)). We estimate:
\begin{equation*} \begin{split} \sabs{p(z)} & \geq \sabs{a_d z^d} - \sabs{a_0} - \sabs{a_1 z} - \cdots - \sabs{a_{d-1} z^{d-1} } \\ & = \sabs{z}^d \bigl( \sabs{a_d} - \sabs{a_0} \, \sabs{z}^{-d} - \sabs{a_1} \, \sabs{z}^{-d+1} - \cdots - \sabs{a_{d-1}} \, \sabs{z}^{-1} \bigr) \\ & \geq R^d \bigl(\sabs{a_d} - \sabs{a_0}R^{-d} - \sabs{a_1}R^{1-d} - \cdots - \sabs{a_{d-1}}R^{-1} \bigr) . \end{split} \end{equation*}
The expression in parentheses is positive for all large enough \(R\text{,}\) as \(\sabs{a_d} > 0\text{.}\) In particular, for large enough \(R\text{,}\) the expression is greater than \(\frac{\sabs{a_d}}{2}\text{,}\) and so
\begin{equation*} \sabs{p(z)} \geq R^d \frac{\sabs{a_d}}{2} . \end{equation*}
Therefore, given \(M > 0\text{,}\) we can pick \(R\) large enough so that also \(R^d \frac{\sabs{a_d}}{2} \geq M\text{,}\) and the lemma follows.
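To illustrate the estimate numerically, the following minimal Python sketch (with an arbitrary example polynomial) checks the bound \(\sabs{p(z)} \geq R^d \frac{\sabs{a_d}}{2}\) on a circle of large radius.

import cmath

coeffs = [3, -2, 0, 1]   # p(z) = 3 - 2z + z^3, so d = 3 and |a_d| = 1
d = len(coeffs) - 1

def p(z):
    return sum(c * z**n for n, c in enumerate(coeffs))

R = 10.0
circle = [R * cmath.exp(2j * cmath.pi * j / 360) for j in range(360)]
print(all(abs(p(z)) >= R**d * abs(coeffs[-1]) / 2 for z in circle))  # True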
This second lemma does not generalize to analytic functions, even those defined on the entire plane \(\C\text{.}\) The function \(\cos(z)\) is a counterexample. We had to look at the term with the largest degree, and we only have such a term for a polynomial. In fact, something that we will not prove is that an analytic function defined on all of \(\C\) satisfying the conclusion of the lemma must be a polynomial.
The moral of the story here is that for very large \(\sabs{z}\) (far away from the origin) a polynomial of degree \(d\) really looks like a constant multiple of \(z^d\text{.}\)

Theorem 11.5.5. Fundamental theorem of algebra.

If \(p(z)\) is a nonconstant polynomial, then there exists a \(z_0 \in \C\) such that \(p(z_0) = 0\text{.}\)

Proof.

Let \(\mu \coloneqq \inf \bigl\{ \sabs{p(z)} : z \in \C \bigr\}\text{.}\) By the second lemma, there is an \(R\) such that \(\sabs{p(z)} \geq \mu+1\) for all \(z\) with \(\sabs{z} \geq R\text{.}\) Therefore, every \(z\) with \(\sabs{p(z)}\) close to \(\mu\) must be in the closed ball \(C(0,R) = \bigl\{ z \in \C : \sabs{z} \leq R \bigr\}\text{.}\) As \(\sabs{p(z)}\) is a continuous real-valued function, it achieves its minimum on the compact (closed and bounded) set \(C(0,R)\text{,}\) and this minimum must be \(\mu\text{.}\) So there is a \(z_0 \in C(0,R)\) such that \(\sabs{p(z_0)} = \mu\text{,}\) and that is a minimum of \(\sabs{p(z)}\) on all of \(\C\text{.}\) If \(p(z_0) \not= 0\text{,}\) the first lemma would give a \(w\) with \(\sabs{p(w)} < \sabs{p(z_0)} = \mu\text{,}\) contradicting the definition of \(\mu\text{.}\) Hence \(p(z_0) = 0\text{.}\)
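While the proof is not constructive, it does suggest a crude numerical procedure: minimize \(\sabs{p(z)}\) over a large region. A minimal Python sketch, with an arbitrary example polynomial and grid:

# A crude grid search mirroring the proof: the minimum of |p| over a
# large region is the global minimum, and by the first lemma it can
# occur only at a zero of p.
def p(z):
    return z**3 - 2*z + 2

grid = (complex(x, y) / 10 for x in range(-100, 101) for y in range(-100, 101))
z0 = min(grid, key=lambda z: abs(p(z)))
print(z0, abs(p(z0)))  # |p(z0)| is small; an actual root of p lies nearby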
The fundamental theorem also does not generalize to analytic functions. The exponential \(e^{z}\) is an analytic function on \(\C\) with no zeros.

Subsection 11.5.1 Exercises

Exercise 11.5.1.

Prove Lemma 11.5.1 for an analytic function. That is, suppose that \(p(z)\) is a nonconstant power series converging in \(B(z_0,\epsilon)\) with \(p(z_0) \not= 0\text{,}\) and show that for every \(\delta > 0\text{,}\) there exists a \(w \in B(z_0,\delta)\) such that \(\sabs{p(w)} < \sabs{p(z_0)}\text{.}\)
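
Exercise 11.5.2.

Prove the maximum principle stated above. Hint: If \(f(z_0) \not= 0\text{,}\) apply Lemma 11.5.1 to \(\nicefrac{1}{f}\) near \(z_0\text{.}\) Then use the identity theorem.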

Exercise 11.5.3.

Let \(U \subset \C\) be open and \(z_0 \in U\text{.}\) Suppose \(f \colon U \to \C\) is analytic and \(f(z_0) = 0\text{.}\) Show that there exists an \(\epsilon > 0\) such that either \(f(z) \not= 0\) for all \(z\) with \(0 < \sabs{z-z_0} < \epsilon\text{,}\) or \(f(z) = 0\) for all \(z \in B(z_0,\epsilon)\text{.}\) In other words, zeros of analytic functions are isolated. Of course, the same holds for polynomials.
A rational function is a function \(f(z) \coloneqq \frac{p(z)}{q(z)}\) where \(p\) and \(q\) are polynomials and \(q\) is not identically zero. A point \(z_0 \in \C\) where \(f(z_0) = 0\) (and therefore \(p(z_0) = 0\)) is called a zero. A point \(z_0 \in \C\) is called a singularity of \(f\) if \(q(z_0) = 0\text{.}\) As all zeros of polynomials are isolated, all singularities of rational functions are isolated; they are therefore called isolated singularities. An isolated singularity is called removable if \(\lim_{z \to z_0} f(z)\) exists. An isolated singularity is called a pole if \(\lim_{z \to z_0} \sabs{f(z)} = \infty\text{.}\) We say \(f\) has a pole at \(\infty\) if
\begin{equation*} \lim_{z \to \infty} \sabs{f(z)} = \infty , \end{equation*}
that is, if for every \(M > 0\) there exists an \(R > 0\) such that \(\sabs{f(z)} > M\) for all \(z\) with \(\sabs{z} > R\text{.}\)
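For example, the rational function
\begin{equation*} f(z) = \frac{z^2-1}{z-1} \end{equation*}
has a removable singularity at \(z_0 = 1\text{:}\) for \(z \not= 1\) we have \(f(z) = z+1\text{,}\) so \(\lim_{z \to 1} f(z) = 2\text{.}\) It also has a pole at \(\infty\text{.}\) On the other hand, \(g(z) \coloneqq \frac{1}{z-1}\) has a pole at \(z_0 = 1\) but not at \(\infty\text{,}\) since \(\lim_{z \to \infty} \sabs{g(z)} = 0\text{.}\)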

Exercise 11.5.4.

Show that a rational function that is not identically zero has at most finitely many zeros and singularities. In fact, show that if \(p\) is a polynomial of degree \(n > 0\text{,}\) then it has at most \(n\) zeros.
Hint: If \(z_0\) is a zero of \(p\text{,}\) then without loss of generality assume \(z_0 = 0\text{.}\) Then use induction.

Exercise 11.5.5.

Prove that if \(z_0\) is a removable singularity of a rational function \(f(z) \coloneqq \frac{p(z)}{q(z)}\text{,}\) then there exist polynomials \(\widetilde{p}\) and \(\widetilde{q}\) such that \(\widetilde{q}(z_0) \not= 0\) and \(f(z) = \frac{\widetilde{p}(z)}{\widetilde{q}(z)}\text{.}\)
Hint: Without loss of generality assume \(z_0 = 0\text{.}\)

Exercise 11.5.6.

Given a rational function \(f\) with an isolated singularity at \(z_0\text{,}\) show that \(z_0\) is either removable or a pole.
Hint: See the previous exercise.

Exercise 11.5.7.

Let \(f\) be a rational function and let \(S \subset \C\) be the set of singularities of \(f\text{.}\) Prove that \(f\) is equal to a polynomial on \(\C \setminus S\) if and only if \(f\) has a pole at infinity and all the singularities are removable.
Hint: See previous exercises.