
Section 2.6 More on series

Note: up to 2–3 lectures (optional, can safely be skipped or covered partially)

Subsection 2.6.1 Root test

A test similar to the ratio test is the so-called root test. In fact, the proof of this test is similar and somewhat easier. Again, the idea is to generalize what happens for the geometric series.

Proposition (Root test). Let \(\sum x_n\) be a series and let \(L := \limsup_{n\to\infty} \, {\abs{x_n}}^{1/n}\text{.}\) If \(L < 1\text{,}\) then the series converges absolutely. If \(L > 1\text{,}\) then the series diverges.

Proof.

If \(L > 1\text{,}\) then there exists a subsequence \(\{ x_{n_k} \}\) such that \(L = \lim_{k\to\infty} \, {\abs{x_{n_k}}}^{1/n_k}\text{.}\) Let \(r\) be such that \(L > r > 1\text{.}\) There exists an \(M\) such that for all \(k \geq M\text{,}\) we have \({\abs{x_{n_k}}}^{1/n_k} > r > 1\text{,}\) or in other words \(\abs{x_{n_k}} > r^{n_k} > 1\text{.}\) The subsequence \(\{ \abs{x_{n_k}} \}\text{,}\) and therefore also \(\{ \abs{x_{n}} \}\text{,}\) cannot possibly converge to zero, and so the series diverges.

Now suppose \(L < 1\text{.}\) Pick \(r\) such that \(L < r < 1\text{.}\) By definition of limit supremum, there is an \(M\) such that for all \(n \geq M\text{,}\)

\begin{equation*} \sup \bigl\{ {\abs{x_k}}^{1/k} : k \geq n \bigr\} < r . \end{equation*}

Therefore, for all \(n \geq M\text{,}\)

\begin{equation*} {\abs{x_n}}^{1/n} < r , \qquad \text{or in other words} \qquad \abs{x_n} < r^n . \end{equation*}

Let \(k > M\text{,}\) and estimate the \(k\)th partial sum:

\begin{equation*} \sum_{n=1}^k \abs{x_n} = \left( \sum_{n=1}^M \abs{x_n} \right) + \left( \sum_{n=M+1}^k \abs{x_n} \right) \leq \left( \sum_{n=1}^M \abs{x_n} \right) + \left( \sum_{n=M+1}^k r^n \right) . \end{equation*}

As \(0 < r < 1\text{,}\) the geometric series \(\sum_{n=M+1}^\infty r^n\) converges to \(\frac{r^{M+1}}{1-r}\text{.}\) As everything is positive,

\begin{equation*} \sum_{n=1}^k \abs{x_n} \leq \left( \sum_{n=1}^M \abs{x_n} \right) + \frac{r^{M+1}}{1-r} . \end{equation*}

Thus the sequence of partial sums of \(\sum \abs{x_n}\) is bounded, and the series converges. Therefore, \(\sum x_n\) converges absolutely.
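The quantity in the root test is easy to probe numerically. The following sketch (an illustration only, not part of the text; the example sequence \(x_n = n/2^n\) is our own choice) computes \({\abs{x_n}}^{1/n}\) for a few values of \(n\); the estimates approach \(\nicefrac{1}{2} < 1\), consistent with absolute convergence.

```python
# Numerical sketch of the root test quantity |x_n|^(1/n).
# Here x_n = n / 2^n is a hypothetical example; the estimates approach
# 1/2 < 1, so the root test predicts absolute convergence.

def root_test_estimate(x, n):
    """Return |x(n)|^(1/n), whose limit superior the root test examines."""
    return abs(x(n)) ** (1.0 / n)

x = lambda n: n / 2.0 ** n

# Estimates at increasing n; they decrease toward 0.5.
estimates = [root_test_estimate(x, n) for n in (10, 100, 1000)]
```

Of course a finite computation cannot prove anything about a limit superior; this only illustrates the behavior.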

Subsection 2.6.2 Alternating series test

The tests we have seen so far only addressed absolute convergence. The following test gives a large supply of conditionally convergent series.

Proposition (Alternating series test). Let \(\{ x_n \}\) be a decreasing sequence of positive real numbers such that \(\lim_{n\to\infty} x_n = 0\text{.}\) Then \(\sum_{n=1}^\infty {(-1)}^n x_n\) converges.

Proof.

Let \(s_m := \sum_{k=1}^m {(-1)}^k x_k\) be the \(m\)th partial sum. Then write

\begin{equation*} s_{2n} = \sum_{k=1}^{2n} {(-1)}^k x_k = (-x_1 + x_2) + \cdots + (-x_{2n-1} + x_{2n}) = \sum_{k=1}^{n} (-x_{2k-1} + x_{2k}) . \end{equation*}

The sequence \(\{ x_k \}\) is decreasing and so \((-x_{2k-1}+x_{2k}) \leq 0\) for all \(k\text{.}\) Therefore, the subsequence \(\{ s_{2n} \}\) of partial sums is a decreasing sequence. Similarly, \((x_{2k}-x_{2k+1}) \geq 0\text{,}\) and so

\begin{equation*} s_{2n} = - x_1 + ( x_2 - x_3 ) + \cdots + ( x_{2n-2} - x_{2n-1} ) + x_{2n} \geq -x_1 . \end{equation*}

The sequence \(\{ s_{2n} \}\) is decreasing and bounded below, so it converges. Let \(a := \lim\, s_{2n}\text{.}\)

We wish to show that \(\lim\, s_m = a\) (and not just for the subsequence). Notice

\begin{equation*} s_{2n+1} = s_{2n} + x_{2n+1} . \end{equation*}

Given \(\epsilon > 0\text{,}\) pick \(M\) such that \(\abs{s_{2n}-a} < \nicefrac{\epsilon}{2}\) whenever \(2n \geq M\text{.}\) Since \(\lim\, x_n = 0\text{,}\) we can take \(M\) larger if necessary so that also \(x_{2n+1} < \nicefrac{\epsilon}{2}\) whenever \(2n \geq M\text{.}\) If \(2n \geq M\text{,}\) we have \(\abs{s_{2n}-a} < \nicefrac{\epsilon}{2} < \epsilon\text{,}\) so we just need to check the situation for \(s_{2n+1}\text{:}\)

\begin{equation*} \abs{s_{2n+1}-a} = \abs{s_{2n}-a + x_{2n+1}} \leq \abs{s_{2n}-a} + x_{2n+1} < \nicefrac{\epsilon}{2}+ \nicefrac{\epsilon}{2} = \epsilon . \qedhere \end{equation*}

Notably, there exist conditionally convergent series where the absolute values of the terms go to zero arbitrarily slowly. The series

\begin{equation*} \sum_{n=1}^\infty \frac{{(-1)}^n}{n^p} \end{equation*}

converges for arbitrarily small \(p > 0\text{,}\) but it does not converge absolutely when \(p \leq 1\text{.}\)
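This slow convergence can be seen concretely in a small numerical sketch (an illustration only; the value \(p = 0.1\) is our own choice). Consecutive partial sums of \(\sum {(-1)}^n / n^p\) differ by exactly the magnitude of the last term, \(1/(m+1)^p\), so they squeeze the limit between them even though the terms shrink very slowly.

```python
# Sketch: partial sums of sum_{n>=1} (-1)^n / n^p for a small p > 0.
# Consecutive partial sums differ by the last term, 1/(m+1)^p, so the
# limit is bracketed between them, however slowly the terms decay.

def alt_partial_sum(p, m):
    return sum((-1) ** n / n ** p for n in range(1, m + 1))

p = 0.1
gap_small = abs(alt_partial_sum(p, 101) - alt_partial_sum(p, 100))
gap_large = abs(alt_partial_sum(p, 10001) - alt_partial_sum(p, 10000))
# gap_small is about 1/101^0.1; gap_large about 1/10001^0.1:
# the gaps shrink, but very slowly.
```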

Subsection 2.6.3 Rearrangements

Absolutely convergent series behave as we imagine they should. For example, absolutely convergent series can be summed in any order whatsoever. Nothing of the sort holds for conditionally convergent series (see Example 2.6.4 and Exercise 2.6.3).

Consider a series

\begin{equation*} \sum_{n=1}^\infty x_n . \end{equation*}

Given a bijective function \(\sigma \colon \N \to \N\text{,}\) the corresponding rearrangement is the following series:

\begin{equation*} \sum_{k=1}^\infty x_{\sigma(k)} . \end{equation*}

We simply sum the series in a different order.

We will show that absolute convergence is preserved under rearrangement: if \(\sum x_n\) converges absolutely to \(x\) and \(\sigma \colon \N \to \N\) is a bijection, then \(\sum x_{\sigma(k)}\) converges absolutely to \(x\text{.}\) In other words, a rearrangement of an absolutely convergent series converges (absolutely) to the same number.

Proof.

Let \(\epsilon > 0\) be given. As \(\sum x_n\) is absolutely convergent, take \(M\) such that

\begin{equation*} \abs{\left(\sum_{n=1}^M x_n \right) - x} < \frac{\epsilon}{2} \qquad \text{and} \qquad \sum_{n=M+1}^\infty \abs{x_n} < \frac{\epsilon}{2} . \end{equation*}

As \(\sigma\) is a bijection, there exists a number \(K\) such that for each \(n \leq M\text{,}\) there exists \(k \leq K\) such that \(\sigma(k) = n\text{.}\) In other words \(\{ 1,2,\ldots,M \} \subset \sigma\bigl(\{ 1,2,\ldots,K \} \bigr)\text{.}\)

For \(N \geq K\text{,}\) let \(Q := \max \sigma\bigl(\{ 1,2,\ldots,N \}\bigr)\text{.}\) Compute

\begin{equation*} \begin{split} \abs{\left( \sum_{n=1}^N x_{\sigma(n)} \right) - x} & = \abs{ \left( \sum_{n=1}^M x_n + \sum_{\substack{n=1\\\sigma(n) > M}}^N x_{\sigma(n)} \right) - x} \\ & \leq \abs{ \left( \sum_{n=1}^M x_n \right) - x} + \sum_{\substack{n=1\\\sigma(n) > M}}^N \abs{x_{\sigma(n)}} \\ & \leq \abs{ \left( \sum_{n=1}^M x_n \right) - x} + \sum_{n=M+1}^Q \abs{x_{n}} \\ & < \nicefrac{\epsilon}{2} + \nicefrac{\epsilon}{2} = \epsilon . \end{split} \end{equation*}

So \(\sum x_{\sigma(n)}\) converges to \(x\text{.}\) To see that the convergence is absolute, we apply the argument above to \(\sum \abs{x_n}\) to show that \(\sum \abs{x_{\sigma(n)}}\) converges.

Example 2.6.4.

Let us show that the alternating harmonic series \(\sum \frac{{(-1)}^{n+1}}{n}\text{,}\) which does not converge absolutely, can be rearranged to converge to anything. The series of odd terms and the series of even terms diverge to plus infinity and minus infinity respectively (prove this!):

\begin{equation*} \sum_{m=1}^\infty \frac{1}{2m-1} = \infty, \qquad \text{and} \qquad \sum_{m=1}^\infty \frac{-1}{2m} = -\infty . \end{equation*}

Let \(a_n := \frac{{(-1)}^{n+1}}{n}\) for simplicity, let an arbitrary number \(L \in \R\) be given, and set \(\sigma(1) := 1\text{.}\) Suppose we have defined \(\sigma(n)\) for all \(n \leq N\text{.}\) If

\begin{equation*} \sum_{n=1}^N a_{\sigma(n)} \leq L , \end{equation*}

then let \(\sigma(N+1) := k\) be the smallest odd \(k \in \N\) that we have not used yet, that is, \(\sigma(n) \not= k\) for all \(n \leq N\text{.}\) Otherwise, let \(\sigma(N+1) := k\) be the smallest even \(k\) that we have not yet used.

By construction, \(\sigma \colon \N \to \N\) is one-to-one. It is also onto: if we keep adding odd (resp. even) terms, we eventually pass \(L\) and switch to the evens (resp. odds). Hence we switch infinitely many times.

Finally, let \(N\) be an index at which we just passed \(L\) and switched. For example, suppose we have just switched from odd to even (so we start subtracting), and let \(N' > N\) be the index where we first switch back from even to odd. Then

\begin{equation*} L + \frac{1}{\sigma(N)} \geq \sum_{n=1}^{N-1} a_{\sigma(n)} > \sum_{n=1}^{N'-1} a_{\sigma(n)} > L- \frac{1}{\sigma(N')}. \end{equation*}

And similarly for switching in the other direction. Therefore, the sum up to \(N'-1\) is within \(\frac{1}{\min \{ \sigma(N), \sigma(N') \}}\) of \(L\text{.}\) As we switch infinitely many times we obtain that \(\sigma(N) \to \infty\) and \(\sigma(N') \to \infty\text{,}\) and hence

\begin{equation*} \sum_{n=1}^\infty a_{\sigma(n)} = \sum_{n=1}^\infty \frac{{(-1)}^{\sigma(n)+1}}{\sigma(n)} = L . \end{equation*}

Here is an example to illustrate the proof. Suppose \(L=1.2\text{;}\) then the order is

\begin{equation*} 1+\nicefrac{1}{3}-\nicefrac{1}{2}+\nicefrac{1}{5}+\nicefrac{1}{7}+\nicefrac{1}{9}-\nicefrac{1}{4}+\nicefrac{1}{11}+\nicefrac{1}{13}-\nicefrac{1}{6} +\nicefrac{1}{15}+\nicefrac{1}{17}+\nicefrac{1}{19} - \nicefrac{1}{8} + \cdots . \end{equation*}

At this point we are no more than \(\nicefrac{1}{8}\) from the limit.
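The construction is easy to carry out by machine. The sketch below (an illustration of Example 2.6.4, with the target \(L = 1.2\) from the text) greedily takes the next unused odd-indexed term while the partial sum is at most \(L\), and the next unused even-indexed term otherwise.

```python
# Sketch of the greedy rearrangement from Example 2.6.4 applied to the
# alternating harmonic series a_n = (-1)^(n+1)/n: take the next unused
# positive (odd-index) term while the partial sum is <= L, otherwise
# the next unused negative (even-index) term.

def rearranged_partial_sums(L, steps):
    next_odd, next_even = 1, 2   # smallest unused odd / even index
    s, sums = 0.0, []
    for _ in range(steps):
        if s <= L:
            s += 1.0 / next_odd   # a_n = +1/n for odd n
            next_odd += 2
        else:
            s -= 1.0 / next_even  # a_n = -1/n for even n
            next_even += 2
        sums.append(s)
    return sums

sums = rearranged_partial_sums(1.2, 100000)
# The first sums reproduce 1, 1 + 1/3, 1 + 1/3 - 1/2, ... from the text,
# and the later sums hover ever closer to 1.2.
```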

Subsection 2.6.4 Multiplication of series

As we have already mentioned, multiplication of series is somewhat harder than addition. If at least one of the series converges absolutely, then we can use the following theorem. For this result, it is convenient to start the series at 0, rather than at 1.

Theorem (Mertens). Suppose \(\sum_{n=0}^\infty a_n\) converges absolutely to \(A\) and \(\sum_{n=0}^\infty b_n\) converges to \(B\text{.}\) Define \(c_n := \sum_{j=0}^n a_j b_{n-j}\text{.}\) Then \(\sum_{n=0}^\infty c_n\) converges to \(AB\text{.}\)

The series \(\sum c_n\) is called the Cauchy product of \(\sum a_n\) and \(\sum b_n\text{.}\)

Proof.

Suppose \(\sum a_n\) converges absolutely, and let \(\epsilon > 0\) be given. In this proof, instead of picking complicated expressions just to make the final estimate come out to be less than \(\epsilon\text{,}\) we simply obtain an estimate that depends on \(\epsilon\) and can be made arbitrarily small.

Write

\begin{equation*} A_m := \sum_{n=0}^m a_n , \qquad B_m := \sum_{n=0}^m b_n . \end{equation*}

We rearrange the \(m\)th partial sum of \(\sum c_n\text{:}\)

\begin{equation*} \begin{split} \abs{\left(\sum_{n=0}^m c_n \right) - AB} & = \abs{\left( \sum_{n=0}^m \sum_{j=0}^n a_j b_{n-j} \right) - AB} \\ & = \abs{\left( \sum_{n=0}^m B_n a_{m-n} \right) - AB} \\ & = \abs{\left( \sum_{n=0}^m ( B_n - B ) a_{m-n} \right) + B A_m - AB} \\ & \leq \left( \sum_{n=0}^m \abs{ B_n - B } \abs{a_{m-n}} \right) + \abs{B}\abs{A_m - A} . \end{split} \end{equation*}

We can surely make the second term on the right-hand side go to zero. The trick is to handle the first term. Pick \(K\) such that for all \(m \geq K\text{,}\) we have \(\abs{A_m - A} < \epsilon\) and also \(\abs{B_m - B} < \epsilon\text{.}\) Finally, as \(\sum a_n\) converges absolutely, make sure that \(K\) is large enough such that for all \(m \geq K\text{,}\)

\begin{equation*} \sum_{n=K}^m \abs{a_n} < \epsilon . \end{equation*}

As \(\sum b_n\) converges, the sequence \(\{ B_n \}\) is bounded, and so \(B_{\text{max}} := \sup \{ \abs{ B_n - B } : n = 0,1,2,\ldots \}\) is finite. Take \(m \geq 2K\text{;}\) then in particular \(m-K+1 > K\text{.}\) So

\begin{equation*} \begin{split} \sum_{n=0}^m \abs{ B_n - B } \abs{a_{m-n}} & = \left( \sum_{n=0}^{m-K} \abs{ B_n - B } \abs{a_{m-n}} \right) + \left( \sum_{n=m-K+1}^m \abs{ B_n - B } \abs{a_{m-n}} \right) \\ & \leq \left( \sum_{n=K}^m \abs{a_{n}} \right) B_{\text{max}} + \left( \sum_{n=0}^{K-1} \epsilon \abs{a_{n}} \right) \\ & \leq \epsilon B_{\text{max}} + \epsilon \left( \sum_{n=0}^\infty \abs{a_{n}} \right) . \end{split} \end{equation*}

Therefore, for \(m \geq 2K\text{,}\) we have

\begin{equation*} \begin{split} \abs{\left(\sum_{n=0}^m c_n \right) - AB} & \leq \left( \sum_{n=0}^m \abs{ B_n - B } \abs{a_{m-n}} \right) + \abs{B}\abs{A_m - A} \\ & \leq \epsilon B_{\text{max}} + \epsilon \left( \sum_{n=0}^\infty \abs{a_{n}} \right) + \abs{B}\epsilon = \epsilon \left( B_{\text{max}} + \left( \sum_{n=0}^\infty \abs{a_{n}} \right) + \abs{B} \right) . \end{split} \end{equation*}

The expression in the parenthesis on the right-hand side is a fixed number. Hence, we can make the right-hand side arbitrarily small by picking a small enough \(\epsilon> 0\text{.}\) So \(\sum_{n=0}^\infty c_n\) converges to \(AB\text{.}\)
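As a numerical sanity check (an illustration, not part of the proof; the two geometric series are our own choice), the partial sums of the Cauchy product of \(\sum \nicefrac{1}{2^n} = 2\) and \(\sum \nicefrac{1}{3^n} = \nicefrac{3}{2}\) approach the product \(3\).

```python
# Sketch of Mertens' theorem: the Cauchy product of two absolutely
# convergent geometric series sums to the product of their sums.

def cauchy_partial(a, b, m):
    """m-th partial sum of the Cauchy product c_n = sum_j a(j) b(n-j)."""
    return sum(a(j) * b(n - j) for n in range(m + 1) for j in range(n + 1))

# sum 1/2^n = 2 and sum 1/3^n = 3/2, so the product series sums to 3.
approx = cauchy_partial(lambda n: 0.5 ** n, lambda n: (1 / 3) ** n, 60)
```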

Example 2.6.6.

If both series are only conditionally convergent, the Cauchy product series need not even converge. Suppose we take \(a_n = b_n = {(-1)}^n \frac{1}{\sqrt{n+1}}\text{.}\) The series \(\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty b_n\) converges by the alternating series test; however, it does not converge absolutely as can be seen from the \(p\)-test. Let us look at the Cauchy product.

\begin{equation*} c_n = {(-1)}^n \left( \frac{1}{\sqrt{n+1}} + \frac{1}{\sqrt{2n}} + \frac{1}{\sqrt{3(n-1)}} + \cdots + \frac{1}{\sqrt{n+1}} \right) = {(-1)}^n \sum_{j=0}^n \frac{1}{\sqrt{(j+1)(n-j+1)}} . \end{equation*}

Therefore,

\begin{equation*} \abs{c_n} = \sum_{j=0}^n \frac{1}{\sqrt{(j+1)(n-j+1)}} \geq \sum_{j=0}^n \frac{1}{\sqrt{(n+1)(n+1)}} = 1 . \end{equation*}

The terms do not go to zero and hence \(\sum c_n\) cannot converge.
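The lower bound \(\abs{c_n} \geq 1\) can also be checked numerically (an illustration of the computation in Example 2.6.6, not a proof):

```python
# Sketch checking Example 2.6.6: for a_n = b_n = (-1)^n / sqrt(n+1),
# each Cauchy-product term satisfies |c_n| >= 1, so the terms c_n do
# not tend to zero and the product series cannot converge.
import math

def a(n):
    return (-1) ** n / math.sqrt(n + 1)

def c(n):
    return sum(a(j) * a(n - j) for j in range(n + 1))

# All summands in c(n) share the sign (-1)^n, so |c(n)| is the sum of
# the magnitudes 1/sqrt((j+1)(n-j+1)), each at least 1/(n+1).
terms = [abs(c(n)) for n in range(50)]
```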

Subsection 2.6.5 Power series

Fix \(x_0 \in \R\text{.}\) A power series about \(x_0\) is a series of the form

\begin{equation*} \sum_{n=0}^\infty a_n {(x-x_0)}^n . \end{equation*}

A power series is really a function of \(x\text{,}\) and many important functions in analysis can be written as a power series. We use the convention that \(0^0 = 1\) (if \(x=x_0\) and \(n=0\)).

We say that a power series is convergent if there is at least one \(x \not= x_0\) that makes the series converge. If \(x=x_0\text{,}\) then the series always converges since all terms except the first are zero. If the series does not converge for any point \(x \not= x_0\text{,}\) we say that the series is divergent.

Example 2.6.7.

The series

\begin{equation*} \sum_{n=0}^\infty \frac{1}{n!} x^n \end{equation*}

is absolutely convergent for all \(x \in \R\) using the ratio test: For any \(x \in \R\)

\begin{equation*} \lim_{n \to \infty} \frac{\bigl(1/(n+1)!\bigr) \, x^{n+1}}{(1/n!) \, x^{n}} = \lim_{n \to \infty} \frac{x}{n+1} = 0. \end{equation*}

Recall from calculus that this series converges to \(e^x\text{.}\)
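A quick numerical check (illustration only): summing the series term by term reproduces \(e^x\).

```python
# Sketch: partial sums of sum x^n / n! converge to exp(x) for every x.
import math

def exp_partial(x, m):
    term, s = 1.0, 0.0
    for n in range(m + 1):
        s += term               # add x^n / n!
        term *= x / (n + 1)     # next term: x^(n+1) / (n+1)!
    return s

approx = exp_partial(3.0, 40)   # close to math.exp(3.0)
```

Computing each term from the previous one avoids large factorials, which mirrors why the ratio test succeeds: consecutive terms differ by the factor \(x/(n+1)\).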

Example 2.6.8.

The series

\begin{equation*} \sum_{n=1}^\infty \frac{1}{n} x^n \end{equation*}

converges absolutely for all \(x \in (-1,1)\) via the ratio test:

\begin{equation*} \lim_{n \to \infty} \abs{ \frac{\bigl(1/(n+1) \bigr) \, x^{n+1}}{(1/n) \, x^{n}} } = \lim_{n \to \infty} \abs{x} \frac{n}{n+1} = \abs{x} < 1 . \end{equation*}

The series converges at \(x=-1\text{,}\) as \(\sum_{n=1}^\infty \frac{{(-1)}^n}{n}\) converges by the alternating series test. But the power series does not converge absolutely at \(x=-1\text{,}\) because \(\sum_{n=1}^\infty \frac{1}{n}\) does not converge. The series diverges at \(x=1\text{.}\) When \(\abs{x} > 1\text{,}\) then the series diverges via the ratio test.

Example 2.6.9.

The series

\begin{equation*} \sum_{n=1}^\infty n^n x^n \end{equation*}

diverges for all \(x \not= 0\text{.}\) Let us apply the root test:

\begin{equation*} \limsup_{n\to\infty} \, \abs{n^n x^n}^{1/n} = \limsup_{n\to\infty} \, n \abs{x} = \infty . \end{equation*}

Therefore, the series diverges for all \(x \not= 0\text{.}\)

Convergence of power series in general works analogously to one of the three examples above. Specifically, if a power series \(\sum a_n {(x-x_0)}^n\) is convergent, then either it converges at every \(x \in \R\text{,}\) or there exists a number \(\rho > 0\) such that the series converges absolutely on the interval \((x_0-\rho,x_0+\rho)\) and diverges when \(x < x_0 - \rho\) or \(x > x_0 + \rho\text{.}\)

The number \(\rho\) is called the radius of convergence of the power series. We write \(\rho = \infty\) if the series converges for all \(x\text{,}\) and we write \(\rho = 0\) if the series is divergent. At the endpoints, that is if \(x = x_0+\rho\) or \(x = x_0-\rho\text{,}\) the proposition says nothing, and the series might or might not converge. See Figure 2.8. In Example 2.6.8 the radius of convergence is \(\rho=1\text{.}\) In Example 2.6.7 the radius of convergence is \(\rho=\infty\text{,}\) and in Example 2.6.9 the radius of convergence is \(\rho=0\text{.}\)


Figure 2.8. Convergence of a power series.

Proof.

Write

\begin{equation*} R := \limsup_{n\to\infty} \, {\abs{a_n}}^{1/n} . \end{equation*}

We use the root test to prove the proposition:

\begin{equation*} L = \limsup_{n\to\infty} \, {\abs{a_n {(x-x_0)}^n}}^{1/n} = \abs{x-x_0} \limsup_{n\to\infty} \, {\abs{a_n}}^{1/n} = \abs{x-x_0} R . \end{equation*}

In particular, if \(R = \infty\text{,}\) then \(L=\infty\) for every \(x \not= x_0\text{,}\) and the series diverges by the root test. On the other hand, if \(R = 0\text{,}\) then \(L=0\) for every \(x\text{,}\) and the series converges absolutely for all \(x\text{.}\)

Suppose \(0 < R < \infty\text{.}\) The series converges absolutely if \(1 > L = R \abs{x-x_0}\text{,}\) or in other words when

\begin{equation*} \abs{x-x_0} < \nicefrac{1}{R} . \end{equation*}

The series diverges when \(1 < L = R \abs{x-x_0}\text{,}\) or

\begin{equation*} \abs{x-x_0} > \nicefrac{1}{R} . \end{equation*}

Letting \(\rho := \nicefrac{1}{R}\) completes the proof.

It may be useful to restate what we learned in the proof as a separate proposition: let \(\sum a_n {(x-x_0)}^n\) be a power series and let \(R := \limsup_{n\to\infty} \, {\abs{a_n}}^{1/n}\text{.}\) If \(R = \infty\text{,}\) the power series is divergent (\(\rho = 0\)). If \(R = 0\text{,}\) the power series converges absolutely for all \(x\) (\(\rho = \infty\)). Otherwise, the radius of convergence is \(\rho = \nicefrac{1}{R}\text{.}\)

Often, the radius of convergence is written as \(\rho = \nicefrac{1}{R}\) in all three cases, with the understanding of what \(\rho\) should be if \(R = 0\) or \(R = \infty\text{.}\)
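Numerically, \(R\) can be roughly approximated by evaluating \({\abs{a_n}}^{1/n}\) at a single moderately large \(n\) (a crude sketch, since one value of \(n\) merely stands in for the limit superior; the example coefficient sequences are our own choice).

```python
# Sketch: estimate the radius of convergence rho = 1/R, where
# R = limsup |a_n|^(1/n), by evaluating |a_n|^(1/n) at one large n.

def radius_estimate(a, n):
    R = abs(a(n)) ** (1.0 / n)
    return float("inf") if R == 0 else 1.0 / R

rho_geometric = radius_estimate(lambda n: 2.0 ** n, 500)  # true rho = 1/2
rho_harmonic = radius_estimate(lambda n: 1.0 / n, 500)    # true rho = 1
```

This heuristic works only when \({\abs{a_n}}^{1/n}\) actually converges; for oscillating coefficients the limit superior must be taken seriously.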

Convergent power series can be added and multiplied together, and multiplied by constants. The corresponding proposition has a straightforward proof using what we know about series in general, and power series in particular. We leave the proof to the reader.

That is, after performing the algebraic operations, the radius of convergence of the resulting series is at least \(\rho\text{.}\) For all \(x\) with \(\abs{x-x_0} < \rho\text{,}\) we have two convergent series, so their term-by-term addition and multiplication by constants follow from what we learned in the last section. For multiplication of two power series, the series are absolutely convergent inside the radius of convergence, which is why for those \(x\) we can apply Mertens' theorem. Note that after applying an algebraic operation the radius of convergence could increase. See the exercises.

Let us look at some examples of power series. Polynomials are simply finite power series. That is, a polynomial is a power series where the \(a_n\) are zero for all \(n\) large enough. We expand a polynomial as a power series about any point \(x_0\) by writing the polynomial as a polynomial in \((x-x_0)\text{.}\) For example, \(2x^2-3x+4\) as a power series around \(x_0 = 1\) is

\begin{equation*} 2x^2-3x+4 = 3 + (x-1) + 2{(x-1)}^2 . \end{equation*}

We can also expand rational functions (that is, ratios of polynomials) as power series, although we will not completely prove this fact here. Notice that a series for a rational function only defines the function on an interval even if the function is defined elsewhere. For example, for the geometric series, we have that for \(x \in (-1,1)\)

\begin{equation*} \frac{1}{1-x} = \sum_{n=0}^\infty x^n . \end{equation*}

The series diverges when \(\abs{x} > 1\text{,}\) even though \(\frac{1}{1-x}\) is defined for all \(x \not= 1\text{.}\)
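Concretely (an illustration only): inside \((-1,1)\) the geometric partial sums match \(\frac{1}{1-x}\), while outside, the individual terms themselves blow up.

```python
# Sketch: geometric partial sums versus the closed form 1/(1-x).

def geom_partial(x, m):
    return sum(x ** n for n in range(m + 1))

inside = geom_partial(0.5, 200)   # close to 1/(1-0.5) = 2
outside_term = 1.5 ** 100         # at x = 1.5 the terms themselves blow up
```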

We can use the geometric series together with rules for addition and multiplication of power series to expand rational functions as power series around \(x_0\text{,}\) as long as the denominator is not zero at \(x_0\text{.}\) We state without proof that this is always possible, and we give an example of such a computation using the geometric series.

Example 2.6.13.

Let us expand \(\frac{x}{1+2x+x^2}\) as a power series around the origin (\(x_0 = 0\)) and find the radius of convergence.

Write \(1+2x+x^2 = {(1+x)}^2 = {\bigl(1-(-x)\bigr)}^2\text{,}\) and suppose \(\abs{x} < 1\text{.}\) Compute

\begin{equation*} \begin{split} \frac{x}{1+2x+x^2} &= x \, {\left( \frac{1}{1-(-x)} \right)}^2 \\ &= x \, {\left( \sum_{n=0}^\infty {(-1)}^n x^n \right)}^2 \\ &= x \, \left( \sum_{n=0}^\infty c_n x^n \right) \\ &= \sum_{n=0}^\infty c_n x^{n+1} . \end{split} \end{equation*}

Using the formula for the product of series, we obtain \(c_0 = 1\text{,}\) \(c_1 = -1 -1 = -2\text{,}\) \(c_2 = 1+1+1 = 3\text{,}\) etc. Hence, for \(\abs{x} < 1\text{,}\)

\begin{equation*} \frac{x}{1+2x+x^2} = \sum_{n=1}^\infty {(-1)}^{n+1} n x^n . \end{equation*}

The radius of convergence is at least 1. We leave it to the reader to verify that the radius of convergence is exactly equal to 1.
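The expansion can be sanity-checked numerically (an illustration only; the sample point \(x = 0.3\) is our own choice): partial sums of \(\sum {(-1)}^{n+1} n x^n\) agree with \(\frac{x}{1+2x+x^2}\) inside the radius of convergence.

```python
# Sketch: check the expansion x/(1+2x+x^2) = sum (-1)^(n+1) n x^n
# from Example 2.6.13 at a sample point with |x| < 1.

def series_partial(x, m):
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, m + 1))

x = 0.3
closed_form = x / (1 + 2 * x + x * x)
approx = series_partial(x, 200)
# closed_form and approx agree to high accuracy.
```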

You can use the method of partial fractions you know from calculus. For example, to find the power series for \(\frac{x^3+x}{x^2-1}\) at 0, write

\begin{equation*} \frac{x^3+x}{x^2-1} = x + \frac{1}{1+x} - \frac{1}{1-x} = x + \sum_{n=0}^\infty {(-1)}^n x^n - \sum_{n=0}^\infty x^n . \end{equation*}

Subsection 2.6.6 Exercises

Exercise 2.6.1.

Decide the convergence or divergence of the following series.

a) \(\displaystyle \sum_{n=1}^\infty \frac{1}{2^{2n+1}}\)        b) \(\displaystyle \sum_{n=1}^\infty \frac{{(-1)}^{n}(n-1)}{n}\)        c) \(\displaystyle \sum_{n=1}^\infty \frac{{(-1)}^n}{n^{1/10}}\)        d) \(\displaystyle \sum_{n=1}^\infty \frac{n^n}{{(n+1)}^{2n}}\)

Exercise 2.6.2.

Suppose both \(\sum_{n=0}^\infty a_n\) and \(\sum_{n=0}^\infty b_n\) converge absolutely. Show that the product series, \(\sum_{n=0}^\infty c_n\) where \(c_n = a_0 b_n + a_1 b_{n-1} + \cdots + a_n b_0\text{,}\) also converges absolutely.

Exercise 2.6.3.

(Challenging)   Let \(\sum a_n\) be conditionally convergent. Show that given an arbitrary \(x \in \R\) there exists a rearrangement of \(\sum a_n\) such that the rearranged series converges to \(x\text{.}\) Hint: See Example 2.6.4.

Exercise 2.6.4.

  1. Show that the alternating harmonic series \(\sum \frac{{(-1)}^{n+1}}{n}\) has a rearrangement such that whenever \(x < y\text{,}\) there exists a partial sum \(s_n\) of the rearranged series such that \(x < s_n < y\text{.}\)

  2. Show that the rearrangement you found does not converge. See Example 2.6.4.

  3. Show that for every \(x \in \R\text{,}\) there exists a subsequence of partial sums \(\{ s_{n_k} \}\) of your rearrangement such that \(\lim \, s_{n_k} = x\text{.}\)

Exercise 2.6.5.

For the following power series, find if they are convergent or not, and if so find their radius of convergence.

a) \(\displaystyle \sum_{n=0}^\infty 2^n x^n\)        b) \(\displaystyle \sum_{n=0}^\infty n x^n\)        c) \(\displaystyle \sum_{n=0}^\infty n! \, x^n\)        d) \(\displaystyle \sum_{n=0}^\infty \frac{1}{(2n)!} {(x-10)}^n\)        e) \(\displaystyle \sum_{n=0}^\infty x^{2n}\)        f) \(\displaystyle \sum_{n=0}^\infty n! \, x^{n!}\)

Exercise 2.6.6.

Suppose \(\sum a_n x^n\) converges for \(x=1\text{.}\)

  1. What can you say about the radius of convergence?

  2. If you further know that at \(x=1\) the convergence is not absolute, what can you say?

Exercise 2.6.7.

Expand \(\dfrac{x}{4-x^2}\) as a power series around \(x_0 = 0\) and compute its radius of convergence.

Exercise 2.6.8.

  1. Find an example where the radii of convergence of \(\sum a_n x^n\) and \(\sum b_n x^n\) are both 1, but the radius of convergence of the sum of the two series is infinite.

  2. (Trickier) Find an example where the radii of convergence of \(\sum a_n x^n\) and \(\sum b_n x^n\) are both 1, but the radius of convergence of the product of the two series is infinite.

Exercise 2.6.9.

Figure out how to compute the radius of convergence using the ratio test. That is, suppose \(\sum a_n x^n\) is a power series and \(R := \lim \, \frac{\abs{a_{n+1}}}{\abs{a_n}}\) exists or is \(\infty\text{.}\) Find the radius of convergence and prove your claim.

Exercise 2.6.10.

  1. Prove that \(\lim \, n^{1/n} = 1\) using the following procedure: Write \(n^{1/n} = 1+b_n\) and note \(b_n > 0\text{.}\) Then show that \({(1+b_n)}^n \geq \frac{n(n-1)}{2}b_n^2\) and use this to show that \(\lim \, b_n = 0\text{.}\)

  2. Use the result of the first part to show that if \(\sum a_n x^n\) is a convergent power series with radius of convergence \(R\text{,}\) then \(\sum n a_n x^n\) is also convergent with the same radius of convergence.

There are notions of summability (convergence) of a series other than the one we have seen. A common one is Cesàro summability. Let \(\sum a_n\) be a series and let \(s_n\) be the \(n\)th partial sum. The series is said to be Cesàro summable to \(a\) if

\begin{equation*} a = \lim_{n\to \infty} \frac{s_1 + s_2 + \cdots + s_n}{n} . \end{equation*}
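The definition is easy to experiment with (an illustration only; the geometric series used here is our own choice): for an ordinarily convergent series, averaging the partial sums reproduces the ordinary sum.

```python
# Sketch of Cesaro summability: average the first N partial sums.
# For the geometric series sum_{n>=1} 1/2^n, whose ordinary sum is 1,
# the Cesaro means also tend to 1.

def cesaro_mean(a, N):
    s, total = 0.0, 0.0
    for n in range(1, N + 1):
        s += a(n)       # s is the n-th partial sum
        total += s      # accumulate s_1 + s_2 + ... + s_N
    return total / N

mean = cesaro_mean(lambda n: 0.5 ** n, 10000)  # close to 1
```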

Exercise 2.6.11.

(Challenging)  

  1. If \(\sum a_n\) is convergent to \(a\) (in the usual sense), show that \(\sum a_n\) is Cesàro summable (see above) to \(a\text{.}\)

  2. Show that in the sense of Cesàro \(\sum {(-1)}^n\) is summable to \(\nicefrac{1}{2}\text{.}\)

  3. Let \(a_n := k\) when \(n = k^3\) for some \(k \in \N\text{,}\) \(a_n := -k\) when \(n = k^3+1\) for some \(k \in \N\text{,}\) and otherwise let \(a_n := 0\text{.}\) Show that \(\sum a_n\) diverges in the usual sense (the partial sums are unbounded), but it is Cesàro summable to 0 (which seems a little paradoxical at first sight).

Exercise 2.6.12.

(Challenging)   Show that the monotonicity in the alternating series test is necessary. That is, find a sequence of positive real numbers \(\{ x_n \}\) with \(\lim\, x_n = 0\) but such that \(\sum {(-1)}^n x_n\) diverges.

Exercise 2.6.13.

Find a series such that \(\sum x_n\) converges but \(\sum x_n^2\) diverges. Hint: Compare Exercise 2.5.14.

Exercise 2.6.14.

Suppose \(\{ c_n \}\) is a sequence. Prove that for every \(r \in (0,1)\text{,}\) there exists a strictly increasing sequence \(\{ n_k \}\) of natural numbers (\(n_{k+1} > n_k\)) such that

\begin{equation*} \sum_{k=1}^\infty c_k x^{n_k} \end{equation*}

converges absolutely for all \(x \in [-r,r]\text{.}\)

Notes: In case \(L=\infty\) in the root test proof, see Exercise 2.3.20; alternatively, note that if \(L=\infty\text{,}\) then \(\bigl\{ {\abs{x_{n}}}^{1/n} \bigr\}\text{,}\) and thus \(\{ x_n \}\text{,}\) is unbounded. Mertens' theorem was proved by the German mathematician Franz Mertens (1840–1927), https://en.wikipedia.org/wiki/Franz_Mertens. Cesàro summability is named for the Italian mathematician Ernesto Cesàro (1859–1906), https://en.wikipedia.org/wiki/Ernesto_Ces%C3%A0ro.