
Section 2.6 More on series

Note: up to 2–3 lectures (optional, can safely be skipped or covered partially)

Subsection 2.6.1 Root test

A test similar to the ratio test is the so-called root test; its proof is similar to that of the ratio test. Again, the idea is to generalize what happens for the geometric series.

Proposition 2.6.1. (Root test)

Let \(\sum_{n=1}^\infty x_n\) be a series and let \(L \coloneqq \limsup_{n\to\infty} {\abs{x_n}}^{1/n}\text{.}\) If \(L < 1\text{,}\) then \(\sum_{n=1}^\infty x_n\) converges absolutely. If \(L > 1\text{,}\) then \(\sum_{n=1}^\infty x_n\) diverges.

Proof.

If \(L > 1\text{,}\) then there exists a subsequence \(\{ x_{n_k} \}_{k=1}^\infty\) such that \(L = \lim_{k\to\infty} {\sabs{x_{n_k}}}^{1/n_k}\text{.}\) (In case \(L=\infty\text{,}\) see Exercise 2.3.20; alternatively, note that if \(L=\infty\text{,}\) then \(\bigl\{ {\sabs{x_{n}}}^{1/n} \bigr\}_{n=1}^\infty\) and thus \(\{ x_n \}_{n=1}^\infty\) is unbounded.) Let \(r\) be such that \(L > r > 1\text{.}\) There exists an \(M\) such that for all \(k \geq M\text{,}\) we have \({\sabs{x_{n_k}}}^{1/n_k} > r > 1\text{,}\) or in other words \(\sabs{x_{n_k}} > r^{n_k} > 1\text{.}\) The subsequence \(\{ \sabs{x_{n_k}} \}_{k=1}^\infty\text{,}\) and therefore also \(\{ \sabs{x_{n}} \}_{n=1}^\infty\text{,}\) cannot possibly converge to zero, and so the series diverges.
Now suppose \(L < 1\text{.}\) Pick \(r\) such that \(L < r < 1\text{.}\) By definition of limit supremum, there is an \(M\) such that for all \(n \geq M\text{,}\)
\begin{equation*} \sup \bigl\{ {\abs{x_k}}^{1/k} : k \geq n \bigr\} < r . \end{equation*}
Therefore, for all \(n \geq M\text{,}\)
\begin{equation*} {\abs{x_n}}^{1/n} < r , \qquad \text{or in other words} \qquad \abs{x_n} < r^n . \end{equation*}
Let \(k > M\text{,}\) and estimate the \(k\)th partial sum:
\begin{equation*} \sum_{n=1}^k \abs{x_n} = \left( \sum_{n=1}^M \abs{x_n} \right) + \left( \sum_{n=M+1}^k \abs{x_n} \right) \leq \left( \sum_{n=1}^M \abs{x_n} \right) + \left( \sum_{n=M+1}^k r^n \right) . \end{equation*}
As \(0 < r < 1\text{,}\) the geometric series \(\sum_{n=M+1}^\infty r^n\) converges to \(\frac{r^{M+1}}{1-r}\text{.}\) As everything is positive,
\begin{equation*} \sum_{n=1}^k \abs{x_n} \leq \left( \sum_{n=1}^M \abs{x_n} \right) + \frac{r^{M+1}}{1-r} . \end{equation*}
Thus the sequence of partial sums of \(\sum_{n=1}^\infty \abs{x_n}\) is bounded, and the series converges. Therefore, \(\sum_{n=1}^\infty x_n\) converges absolutely.
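
The test is easy to try numerically. The following Python sketch, which we add purely as an illustration (the series \(x_n = \nicefrac{n}{2^n}\) and the cutoff are arbitrary choices), approximates \(L\) for a series with \(L = \nicefrac{1}{2} < 1\text{:}\)

# Illustration only: for x_n = n / 2^n, the root test gives
# L = limsup |x_n|^(1/n) = 1/2 < 1, so the series converges absolutely.
roots = [(n / 2**n) ** (1 / n) for n in range(1, 200)]
print(roots[-1])                              # slowly approaches 0.5
print(sum(n / 2**n for n in range(1, 200)))   # partial sums approach 2

Since \(\lim_{n\to\infty} n^{1/n} = 1\) (see Exercise 2.6.10), we get \(L = \nicefrac{1}{2}\) exactly, and indeed the series converges (to 2).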

Subsection 2.6.2 Alternating series test

The tests we have seen so far only addressed absolute convergence. The following test gives a large supply of conditionally convergent series.

Proposition 2.6.2. (Alternating series test)

Let \(\{ x_n \}_{n=1}^\infty\) be a decreasing sequence of positive real numbers such that \(\lim_{n\to\infty} x_n = 0\text{.}\) Then \(\sum_{n=1}^\infty {(-1)}^n x_n\) converges.

Proof.

Let \(s_m \coloneqq \sum_{n=1}^m {(-1)}^n x_n\) be the \(m\)th partial sum. Then
\begin{equation*} s_{2k} = \sum_{n=1}^{2k} {(-1)}^n x_n = (-x_1 + x_2) + \cdots + (-x_{2k-1} + x_{2k}) = \sum_{\ell=1}^{k} (-x_{2\ell-1} + x_{2\ell}) . \end{equation*}
The sequence \(\{ x_n \}_{n=1}^\infty\) is decreasing, so \((-x_{2\ell-1}+x_{2\ell}) \leq 0\) for all \(\ell\text{.}\) Thus, the subsequence \(\{ s_{2k} \}_{k=1}^\infty\) of partial sums is a decreasing sequence. Similarly, \((x_{2\ell}-x_{2\ell+1}) \geq 0\text{,}\) and so
\begin{equation*} s_{2k} = - x_1 + ( x_2 - x_3 ) + \cdots + ( x_{2k-2} - x_{2k-1} ) + x_{2k} \geq -x_1 . \end{equation*}
The intuition behind the bound \(0 \geq s_{2k} \geq -x_1\) is illustrated in Figure 2.8.

Figure 2.8. Showing that \(0 \geq s_{2k} \geq -x_1\) where \(k=4\) for an alternating series.

As \(\{ s_{2k} \}_{k=1}^\infty\) is decreasing and bounded below, it converges. Let \(a \coloneqq \lim_{k\to\infty} s_{2k}\text{.}\) We wish to show that \(\lim_{m\to\infty} s_m = a\) (and not just for the subsequence). Given \(\epsilon > 0\text{,}\) pick \(M\) such that \(\abs{s_{2k}-a} < \nicefrac{\epsilon}{2}\) whenever \(k \geq M\text{.}\) Since \(\lim_{n\to\infty} x_n = 0\text{,}\) we may make \(M\) larger if necessary so that also \(x_{2k+1} < \nicefrac{\epsilon}{2}\) whenever \(k \geq M\text{.}\) Suppose \(m \geq 2M+1\text{.}\) If \(m=2k\text{,}\) then \(k \geq M+\nicefrac{1}{2} \geq M\) and \(\abs{s_{m}-a}=\abs{s_{2k}-a} < \nicefrac{\epsilon}{2} < \epsilon\text{.}\) If \(m=2k+1\text{,}\) then also \(k \geq M\text{.}\) Notice \(s_{2k+1} = s_{2k} - x_{2k+1}\text{.}\) Thus
\begin{equation*} \abs{s_{m}-a} = \abs{s_{2k+1}-a} = \abs{s_{2k}-a - x_{2k+1}} \leq \abs{s_{2k}-a} + x_{2k+1} < \nicefrac{\epsilon}{2}+ \nicefrac{\epsilon}{2} = \epsilon . \qedhere \end{equation*}
Notably, there exist conditionally convergent series where the absolute values of the terms go to zero arbitrarily slowly. The series
\begin{equation*} \sum_{n=1}^\infty \frac{{(-1)}^n}{n^p} \end{equation*}
converges for arbitrarily small \(p > 0\text{,}\) but it does not converge absolutely when \(p \leq 1\text{.}\)
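
To see how slow this convergence can be, here is a small Python sketch we add purely as an illustration (the exponent \(p\) and the number of terms are arbitrary choices):

# Illustration only: partial sums of sum (-1)^n / n^p for p = 0.1.
# The alternating series test applies, but consecutive partial sums
# still differ by 1/n^p, about 0.3 even at n = 100000.
p = 0.1
s = 0.0
for n in range(1, 100_001):
    s_prev, s = s, s + (-1) ** n / n ** p
print(s_prev, s)   # the limit lies between these two values

That the limit lies between consecutive partial sums follows from the argument above: the even-indexed partial sums decrease to the limit, while the odd-indexed ones increase to it.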

Subsection 2.6.3 Rearrangements

Absolutely convergent series behave as we imagine they should. For example, absolutely convergent series can be summed in any order whatsoever. Nothing of the sort holds for conditionally convergent series (see Example 2.6.4 and Exercise 2.6.3).
Consider a series
\begin{equation*} \sum_{n=1}^\infty x_n . \end{equation*}
Given a bijective function \(\sigma \colon \N \to \N\text{,}\) the corresponding rearrangement is the series:
\begin{equation*} \sum_{k=1}^\infty x_{\sigma(k)} . \end{equation*}
We simply sum the series in a different order.
Proposition 2.6.3.

Suppose \(\sum_{n=1}^\infty x_n\) is an absolutely convergent series converging to a number \(x\text{.}\) Then every rearrangement \(\sum_{k=1}^\infty x_{\sigma(k)}\) also converges absolutely to \(x\text{.}\) In other words, a rearrangement of an absolutely convergent series converges (absolutely) to the same number.

Proof.

Let \(\epsilon > 0\) be given. As \(\sum_{n=1}^\infty x_n\) is absolutely convergent, take \(M\) such that
\begin{equation*} \abs{\left(\sum_{n=1}^M x_n \right) - x} < \frac{\epsilon}{2} \qquad \text{and} \qquad \sum_{n=M+1}^\infty \abs{x_n} < \frac{\epsilon}{2} . \end{equation*}
As \(\sigma\) is a bijection, there exists a number \(K\) such that for each \(n \leq M\text{,}\) there exists \(k \leq K\) such that \(\sigma(k) = n\text{.}\) In other words \(\{ 1,2,\ldots,M \} \subset \sigma\bigl(\{ 1,2,\ldots,K \} \bigr)\text{.}\)
For \(N \geq K\text{,}\) let \(Q \coloneqq \max \sigma\bigl(\{ 1,2,\ldots,N \}\bigr)\text{.}\) Compute
\begin{equation*} \begin{split} \abs{\left( \sum_{n=1}^N x_{\sigma(n)} \right) - x} & = \abs{ \left( \sum_{n=1}^M x_n + \sum_{\substack{n=1\\\sigma(n) > M}}^N x_{\sigma(n)} \right) - x} \\ & \leq \abs{ \left( \sum_{n=1}^M x_n \right) - x} + \sum_{\substack{n=1\\\sigma(n) > M}}^N \abs{x_{\sigma(n)}} \\ & \leq \abs{ \left( \sum_{n=1}^M x_n \right) - x} + \sum_{n=M+1}^Q \abs{x_{n}} \\ & < \nicefrac{\epsilon}{2} + \nicefrac{\epsilon}{2} = \epsilon . \end{split} \end{equation*}
So \(\sum_{n=1}^\infty x_{\sigma(n)}\) converges to \(x\text{.}\) To see that the convergence is absolute, we apply the argument above to \(\sum_{n=1}^\infty \abs{x_n}\) to show that \(\sum_{n=1}^\infty \abs{x_{\sigma(n)}}\) converges.
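
As a quick sanity check, one can reorder finitely many terms of an absolutely convergent series in Python (an illustration we add; any absolutely convergent series would serve):

# Illustration only: reordering terms of the absolutely convergent sum 1/2^n = 1.
import random

terms = [1 / 2**n for n in range(1, 60)]
random.shuffle(terms)   # stands in for a rearrangement sigma
print(sum(terms))       # approximately 1, in any order

A finite shuffle of course cannot change a finite sum; the content of Proposition 2.6.3 is that for absolutely convergent series even rearranging infinitely many terms is harmless, while the next example shows that this fails badly without absolute convergence.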

Example 2.6.4.

Let us show that the alternating harmonic series \(\sum_{n=1}^\infty \frac{{(-1)}^{n+1}}{n}\text{,}\) which does not converge absolutely, can be rearranged to converge to anything. The odd terms and the even terms diverge to plus infinity and minus infinity respectively (prove this!):
\begin{equation*} \sum_{m=1}^\infty \frac{1}{2m-1} = \infty, \qquad \text{and} \qquad \sum_{m=1}^\infty \frac{-1}{2m} = -\infty . \end{equation*}
Let \(a_n \coloneqq \frac{{(-1)}^{n+1}}{n}\) for simplicity, let an arbitrary number \(L \in \R\) be given, and set \(\sigma(1) \coloneqq 1\text{.}\) Suppose we have defined \(\sigma(n)\) for all \(n \leq N\text{.}\) If
\begin{equation*} \sum_{n=1}^N a_{\sigma(n)} \leq L , \end{equation*}
then let \(\sigma(N+1) \coloneqq k\) be the smallest odd \(k \in \N\) that we have not used yet, that is, \(\sigma(n) \not= k\) for all \(n \leq N\text{.}\) Otherwise, let \(\sigma(N+1) \coloneqq k\) be the smallest even \(k\) that we have not yet used.
By construction, \(\sigma \colon \N \to \N\) is one-to-one. It is also onto: if we kept adding only odd (resp. even) terms from some point on, we would eventually pass \(L\) and switch to the evens (resp. odds). So we switch infinitely many times.
Finally, consider an index \(N\) where we have just passed \(L\) and switched. For example, suppose we have just switched from odd to even (so we start subtracting), and let \(N' > N\) be where we first switch back from even to odd. Then
\begin{equation*} L + \frac{1}{\sigma(N)} \geq \sum_{n=1}^{N-1} a_{\sigma(n)} > \sum_{n=1}^{N'-1} a_{\sigma(n)} > L- \frac{1}{\sigma(N')}. \end{equation*}
Similarly for switching in the other direction. Therefore, the sum up to \(N'-1\) is within \(\frac{1}{\min \{ \sigma(N), \sigma(N') \}}\) of \(L\text{.}\) As we switch infinitely many times, \(\sigma(N) \to \infty\) and \(\sigma(N') \to \infty\text{.}\) Hence
\begin{equation*} \sum_{n=1}^\infty a_{\sigma(n)} = \sum_{n=1}^\infty \frac{{(-1)}^{\sigma(n)+1}}{\sigma(n)} = L . \end{equation*}
Here is an example to illustrate the proof. Suppose \(L=1.2\text{.}\) Then the order is
\begin{equation*} 1+\nicefrac{1}{3}-\nicefrac{1}{2}+\nicefrac{1}{5}+\nicefrac{1}{7}+\nicefrac{1}{9}-\nicefrac{1}{4}+\nicefrac{1}{11}+\nicefrac{1}{13}-\nicefrac{1}{6} +\nicefrac{1}{15}+\nicefrac{1}{17}+\nicefrac{1}{19} - \nicefrac{1}{8} + \cdots . \end{equation*}
At this point we are no more than \(\nicefrac{1}{8}\) from the limit. See Figure 2.9.

Figure 2.9. The first 14 partial sums of the rearrangement converging to \(1.2\text{.}\)
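
The construction in the proof is easy to run. The following Python sketch, which we add for illustration, reproduces the fourteen terms displayed above for \(L = 1.2\) (an output entry \(-k\) means the term \(-\nicefrac{1}{k}\) was used):

# Illustration only: the greedy rearrangement from Example 2.6.4 with L = 1.2.
L = 1.2
next_odd, next_even = 1, 2   # smallest unused odd/even denominators
s, order = 0.0, []
for _ in range(14):
    if s <= L:               # at or below L: add the next odd term
        s += 1 / next_odd
        order.append(next_odd)
        next_odd += 2
    else:                    # above L: subtract the next even term
        s -= 1 / next_even
        order.append(-next_even)
        next_even += 2
print(order)   # [1, 3, -2, 5, 7, 9, -4, 11, 13, -6, 15, 17, 19, -8]
print(s)       # about 1.09, within 1/8 of L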

Subsection 2.6.4 Multiplication of series

As we have already mentioned, multiplication of series is somewhat harder than addition. If at least one of the series converges absolutely, then we can use the following theorem. For this result, it is convenient to start the series at 0, rather than at 1.

Theorem 2.6.5. (Mertens' theorem)

Suppose \(\sum_{n=0}^\infty a_n\) and \(\sum_{n=0}^\infty b_n\) are two convergent series, converging to \(A\) and \(B\) respectively, and at least one of them converges absolutely. Define \(c_n \coloneqq \sum_{i=0}^n a_i b_{n-i} = a_0 b_n + a_1 b_{n-1} + \cdots + a_n b_0\text{.}\) Then \(\sum_{n=0}^\infty c_n\) converges to \(AB\text{.}\)

The theorem was proved by the German mathematician Franz Mertens (1840–1927). The series \(\sum_{n=0}^\infty c_n\) is called the Cauchy product of \(\sum_{n=0}^\infty a_n\) and \(\sum_{n=0}^\infty b_n\text{.}\)

Proof.

Suppose \(\sum_{n=0}^\infty a_n\) converges absolutely, and let \(\epsilon > 0\) be given. In this proof instead of picking complicated estimates just to make the final estimate come out as less than \(\epsilon\text{,}\) let us simply obtain an estimate that depends on \(\epsilon\) and can be made arbitrarily small.
Write
\begin{equation*} A_m \coloneqq \sum_{n=0}^m a_n , \qquad B_m \coloneqq \sum_{n=0}^m b_n . \end{equation*}
We rearrange the \(m\)th partial sum of \(\sum_{n=0}^\infty c_n\text{:}\)
\begin{equation*} \begin{split} \abs{\left(\sum_{n=0}^m c_n \right) - AB} & = \abs{\left( \sum_{n=0}^m \sum_{i=0}^n a_i b_{n-i} \right) - AB} \\ & = \abs{\left( \sum_{n=0}^m B_n a_{m-n} \right) - AB} \\ & = \abs{\left( \sum_{n=0}^m ( B_n - B ) a_{m-n} \right) + B A_m - AB} \\ & \leq \left( \sum_{n=0}^m \abs{ B_n - B } \abs{a_{m-n}} \right) + \abs{B}\abs{A_m - A} \end{split} \end{equation*}
We can surely make the second term on the right-hand side go to zero. The trick is to handle the first term. Pick \(K\) such that for all \(m \geq K\text{,}\) we have \(\abs{A_m - A} < \epsilon\) and also \(\abs{B_m - B} < \epsilon\text{.}\) Finally, as \(\sum_{n=0}^\infty a_n\) converges absolutely, make sure that \(K\) is large enough such that for all \(m \geq K\text{,}\)
\begin{equation*} \sum_{n=K}^m \abs{a_n} < \epsilon . \end{equation*}
As \(\sum_{n=0}^\infty b_n\) converges, the sequence \(\{ B_n \}_{n=0}^\infty\) converges and is therefore bounded, so \(B_{\text{max}} \coloneqq \sup \bigl\{ \abs{ B_n - B } : n = 0,1,2,\ldots \bigr\}\) is finite. Take \(m \geq 2K\text{.}\) In particular \(m-K+1 > K\text{.}\) So
\begin{equation*} \begin{split} \sum_{n=0}^m \abs{ B_n - B } \abs{a_{m-n}} & = \left( \sum_{n=0}^{m-K} \abs{ B_n - B } \abs{a_{m-n}} \right) + \left( \sum_{n=m-K+1}^m \abs{ B_n - B } \abs{a_{m-n}} \right) \\ & \leq \left( \sum_{n=K}^m \abs{a_{n}} \right) B_{\text{max}} + \left( \sum_{n=0}^{K-1} \epsilon \abs{a_{n}} \right) \\ & \leq \epsilon B_{\text{max}} + \epsilon \left( \sum_{n=0}^\infty \abs{a_{n}} \right) . \end{split} \end{equation*}
Therefore, for \(m \geq 2K\text{,}\) we have
\begin{equation*} \begin{split} \abs{\left(\sum_{n=0}^m c_n \right) - AB} & \leq \left( \sum_{n=0}^m \abs{ B_n - B } \abs{a_{m-n}} \right) + \abs{B}\abs{A_m - A} \\ & \leq \epsilon B_{\text{max}} + \epsilon \left( \sum_{n=0}^\infty \abs{a_{n}} \right) + \abs{B}\epsilon = \epsilon \left( B_{\text{max}} + \left( \sum_{n=0}^\infty \abs{a_{n}} \right) + \abs{B} \right) . \end{split} \end{equation*}
The expression in the parentheses on the right-hand side is a fixed number. Hence, we can make the right-hand side arbitrarily small by picking a small enough \(\epsilon > 0\text{.}\) So \(\sum_{n=0}^\infty c_n\) converges to \(AB\text{.}\)
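
A numeric check of the theorem, added here as an illustration (the two geometric series, both absolutely convergent, are arbitrary choices):

# Illustration only: the Cauchy product of two geometric series.
# sum 0.5^n = 2 and sum 0.25^n = 4/3, so the product should be 8/3.
N = 60
a = [0.5**n for n in range(N)]
b = [0.25**n for n in range(N)]
c = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]
print(sum(c), sum(a) * sum(b))   # both approximately 8/3 = 2.666...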

Example 2.6.6.

If both series are only conditionally convergent, the Cauchy product series need not even converge. Suppose we take \(a_n = b_n = {(-1)}^n \frac{1}{\sqrt{n+1}}\text{.}\) The series \(\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty b_n\) converges by the alternating series test; however, it does not converge absolutely as can be seen from the \(p\)-test. Let us look at the Cauchy product.
\begin{equation*} c_n = {(-1)}^n \left( \frac{1}{\sqrt{n+1}} + \frac{1}{\sqrt{2n}} + \frac{1}{\sqrt{3(n-1)}} + \cdots + \frac{1}{\sqrt{n+1}} \right) = {(-1)}^n \sum_{i=0}^n \frac{1}{\sqrt{(i+1)(n-i+1)}} . \end{equation*}
Therefore,
\begin{equation*} \abs{c_n} = \sum_{i=0}^n \frac{1}{\sqrt{(i+1)(n-i+1)}} \geq \sum_{i=0}^n \frac{1}{\sqrt{(n+1)(n+1)}} = 1 . \end{equation*}
The terms do not go to zero and hence \(\sum_{n=0}^\infty c_n\) cannot converge.
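
The failure is visible numerically as well (our illustration, computing the first few Cauchy product terms directly):

# Illustration only: the Cauchy product terms from Example 2.6.6.
# |c_n| >= 1 for every n, so the terms cannot tend to zero.
from math import sqrt

def c(n):
    return (-1) ** n * sum(1 / sqrt((i + 1) * (n - i + 1)) for i in range(n + 1))

print([round(abs(c(n)), 3) for n in range(8)])   # every value is at least 1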

Subsection 2.6.5 Power series

Fix \(x_0 \in \R\text{.}\) A power series about \(x_0\) is a series of the form
\begin{equation*} \sum_{n=0}^\infty a_n {(x-x_0)}^n . \end{equation*}
A power series is really a function of \(x\text{,}\) and many important functions in analysis can be written as a power series. We use the convention that \(0^0 = 1\) (that is, if \(x=x_0\) and \(n=0\)).
A power series is said to be convergent if there is at least one \(x \not= x_0\) that makes the series converge. If \(x=x_0\text{,}\) then the series always converges since all terms except the first are zero. If the series does not converge for any point \(x \not= x_0\text{,}\) we say that the series is divergent.

Example 2.6.7.

The series
\begin{equation*} \sum_{n=0}^\infty \frac{1}{n!} x^n \end{equation*}
is absolutely convergent for all \(x \in \R\) by the ratio test: For every \(x \in \R\text{,}\)
\begin{equation*} \lim_{n \to \infty} \abs{ \frac{\bigl(1/(n+1)!\bigr) \, x^{n+1}}{(1/n!) \, x^{n}} } = \lim_{n \to \infty} \frac{\abs{x}}{n+1} = 0. \end{equation*}
Recall from calculus that this series converges to \(e^x\text{.}\)
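
One can confirm this numerically; in the following illustration we add, the point \(x=3\) and the cutoff are arbitrary:

# Illustration only: a partial sum of sum x^n / n! versus the exponential.
from math import exp, factorial

x = 3.0
partial = sum(x**n / factorial(n) for n in range(30))
print(partial, exp(x))   # both approximately 20.0855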

Example 2.6.8.

The series
\begin{equation*} \sum_{n=1}^\infty \frac{1}{n} x^n \end{equation*}
converges absolutely for all \(x \in (-1,1)\) via the ratio test:
\begin{equation*} \lim_{n \to \infty} \abs{ \frac{\bigl(1/(n+1) \bigr) \, x^{n+1}}{(1/n) \, x^{n}} } = \lim_{n \to \infty} \abs{x} \frac{n}{n+1} = \abs{x} < 1 . \end{equation*}
The series converges at \(x=-1\text{,}\) as \(\sum_{n=1}^\infty \frac{{(-1)}^n}{n}\) converges by the alternating series test. But the power series does not converge absolutely at \(x=-1\text{,}\) because \(\sum_{n=1}^\infty \frac{1}{n}\) does not converge. The series diverges at \(x=1\text{.}\) When \(\abs{x} > 1\text{,}\) the series diverges via the ratio test.

Example 2.6.9.

The series
\begin{equation*} \sum_{n=1}^\infty n^n x^n \end{equation*}
diverges for all \(x \not= 0\text{.}\) Let us apply the root test:
\begin{equation*} \limsup_{n\to\infty} \, \abs{n^n x^n}^{1/n} = \limsup_{n\to\infty} \, n \abs{x} = \infty . \end{equation*}
Therefore, the series diverges for all \(x \not= 0\text{.}\)
Convergence of power series in general works analogously to the three examples above.

Proposition 2.6.10.

Let \(\sum_{n=0}^\infty a_n {(x-x_0)}^n\) be a power series. If the series is convergent, then either it converges absolutely for all \(x \in \R\text{,}\) or there exists a number \(\rho > 0\) such that the series converges absolutely whenever \(\abs{x-x_0} < \rho\) and diverges whenever \(\abs{x-x_0} > \rho\text{.}\)

The number \(\rho\) is called the radius of convergence of the power series. We write \(\rho = \infty\) if the series converges for all \(x\text{,}\) and we write \(\rho = 0\) if the series is divergent. At the endpoints, that is, if \(x = x_0+\rho\) or \(x = x_0-\rho\text{,}\) the proposition says nothing, and the series might or might not converge. See Figure 2.10. In Example 2.6.8 the radius of convergence is \(\rho=1\text{,}\) in Example 2.6.7, the radius of convergence is \(\rho=\infty\text{,}\) and in Example 2.6.9, the radius of convergence is \(\rho=0\text{.}\)

Figure 2.10. Convergence of a power series.

Proof.

Write
\begin{equation*} R \coloneqq \limsup_{n\to\infty} {\abs{a_n}}^{1/n} . \end{equation*}
We apply the root test,
\begin{equation*} L = \limsup_{n\to\infty} {\abs{a_n {(x-x_0)}^n}}^{1/n} = \abs{x-x_0} \limsup_{n\to\infty} {\abs{a_n}}^{1/n} = \abs{x-x_0} R . \end{equation*}
If \(R = \infty\text{,}\) then \(L=\infty\) for every \(x \not= x_0\text{,}\) and the series diverges by the root test. On the other hand, if \(R = 0\text{,}\) then \(L=0\) for every \(x\text{,}\) and the series converges absolutely for all \(x\text{.}\)
Suppose \(0 < R < \infty\text{.}\) The series converges absolutely if \(1 > L = R \abs{x-x_0}\text{,}\) that is,
\begin{equation*} \abs{x-x_0} < \nicefrac{1}{R} . \end{equation*}
The series diverges when \(1 < L = R \abs{x-x_0}\text{,}\) or
\begin{equation*} \abs{x-x_0} > \nicefrac{1}{R} . \end{equation*}
Letting \(\rho \coloneqq \nicefrac{1}{R}\) completes the proof.
It may be useful to restate what we have learned in the proof as a separate proposition.

Proposition 2.6.11.

Let \(\sum_{n=0}^\infty a_n {(x-x_0)}^n\) be a power series, and let \(R \coloneqq \limsup_{n\to\infty} {\abs{a_n}}^{1/n}\text{.}\) If \(R = \infty\text{,}\) the power series is divergent (\(\rho = 0\)). If \(R = 0\text{,}\) the power series converges absolutely for every \(x\) (\(\rho = \infty\)). If \(0 < R < \infty\text{,}\) the radius of convergence is \(\rho = \nicefrac{1}{R}\text{.}\)

Often, the radius of convergence is written as \(\rho = \nicefrac{1}{R}\) in all three cases, with the understanding of what \(\rho\) should be if \(R = 0\) or \(R = \infty\text{.}\)
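
The formula \(R = \limsup_{n\to\infty} {\abs{a_n}}^{1/n}\) can also be used numerically, at least crudely; in the following illustration we add, a single large \(n\) merely stands in for the limit supremum:

# Illustration only: a finite-n proxy for R = limsup |a_n|^(1/n).
# For a_n = 1/n (Example 2.6.8) the true value is R = 1, so rho = 1.
n = 10_000
print((1 / n) ** (1 / n))   # approximately 0.9991, suggesting R = 1
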
Convergent power series can be added and multiplied together, and multiplied by constants.

Proposition 2.6.12.

Let \(\sum_{n=0}^\infty a_n {(x-x_0)}^n\) and \(\sum_{n=0}^\infty b_n {(x-x_0)}^n\) be two power series, both convergent whenever \(\abs{x-x_0} < \rho\) for some \(\rho > 0\text{,}\) and let \(\alpha \in \R\text{.}\) Then for all \(x\) with \(\abs{x-x_0} < \rho\text{,}\)
\begin{equation*} \sum_{n=0}^\infty (a_n + b_n) {(x-x_0)}^n = \left( \sum_{n=0}^\infty a_n {(x-x_0)}^n \right) + \left( \sum_{n=0}^\infty b_n {(x-x_0)}^n \right) , \end{equation*}
\begin{equation*} \sum_{n=0}^\infty \alpha a_n {(x-x_0)}^n = \alpha \sum_{n=0}^\infty a_n {(x-x_0)}^n , \end{equation*}
and, with \(c_n \coloneqq \sum_{i=0}^n a_i b_{n-i}\text{,}\)
\begin{equation*} \sum_{n=0}^\infty c_n {(x-x_0)}^n = \left( \sum_{n=0}^\infty a_n {(x-x_0)}^n \right) \left( \sum_{n=0}^\infty b_n {(x-x_0)}^n \right) . \end{equation*}
The proposition has a straightforward proof using what we know about series in general, and power series in particular; we only sketch it. For all \(x\) with \(\abs{x-x_0} < \rho\text{,}\) we have two convergent series, so their term-by-term addition and multiplication by constants follow by the previous section. As for such \(x\) the series converge absolutely, we can apply Mertens' theorem to find the product of the two series. Consequently, after performing the algebraic operations, the radius of convergence of the resulting series is at least \(\rho\text{.}\) The radius of convergence of the result could be strictly larger than the radius of convergence of either of the series we started with. See the exercises.
Let us look at some examples of power series. Polynomials are simply finite power series: A polynomial is a power series where the \(a_n\) are zero for all \(n\) large enough. We expand a polynomial as a power series about any point \(x_0\) by writing the polynomial as a polynomial in \((x-x_0)\text{.}\) For example, \(2x^2-3x+4\) as a power series around \(x_0 = 1\) is
\begin{equation*} 2x^2-3x+4 = 3 + (x-1) + 2{(x-1)}^2 . \end{equation*}
We can also expand rational functions (that is, ratios of polynomials) as power series, although we will not completely prove this fact here. Notice that a series for a rational function only defines the function on an interval even if the function is defined elsewhere. For example, for the geometric series, we have that for \(x \in (-1,1)\text{,}\)
\begin{equation*} \frac{1}{1-x} = \sum_{n=0}^\infty x^n . \end{equation*}
The series diverges when \(\abs{x} > 1\text{,}\) even though \(\frac{1}{1-x}\) is defined for all \(x \not= 1\text{.}\)
We can use the geometric series together with rules for addition and multiplication of power series to expand rational functions as power series around \(x_0\text{,}\) as long as the denominator is not zero at \(x_0\text{.}\) We state without proof that this is always possible, and we give an example of such a computation using the geometric series.

Example 2.6.13.

Let us expand \(\frac{x}{1+2x+x^2}\) as a power series around the origin (\(x_0 = 0\)) and find the radius of convergence.
Write \(1+2x+x^2 = {(1+x)}^2 = {\bigl(1-(-x)\bigr)}^2\text{,}\) and suppose \(\abs{x} < 1\text{.}\) Compute
\begin{equation*} \begin{split} \frac{x}{1+2x+x^2} &= x \, {\left( \frac{1}{1-(-x)} \right)}^2 \\ &= x \, {\left( \sum_{n=0}^\infty {(-1)}^n x^n \right)}^2 \\ &= x \, \left( \sum_{n=0}^\infty c_n x^n \right) \\ &= \sum_{n=0}^\infty c_n x^{n+1} . \end{split} \end{equation*}
Using the formula for the product of series, we obtain \(c_0 = 1\text{,}\) \(c_1 = -1 -1 = -2\text{,}\) \(c_2 = 1+1+1 = 3\text{,}\) etc. Hence, for \(\abs{x} < 1\text{,}\)
\begin{equation*} \frac{x}{1+2x+x^2} = \sum_{n=1}^\infty {(-1)}^{n+1} n x^n . \end{equation*}
The radius of convergence is at least 1. We leave it to the reader to verify that the radius of convergence is exactly equal to 1.
You can use the method of partial fractions you know from calculus. For example, to find the power series for \(\frac{x^3+x}{x^2-1}\) at 0, write
\begin{equation*} \frac{x^3+x}{x^2-1} = x + \frac{1}{1+x} - \frac{1}{1-x} = x + \sum_{n=0}^\infty {(-1)}^n x^n - \sum_{n=0}^\infty x^n . \end{equation*}
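
Both expansions are easy to test numerically; in the following illustration we add, the sample point inside \((-1,1)\) is arbitrary:

# Illustration only: partial sums of sum (-1)^(n+1) n x^n from Example 2.6.13
# against the closed form x / (1 + 2x + x^2), at a point inside (-1, 1).
x = 0.3
series = sum((-1) ** (n + 1) * n * x**n for n in range(1, 200))
print(series, x / (1 + 2 * x + x**2))   # both approximately 0.177514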

Subsection 2.6.6 Exercises

Exercise 2.6.1.

Decide the convergence or divergence of the following series.
a) \(\displaystyle \sum_{n=1}^\infty \frac{1}{2^{2n+1}}\)        b) \(\displaystyle \sum_{n=1}^\infty \frac{{(-1)}^{n}(n-1)}{n}\)        c) \(\displaystyle \sum_{n=1}^\infty \frac{{(-1)}^n}{n^{1/10}}\)        d) \(\displaystyle \sum_{n=1}^\infty \frac{n^n}{{(n+1)}^{2n}}\)

Exercise 2.6.2.

Suppose both \(\sum_{n=0}^\infty a_n\) and \(\sum_{n=0}^\infty b_n\) converge absolutely. Show that the product series, \(\sum_{n=0}^\infty c_n\) where \(c_n = a_0 b_n + a_1 b_{n-1} + \cdots + a_n b_0\text{,}\) also converges absolutely.

Exercise 2.6.3.

(Challenging)   Let \(\sum_{n=1}^\infty a_n\) be conditionally convergent. Show that given an arbitrary \(x \in \R\) there exists a rearrangement of \(\sum_{n=1}^\infty a_n\) such that the rearranged series converges to \(x\text{.}\) Hint: See Example 2.6.4.

Exercise 2.6.4.

  1. Show that the alternating harmonic series \(\sum_{n=1}^\infty \frac{{(-1)}^{n+1}}{n}\) has a rearrangement such that for every interval \((x,y)\text{,}\) there exists a partial sum \(s_n\) of the rearranged series such that \(s_n \in (x,y)\text{.}\)
  2. Show that the rearrangement you found does not converge. See Example 2.6.4.
  3. Show that for every \(x \in \R\text{,}\) there exists a subsequence of partial sums \(\{ s_{n_k} \}_{k=1}^\infty\) of your rearrangement such that \(\lim\limits_{k\to\infty} s_{n_k} = x\text{.}\)

Exercise 2.6.5.

For the following power series, find if they are convergent or not, and if so find their radius of convergence.
a) \(\displaystyle \sum_{n=0}^\infty 2^n x^n\)        b) \(\displaystyle \sum_{n=0}^\infty n x^n\)        c) \(\displaystyle \sum_{n=0}^\infty n! \, x^n\)        d) \(\displaystyle \sum_{n=0}^\infty \frac{1}{(2n)!} {(x-10)}^n\)        e) \(\displaystyle \sum_{n=0}^\infty x^{2n}\)        f) \(\displaystyle \sum_{n=0}^\infty n! \, x^{n!}\)

Exercise 2.6.6.

Suppose \(\sum_{n=0}^\infty a_n x^n\) converges for \(x=1\text{.}\)
  1. What can you say about the radius of convergence?
  2. If you further know that at \(x=1\) the convergence is not absolute, what can you say?

Exercise 2.6.7.

Expand \(\dfrac{x}{4-x^2}\) as a power series around \(x_0 = 0\text{,}\) and compute its radius of convergence.

Exercise 2.6.8.

  1. Find an example where the radii of convergence of \(\sum_{n=0}^\infty a_n x^n\) and \(\sum_{n=0}^\infty b_n x^n\) are both 1, but the radius of convergence of the sum of the two series is infinite.
  2. (Trickier) Find an example where the radii of convergence of \(\sum_{n=0}^\infty a_n x^n\) and \(\sum_{n=0}^\infty b_n x^n\) are both 1, but the radius of convergence of the product of the two series is infinite.

Exercise 2.6.9.

Figure out how to compute the radius of convergence using the ratio test. That is, suppose \(\sum_{n=0}^\infty a_n x^n\) is a power series and \(R \coloneqq \lim_{n\to\infty} \frac{\abs{a_{n+1}}}{\abs{a_n}}\) exists or is \(\infty\text{.}\) Find the radius of convergence and prove your claim.

Exercise 2.6.10.

  1. Prove that \(\lim_{n\to\infty} n^{1/n} = 1\) using the following procedure: Write \(n^{1/n} = 1+b_n\) and note \(b_n > 0\text{.}\) Then show that \({(1+b_n)}^n \geq \frac{n(n-1)}{2}b_n^2\) and use this to show that \(\lim\limits_{n\to\infty} b_n = 0\text{.}\)
  2. Use the result of part a) to show that if \(\sum_{n=0}^\infty a_n x^n\) is a convergent power series with radius of convergence \(R\text{,}\) then \(\sum_{n=0}^\infty n a_n x^n\) is also convergent with the same radius of convergence.
There are different notions of summability (convergence) of a series than just the one we have seen. A common one is Cesàro summability, named for the Italian mathematician Ernesto Cesàro (1859–1906). Let \(\sum_{n=1}^\infty a_n\) be a series and let \(s_n\) be the \(n\)th partial sum. The series is said to be Cesàro summable to \(a\) if
\begin{equation*} a = \lim_{n\to \infty} \frac{s_1 + s_2 + \cdots + s_n}{n} . \end{equation*}
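
For instance (an illustration we add, using a series that converges in the usual sense), the Cesàro means of the partial sums of \(\sum_{n=1}^\infty \nicefrac{1}{2^n} = 1\) also tend to 1, as part 1 of the exercise below predicts:

# Illustration only: Cesàro means (s_1 + ... + s_n) / n for sum 1/2^n = 1.
s, total = 0.0, 0.0
for n in range(1, 10_001):
    s += 0.5 ** n       # s is now the n-th partial sum
    total += s          # running sum s_1 + ... + s_n
print(total / n)        # approximately 1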

Exercise 2.6.11.

(Challenging)  
  1. If \(\sum_{n=1}^\infty a_n\) is convergent to \(a\) (in the usual sense), show that \(\sum_{n=1}^\infty a_n\) is Cesàro summable (see above) to \(a\text{.}\)
  2. Show that in the sense of Cesàro \(\sum_{n=1}^\infty {(-1)}^n\) is summable to \(-\nicefrac{1}{2}\text{.}\)
  3. Let \(a_n \coloneqq k\) when \(n = k^3\) for some \(k \in \N\text{,}\) \(a_n \coloneqq -k\) when \(n = k^3+1\) for some \(k \in \N\text{,}\) otherwise let \(a_n \coloneqq 0\text{.}\) Show that \(\sum_{n=1}^\infty a_n\) diverges in the usual sense (in fact, both the sequence of terms and the partial sums are unbounded), but it is Cesàro summable to 0 (seems a little paradoxical at first sight).

Exercise 2.6.12.

(Challenging)   Show that the monotonicity in the alternating series test is necessary. That is, find a sequence of positive real numbers \(\{ x_n \}_{n=1}^\infty\) with \(\lim_{n\to\infty} x_n = 0\) but such that \(\sum_{n=1}^\infty {(-1)}^n x_n\) diverges.

Exercise 2.6.13.

Find a series \(\sum_{n=1}^\infty x_n\) that converges, but \(\sum_{n=1}^\infty x_n^2\) diverges. Hint: Compare Exercise 2.5.14.

Exercise 2.6.14.

Suppose \(\{ c_n \}_{n=1}^\infty\) is a sequence. Prove that for every \(r \in (0,1)\text{,}\) there exists a strictly increasing sequence \(\{ n_k \}_{k=1}^\infty\) of natural numbers (\(n_{k+1} > n_k\)) such that
\begin{equation*} \sum_{k=1}^\infty c_k x^{n_k} \end{equation*}
converges absolutely for all \(x \in [-r,r]\text{.}\)

Exercise 2.6.15.

(Tonelli/Fubini for sums, challenging)   Let \(\{ x_{k,\ell} \}_{k=1,\ell=1}^\infty\) denote a doubly indexed sequence and let \(\sigma \colon \N \to \N^2\) be a bijection. Consider the series
\begin{equation*} \text{i)}~\sum_{i=1}^\infty x_{\sigma(i)}, \qquad \text{ii)}~\sum_{k=1}^\infty \left( \sum_{\ell=1}^\infty x_{k,\ell} \right), \qquad \text{iii)}~\sum_{\ell=1}^\infty \left( \sum_{k=1}^\infty x_{k,\ell} \right) . \end{equation*}
The expressions ii) and iii) are series of series and so we say they converge if the inner series always converges and the outer series then also converges.
  1. (Tonelli) Suppose \(x_{k,\ell} \geq 0\) for all \(k,\ell\text{.}\) Show that the three series i), ii), iii) either all diverge (to \(\infty\)) or all converge to the same number. In the case of divergence, some of the "inner" series might be infinite, in which case we consider the entire sum to diverge.
  2. (Fubini) Suppose i) converges absolutely. Show that ii) and iii) converge and they both converge to the same number as i).