
Section 8.3 The derivative

Note: 2–3 lectures

Subsection 8.3.1 The derivative

For a function \(f \colon \R \to \R\text{,}\) we defined the derivative at \(x\) as

\begin{equation*} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} . \end{equation*}

In other words, there is a number \(a\) (the derivative of \(f\) at \(x\)) such that

\begin{equation*} \lim_{h \to 0} \abs{\frac{f(x+h)-f(x)}{h} - a} = \lim_{h \to 0} \abs{\frac{f(x+h)-f(x) - ah}{h}} = \lim_{h \to 0} \frac{\sabs{f(x+h)-f(x) - ah}}{\sabs{h}} = 0. \end{equation*}

Multiplying by \(a\) is a linear map in one dimension: \(h \mapsto ah\text{.}\) Namely, we think of \(a \in L(\R^1,\R^1)\text{,}\) which is the best linear approximation of how \(f\) changes near \(x\text{.}\) We use this interpretation to extend differentiation to more variables.
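To make the "best linear approximation" concrete, here is a small numerical sketch (our own illustration, not part of the text): for \(f(x) = x^2\) at \(x = 1.5\text{,}\) the quotient \(\sabs{f(x+h)-f(x)-ah}/\sabs{h}\) with \(a = 3\) shrinks like \(h\text{.}\)

```python
def f(x):
    return x * x

x, a = 1.5, 3.0   # a = f'(1.5) = 2 * 1.5
errs = []
for h in [0.1, 0.01, 0.001]:
    # |f(x+h) - f(x) - a*h| / |h| = |h^2| / |h| = h for this f
    errs.append(abs(f(x + h) - f(x) - a * h) / abs(h))
print(errs)  # approximately [0.1, 0.01, 0.001]: the error shrinks like h
```

Any other choice of \(a\) leaves an error term linear in \(h\text{,}\) so the quotient would not go to zero.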

Definition 8.3.1.

Let \(U \subset \R^n\) be open and \(f \colon U \to \R^m\) a function. We say \(f\) is differentiable at \(x \in U\) if there exists an \(A \in L(\R^n,\R^m)\) such that

\begin{equation*} \lim_{\substack{h \to 0\\h\in \R^n}} \frac{\snorm{f(x+h)-f(x) - Ah}}{\snorm{h}} = 0 . \end{equation*}

We write \(Df(x) := A\text{,}\) or \(f'(x) := A\text{,}\) and we say \(A\) is the derivative of \(f\) at \(x\text{.}\) When \(f\) is differentiable at every \(x \in U\text{,}\) we say simply that \(f\) is differentiable. See Figure 8.3 for an illustration.


Figure 8.3. Illustration of a derivative for a function \(f \colon \R^2 \to \R\text{.}\) The vector \(h\) is shown in the \(x_1x_2\)-plane based at \((x_1,x_2)\text{,}\) and the vector \(Ah \in \R^1\) is shown along the \(y\) direction.

For a differentiable function, the derivative of \(f\) is a function from \(U\) to \(L(\R^n,\R^m)\text{.}\) Compare to the one-dimensional case, where the derivative is a function from \(U\) to \(\R\text{,}\) but we really want to think of \(\R\) here as \(L(\R^1,\R^1)\text{.}\) As in one dimension, the idea is that a differentiable mapping is “infinitesimally close” to a linear mapping, and this linear mapping is the derivative.

Notice which norms are being used in the definition. The norm in the numerator is on \(\R^m\text{,}\) and the norm in the denominator is on \(\R^n\) where \(h\) lives. Normally it is understood that \(h \in \R^n\) from context (the formula makes no sense otherwise). We will not explicitly say so from now on.

We have again cheated somewhat by saying that \(A\) is the derivative. We have not yet shown that it is unique; let us do that now.

Proof.

Suppose \(A, B \in L(\R^n,\R^m)\) both satisfy the definition of the derivative of \(f\) at \(x\text{.}\) Take \(h \in \R^n\text{,}\) \(h \not= 0\text{.}\) Compute

\begin{equation*} \begin{split} \frac{\snorm{(A-B)h}}{\snorm{h}} & = \frac{\snorm{-\bigl(f(x+h)-f(x) - Ah\bigr) + f(x+h)-f(x) - Bh}}{\snorm{h}} \\ & \leq \frac{\snorm{f(x+h)-f(x) - Ah}}{\snorm{h}} + \frac{\snorm{f(x+h)-f(x) - Bh}}{\snorm{h}} . \end{split} \end{equation*}

So \(\frac{\snorm{(A-B)h}}{\snorm{h}} \to 0\) as \(h \to 0\text{.}\) Given \(\epsilon > 0\text{,}\) for all nonzero \(h\) in some \(\delta\)-ball around the origin we have

\begin{equation*} \epsilon > \frac{\snorm{(A-B)h}}{\snorm{h}} = \norm{(A-B)\frac{h}{\snorm{h}}} . \end{equation*}

For any given \(v \in \R^n\) with \(\snorm{v}=1\text{,}\) let \(h = (\nicefrac{\delta}{2}) \, v\text{,}\) then \(\snorm{h} < \delta\) and \(\frac{h}{\snorm{h}} = v\text{.}\) So \(\snorm{(A-B)v} < \epsilon\text{.}\) Taking the supremum over all \(v\) with \(\snorm{v} = 1\text{,}\) we get the operator norm \(\snorm{A-B} \leq \epsilon\text{.}\) As \(\epsilon > 0\) was arbitrary, \(\snorm{A-B} = 0\text{,}\) or in other words \(A = B\text{.}\)

Example 8.3.3.

If \(f(x) = Ax\) for a linear mapping \(A\text{,}\) then \(f'(x) = A\text{:}\)

\begin{equation*} \frac{\snorm{f(x+h)-f(x) - Ah}}{\snorm{h}} = \frac{\snorm{A(x+h)-Ax - Ah}}{\snorm{h}} = \frac{0}{\snorm{h}} = 0 . \end{equation*}

Example 8.3.4.

Let \(f \colon \R^2 \to \R^2\) be defined by

\begin{equation*} f(x,y) = \bigl(f_1(x,y),f_2(x,y)\bigr) := (1+x+2y+x^2,2x+3y+xy). \end{equation*}

Let us show that \(f\) is differentiable at the origin and let us compute the derivative, directly using the definition. If the derivative exists, it is in \(L(\R^2,\R^2)\text{,}\) so it can be represented by a \(2\)-by-\(2\) matrix \(\left[\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right]\text{.}\) Suppose \(h = (h_1,h_2)\text{.}\) We need the following expression to go to zero.

\begin{multline*} \frac{\snorm{ f(h_1,h_2)-f(0,0) - (ah_1 +bh_2 , ch_1+dh_2)} }{\snorm{(h_1,h_2)}} = \\ \frac{\sqrt{ {\bigl((1-a)h_1 + (2-b)h_2 + h_1^2\bigr)}^2 + {\bigl((2-c)h_1 + (3-d)h_2 + h_1h_2\bigr)}^2}}{\sqrt{h_1^2+h_2^2}} . \end{multline*}

If we choose \(a=1\text{,}\) \(b=2\text{,}\) \(c=2\text{,}\) \(d=3\text{,}\) the expression becomes

\begin{equation*} \frac{\sqrt{ h_1^4 + h_1^2h_2^2}}{\sqrt{h_1^2+h_2^2}} = \sabs{h_1} \frac{\sqrt{ h_1^2 + h_2^2}}{\sqrt{h_1^2+h_2^2}} = \sabs{h_1} . \end{equation*}

This expression does indeed go to zero as \(h \to 0\text{.}\) The function \(f\) is differentiable at the origin and the derivative \(f'(0)\) is represented by the matrix \(\left[\begin{smallmatrix}1&2\\2&3\end{smallmatrix}\right]\text{.}\)
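The computation above is easy to check numerically. The following sketch (our own illustration; the function and the candidate matrix are the ones from Example 8.3.4) evaluates the difference quotient at the origin and confirms it equals \(\sabs{h_1}\text{.}\)

```python
import math

def f(x, y):
    # the map from Example 8.3.4
    return (1 + x + 2*y + x*x, 2*x + 3*y + x*y)

def quotient(h1, h2):
    # ||f(h) - f(0) - Ah|| / ||h|| with A = [[1, 2], [2, 3]]
    f1, f2 = f(h1, h2)
    r1 = f1 - 1 - (1*h1 + 2*h2)   # first component of the remainder
    r2 = f2 - 0 - (2*h1 + 3*h2)   # second component of the remainder
    return math.hypot(r1, r2) / math.hypot(h1, h2)

for h1, h2 in [(0.1, 0.2), (0.01, -0.03), (1e-4, 1e-4)]:
    print(quotient(h1, h2), abs(h1))  # the two columns agree
```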

Proposition 8.3.5.

Let \(U \subset \R^n\) be open and \(f \colon U \to \R^m\) differentiable at \(p \in U\text{.}\) Then \(f\) is continuous at \(p\text{.}\)

Proof.

Another way to write the differentiability of \(f\) at \(p\) is to consider

\begin{equation*} r(h) := f(p+h)-f(p) - f'(p) h . \end{equation*}

That \(f\) is differentiable at \(p\) means that \(\frac{\snorm{r(h)}}{\snorm{h}}\) goes to zero as \(h \to 0\text{.}\) Hence \(\snorm{r(h)} = \frac{\snorm{r(h)}}{\snorm{h}} \snorm{h}\) goes to zero as well, so \(r(h)\) itself goes to zero. The mapping \(h \mapsto f'(p) h\) is a linear mapping between finite-dimensional spaces, hence continuous, and \(f'(p) h \to 0\) as \(h \to 0\text{.}\) Thus, \(f(p+h)\) must go to \(f(p)\) as \(h \to 0\text{.}\) That is, \(f\) is continuous at \(p\text{.}\)

The derivative is itself a linear operator on the space of differentiable functions.

Proposition 8.3.6.

Suppose \(U \subset \R^n\) is open, \(f \colon U \to \R^m\) and \(g \colon U \to \R^m\) are differentiable at \(p \in U\text{,}\) and \(\alpha \in \R\text{.}\) Then \(f+g\) and \(\alpha f\) are differentiable at \(p\text{,}\)

\begin{equation*} (f+g)'(p) = f'(p) + g'(p), \qquad \text{and} \qquad (\alpha f)'(p) = \alpha f'(p) . \end{equation*}

Proof.

Let \(h \in \R^n\text{,}\) \(h \not= 0\text{.}\) Then

\begin{multline*} \frac{\norm{f(p+h)+g(p+h)-\bigl(f(p)+g(p)\bigr) - \bigl(f'(p) + g'(p)\bigr)h}}{\snorm{h}} \\ \leq \frac{\norm{f(p+h)-f(p) - f'(p)h}}{\snorm{h}} + \frac{\norm{g(p+h)-g(p) - g'(p)h}}{\snorm{h}} , \end{multline*}

and

\begin{equation*} \frac{\norm{\alpha f(p+h) - \alpha f(p) - \alpha f'(p)h}}{\snorm{h}} = \sabs{\alpha} \frac{\norm{f(p+h)-f(p) - f'(p)h}}{\snorm{h}} . \end{equation*}

The limits as \(h\) goes to zero of the right-hand sides are zero by hypothesis. The result follows.

If \(A \in L(\R^n,\R^m)\) and \(B \in L(\R^m,\R^k)\) are linear maps, then each is its own derivative. The composition \(BA \in L(\R^n,\R^k)\) is also its own derivative, and so the derivative of the composition is the composition of the derivatives. As differentiable maps are “infinitesimally close” to linear maps, they have the same property:

Theorem 8.3.7. (Chain rule)

Let \(U \subset \R^n\) be open and let \(f \colon U \to \R^m\) be differentiable at \(p \in U\text{.}\) Let \(V \subset \R^m\) be open with \(f(U) \subset V\text{,}\) and let \(g \colon V \to \R^k\) be differentiable at \(f(p)\text{.}\) Then \(F := g \circ f\) is differentiable at \(p\) and

\begin{equation*} F'(p) = g'\bigl(f(p)\bigr) f'(p) . \end{equation*}

Without the points where things are evaluated, this is sometimes written as \(F' = {(g \circ f)}' = g' f'\text{.}\) The way to understand it is that the derivative of the composition \(g \circ f\) is the composition of the derivatives of \(g\) and \(f\text{.}\) If \(f'(p) = A\) and \(g'\bigl(f(p)\bigr) = B\text{,}\) then \(F'(p) = BA\text{,}\) just as for linear maps.

Proof.

Let \(A := f'(p)\) and \(B := g'\bigl(f(p)\bigr)\text{.}\) Take a nonzero \(h \in \R^n\) and write \(q := f(p)\text{,}\) \(k := f(p+h)-f(p)\text{.}\) Let

\begin{equation*} r(h) := f(p+h)-f(p) - A h . \end{equation*}

Then \(r(h) = k-Ah\) or \(Ah = k-r(h)\text{,}\) and \(f(p+h) = q+k\text{.}\) We look at the quantity we need to go to zero:

\begin{equation*} \begin{split} \frac{\snorm{F(p+h)-F(p) - BAh}}{\snorm{h}} & = \frac{\snorm{g\bigl(f(p+h)\bigr)-g\bigl(f(p)\bigr) - BAh}}{\snorm{h}} \\ & = \frac{\snorm{g(q+k)-g(q) - B\bigl(k-r(h)\bigr)}}{\snorm{h}} \\ & \leq \frac {\snorm{g(q+k)-g(q) - Bk}} {\snorm{h}} + \snorm{B} \frac {\snorm{r(h)}} {\snorm{h}} \\ & = \frac {\snorm{g(q+k)-g(q) - Bk}} {\snorm{k}} \frac {\snorm{f(p+h)-f(p)}} {\snorm{h}} + \snorm{B} \frac {\snorm{r(h)}} {\snorm{h}} . \end{split} \end{equation*}

First, \(\snorm{B}\) is a constant and \(f\) is differentiable at \(p\text{,}\) so the term \(\snorm{B}\frac{\snorm{r(h)}}{\snorm{h}}\) goes to 0. Next, as \(f\) is continuous at \(p\text{,}\) \(k\) goes to 0 as \(h\) goes to 0. Thus \(\frac {\snorm{g(q+k)-g(q) - Bk}} {\snorm{k}}\) goes to 0, because \(g\) is differentiable at \(q\text{.}\) Finally,

\begin{equation*} \frac {\snorm{f(p+h)-f(p)}} {\snorm{h}} \leq \frac {\snorm{f(p+h)-f(p)-Ah}} {\snorm{h}} + \frac {\snorm{Ah}} {\snorm{h}} \leq \frac {\snorm{f(p+h)-f(p)-Ah}} {\snorm{h}} + \snorm{A} . \end{equation*}

As \(f\) is differentiable at \(p\text{,}\) for small enough \(h\text{,}\) the quantity \(\frac{\snorm{f(p+h)-f(p)-Ah}}{\snorm{h}}\) is bounded. Hence, the term \(\frac {\snorm{f(p+h)-f(p)}} {\snorm{h}}\) stays bounded as \(h\) goes to 0. Therefore, \(\frac{\snorm{F(p+h)-F(p) - BAh}}{\snorm{h}}\) goes to zero, and \(F'(p) = BA\text{,}\) which is what was claimed.
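Here is a numerical sketch of the chain rule (our own illustration, with made-up polynomial maps \(f\) and \(g\)): the product \(BA\) of the analytically computed derivatives matches a central-difference approximation of \(F'(p)\text{.}\)

```python
def f(x, y):
    return (x*x*y, x + y)

def g(u, v):
    return (u + v*v, u*v)

def F(x, y):
    return g(*f(x, y))

p = (1.0, 2.0)
A = [[2*p[0]*p[1], p[0]**2], [1.0, 1.0]]   # f'(p), computed by hand
u, v = f(*p)
B = [[1.0, 2*v], [v, u]]                   # g'(f(p)), computed by hand
BA = [[sum(B[i][k]*A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

eps = 1e-6
def dF(j):  # central difference of F in the direction e_j
    dp = [0.0, 0.0]; dp[j] = eps
    Fp = F(p[0]+dp[0], p[1]+dp[1])
    Fm = F(p[0]-dp[0], p[1]-dp[1])
    return [(Fp[i] - Fm[i]) / (2*eps) for i in range(2)]

num = [[dF(j)[i] for j in range(2)] for i in range(2)]
print(BA)   # [[10.0, 7.0], [14.0, 5.0]]
print(num)  # approximately the same matrix
```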

Subsection 8.3.2 Partial derivatives

There is another way to generalize the derivative from one dimension. We hold all but one variable constant and take the regular one-variable derivative.

Definition 8.3.8.

Let \(f \colon U \to \R\) be a function on an open set \(U \subset \R^n\text{.}\) If the following limit exists, we write

\begin{equation*} \frac{\partial f}{\partial x_j} (x) := \lim_{h\to 0}\frac{f(x_1,\ldots,x_{j-1},x_j+h,x_{j+1},\ldots,x_n)-f(x)}{h} = \lim_{h\to 0}\frac{f(x+h e_j)-f(x)}{h} . \end{equation*}

We call \(\frac{\partial f}{\partial x_j} (x)\) the partial derivative of \(f\) with respect to \(x_j\text{.}\) See Figure 8.4. Here \(h\) is a number, not a vector.

For a mapping \(f \colon U \to \R^m\) we write \(f = (f_1,f_2,\ldots,f_m)\text{,}\) where \(f_k\) are real-valued functions. We then take partial derivatives of the components, \(\frac{\partial f_k}{\partial x_j}\text{.}\)


Figure 8.4. Illustration of a partial derivative for a function \(f \colon \R^2 \to \R\text{.}\) The \(yx_2\)-plane where \(x_1\) is fixed is marked in dotted line, and the slope of the tangent line in the \(yx_2\)-plane is \(\frac{\partial f}{\partial x_2}(x_1,x_2)\text{.}\)

Partial derivatives are easier to compute with all the machinery of calculus, and they provide a way to compute the derivative of a function.

Proposition 8.3.9.

Let \(U \subset \R^n\) be open and let \(f \colon U \to \R^m\) be differentiable at \(p \in U\text{.}\) Then all the partial derivatives \(\frac{\partial f_k}{\partial x_j}(p)\) exist, and in terms of the standard bases of \(\R^n\) and \(\R^m\text{,}\) \(f'(p)\) is represented by the matrix

\begin{equation*} \begin{bmatrix} \frac{\partial f_1}{\partial x_1}(p) & \frac{\partial f_1}{\partial x_2}(p) & \cdots & \frac{\partial f_1}{\partial x_n}(p) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1}(p) & \frac{\partial f_m}{\partial x_2}(p) & \cdots & \frac{\partial f_m}{\partial x_n}(p) \end{bmatrix} . \end{equation*}

In other words,

\begin{equation*} f'(p) \, e_j = \sum_{k=1}^m \frac{\partial f_k}{\partial x_j}(p) \,e_k . \end{equation*}

If \(v = \sum_{j=1}^n c_j\, e_j = (c_1,c_2,\ldots,c_n)\text{,}\) then

\begin{equation*} f'(p) \, v = \sum_{j=1}^n \sum_{k=1}^m c_j \frac{\partial f_k}{\partial x_j}(p) \,e_k = \sum_{k=1}^m \left( \sum_{j=1}^n c_j \frac{\partial f_k}{\partial x_j}(p) \right) \,e_k . \end{equation*}

Proof.

Fix a \(j\) and note that for nonzero \(h\text{,}\)

\begin{equation*} \begin{split} \norm{\frac{f(p+h e_j)-f(p)}{h} - f'(p) \, e_j} & = \norm{\frac{f(p+h e_j)-f(p) - f'(p) \, h e_j}{h}} \\ & = \frac{\snorm{f(p+h e_j)-f(p) - f'(p) \, h e_j}}{\snorm{h e_j}} . \end{split} \end{equation*}

As \(h\) goes to 0, the right-hand side goes to zero by differentiability of \(f\text{,}\) and hence

\begin{equation*} \lim_{h \to 0} \frac{f(p+h e_j)-f(p)}{h} = f'(p) \, e_j . \end{equation*}

Let us represent \(f\) by components \(f = (f_1,f_2,\ldots,f_m)\text{,}\) since it is vector-valued. Taking a limit in \(\R^m\) is the same as taking the limit in each component separately. For every \(k\text{,}\) the partial derivative

\begin{equation*} \frac{\partial f_k}{\partial x_j} (p) = \lim_{h \to 0} \frac{f_k(p+h e_j)-f_k(p)}{h} \end{equation*}

exists and is equal to the \(k\)th component of \(f'(p)\, e_j\text{,}\) and we are done.
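The proposition is easy to probe numerically. In the sketch below (our own example, with a made-up map \(f \colon \R^2 \to \R^2\)), the matrix of partial derivatives, approximated by central differences, reproduces the directional difference quotient for \(f'(p)\,v\text{.}\)

```python
def f(x, y):
    return (x*y*y, x - y)

p, eps = (2.0, 1.0), 1e-6

def partial(k, j):
    # d f_k / d x_j at p, by a central difference
    dp = [0.0, 0.0]; dp[j] = eps
    return (f(p[0]+dp[0], p[1]+dp[1])[k]
            - f(p[0]-dp[0], p[1]-dp[1])[k]) / (2*eps)

# rows indexed by components f_k, columns by variables x_j
M = [[partial(k, j) for j in range(2)] for k in range(2)]

v = (3.0, 1.0)
Mv = [sum(M[k][j]*v[j] for j in range(2)) for k in range(2)]

t = 1e-6   # difference quotient of t -> f(p + t v) at t = 0
q = [(f(p[0]+t*v[0], p[1]+t*v[1])[k]
      - f(p[0]-t*v[0], p[1]-t*v[1])[k]) / (2*t) for k in range(2)]
print(Mv, q)  # both approximately (7, 2)
```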

The converse of the proposition is not true: Mere existence of the partial derivatives does not guarantee that the function is differentiable. See the exercises. However, when the partial derivatives are continuous, we will prove that the converse holds. One of the consequences of the proposition is that if \(f\) is differentiable on \(U\text{,}\) then \(f' \colon U \to L(\R^n,\R^m)\) is a continuous function if and only if all the \(\frac{\partial f_k}{\partial x_j}\) are continuous functions.

Subsection 8.3.3 Gradients, curves, and directional derivatives

Let \(U \subset \R^n\) be open and \(f \colon U \to \R\) a differentiable function. We define the gradient as

\begin{equation*} \nabla f (x) := \sum_{j=1}^n \frac{\partial f}{\partial x_j} (x)\, e_j . \end{equation*}

The gradient gives a way to represent the action of the derivative as a dot product: \(f'(x)\,v = \nabla f(x) \cdot v\text{.}\)

Suppose \(\gamma \colon (a,b) \subset \R \to \R^n\) is differentiable. Such a function and its image is sometimes called a curve, or a differentiable curve. Write \(\gamma = (\gamma_1,\gamma_2,\ldots,\gamma_n)\text{.}\) For the purposes of computation, we identify \(L(\R^1)\) and \(\R\) as we did when we defined the derivative in one variable. We also identify \(L(\R^1,\R^n)\) with \(\R^n\text{.}\) We treat \(\gamma^{\:\prime}(t)\) both as an operator in \(L(\R^1,\R^n)\) and the vector \(\bigl(\gamma_1^{\:\prime}(t), \gamma_2^{\:\prime}(t),\ldots,\gamma_n^{\:\prime}(t)\bigr)\) in \(\R^n\text{.}\) Using Proposition 8.3.9, if \(v\in \R^n\) is \(\gamma^{\:\prime}(t)\) acting as a vector, then \(h \mapsto h \, v\) (for \(h \in \R^1 = \R\)) is \(\gamma^{\:\prime}(t)\) acting as an operator in \(L(\R^1,\R^n)\text{.}\) We often use this slight abuse of notation when dealing with curves. The vector \(\gamma^{\:\prime}(t)\) is called a tangent vector. See Figure 8.5.


Figure 8.5. Differentiable curve and its derivative as a vector (for clarity assuming \(\gamma\) defined on \([a,b]\)). The tangent vector \(\gamma^{\:\prime}(t)\) points along the curve.

Suppose \(\gamma\bigl((a,b)\bigr) \subset U\) and let

\begin{equation*} g(t) := f\bigl(\gamma(t)\bigr) . \end{equation*}

The function \(g\) is differentiable. Treating \(g'(t)\) as a number,

\begin{equation*} g'(t) = f'\bigl(\gamma(t)\bigr) \gamma^{\:\prime}(t) = \sum_{j=1}^n \frac{\partial f}{\partial x_j} \bigl(\gamma(t)\bigr) \frac{d\gamma_j}{dt} (t) = \sum_{j=1}^n \frac{\partial f}{\partial x_j} \frac{d\gamma_j}{dt} . \end{equation*}

For convenience, we often leave out the points where we are evaluating, such as above on the far right-hand side. With the notation of the gradient and the dot product the equation becomes

\begin{equation*} g'(t) = (\nabla f) \bigl(\gamma(t)\bigr) \cdot \gamma^{\:\prime}(t) = \nabla f \cdot \gamma^{\:\prime}. \end{equation*}

We use this idea to define derivatives in a specific direction. A direction is represented by a unit vector. Pick \(u \in \R^n\) such that \(\snorm{u} = 1\text{,}\) and fix \(x \in U\text{.}\) We define the directional derivative as

\begin{equation*} D_u f (x) := \frac{d}{dt}\Big|_{t=0} \bigl[ f(x+tu) \bigr] = \lim_{h\to 0} \frac{f(x+hu)-f(x)}{h} , \end{equation*}

where the notation \(\frac{d}{dt}\big|_{t=0}\) represents the derivative evaluated at \(t=0\text{.}\) Taking the standard basis vector \(e_j\) we find \(\frac{\partial f}{\partial x_j} = D_{e_j} f\text{.}\) For this reason, sometimes the notation \(\frac{\partial f}{\partial u}\) is used instead of \(D_u f\text{.}\)

Let \(\gamma\) be defined by

\begin{equation*} \gamma(t) := x + tu . \end{equation*}

Then \(\gamma^{\:\prime}(t) = u\) for all \(t\text{.}\) Let us see what happens to \(f\) when we travel along \(\gamma\text{:}\)

\begin{equation*} D_u f (x) = \frac{d}{dt}\Big|_{t=0} \bigl[ f(x+tu) \bigr] = (\nabla f) \bigl(\gamma(0)\bigr) \cdot \gamma^{\:\prime}(0) = (\nabla f) (x) \cdot u . \end{equation*}

In fact, this computation holds whenever \(\gamma\) is any curve such that \(\gamma(0) = x\) and \(\gamma^{\:\prime}(0) = u\text{.}\)
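A quick numerical illustration (ours, with a made-up polynomial \(f\)): the difference quotient defining \(D_u f(x)\) agrees with the dot product \((\nabla f)(x) \cdot u\text{.}\)

```python
def f(x, y):
    return x*x + 3*x*y

x, y = 1.0, 2.0
grad = (2*x + 3*y, 3*x)           # nabla f(1, 2) = (8, 3), computed by hand

u = (0.6, 0.8)                    # a unit vector: 0.36 + 0.64 = 1
h = 1e-6
# central difference quotient for t -> f((x, y) + t u) at t = 0
quotient = (f(x + h*u[0], y + h*u[1]) - f(x - h*u[0], y - h*u[1])) / (2*h)
dot = grad[0]*u[0] + grad[1]*u[1]
print(quotient, dot)  # both approximately 7.2
```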

Suppose \((\nabla f)(x) \neq 0\text{.}\) By the Cauchy–Schwarz inequality,

\begin{equation*} \sabs{D_u f(x)} \leq \snorm{(\nabla f)(x)} . \end{equation*}

Equality is achieved when \(u\) is a scalar multiple of \((\nabla f)(x)\text{.}\) That is, when

\begin{equation*} u = \frac{(\nabla f)(x)}{\snorm{(\nabla f)(x)}} , \end{equation*}

we get \(D_u f(x) = \snorm{(\nabla f)(x)}\text{.}\) The gradient points in the direction in which the function grows fastest, in other words, in the direction in which \(D_u f(x)\) is maximal.
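This maximality claim can also be probed numerically. The sketch below (our own illustration, with an arbitrary gradient vector) samples unit vectors \(u\) and confirms that \((\nabla f)(x) \cdot u\) never exceeds \(\snorm{(\nabla f)(x)}\) and comes within sampling error of it in the gradient direction.

```python
import math

grad = (8.0, 3.0)                 # e.g. the gradient of some f at some x
norm = math.hypot(*grad)          # sqrt(73), about 8.544

# sample unit vectors u = (cos t, sin t) around the whole circle
best = max(grad[0]*math.cos(t) + grad[1]*math.sin(t)
           for t in [k * 2*math.pi / 3600 for k in range(3600)])
print(best, norm)  # best is within the sampling error of norm
```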

Subsection 8.3.4 The Jacobian

Definition 8.3.10.

Let \(U \subset \R^n\) and \(f \colon U \to \R^n\) be a differentiable mapping. Define the Jacobian, or the Jacobian determinant, of \(f\) at \(x\) as

\begin{equation*} J_f(x) := \det\bigl( f'(x) \bigr) . \end{equation*}

Sometimes \(J_f\) is written as

\begin{equation*} \frac{\partial(f_1,f_2,\ldots,f_n)}{\partial(x_1,x_2,\ldots,x_n)} . \end{equation*}

This last piece of notation may seem somewhat confusing, but it is quite useful when we need to specify the exact variables and function components used, as we will do, for example, in the implicit function theorem.

The Jacobian \(J_f\) is a real-valued function, and when \(n=1\) it is simply the derivative. From the chain rule and the fact that \(\det(AB) = \det(A)\det(B)\text{,}\) it follows that:

\begin{equation*} J_{f \circ g} (x) = J_f\bigl(g(x)\bigr) J_g(x) . \end{equation*}

The determinant of a linear mapping tells us what happens to area/volume under the mapping. Similarly, the Jacobian measures how much a differentiable mapping stretches things locally, and whether it flips orientation. In particular, if the Jacobian is nonzero, we would expect the mapping to be locally invertible (and we would be correct, as we will see later).
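The multiplicativity \(J_{f \circ g}(x) = J_f\bigl(g(x)\bigr) J_g(x)\) can be checked numerically; the sketch below (our own illustration, with made-up polynomial maps) approximates each Jacobian determinant by central differences.

```python
def g(x, y):
    return (x + y*y, x*y)

def f(u, v):
    return (u*v, u - v)

def jac_det(F, x, y, eps=1e-6):
    # 2-by-2 Jacobian determinant of F at (x, y) by central differences
    dFx = [(F(x+eps, y)[k] - F(x-eps, y)[k]) / (2*eps) for k in range(2)]
    dFy = [(F(x, y+eps)[k] - F(x, y-eps)[k]) / (2*eps) for k in range(2)]
    return dFx[0]*dFy[1] - dFy[0]*dFx[1]

x, y = 3.0, 1.0
lhs = jac_det(lambda a, b: f(*g(a, b)), x, y)       # J_{f o g}(x)
rhs = jac_det(f, *g(x, y)) * jac_det(g, x, y)       # J_f(g(x)) * J_g(x)
print(lhs, rhs)  # both approximately -7
```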

Subsection 8.3.5 Exercises

Exercise 8.3.1.

Suppose \(\gamma \colon (-1,1) \to \R^n\) and \(\alpha \colon (-1,1) \to \R^n\) are two differentiable curves such that \(\gamma(0) = \alpha(0)\) and \(\gamma^{\:\prime}(0) = \alpha'(0)\text{.}\) Suppose \(F \colon \R^n \to \R\) is a differentiable function. Show that

\begin{equation*} \frac{d}{dt}\Big|_{t=0} F\bigl(\gamma(t)\bigr) = \frac{d}{dt}\Big|_{t=0} F\bigl(\alpha(t)\bigr) . \end{equation*}

Exercise 8.3.2.

Let \(f \colon \R^2 \to \R\) be given by \(f(x,y) := \sqrt{x^2+y^2}\text{,}\) see Figure 8.6. Show that \(f\) is not differentiable at the origin.


Figure 8.6. Graph of \(\sqrt{x^2+y^2}\text{.}\)

Exercise 8.3.3.

Using only the definition of the derivative, show that the following \(f \colon \R^2 \to \R^2\) are differentiable at the origin and find their derivative.

  1. \(f(x,y) := (1+x+xy,x)\text{,}\)

  2. \(f(x,y) := \bigl(y-y^{10},x \bigr)\text{,}\)

  3. \(f(x,y) := \bigl( {(x+y+1)}^2 , {(x-y+2)}^2 \bigr)\text{.}\)

Exercise 8.3.4.

Suppose \(f \colon \R \to \R\) and \(g \colon \R \to \R\) are differentiable functions. Using only the definition of the derivative, show that \(h \colon \R^2 \to \R^2\) defined by \(h(x,y) := \bigl(f(x),g(y)\bigr)\) is a differentiable function, and find the derivative, at all points \((x,y)\text{.}\)

Exercise 8.3.5.

Define a function \(f \colon \R^2 \to \R\) by (see Figure 8.7)

\begin{equation*} f(x,y) := \begin{cases} \frac{xy}{x^2+y^2} & \text{if } (x,y) \not= (0,0), \\ 0 & \text{if } (x,y) = (0,0). \end{cases} \end{equation*}
  1. Show that the partial derivatives \(\frac{\partial f}{\partial x}\) and \(\frac{\partial f}{\partial y}\) exist at all points (including the origin).

  2. Show that \(f\) is not continuous at the origin (and hence not differentiable).


Figure 8.7. Graph of \(\frac{xy}{x^2+y^2}\text{.}\)

Exercise 8.3.6.

Define a function \(f \colon \R^2 \to \R\) by (see Figure 8.8)

\begin{equation*} f(x,y) := \begin{cases} \frac{x^2y}{x^2+y^2} & \text{if } (x,y) \not= (0,0), \\ 0 & \text{if } (x,y) = (0,0). \end{cases} \end{equation*}
  1. Show that the partial derivatives \(\frac{\partial f}{\partial x}\) and \(\frac{\partial f}{\partial y}\) exist at all points.

  2. Show that for all \(u \in \R^2\) with \(\snorm{u}=1\text{,}\) the directional derivative \(D_u f\) exists at all points.

  3. Show that \(f\) is continuous at the origin.

  4. Show that \(f\) is not differentiable at the origin.


Figure 8.8. Graph of \(\frac{x^2y}{x^2+y^2}\text{.}\)

Exercise 8.3.7.

Suppose \(f \colon \R^n \to \R^n\) is one-to-one, onto, differentiable at all points, and such that \(f^{-1}\) is also differentiable at all points.

  1. Show that \(f'(p)\) is invertible at all points \(p\) and compute \({(f^{-1})}'\bigl(f(p)\bigr)\text{.}\) Hint: Consider \(x = f^{-1}\bigl(f(x)\bigr)\text{.}\)

  2. Let \(g \colon \R^n \to \R^n\) be a function differentiable at \(q \in \R^n\) and such that \(g(q)=q\text{.}\) Suppose \(f(p) = q\) for some \(p \in \R^n\text{.}\) Show \(J_g(q) = J_{f^{-1} \circ g \circ f}(p)\) where \(J_g\) is the Jacobian determinant.

Exercise 8.3.8.

Suppose \(f \colon \R^2 \to \R\) is differentiable and such that \(f(x,y) = 0\) if and only if \(y=0\) and such that \(\nabla f(0,0) = (0,1)\text{.}\) Prove that \(f(x,y) > 0\) whenever \(y > 0\text{,}\) and \(f(x,y) < 0\) whenever \(y < 0\text{.}\)

As for functions of one variable, \(f \colon U \to \R\) has a relative maximum at \(p \in U\) if there exists a \(\delta >0\) such that \(f(q) \leq f(p)\) for all \(q \in B(p,\delta) \cap U\text{.}\) Similarly for relative minimum.

Exercise 8.3.9.

Suppose \(U \subset \R^n\) is open and \(f \colon U \to \R\) is differentiable. Suppose \(f\) has a relative maximum at \(p \in U\text{.}\) Show that \(f'(p) = 0\text{,}\) that is, the zero mapping in \(L(\R^n,\R)\text{.}\) In other words, \(p\) is a critical point of \(f\text{.}\)

Exercise 8.3.10.

Suppose \(f \colon \R^2 \to \R\) is differentiable and \(f(x,y) = 0\) whenever \(x^2+y^2 = 1\text{.}\) Prove that there exists at least one point \((x_0,y_0)\) such that \(\frac{\partial f}{\partial x}(x_0,y_0) = \frac{\partial f}{\partial y}(x_0,y_0) = 0\text{.}\)

Exercise 8.3.11.

Define \(f(x,y) := ( x-y^2 ) ( 2 y^2 - x)\text{.}\) The graph of \(f\) is called the Peano surface.

  1. Show that \((0,0)\) is a critical point, that is, \(f'(0,0) = 0\text{,}\) the zero linear map in \(L(\R^2,\R)\text{.}\)

  2. Show that for every direction the restriction of \(f\) to a line through the origin in that direction has a relative maximum at the origin. In other words, for every \((x,y)\) such that \(x^2+y^2=1\text{,}\) the function \(g(t) := f(tx,ty)\) has a relative maximum at \(t=0\text{.}\)
    Hint: While not necessary, Section 4.3 makes this part easier.

  3. Show that \(f\) does not have a relative maximum at \((0,0)\text{.}\)

Exercise 8.3.12.

Suppose \(f \colon \R \to \R^n\) is differentiable and \(\snorm{f(t)} = 1\) for all \(t\) (that is, we have a curve in the unit sphere). Show that \(f'(t) \cdot f(t) = 0\) (treating \(f'\) as a vector) for all \(t\text{.}\)

Exercise 8.3.13.

Define \(f \colon \R^2 \to \R^2\) by \(f(x,y) := \bigl(x,y+\varphi(x)\bigr)\) for some differentiable function \(\varphi\) of one variable. Show \(f\) is differentiable and find \(f'\text{.}\)

Exercise 8.3.14.

Suppose \(U \subset \R^n\) is open, \(p \in U\text{,}\) and \(f \colon U \to \R\text{,}\) \(g \colon U \to \R\text{,}\) \(h \colon U \to \R\) are functions such that \(f(p) = g(p) = h(p)\text{,}\) \(f\) and \(h\) are differentiable at \(p\text{,}\) \(f'(p) = h'(p)\text{,}\) and

\begin{equation*} f(x) \leq g(x) \leq h(x) \end{equation*}

for all \(x \in U\text{.}\) Show that \(g\) is differentiable at \(p\) and \(g'(p) = f'(p) = h'(p)\text{.}\)

Named after the Prussian mathematician Carl Gustav Jacob Jacobi (1804–1851).
https://en.wikipedia.org/wiki/Carl_Gustav_Jacob_Jacobi
The matrix from Proposition 8.3.9 representing \(f'(x)\) is sometimes called the Jacobian matrix.
Named after the Italian mathematician Giuseppe Peano (1858–1932).
https://en.wikipedia.org/wiki/Giuseppe_Peano