## Section 8.5 Inverse and implicit function theorems

Note: 2–3 lectures

To prove the inverse function theorem we use the contraction mapping principle from Chapter 7, where we used it to prove Picard's theorem. Recall that a mapping $$f \colon X \to Y$$ between two metric spaces $$(X,d_X)$$ and $$(Y,d_Y)$$ is called a contraction if there exists a $$k < 1$$ such that

\begin{equation*} d_Y\bigl(f(p),f(q)\bigr) \leq k \, d_X(p,q) \qquad \text{for all } p,q \in X. \end{equation*}

The contraction mapping principle says that if $$f \colon X \to X$$ is a contraction and $$X$$ is a complete metric space, then there exists a unique fixed point, that is, there exists a unique $$x \in X$$ such that $$f(x) = x\text{.}$$
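The principle is also constructive: iterating $$f$$ from any starting point converges to the fixed point. A minimal numerical sketch (the helper `fixed_point` and its tolerances are illustrative, not from the text):

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- f(x) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos is a contraction on [0, 1] (|cos'(x)| = |sin(x)| <= sin(1) < 1 there),
# so the iteration converges to the unique fixed point solving x = cos(x).
x = fixed_point(math.cos, 1.0)
```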

Intuitively, if a function is continuously differentiable, then it locally “behaves like” its derivative (which is a linear function). The idea of the inverse function theorem is that if a function is continuously differentiable and the derivative is invertible, then the function is (locally) invertible.

Figure 8.10. Setup of the inverse function theorem in $$\R^n\text{.}$$

### Proof.

Write $$A = f'(p)\text{.}$$ As $$f'$$ is continuous, there exists an open ball $$V$$ around $$p$$ such that

\begin{equation*} \snorm{A-f'(x)} < \frac{1}{2\snorm{A^{-1}}} \qquad \text{for all } x \in V. \end{equation*}

Consequently, the derivative $$f'(x)$$ is invertible for all $$x \in V$$ by Proposition 8.2.6.

Given $$y \in \R^n\text{,}$$ we define $$\varphi_y \colon V \to \R^n$$ by

\begin{equation*} \varphi_y (x) := x + A^{-1}\bigl(y-f(x)\bigr) . \end{equation*}

As $$A^{-1}$$ is one-to-one, $$\varphi_y(x) = x$$ ($$x$$ is a fixed point) if and only if $$y-f(x) = 0\text{,}$$ or in other words $$f(x)=y\text{.}$$ Using the chain rule we obtain

\begin{equation*} \varphi_y'(x) = I - A^{-1} f'(x) = A^{-1} \bigl( A-f'(x) \bigr) . \end{equation*}

So for $$x \in V\text{,}$$ we have

\begin{equation*} \snorm{\varphi_y'(x)} \leq \snorm{A^{-1}} \, \snorm{A-f'(x)} < \nicefrac{1}{2} . \end{equation*}

As $$V$$ is a ball, it is convex. Hence

\begin{equation*} \snorm{\varphi_y(x_1)-\varphi_y(x_2)} \leq \frac{1}{2} \snorm{x_1-x_2} \qquad \text{for all } x_1,x_2 \in V. \end{equation*}

In other words, $$\varphi_y$$ is a contraction defined on $$V\text{,}$$ though we do not yet know the range of $$\varphi_y\text{.}$$ We cannot yet apply the fixed point theorem, but we can say that $$\varphi_y$$ has at most one fixed point in $$V\text{:}$$ If $$\varphi_y(x_1) = x_1$$ and $$\varphi_y(x_2) = x_2\text{,}$$ then $$\snorm{x_1-x_2} = \snorm{\varphi_y(x_1)-\varphi_y(x_2)} \leq \frac{1}{2} \snorm{x_1-x_2}\text{,}$$ so $$x_1 = x_2\text{.}$$ That is, there exists at most one $$x \in V$$ such that $$f(x) = y\text{,}$$ and so $$f|_V$$ is one-to-one.

Let $$W := f(V)$$ and let $$g \colon W \to V$$ be the inverse of $$f|_V\text{.}$$ We need to show that $$W$$ is open. Take a $$y_0 \in W\text{.}$$ There is a unique $$x_0 \in V$$ such that $$f(x_0) = y_0\text{.}$$ Let $$r > 0$$ be small enough such that the closed ball $$C(x_0,r) \subset V$$ (such $$r > 0$$ exists as $$V$$ is open).

Suppose $$y$$ is such that

\begin{equation*} \snorm{y-y_0} < \frac{r}{2\snorm{A^{-1}}} . \end{equation*}

If we show that $$y \in W\text{,}$$ then we have shown that $$W$$ is open. If $$x_1 \in C(x_0,r)\text{,}$$ then

\begin{equation*} \begin{split} \snorm{\varphi_y(x_1)-x_0} & \leq \snorm{\varphi_y(x_1)-\varphi_y(x_0)} + \snorm{\varphi_y(x_0)-x_0} \\ & \leq \frac{1}{2}\snorm{x_1-x_0} + \snorm{A^{-1}(y-y_0)} \\ & \leq \frac{1}{2}r + \snorm{A^{-1}} \, \snorm{y-y_0} \\ & < \frac{1}{2}r + \snorm{A^{-1}} \frac{r}{2\snorm{A^{-1}}} = r . \end{split} \end{equation*}

So $$\varphi_y$$ takes $$C(x_0,r)$$ into $$B(x_0,r) \subset C(x_0,r)\text{.}$$ It is a contraction on $$C(x_0,r)\text{,}$$ and $$C(x_0,r)$$ is complete (a closed subset of $$\R^n$$ is complete). Apply the contraction mapping principle to obtain a fixed point $$x\text{,}$$ i.e., $$\varphi_y(x) = x\text{.}$$ That is, $$f(x) = y\text{,}$$ and $$y \in f\bigl(C(x_0,r)\bigr) \subset f(V) = W\text{.}$$ Therefore $$W$$ is open.
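The argument is constructive: iterating $$\varphi_y$$ converges to the preimage of $$y\text{.}$$ A numerical sketch, with an illustrative sample map $$f$$ chosen so that $$A = f'(0,0) = I$$ and hence $$A^{-1} = I$$ (the map, names, and tolerances are assumptions for illustration only):

```python
def f(v):
    # An illustrative C^1 map on R^2 with f'(0,0) = I (so A^{-1} = I).
    x, y = v
    return (x + 0.1 * y * y, y + 0.1 * x * x)

def invert_near(f, y_target, x0=(0.0, 0.0), tol=1e-12, max_iter=200):
    """Find x with f(x) = y_target by iterating
    phi_y(x) = x + A^{-1} (y - f(x)), here with A^{-1} = I."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        x_next = (x[0] + (y_target[0] - fx[0]),
                  x[1] + (y_target[1] - fx[1]))
        if max(abs(x_next[0] - x[0]), abs(x_next[1] - x[1])) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# Solve f(x) = (0.1, 0.2) starting near the origin.
x = invert_near(f, (0.1, 0.2))
```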

Next we need to show that $$g$$ is continuously differentiable and compute its derivative. First, let us show that it is differentiable. Let $$y \in W$$ and $$k \in \R^n\text{,}$$ $$k\not= 0\text{,}$$ such that $$y+k \in W\text{.}$$ Because $$f|_V$$ is a one-to-one and onto mapping of $$V$$ onto $$W\text{,}$$ there are unique $$x \in V$$ and $$h \in \R^n\text{,}$$ $$h \not= 0$$ and $$x+h \in V\text{,}$$ such that $$f(x) = y$$ and $$f(x+h) = y+k\text{.}$$ In other words, $$g(y) = x$$ and $$g(y+k) = x+h\text{.}$$ See Figure 8.11.

Figure 8.11. Proving that $$g$$ is differentiable.

We can still squeeze some information from the fact that $$\varphi_y$$ is a contraction.

\begin{equation*} \varphi_y(x+h)-\varphi_y(x) = h + A^{-1} \bigl( f(x)-f(x+h) \bigr) = h - A^{-1} k . \end{equation*}

So

\begin{equation*} \snorm{h-A^{-1}k} = \snorm{\varphi_y(x+h)-\varphi_y(x)} \leq \frac{1}{2}\snorm{x+h-x} = \frac{\snorm{h}}{2}. \end{equation*}

By the inverse triangle inequality, $$\snorm{h} - \snorm{A^{-1}k} \leq \frac{1}{2}\snorm{h}\text{.}$$ So

\begin{equation*} \snorm{h} \leq 2 \snorm{A^{-1}k} \leq 2 \snorm{A^{-1}} \, \snorm{k}. \end{equation*}

In particular, as $$k$$ goes to 0, so does $$h\text{.}$$

As $$x \in V\text{,}$$ the derivative $$f'(x)$$ is invertible. Let $$B := \bigl(f'(x)\bigr)^{-1}\text{,}$$ which is what we expect the derivative of $$g$$ at $$y$$ to be. Then

\begin{equation*} \begin{split} \frac{\snorm{g(y+k)-g(y)-Bk}}{\snorm{k}} & = \frac{\snorm{h-Bk}}{\snorm{k}} \\ & = \frac{\snorm{h-B\bigl(f(x+h)-f(x)\bigr)}}{\snorm{k}} \\ & = \frac{\snorm{B\bigl(f(x+h)-f(x)-f'(x)h\bigr)}}{\snorm{k}} \\ & \leq \snorm{B} \frac{\snorm{h}}{\snorm{k}}\, \frac{\snorm{f(x+h)-f(x)-f'(x)h}}{\snorm{h}} \\ & \leq 2\snorm{B} \, \snorm{A^{-1}} \frac{\snorm{f(x+h)-f(x)-f'(x)h}}{\snorm{h}} . \end{split} \end{equation*}

As $$k$$ goes to 0, so does $$h\text{.}$$ So the right-hand side goes to 0 as $$f$$ is differentiable, and hence the left-hand side also goes to 0. And $$B$$ is precisely what we wanted $$g'(y)$$ to be.

We have shown that $$g$$ is differentiable; let us show it is $$C^1(W)\text{.}$$ The function $$g \colon W \to V$$ is continuous (it is differentiable), $$f'$$ is a continuous function from $$V$$ to $$L(\R^n)\text{,}$$ and $$X \mapsto X^{-1}$$ is a continuous function on the set of invertible operators. As $$g'(y) = {\bigl( f'\bigl(g(y)\bigr)\bigr)}^{-1}$$ is the composition of these three continuous functions, it is continuous.
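The formula $$g'(y) = {\bigl( f'\bigl(g(y)\bigr)\bigr)}^{-1}$$ can be sanity-checked numerically in one dimension. A sketch using the illustrative function $$f(x) = x^3 + x$$ (strictly increasing since $$f'(x) = 3x^2+1 > 0\text{,}$$ hence globally invertible) and bisection to evaluate the inverse $$g\text{:}$$

```python
def f(x):
    return x ** 3 + x

def fprime(x):
    return 3 * x ** 2 + 1

def g(y, tol=1e-14):
    # Invert f by bisection; valid since f is strictly increasing.
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = 2.0
x = g(y)                                         # f(1) = 2, so x is near 1
h = 1e-6
finite_diff = (g(y + h) - g(y - h)) / (2 * h)    # numerical g'(y)
formula = 1 / fprime(x)                          # (f'(g(y)))^{-1} = 1/4
```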

### Proof.

Without loss of generality, suppose $$U=V\text{.}$$ For each point $$y \in f(V)\text{,}$$ pick some $$x \in f^{-1}(y)$$ (there could be more than one such point). By the inverse function theorem, there is a neighborhood of $$x$$ in $$V$$ that maps onto a neighborhood of $$y\text{.}$$ Hence $$f(V)$$ is open.

### Example 8.5.3.

The theorem and the corollary fail if $$f'(x)$$ is not invertible for some $$x\text{.}$$ For example, the map $$f(x,y) := (x,xy)$$ maps $$\R^2$$ onto the set $$\R^2 \setminus \bigl\{ (0,y) : y \neq 0 \bigr\}\text{,}$$ which is neither open nor closed. In fact, $$f^{-1}(0,0) = \bigl\{ (0,y) : y \in \R \bigr\}\text{.}$$ This bad behavior occurs only on the $$y$$-axis; everywhere else the function is locally invertible. If we avoid the $$y$$-axis, $$f$$ is even one-to-one.

### Example 8.5.4.

Just because $$f'(x)$$ is invertible everywhere does not mean that $$f$$ is one-to-one globally. It is “locally” one-to-one but perhaps not “globally.” For an example, take the map $$f \colon \R^2 \setminus \bigl\{ (0,0) \bigr\} \to \R^2 \setminus \bigl\{ (0,0) \bigr\}$$ defined by $$f(x,y) := (x^2-y^2,2xy)\text{.}$$ It is left to the student to show that $$f$$ is differentiable and the derivative is invertible.

On the other hand, the mapping $$f$$ is 2-to-1 globally: For every $$(a,b)$$ that is not the origin, there are exactly two solutions to $$x^2-y^2=a$$ and $$2xy=b$$ (in particular, $$f$$ is onto). We leave it to the student to show that there is at least one solution, and then notice that replacing $$x$$ and $$y$$ with $$-x$$ and $$-y$$ gives another solution.
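In complex notation $$f$$ is the squaring map: for $$z = x+iy$$ we have $$z^2 = x^2-y^2 + 2xy\,i\text{,}$$ which makes the two preimages easy to compute as the two complex square roots. A sketch (the sample point $$(a,b) = (3,4)$$ is an arbitrary choice):

```python
import cmath

def f(x, y):
    return (x * x - y * y, 2 * x * y)

# f is the real form of z -> z^2 for z = x + iy.  For (a, b) != (0, 0),
# the two preimages are the two complex square roots of a + bi.
a, b = 3.0, 4.0
z = cmath.sqrt(complex(a, b))      # one square root; the other is -z
pre1 = (z.real, z.imag)
pre2 = (-z.real, -z.imag)
```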

The invertibility of the derivative is not a necessary condition, just sufficient, for having a continuous inverse and being an open mapping. For example, the function $$f(x) := x^3$$ is an open mapping from $$\R$$ to $$\R$$ and is globally one-to-one with a continuous inverse, although the inverse is not differentiable at $$x=0\text{.}$$

As a side note, there is a related famous, and as yet unsolved, problem called the Jacobian conjecture: If $$F \colon \R^n \to \R^n$$ is polynomial (each component is a polynomial) and the Jacobian determinant $$J_F$$ is a nonzero constant, does $$F$$ have a polynomial inverse? The inverse function theorem gives a local $$C^1$$ inverse, but the question is whether one can always find a global polynomial inverse.

### Subsection 8.5.1 Implicit function theorem

The inverse function theorem is really a special case of the implicit function theorem, which we prove next. Somewhat ironically, we prove the implicit function theorem using the inverse function theorem. In the inverse function theorem we showed that the equation $$x-f(y) = 0$$ is solvable for $$y$$ in terms of $$x$$ if the derivative in terms of $$y$$ is invertible, that is, if $$f'(y)$$ is invertible. Then there is (locally) a function $$g$$ such that $$x-f\bigl(g(x)\bigr) = 0\text{.}$$

OK, so what about the equation $$f(x,y) = 0\text{?}$$ This equation is not solvable for $$y$$ in terms of $$x$$ in every case. For example, there is no solution when $$f(x,y)$$ does not actually depend on $$y\text{.}$$ For a slightly more complicated example, notice that $$x^2+y^2-1 = 0$$ defines the unit circle, and we can locally solve for $$y$$ in terms of $$x$$ when 1) we are near a point that lies on the unit circle and 2) we are not at a point where the circle has a vertical tangency, or in other words where $$\frac{\partial f}{\partial y} = 0\text{.}$$
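For the circle, implicit differentiation makes the role of $$\frac{\partial f}{\partial y}$$ concrete. Near a point of the circle with $$y \neq 0\text{,}$$ write $$y = g(x)$$ and differentiate $$x^2 + g(x)^2 - 1 = 0$$ to find

\begin{equation*} 2x + 2\,g(x)\,g'(x) = 0 , \qquad \text{so} \qquad g'(x) = -\frac{x}{g(x)} = - {\left( \frac{\partial f}{\partial y} \right)}^{-1} \frac{\partial f}{\partial x} , \end{equation*}

which breaks down exactly where $$\frac{\partial f}{\partial y} = 2y = 0\text{,}$$ that is, at the points $$(\pm 1, 0)$$ with vertical tangents.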

To make things simple, we fix some notation. We let $$(x,y) \in \R^{n+m}$$ denote the coordinates $$(x_1,\ldots,x_n,y_1,\ldots,y_m)\text{.}$$ A linear transformation $$A \in L(\R^{n+m},\R^m)$$ can then be written as $$A = [ A_x ~ A_y ]$$ so that $$A(x,y) = A_x x + A_y y\text{,}$$ where $$A_x \in L(\R^n,\R^m)$$ and $$A_y \in L(\R^m)\text{.}$$

The proof is immediate: We solve and obtain $$y = Bx$$ with $$B := -A_y^{-1} A_x\text{.}$$ Another way to solve is to “complete the basis,” that is, add rows to the matrix until we have an invertible matrix. In this case, we construct a mapping $$(x,y) \mapsto (x,A_x x + A_y y)\text{,}$$ and find that this operator in $$L(\R^{n+m})$$ is invertible, and the map $$B$$ can be read off from the inverse. Let us show that the same can be done for $$C^1$$ functions.
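A concrete sketch of the linear case, with arbitrarily chosen matrices $$A_x$$ and $$A_y$$ ($$A_y$$ invertible), checking that $$y = Bx$$ with $$B = -A_y^{-1}A_x$$ solves $$A_x x + A_y y = 0\text{:}$$

```python
# Illustrative 2+2 example; the matrices are arbitrary, A_y invertible.
A_x = [[1.0, 2.0], [3.0, 4.0]]
A_y = [[2.0, 0.0], [0.0, 1.0]]

def matmul(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A_y_inv = inv2(A_y)
x = [1.0, -2.0]                                       # a sample x
y = [-v for v in matmul(A_y_inv, matmul(A_x, x))]     # y = Bx = -A_y^{-1} A_x x
residual = [matmul(A_x, x)[i] + matmul(A_y, y)[i] for i in range(2)]
```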

The condition $$\frac{\partial(f_1,\ldots,f_m)}{\partial(y_1,\ldots,y_m)} (p,q) = \det(A_y) \neq 0$$ simply means that $$A_y$$ is invertible. If $$n=m=1\text{,}$$ the condition becomes $$\frac{\partial f}{\partial y}(p,q) \not= 0\text{,}$$ and $$W$$ and $$W'$$ are open intervals. See Figure 8.12.

Figure 8.12. Implicit function theorem for $$f(x,y) = x^2+y^2-1$$ in $$U=\R^2$$ and $$(p,q)$$ in the first quadrant.

#### Proof.

Define $$F \colon U \to \R^{n+m}$$ by $$F(x,y) := \bigl(x,f(x,y)\bigr)\text{.}$$ It is clear that $$F$$ is $$C^1\text{,}$$ and we want to show that the derivative at $$(p,q)$$ is invertible.

Let us compute the derivative. The quotient

\begin{equation*} \frac{\snorm{f(p+h,q+k) - f(p,q) - A_x h - A_y k}}{\snorm{(h,k)}} \end{equation*}

goes to zero as $$\snorm{(h,k)} = \sqrt{\snorm{h}^2+\snorm{k}^2}$$ goes to zero. But then so does

\begin{equation*} \begin{split} \frac{\snorm{F(p+h,q+k)-F(p,q) - (h,A_x h+A_y k)}}{\snorm{(h,k)}} & = \frac{\snorm{\bigl(h,f(p+h,q+k)-f(p,q)\bigr) - (h,A_x h+A_y k)}}{\snorm{(h,k)}} \\ & = \frac{\snorm{f(p+h,q+k) - f(p,q) - A_x h - A_y k}}{\snorm{(h,k)}} . \end{split} \end{equation*}

So the derivative of $$F$$ at $$(p,q)$$ takes $$(h,k)$$ to $$(h,A_x h+A_y k)\text{.}$$ In block matrix form, it is $$\left[\begin{smallmatrix}I & 0\\A_x & A_y\end{smallmatrix}\right]\text{.}$$ If $$(h,A_x h+A_y k) = (0,0)\text{,}$$ then $$h=0\text{,}$$ and so $$A_y k = 0\text{.}$$ As $$A_y$$ is one-to-one, $$k=0\text{.}$$ Thus $$F'(p,q)$$ is one-to-one or in other words invertible, and we apply the inverse function theorem.

That is, there exists an open set $$V \subset \R^{n+m}$$ with $$F(p,q) = (p,0) \in V\text{,}$$ and a $$C^1$$ mapping $$G \colon V \to \R^{n+m}\text{,}$$ such that $$F\bigl(G(x,s)\bigr) = (x,s)$$ for all $$(x,s) \in V\text{,}$$ $$G$$ is one-to-one, and $$G(V)$$ is open. Write $$G = (G_1,G_2)$$ (the first $$n$$ and the second $$m$$ components of $$G$$). Then

\begin{equation*} F\bigl(G_1(x,s),G_2(x,s)\bigr) = \Bigl(G_1(x,s),f\bigl(G_1(x,s),G_2(x,s) \bigr)\Bigr) = (x,s) . \end{equation*}

So $$x = G_1(x,s)$$ and $$f\bigl(G_1(x,s),G_2(x,s)\bigr) = f\bigl(x,G_2(x,s)\bigr) = s\text{.}$$ Plugging in $$s=0\text{,}$$ we obtain

\begin{equation*} f\bigl(x,G_2(x,0)\bigr) = 0 . \end{equation*}

As the set $$G(V)$$ is open and $$(p,q) \in G(V)\text{,}$$ there exist some open sets $$\widetilde{W}$$ and $$W'$$ such that $$\widetilde{W} \times W' \subset G(V)$$ with $$p \in \widetilde{W}$$ and $$q \in W'\text{.}$$ Take $$W := \bigl\{ x \in \widetilde{W} : G_2(x,0) \in W' \bigr\}\text{.}$$ The function that takes $$x$$ to $$G_2(x,0)$$ is continuous and therefore $$W$$ is open. Define $$g \colon W \to \R^m$$ by $$g(x) := G_2(x,0)\text{,}$$ which is the $$g$$ in the theorem. The fact that $$g(x)$$ is the unique point $$y$$ in $$W'$$ with $$f(x,y) = 0$$ follows because $$W \times W' \subset G(V)$$ and $$G$$ is one-to-one.

Next, differentiate

\begin{equation*} x\mapsto f\bigl(x,g(x)\bigr) \end{equation*}

at $$p\text{.}$$ This map is identically zero, so its derivative is zero. Using the chain rule, with $$A = [ A_x ~ A_y ]$$ the derivative of $$f$$ at $$(p,q)\text{,}$$

\begin{equation*} 0 = A\bigl(h,g'(p)h\bigr) = A_xh + A_yg'(p)h \end{equation*}

for all $$h \in \R^{n}\text{,}$$ so $$g'(p) = -A_y^{-1} A_x\text{,}$$ the desired derivative of $$g\text{.}$$

In other words, in the context of the theorem, we have $$m$$ equations in $$n+m$$ unknowns:

\begin{equation*} \begin{aligned} & f_1 (x_1,\ldots,x_n,y_1,\ldots,y_m) = 0 , \\ & f_2 (x_1,\ldots,x_n,y_1,\ldots,y_m) = 0 , \\ & \qquad \qquad \qquad \vdots \\ & f_m (x_1,\ldots,x_n,y_1,\ldots,y_m) = 0 . \end{aligned} \end{equation*}

The condition guaranteeing a solution is that $$f$$ is a $$C^1$$ mapping (all the components are $$C^1\text{:}$$ partial derivatives in all variables exist and are continuous) and that the matrix

\begin{equation*} \begin{bmatrix} \frac{\partial f_1}{\partial y_1} & \frac{\partial f_1}{\partial y_2} & \ldots & \frac{\partial f_1}{\partial y_m} \\[6pt] \frac{\partial f_2}{\partial y_1} & \frac{\partial f_2}{\partial y_2} & \ldots & \frac{\partial f_2}{\partial y_m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial y_1} & \frac{\partial f_m}{\partial y_2} & \ldots & \frac{\partial f_m}{\partial y_m} \end{bmatrix} \end{equation*}

is invertible at $$(p,q)\text{.}$$

#### Example 8.5.7.

Consider the set given by $$x^2+y^2-{(z+1)}^3 = -1$$ and $$e^x+e^y+e^z = 3$$ near the point $$(0,0,0)\text{.}$$ It is the zero set of the mapping

\begin{equation*} f(x,y,z) = \bigl(x^2+y^2-{(z+1)}^3+1,e^x+e^y+e^z-3\bigr) , \end{equation*}

whose derivative is

\begin{equation*} f' = \begin{bmatrix} 2x & 2y & -3{(z+1)}^2 \\ e^x & e^y & e^z \end{bmatrix} . \end{equation*}

The matrix

\begin{equation*} \begin{bmatrix} 2(0) & -3{(0+1)}^2 \\ e^0 & e^0 \end{bmatrix} = \begin{bmatrix} 0 & -3 \\ 1 & 1 \end{bmatrix} \end{equation*}

is invertible. Hence near $$(0,0,0)$$ we can solve for $$y$$ and $$z$$ as $$C^1$$ functions of $$x$$ such that for $$x$$ near $$0\text{,}$$ we have

\begin{equation*} x^2+y(x)^2-{\bigl(z(x)+1\bigr)}^3 = -1, \qquad e^x+e^{y(x)}+e^{z(x)} = 3 . \end{equation*}

The theorem does not tell us how to find $$y(x)$$ and $$z(x)$$ explicitly; it just tells us they exist. In other words, near the origin the set of solutions is a smooth curve in $$\R^3$$ that goes through the origin.
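While the theorem is not constructive, we can approximate $$y(x)$$ and $$z(x)$$ numerically, for instance with Newton's method in $$(y,z)$$ for each fixed $$x\text{.}$$ A sketch (the helper `solve_yz` and the sample value $$x = 0.1$$ are illustrative choices):

```python
import math

def f(x, y, z):
    # The map from Example 8.5.7.
    return (x * x + y * y - (z + 1) ** 3 + 1,
            math.exp(x) + math.exp(y) + math.exp(z) - 3)

def solve_yz(x, y=0.0, z=0.0, tol=1e-12, max_iter=50):
    """Newton's method in (y, z) for fixed x: a numerical stand-in for
    the implicit functions y(x), z(x) the theorem guarantees near 0."""
    for _ in range(max_iter):
        f1, f2 = f(x, y, z)
        # Jacobian in (y, z): [[2y, -3(z+1)^2], [e^y, e^z]]
        a, b = 2 * y, -3 * (z + 1) ** 2
        c, d = math.exp(y), math.exp(z)
        det = a * d - b * c
        dy = (d * f1 - b * f2) / det
        dz = (-c * f1 + a * f2) / det
        y, z = y - dy, z - dz
        if abs(dy) < tol and abs(dz) < tol:
            return y, z
    raise RuntimeError("Newton iteration did not converge")

y, z = solve_yz(0.1)
```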

An interesting observation from the proof is that we solved the equation $$f\bigl(x,g(x)\bigr) = s$$ for all $$s$$ in some neighborhood of $$0\text{,}$$ not just $$s=0\text{.}$$

#### Remark 8.5.8.

There are versions of the theorem for arbitrarily many derivatives. If $$f$$ has $$k$$ continuous derivatives, then the solution also has $$k$$ continuous derivatives. See also the next section.

### Subsection 8.5.2 Exercises

#### Exercise 8.5.1.

Let $$C := \bigl\{ (x,y) \in \R^2 : x^2+y^2 = 1 \bigr\}\text{.}$$

1. Solve for $$y$$ in terms of $$x$$ near $$(0,1)$$ (that is, find the function $$g$$ from the implicit function theorem for a neighborhood of the point $$(p,q) = (0,1)$$).

2. Solve for $$y$$ in terms of $$x$$ near $$(0,-1)\text{.}$$

3. Solve for $$x$$ in terms of $$y$$ near $$(-1,0)\text{.}$$

#### Exercise 8.5.2.

Define $$f \colon \R^2 \to \R^2$$ by $$f(x,y) := \bigl(x,y+h(x)\bigr)$$ for some continuously differentiable function $$h$$ of one variable.

1. Show that $$f$$ is one-to-one and onto.

2. Compute $$f'\text{.}$$

3. Show that $$f'$$ is invertible at all points, and compute its inverse.

#### Exercise 8.5.3.

Define $$f \colon \R^2 \to \R^2 \setminus \bigl\{ (0,0) \bigr\}$$ by $$f(x,y) := \bigl(e^x\cos(y),e^x\sin(y)\bigr)\text{.}$$

1. Show that $$f$$ is onto.

2. Show that $$f'$$ is invertible at all points.

3. Show that $$f$$ is not one-to-one, in fact for every $$(a,b) \in \R^2 \setminus \bigl\{ (0,0) \bigr\}\text{,}$$ there exist infinitely many different points $$(x,y) \in \R^2$$ such that $$f(x,y) = (a,b)\text{.}$$

Therefore, invertible derivative at every point does not mean that $$f$$ is invertible globally.
Note: Feel free to use what you know about sine and cosine from calculus.

#### Exercise 8.5.4.

Find a map $$f \colon \R^n \to \R^n$$ that is one-to-one, onto, continuously differentiable, but $$f'(0) = 0\text{.}$$ Hint: Generalize $$f(x) = x^3$$ from one to $$n$$ dimensions.

#### Exercise 8.5.5.

Consider $$z^2 + xz + y =0$$ in $$\R^3\text{.}$$ Find an equation $$D(x,y)=0\text{,}$$ such that if $$D(x_0,y_0) \not= 0$$ and $$z^2+x_0z+y_0 = 0$$ for some $$z \in \R\text{,}$$ then for points near $$(x_0,y_0)$$ there exist exactly two distinct continuously differentiable functions $$r_1(x,y)$$ and $$r_2(x,y)$$ such that $$z=r_1(x,y)$$ and $$z=r_2(x,y)$$ solve $$z^2 + xz + y =0\text{.}$$ Do you recognize the expression $$D$$ from algebra?

#### Exercise 8.5.6.

Suppose $$f \colon (a,b) \to \R^2$$ is continuously differentiable and the first component (the $$x$$ component) of $$f'(t)$$ is not equal to 0 for all $$t \in (a,b)\text{.}$$ Prove that there exists an interval $$(c,d)$$ and a continuously differentiable function $$g \colon (c,d) \to \R$$ such that $$(x,y) \in f\bigl((a,b)\bigr)$$ if and only if $$x \in (c,d)$$ and $$y=g(x)\text{.}$$ In other words, the set $$f\bigl((a,b)\bigr)$$ is a graph of $$g\text{.}$$

#### Exercise 8.5.7.

Define $$f \colon \R^2 \to \R^2$$ by

\begin{equation*} f(x,y) := \begin{cases} \bigl(x^2 \sin (\nicefrac{1}{x}) + \nicefrac{x}{2} , y \bigr) & \text{if } x \not= 0, \\ (0,y) & \text{if } x=0. \end{cases} \end{equation*}
1. Show that $$f$$ is differentiable everywhere.

2. Show that $$f'(0,0)$$ is invertible.

3. Show that $$f$$ is not one-to-one in every neighborhood of the origin (it is not locally invertible, that is, the inverse function theorem does not work).

4. Show that $$f$$ is not continuously differentiable.

Note: Feel free to use what you know about sine and cosine from calculus.

#### Exercise 8.5.8.

(Polar coordinates)   Define a mapping $$F(r,\theta) := \bigl(r \cos(\theta), r \sin(\theta) \bigr)\text{.}$$

1. Show that $$F$$ is continuously differentiable (for all $$(r,\theta) \in \R^2$$).

2. Compute $$F'(0,\theta)$$ for all $$\theta\text{.}$$

3. Show that if $$r \not= 0\text{,}$$ then $$F'(r,\theta)$$ is invertible, therefore an inverse of $$F$$ exists locally as long as $$r \not= 0\text{.}$$

4. Show that $$F \colon \R^2 \to \R^2$$ is onto, and for each point $$(x,y) \in \R^2\text{,}$$ the set $$F^{-1}(x,y)$$ is infinite.

5. Show that $$F \colon \R^2 \to \R^2$$ is an open map, despite not satisfying the condition of the inverse function theorem.

6. Show that $$F|_{(0,\infty) \times [0,2\pi)}$$ is one-to-one and onto $$\R^2 \setminus \bigl\{ (0,0) \bigr\}\text{.}$$

Note: Feel free to use what you know about sine and cosine from calculus.

#### Exercise 8.5.9.

Let $$H := \bigl\{ (x,y) \in \R^2 : y > 0 \bigr\}\text{,}$$ and for $$(x,y) \in H$$ define

\begin{equation*} F(x,y) := \left( \frac{x^2+y^2-1}{x^2+2y+y^2+1} ,~ \frac{-2x}{x^2+2y+y^2+1} \right) . \end{equation*}

Prove that $$F$$ is a bijective mapping from $$H$$ onto $$B(0,1)\text{,}$$ that it is continuously differentiable on $$H\text{,}$$ and that its inverse is also continuously differentiable.

#### Exercise 8.5.10.

Suppose $$U \subset \R^2$$ is open and $$f \colon U \to \R$$ is a $$C^1$$ function such that $$\nabla f(x,y) \not= 0$$ for all $$(x,y) \in U\text{.}$$ Show that every level set is a $$C^1$$ smooth curve. That is, for every $$(x,y) \in U\text{,}$$ there exists a $$C^1$$ function $$\gamma \colon (-\delta,\delta) \to \R^2$$ with $$\gamma^{\:\prime}(0) \not= 0$$ such that $$f\bigl(\gamma(t)\bigr)$$ is constant for all $$t \in (-\delta,\delta)\text{.}$$

#### Exercise 8.5.11.

Suppose $$U \subset \R^2$$ is open and $$f \colon U \to \R$$ is a $$C^1$$ function such that $$\nabla f(x,y) \not= 0$$ for all $$(x,y) \in U\text{.}$$ Show that for every $$(x,y)$$ there exists a neighborhood $$V$$ of $$(x,y)\text{,}$$ an open set $$W \subset \R^2\text{,}$$ and a bijective $$C^1$$ function with a $$C^1$$ inverse $$g \colon W \to V\text{,}$$ such that the level sets of $$f \circ g$$ are horizontal lines in $$W\text{.}$$ That is, the set given by $$(f \circ g) (s,t) = c$$ for a constant $$c$$ is a set of the form $$\bigl\{ (s,t_0) \in \R^2 : s \in \R, (s,t_0) \in W \bigr\}\text{,}$$ where $$t_0$$ is fixed. In other words, the level curves can be locally “straightened.”
