We need a definition of continuity in two variables. A point in the plane \(\R^2 = \R \times \R\) is denoted by an ordered pair \((x,y)\text{.}\) For simplicity, we give the following sequential definition of continuity.
Proof.
Suppose we could find a solution
\(f\text{.}\) Using the fundamental theorem of calculus we integrate the equation
\(f'(x) = F\bigl(x,f(x)\bigr)\text{,}\) \(f(x_0) = y_0\text{,}\) and write
(6.1) as the integral equation
\begin{equation}
f(x) = y_0 + \int_{x_0}^x F\bigl(t,f(t)\bigr)\,dt .\tag{6.2}
\end{equation}
The idea of our proof is that we plug approximations to a solution into the right-hand side of
(6.2) to get better approximations on the left-hand side of
(6.2). We hope that in the end the sequence converges and solves
(6.2) and hence
(6.1). The technique below is called
Picard iteration, and the individual functions
\(f_k\) are called the
Picard iterates.
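As a concrete numerical illustration of the iteration scheme (not part of the proof, and with illustrative names such as \texttt{picard\_iterates} chosen here), one can carry out the map \(f \mapsto y_0 + \int_0^x F\bigl(t,f(t)\bigr)\,dt\) on a grid, approximating the integral with the trapezoid rule. For \(F(x,y) = y\) and \(y_0 = 1\), the iterates approach \(e^x\):

```python
import math

# Numerical Picard iteration (a sketch, not the text's construction):
# repeatedly apply f |-> y0 + integral_0^x F(t, f(t)) dt on a grid,
# using the trapezoid rule for the integral.
def picard_iterates(F, y0, x_max, n_points=1001, n_iter=20):
    xs = [i * x_max / (n_points - 1) for i in range(n_points)]
    dx = xs[1] - xs[0]
    f = [y0] * n_points              # f_0(x) = y0
    for _ in range(n_iter):
        vals = [F(x, y) for x, y in zip(xs, f)]
        g = [y0] * n_points
        acc = 0.0
        for i in range(1, n_points):  # cumulative trapezoid integral
            acc += 0.5 * (vals[i - 1] + vals[i]) * dx
            g[i] = y0 + acc
        f = g                         # next Picard iterate
    return xs, f

# F(x, y) = y with f(0) = 1 has the solution f(x) = e^x.
xs, f = picard_iterates(lambda x, y: y, 1.0, 0.5)
print(max(abs(fi - math.exp(x)) for x, fi in zip(xs, f)))  # tiny
```

The only error remaining after a handful of iterations is the trapezoid-rule discretization error, which shrinks as the grid is refined.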
Without loss of generality, suppose \(x_0 = 0\) (exercise below). Another exercise tells us that \(F\) is bounded, as it is continuous on the compact set \(I \times J\text{.}\) Therefore pick some \(M > 0\) so that \(\abs{F(x,y)} \leq M\) for all \((x,y) \in I\times J\text{.}\) Pick \(\alpha > 0\) such that \([-\alpha,\alpha] \subset I\) and \([y_0-\alpha, y_0 + \alpha] \subset J\text{.}\) Define
\begin{equation*}
h \coloneqq \min \left\{ \alpha, \frac{\alpha}{M+L\alpha} \right\} .
\end{equation*}
Observe \([-h,h] \subset I\text{.}\)
Set \(f_0(x) \coloneqq y_0\text{.}\) We define \(f_k\) inductively. Assuming \(f_{k-1}([-h,h]) \subset [y_0-\alpha,y_0+\alpha]\text{,}\) we see \(F\bigl(t,f_{k-1}(t)\bigr)\) is a well-defined function of \(t\) for \(t \in [-h,h]\text{.}\) Further, if \(f_{k-1}\) is continuous on \([-h,h]\text{,}\) then \(F\bigl(t,f_{k-1}(t)\bigr)\) is continuous as a function of \(t\) on \([-h,h]\) (left as an exercise). Define
\begin{equation*}
f_k(x) \coloneqq y_0 + \int_{0}^x F\bigl(t,f_{k-1}(t)\bigr)\,dt ,
\end{equation*}
and \(f_k\) is continuous on \([-h,h]\) by the fundamental theorem of calculus. To see that \(f_k\) maps \([-h,h]\) to \([y_0-\alpha,y_0+\alpha]\text{,}\) we compute for \(x \in [-h,h]\)
\begin{equation*}
\abs{f_k(x) - y_0} =
\abs{\int_{0}^x F\bigl(t,f_{k-1}(t)\bigr)\,dt }
\leq
M\abs{x}
\leq
Mh
\leq
M
\frac{\alpha}{M+L\alpha}
\leq \alpha .
\end{equation*}
We next define
\(f_{k+1}\) using
\(f_k\) and so on. Thus we have inductively defined a sequence
\(\{ f_k \}_{k=1}^\infty\) of functions. We need to show that it converges to a function
\(f\) that solves the equation
(6.2) and therefore
(6.1).
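As a worked illustration of what the iterates look like (an example chosen here for concreteness, not drawn from the proof above), take \(F(x,y) = y\) and \(y_0 = 1\text{.}\) Then

```latex
\begin{equation*}
f_0(x) = 1, \qquad
f_1(x) = 1 + \int_0^x 1 \, dt = 1 + x, \qquad
f_2(x) = 1 + \int_0^x (1+t) \, dt = 1 + x + \frac{x^2}{2} ,
\end{equation*}
```

and in general \(f_k(x) = \sum_{j=0}^{k} \frac{x^j}{j!}\text{,}\) the partial sums of the Taylor series of \(e^x\text{,}\) which indeed solves \(f' = f\text{,}\) \(f(0) = 1\text{.}\)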
We wish to show that the sequence \(\{ f_k \}_{k=1}^\infty\) converges uniformly to some function on \([-h,h]\text{.}\) First, for \(t \in [-h,h]\text{,}\) we have the following useful bound
\begin{equation*}
\abs{F\bigl(t,f_{n}(t)\bigr) -
F\bigl(t,f_{k}(t)\bigr)}
\leq
L \abs{f_n(t)-f_k(t)}
\leq
L \norm{f_n-f_k}_{[-h,h]} ,
\end{equation*}
where \(\norm{f_n-f_k}_{[-h,h]}\) is the uniform norm, that is, the supremum of \(\abs{f_n(t)-f_k(t)}\) for \(t \in [-h,h]\text{.}\) Now note that \(\abs{x} \leq h \leq \frac{\alpha}{M+L\alpha}\text{.}\) Therefore,
\begin{equation*}
\begin{split}
\abs{f_n(x) - f_k(x)}
& =
\abs{\int_{0}^x F\bigl(t,f_{n-1}(t)\bigr)\,dt
-
\int_{0}^x F\bigl(t,f_{k-1}(t)\bigr)\,dt}
\\
& =
\abs{\int_{0}^x
\Bigl(
F\bigl(t,f_{n-1}(t)\bigr)
-
F\bigl(t,f_{k-1}(t)\bigr)
\Bigr)
\,dt}
\\
& \leq
L\norm{f_{n-1}-f_{k-1}}_{[-h,h]}
\abs{x}
\\
& \leq
\frac{L\alpha}{M+L\alpha}
\norm{f_{n-1}-f_{k-1}}_{[-h,h]} .
\end{split}
\end{equation*}
Let \(C \coloneqq \frac{L\alpha}{M+L\alpha}\) and note that \(C < 1\text{.}\) Taking the supremum on the left-hand side, we get
\begin{equation*}
\norm{f_n-f_k}_{[-h,h]} \leq C \norm{f_{n-1}-f_{k-1}}_{[-h,h]} .
\end{equation*}
Without loss of generality, suppose
\(n \geq k\text{.}\) Then by
induction we can show
\begin{equation*}
\norm{f_n-f_k}_{[-h,h]} \leq C^{k} \norm{f_{n-k}-f_{0}}_{[-h,h]} .
\end{equation*}
For \(x \in [-h,h]\text{,}\) we have
\begin{equation*}
\abs{f_{n-k}(x)-f_{0}(x)}
=
\abs{f_{n-k}(x)-y_0}
\leq \alpha .
\end{equation*}
Therefore,
\begin{equation*}
\norm{f_n-f_k}_{[-h,h]} \leq C^{k} \norm{f_{n-k}-f_{0}}_{[-h,h]} \leq C^{k} \alpha .
\end{equation*}
As
\(C < 1\text{,}\) \(\{f_n\}_{n=1}^\infty\) is uniformly Cauchy and by
Proposition 6.1.13 we obtain that
\(\{ f_n \}_{n=1}^\infty\) converges uniformly on
\([-h,h]\) to some function
\(f \colon [-h,h] \to \R\text{.}\) The function
\(f\) is the uniform limit of continuous functions and therefore continuous. Furthermore, since
\(f_n\bigl([-h,h]\bigr) \subset
[y_0-\alpha,y_0+\alpha]\) for all
\(n\text{,}\) then
\(f\bigl([-h,h]\bigr) \subset [y_0-\alpha,y_0+\alpha]\) (why?).
We now need to show that
\(f\) solves
(6.2). First, as before we notice
\begin{equation*}
\abs{F\bigl(t,f_{n}(t)\bigr) -
F\bigl(t,f(t)\bigr)}
\leq
L \abs{f_n(t)-f(t)}
\leq
L \norm{f_n-f}_{[-h,h]} .
\end{equation*}
As \(\norm{f_n-f}_{[-h,h]}\) converges to 0, \(F\bigl(t,f_n(t)\bigr)\) converges uniformly to \(F\bigl(t,f(t)\bigr)\) for \(t \in [-h,h]\text{.}\) Hence, for \(x \in [-h,h]\) the convergence is uniform for \(t \in [0,x]\) (or \([x,0]\) if \(x < 0\)). Therefore,
\begin{equation*}
\begin{aligned}
y_0
+
\int_0^{x}
F\bigl(t,f(t)\bigr)\,dt
& =
y_0
+
\int_0^{x}
F\bigl(t,\lim_{n\to\infty} f_n(t)\bigr)\,dt
& &
\\
& =
y_0
+
\int_0^{x}
\lim_{n\to\infty} F\bigl(t,f_n(t)\bigr)\,dt
& & \text{(by continuity of } F\text{)}
\\
& =
\lim_{n\to\infty}
\left(
y_0
+
\int_0^{x}
F\bigl(t,f_n(t)\bigr)\,dt
\right)
& & \text{(by uniform convergence)}
\\
& =
\lim_{n\to\infty}
f_{n+1}(x)
=
f(x) .
& &
\end{aligned}
\end{equation*}
We apply the fundamental theorem of calculus (
Theorem 5.3.3) to show that
\(f\) is differentiable and its derivative is
\(F\bigl(x,f(x)\bigr)\text{.}\) It is obvious that
\(f(0) = y_0\text{.}\)
Finally, it remains to show uniqueness. Suppose \(g \colon [-h,h]
\to J \subset \R\) is another solution. As before, we use the fact that \(\abs{F\bigl(t,f(t)\bigr) - F\bigl(t,g(t)\bigr)} \leq L \norm{f-g}_{[-h,h]}\text{.}\) Then
\begin{equation*}
\begin{split}
\abs{f(x)-g(x)}
& =
\abs{
y_0
+
\int_0^{x}
F\bigl(t,f(t)\bigr)\,dt
-
\left(
y_0
+
\int_0^{x}
F\bigl(t,g(t)\bigr)\,dt
\right)
\\
& =
\abs{
\int_0^{x}
\Bigl(
F\bigl(t,f(t)\bigr)
-
F\bigl(t,g(t)\bigr)
\Bigr)
\,dt
}
\\
& \leq
L\norm{f-g}_{[-h,h]}\abs{x}
\leq
Lh\norm{f-g}_{[-h,h]}
\leq
\frac{L\alpha}{M+L\alpha}\norm{f-g}_{[-h,h]} .
\end{split}
\end{equation*}
As before, \(C = \frac{L\alpha}{M+L\alpha} < 1\text{.}\) By taking the supremum over \(x \in
[-h,h]\) on the left-hand side, we obtain
\begin{equation*}
\norm{f-g}_{[-h,h]} \leq C \norm{f-g}_{[-h,h]} .
\end{equation*}
This is only possible if \(\norm{f-g}_{[-h,h]} = 0\text{.}\) Therefore, \(f=g\text{,}\) and the solution is unique.
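The uniqueness argument is really the observation that the Picard map \(f \mapsto y_0 + \int_0^x F\bigl(t,f(t)\bigr)\,dt\) contracts distances in the uniform norm by the factor \(C < 1\text{.}\) A hedged numerical sketch of this (illustrative names and a trapezoid-rule integral, not code from the text): iterating the map from two different starting functions, their sup-distance decays geometrically, so both runs home in on the same solution.

```python
# Contraction sketch: apply the Picard map to two different starting
# functions and watch the sup-distance between the iterates shrink.
def picard_step(F, y0, xs, f):
    # One application of f |-> y0 + integral_0^x F(t, f(t)) dt (trapezoid rule).
    dx = xs[1] - xs[0]
    vals = [F(x, y) for x, y in zip(xs, f)]
    g, acc = [y0] * len(xs), 0.0
    for i in range(1, len(xs)):
        acc += 0.5 * (vals[i - 1] + vals[i]) * dx
        g[i] = y0 + acc
    return g

n = 1001
xs = [0.5 * i / (n - 1) for i in range(n)]   # the interval [0, 1/2]
F = lambda x, y: y                           # Lipschitz with L = 1
fa = [1.0] * n                               # start from the constant y0 = 1
fb = [1.0 + x for x in xs]                   # a different starting guess
dists = []
for _ in range(10):
    fa = picard_step(F, 1.0, xs, fa)
    fb = picard_step(F, 1.0, xs, fb)
    dists.append(max(abs(a - b) for a, b in zip(fa, fb)))
print(dists[0], dists[-1])  # the distance decays geometrically toward 0
```

Since both sequences of iterates converge to the same fixed point, the distance between them must vanish, mirroring the inequality \(\norm{f-g} \leq C\norm{f-g}\) above.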