As we said, the general first-order equation we are studying looks like
\begin{equation*}
y' = f(x,y).
\end{equation*}
Frequently, we cannot simply solve these kinds of equations explicitly. It would be nice if we could at least figure out the shape and behavior of the solutions, or find approximate solutions.
The equation \(y' = f(x,y)\) gives you a slope at each point in the \((x,y)\)-plane. And this is the slope a solution \(y(x)\) would have at \(x\) if its value were \(y\text{.}\) In other words, \(f(x,y)\) is the slope of a solution whose graph runs through the point \((x,y)\text{.}\) At a point \((x,y)\text{,}\) we draw a short line with the slope \(f(x,y)\text{.}\) For example, if \(f(x,y) = xy\text{,}\) then at the point \((2,1.5)\) we draw a short line of slope \(xy = 2 \times 1.5 = 3\text{.}\) If \(y(x)\) is a solution and \(y(2) = 1.5\text{,}\) then the equation mandates that \(y'(2) = 3\text{.}\) See Figure 1.2.
To get an idea of how solutions behave, we draw such lines at lots of points in the plane, not just the point \((2,1.5)\text{.}\) We would ideally want to see the slope at every point, but that is just not possible. Usually we pick a grid of points fine enough so that it shows the behavior, but not so fine that we can no longer recognize the individual lines. We call this picture the slope field of the equation. See Figure 1.3 for the slope field of the equation \(y' = xy\text{.}\) In practice, one does not do this by hand; a computer can do the drawing.
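The procedure a computer follows is exactly the one just described. Here is a minimal sketch in Python (the grid spacing, window, and segment length are arbitrary choices): for each grid point it evaluates \(f(x,y) = xy\) and computes the endpoints of a short segment with that slope centered at the point, which a plotting library could then draw.

```python
import math

def f(x, y):
    # right-hand side of the equation y' = f(x,y); here y' = xy
    return x * y

def slope_segment(x, y, length=0.2):
    # endpoints of a short segment of slope f(x,y) centered at (x,y)
    theta = math.atan(f(x, y))      # inclination angle of the segment
    dx = (length / 2) * math.cos(theta)
    dy = (length / 2) * math.sin(theta)
    return (x - dx, y - dy), (x + dx, y + dy)

# a coarse 13-by-13 grid over -3 <= x <= 3, -3 <= y <= 3
segments = [slope_segment(i / 2, j / 2)
            for i in range(-6, 7) for j in range(-6, 7)]
# e.g. at (2, 1.5) the slope is 2 * 1.5 = 3, as in the text
```

Feeding `segments` to any line-drawing routine reproduces a picture like Figure 1.3.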
Suppose we are given a specific initial condition \(y(x_0) = y_0\text{.}\) A solution, that is, the graph of the solution, would be a curve that follows the slopes we drew. For a few sample solutions, see Figure 1.4. It is easy to roughly sketch (or at least imagine) possible solutions in the slope field, just from looking at the slope field itself. You simply sketch a line that roughly fits the little line segments and goes through your initial condition.
By looking at the slope field we get a lot of information about the behavior of solutions without having to solve the equation. For example, in Figure 1.4 we see what the solutions do when the initial conditions are \(y(0) > 0\text{,}\)\(y(0) = 0\) and \(y(0) < 0\text{.}\) A small change in the initial condition causes quite different behavior. We see this behavior just from the slope field and imagining what solutions ought to do.
We see a different behavior for the equation \(y' = -y\text{.}\) The slope field and a few solutions are shown in Figure 1.5. If we think of moving from left to right (perhaps \(x\) is time and time is usually increasing), then we see that no matter what \(y(0)\) is, all solutions tend to zero as \(x\) tends to infinity. Again that behavior is clear from simply looking at the slope field itself.
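One can also trace an approximate solution through the slope field numerically by repeatedly stepping a short distance in the direction the field dictates (the simplest such scheme is Euler's method; the step size below is an arbitrary choice). A quick sketch for \(y' = -y\) confirms what the slope field suggests: every starting value decays toward zero.

```python
def euler(f, x0, y0, h, steps):
    # follow the slope field: repeatedly step in the direction f(x,y)
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return x, y

# y' = -y: every solution should tend to zero as x grows
for y0 in (2.0, 0.5, -1.0):
    x, y = euler(lambda x, y: -y, 0.0, y0, 0.1, 200)
    print(y0, "->", y)   # all end up very close to 0 by x = 20
```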
What do you think is the answer? The answer to both questions seems to be yes, does it not? Well, it really is yes most of the time. But there are cases when the answer to either question can be no.
Since the equations we encounter in applications come from real life situations, it seems logical that a solution always exists. It also has to be unique if we believe our universe is deterministic. If the solution does not exist, or if it is not unique, we have probably not devised the correct model. Hence, it is good to know when things go wrong and why.
Integrate to find the general solution \(y = \ln \, \lvert x \rvert + C\text{.}\) The solution does not exist at \(x=0\text{.}\) See Figure 1.6. You may say one can see the division by zero a mile away, but the equation may have been written as the seemingly harmless \(x y' = 1\text{.}\)
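One can sanity-check this solution numerically: away from \(x = 0\text{,}\) a finite-difference approximation of \(y'\) matches \(\nicefrac{1}{x}\text{,}\) but the formula degenerates as \(x \to 0\text{.}\) (The constant \(C = 2\) below is an arbitrary choice.)

```python
import math

def y(x, C=2.0):
    # general solution of y' = 1/x, valid on either side of x = 0
    return math.log(abs(x)) + C

h = 1e-6
for x in (0.5, 1.0, -3.0):
    # central difference approximation of y'(x)
    approx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(approx - 1 / x) < 1e-5
# but y(x) -> -infinity as x -> 0, so no solution crosses x = 0
```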
It is hard to tell by staring at the slope field that the solution is not unique. Is there any hope? Of course there is. We have the following theorem, known as Picard’s theorem.
Theorem 1.2.1. Picard’s theorem on existence and uniqueness.
If \(f(x,y)\) is continuous (as a function of two variables) and \(\frac{\partial f}{\partial y}\) exists and is continuous near some \((x_0,y_0)\text{,}\) then a solution to \(y' = f(x,y)\text{,}\) \(y(x_0) = y_0\text{,}\) exists (at least for \(x\) in some small interval) and is unique.
Note that the problems \(y' = \nicefrac{1}{x}\text{,}\)\(y(0) = 0\) and \(y' = 2 \sqrt{\lvert y \rvert}\text{,}\)\(y(0) = 0\) do not satisfy the hypothesis of the theorem. Even if we can use the theorem, we ought to be careful about this existence business. It is quite possible that the solution only exists for a short while.
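For instance, \(y' = 2 \sqrt{\lvert y \rvert}\text{,}\) \(y(0) = 0\) has both \(y = 0\) and \(y = x^2\) (for \(x \geq 0\)) as solutions: here \(\frac{\partial f}{\partial y}\) blows up at \(y = 0\text{,}\) so the theorem does not guarantee uniqueness. A quick numerical check (a sketch, not a proof) verifies that both formulas satisfy the equation:

```python
# two distinct solutions of y' = 2*sqrt(|y|) with y(0) = 0:
#   y1(x) = 0  and  y2(x) = x^2 (for x >= 0)
import math

def f(y):
    return 2 * math.sqrt(abs(y))

h = 1e-6
for x in (0.5, 1.0, 2.0):
    y2 = x * x
    # central difference of x^2 is 2x, which equals f(x^2) = 2|x|
    deriv = ((x + h) ** 2 - (x - h) ** 2) / (2 * h)
    assert abs(deriv - f(y2)) < 1e-5
# y1 = 0 also works: its derivative is 0 and f(0) = 0
assert f(0.0) == 0.0
```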
We know how to solve this equation. First assume that \(A \not= 0\text{,}\) so \(y\) is not equal to zero at least for some \(x\) near 0. Thinking of \(x\) as a function of \(y\text{,}\) we have \(x' = \nicefrac{1}{y^2}\text{,}\) hence \(x = \nicefrac{-1}{y} + C\text{,}\) and thus \(y = \frac{1}{C-x}\text{.}\) If \(y(0) = A\text{,}\) then \(C = \nicefrac{1}{A}\text{,}\) so
\begin{equation*}
y = \frac{1}{\nicefrac{1}{A} - x} .
\end{equation*}
For example, when \(A=1\) the solution “blows up” at \(x=1\text{.}\) Hence, the solution does not exist for all \(x\) even if the equation itself is nice everywhere. The equation \(y' = y^2\) certainly looks nice.
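The blow-up is easy to see numerically. Evaluating the formula above with \(A = 1\) (so the solution is \(y = \frac{1}{1-x}\)) and checking \(y' = y^2\) by finite differences:

```python
def y(x, A=1.0):
    # the solution of y' = y^2, y(0) = A; valid only for x < 1/A
    return 1.0 / (1.0 / A - x)

h = 1e-6
for x in (0.0, 0.5, 0.9):
    # verify y' = y^2 where the solution is defined
    deriv = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(deriv - y(x) ** 2) < 1e-3
print(y(0.9), y(0.99), y(0.999))   # roughly 10, 100, 1000: blowing up as x -> 1
```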
For most of this course, we will be interested in equations where existence and uniqueness hold, and in fact hold “globally” unlike for the equation \(y'=y^2\text{.}\)
Sketch the slope field for \(y'=e^{x-y}\text{.}\) How do the solutions behave as \(x\) grows? Can you guess a particular solution by looking at the slope field?
(challenging) Take \(y' = f(x,y)\text{,}\)\(y(0) = 0\text{,}\) where \(f(x,y) > 1\) for all \(x\) and \(y\text{.}\) If the solution exists for all \(x\text{,}\) can you say what happens to \(y(x)\) as \(x\) goes to positive infinity? Explain.
Take an equation \(y' = (y-2x) g(x,y) + 2\) for some function \(g(x,y)\text{.}\) Can you solve the problem for the initial condition \(y(0) = 0\text{,}\) and if so what is the solution?
(challenging) Suppose \(y' = f(x,y)\) is such that \(f(x,1) = 0\) for every \(x\text{,}\)\(f\) is continuous and \(\frac{\partial f}{\partial y}\) exists and is continuous for every \(x\) and \(y\text{.}\)
Given \(y(0) = 0\text{,}\) what can you say about the solution? In particular, can \(y(x) > 1\) for any \(x\text{?}\) Can \(y(x) = 1\) for any \(x\text{?}\) Why or why not?
Yes a solution exists. The equation is \(y' = f(x,y)\) where \(f(x,y) = xy\text{.}\) The function \(f(x,y)\) is continuous and \(\frac{\partial f}{\partial y} = x\text{,}\) which is also continuous near \((0,0)\text{.}\) So a solution exists and is unique. (In fact, \(y=0\) is the solution.)
Picard does not apply as \(f\) is not continuous at \(y=0\text{.}\) The equation does not have a continuously differentiable solution. Suppose it did. Notice that \(y'(0) = 1\text{.}\) By the first derivative test, \(y(x) > 0\) for small positive \(x\text{.}\) But then for those \(x\text{,}\) we have \(y'(x) = f\bigl(y(x)\bigr) = 0\text{.}\) It is not possible for \(y'\) to be continuous, \(y'(0)=1\) and \(y'(x) = 0\) for arbitrarily small positive \(x\text{.}\)
Consider an equation of the form \(y' = f(x)\) for some continuous function \(f\text{,}\) and an initial condition \(y(x_0) = y_0\text{.}\) Does a solution exist for all \(x\text{?}\) Why or why not?