\(\require{cancel}\newcommand{\nicefrac}[2]{{{}^{#1}}\!/\!{{}_{#2}}} \newcommand{\unitfrac}[3][\!\!]{#1 \,\, {{}^{#2}}\!/\!{{}_{#3}}} \newcommand{\unit}[2][\!\!]{#1 \,\, #2} \newcommand{\noalign}[1]{} \newcommand{\qed}{\qquad \Box} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 4.8 D'Alembert solution of the wave equation

1 lecture, different from §9.6 in [EP], part of §10.7 in [BD]

We have solved the wave equation by using Fourier series. But it is often more convenient to use the so-called d'Alembert solution to the wave equation, named after the French mathematician Jean le Rond d'Alembert (1717–1783). While this solution can be derived using Fourier series as well, it is really an awkward use of those concepts. It is easier and more instructive to derive this solution by making the right change of variables to get an equation that can be solved by simple integration.

Suppose we have the wave equation

\begin{equation} y_{tt} = a^2 y_{xx} .\label{dalemb_weq}\tag{14} \end{equation}

We wish to solve the equation (14) given the conditions

\begin{equation} \begin{aligned} y(0,t) &= y(L,t) = 0 & & \text{for all } t , \\ y(x,0) &= f(x) & & 0 < x < L , \\ y_t(x,0) &= g(x) & & 0 < x < L . \end{aligned}\label{dalemb_weqside}\tag{15} \end{equation}

Subsection 4.8.1 Change of variables

We will transform the equation into a simpler form where it can be solved by simple integration. We change variables to \(\xi = x - at\text{,}\) \(\eta = x + at\text{.}\) The chain rule says:

\begin{equation*} \begin{aligned} & \frac{\partial}{\partial x} = \frac{\partial \xi}{\partial x} \frac{\partial}{\partial \xi} + \frac{\partial \eta}{\partial x} \frac{\partial}{\partial \eta} = \frac{\partial}{\partial \xi} + \frac{\partial}{\partial \eta} , \\ & \frac{\partial}{\partial t} = \frac{\partial \xi}{\partial t} \frac{\partial}{\partial \xi} + \frac{\partial \eta}{\partial t} \frac{\partial}{\partial \eta} = -a \frac{\partial}{\partial \xi} + a \frac{\partial}{\partial \eta} . \end{aligned} \end{equation*}

We compute

\begin{equation*} \begin{aligned} & y_{xx} = \frac{\partial^2 y}{\partial x^2} = \left( \frac{\partial}{\partial \xi} + \frac{\partial}{\partial \eta} \right) \left( \frac{\partial y}{\partial \xi} + \frac{\partial y}{\partial \eta} \right) = \frac{\partial^2 y}{\partial \xi^2} + 2 \frac{\partial^2 y}{\partial \xi \partial \eta} + \frac{\partial^2 y}{\partial \eta^2} , \\ & y_{tt} = \frac{\partial^2 y}{\partial t^2} = \left( -a \frac{\partial}{\partial \xi} + a \frac{\partial}{\partial \eta} \right) \left( -a \frac{\partial y}{\partial \xi} + a \frac{\partial y}{\partial \eta} \right) = a^2 \frac{\partial^2 y}{\partial \xi^2} - 2 a^2 \frac{\partial^2 y}{\partial \xi \partial \eta} + a^2 \frac{\partial^2 y}{\partial \eta^2} . \end{aligned} \end{equation*}

In the above computations, we used the fact from calculus that \(\frac{\partial^2 y}{\partial \xi \partial \eta} = \frac{\partial^2 y}{\partial \eta \partial \xi}\text{.}\) We plug what we got into the wave equation,

\begin{equation*} 0 = a^2 y_{xx} - y_{tt} = 4 a^2 \frac{\partial^2 y}{\partial \xi \partial \eta} = 4 a^2 y_{\xi\eta} . \end{equation*}

Therefore, the wave equation (14) transforms into \(y_{\xi\eta} = 0\text{.}\) It is easy to find the general solution to this equation by integrating twice. Keeping \(\xi\) constant, we integrate with respect to \(\eta\) first (we could just as well integrate with respect to \(\xi\) first) and notice that the constant of integration depends on \(\xi\text{;}\) for each \(\xi\) we might get a different constant of integration. We get \(y_{\xi} = C(\xi)\text{.}\) Next, we integrate with respect to \(\xi\) and notice that the constant of integration depends on \(\eta\text{.}\) Thus, \(y = \int C(\xi) ~ d\xi + B(\eta)\text{.}\) The solution must, therefore, be of the following form for some functions \(A(\xi)\) and \(B(\eta)\text{:}\)

\begin{equation*} y = A(\xi) + B(\eta) = A(x-at) + B(x+at) . \end{equation*}

The solution is a superposition of two functions (waves) traveling at speed \(a\) in opposite directions. The coordinates \(\xi\) and \(\eta\) are called the characteristic coordinates, and a similar technique can be applied to more complicated hyperbolic PDE.
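This general solution is easy to check by machine as well. The following sketch (using sympy; the symbol and function names are my own choices, not from the text) verifies that any \(y = A(x-at)+B(x+at)\) with twice-differentiable \(A\) and \(B\) satisfies the wave equation:

```python
# Sketch: verify that y = A(x - a t) + B(x + a t) solves y_tt = a^2 y_xx
# for arbitrary twice-differentiable profiles A and B.
import sympy as sp

x, t, a = sp.symbols('x t a')
A, B = sp.Function('A'), sp.Function('B')
y = A(x - a*t) + B(x + a*t)

# The residual y_tt - a^2 y_xx should vanish identically.
residual = sp.diff(y, t, 2) - a**2 * sp.diff(y, x, 2)
print(sp.simplify(residual))  # 0
```

Since \(A\) and \(B\) are left as undefined functions, this confirms the claim for the whole family of solutions at once, not just for a particular example.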

Subsection 4.8.2 D'Alembert's formula

We know what any solution must look like, but we need to solve for the given side conditions. We will just give the formula and see that it works. First let \(F(x)\) denote the odd periodic extension of \(f(x)\text{,}\) and let \(G(x)\) denote the odd periodic extension of \(g(x)\text{.}\) Define

\begin{equation*} A(x) = \frac{1}{2} F(x) - \frac{1}{2a} \int_0^x G(s) ~ds , \qquad B(x) = \frac{1}{2} F(x) + \frac{1}{2a} \int_0^x G(s) ~ds . \end{equation*}

We claim this \(A(x)\) and \(B(x)\) give the solution. Explicitly, the solution is \(y(x,t) = A(x-at) + B(x+at)\) or in other words:

\begin{equation} \boxed{~~ \begin{aligned} y(x,t) & = \frac{1}{2} F(x-at) - \frac{1}{2a} \int_0^{x-at} G(s) ~ds + \frac{1}{2} F(x+at) + \frac{1}{2a} \int_0^{x+at} G(s) ~ds \\ & = \frac{F(x-at) + F(x+at)}{2} + \frac{1}{2a} \int_{x-at}^{x+at} G(s) ~ds . \end{aligned} ~~}\label{dalemb_form}\tag{16} \end{equation}

Let us check that the d'Alembert formula really works.

\begin{equation*} y(x,0) = \frac{1}{2} F(x) - \frac{1}{2a} \int_0^{x} G(s) ~ds + \frac{1}{2} F(x) + \frac{1}{2a} \int_0^{x} G(s) ~ds = F(x) . \end{equation*}

So far, so good. Assume for simplicity that \(F\) is differentiable. By the fundamental theorem of calculus we have

\begin{equation*} y_t(x,t) = \frac{-a}{2} F'(x-at) + \frac{1}{2} G(x-at) + \frac{a}{2} F'(x+at) + \frac{1}{2} G(x+at) . \end{equation*}


So

\begin{equation*} y_t(x,0) = \frac{-a}{2} F'(x) + \frac{1}{2} G(x) + \frac{a}{2} F'(x) + \frac{1}{2} G(x) = G(x) . \end{equation*}

Both initial conditions check out. Now for the boundary conditions. Note that \(F(x)\) and \(G(x)\) are odd. Also \(\int_0^x G(s) ~ds\) is an even function of \(x\) because \(G(x)\) is odd (to see this fact, do the substitution \(s=-v\)). So

\begin{equation*} \begin{split} y(0,t) & = \frac{1}{2} F(-at) - \frac{1}{2a} \int_0^{-at} G(s) ~ds + \frac{1}{2} F(at) + \frac{1}{2a} \int_0^{at} G(s) ~ds \\ & = \frac{-1}{2} F(at) - \frac{1}{2a} \int_0^{at} G(s) ~ds + \frac{1}{2} F(at) + \frac{1}{2a} \int_0^{at} G(s) ~ds = 0 . \end{split} \end{equation*}

Note that \(F(x)\) and \(G(x)\) are \(2L\) periodic. We compute

\begin{equation*} \begin{split} y(L,t) & = \frac{1}{2} F(L-at) - \frac{1}{2a} \int_0^{L-at} G(s) ~ds + \frac{1}{2} F(L+at) + \frac{1}{2a} \int_0^{L+at} G(s) ~ds \\ & = \frac{1}{2} F(-L-at) - \frac{1}{2a} \int_0^{L} G(s) ~ds - \frac{1}{2a} \int_0^{-at} G(s) ~ds ~+ \\ & \qquad + \frac{1}{2} F(L+at) + \frac{1}{2a} \int_0^{L} G(s) ~ds + \frac{1}{2a} \int_0^{at} G(s) ~ds \\ & = \frac{-1}{2} F(L+at) - \frac{1}{2a} \int_0^{at} G(s) ~ds + \frac{1}{2} F(L+at) + \frac{1}{2a} \int_0^{at} G(s) ~ds =0 . \end{split} \end{equation*}

And voilà, it works.


The d'Alembert formula says that the solution is a superposition of two functions (waves) moving in opposite directions at “speed” \(a\text{.}\) To get an idea of how it works, let us work out an example. Consider the simpler setup

\begin{equation*} \begin{aligned} & y_{tt} = y_{xx} , \\ & y(0,t) = y(1,t) = 0 , \\ & y(x,0) = f(x) , \\ & y_t(x,0) = 0 . \end{aligned} \end{equation*}

Here \(f(x)\) is an impulse of height 1 centered at \(x=0.5\text{:}\)

\begin{equation*} f(x) = \begin{cases} 0 & \text{if } \; \phantom{0.5}0 \leq x < 0.45, \\ 20\,(x-0.45) & \text{if } \; 0.45 \leq x < 0.5, \\ 20\,(0.55-x) & \text{if } \; \phantom{5}0.5 \leq x < 0.55, \\ 0 & \text{if } \; 0.55 \leq x \leq 1 . \end{cases} \end{equation*}

The graph of this impulse is the top left plot in Figure 5.8.2.

Let \(F(x)\) be the odd periodic extension of \(f(x)\text{.}\) Then from (16) we know that the solution is given as

\begin{equation*} y(x,t) = \frac{F(x-t) + F(x+t)}{2} . \end{equation*}

It is not hard to compute specific values of \(y(x,t)\text{.}\) For example, to compute \(y(0.1,0.6)\) we notice \(x-t = -0.5\) and \(x+t = 0.7\text{.}\) Now \(F(-0.5) = -f(0.5) = - 20\,(0.55 - 0.5) = -1\) and \(F(0.7) = f(0.7) = 0\text{.}\) Hence \(y(0.1,0.6) = \frac{-1 + 0}{2} = -0.5\text{.}\) As you can see, the d'Alembert solution is much easier to compute and to plot than the Fourier series solution. See Figure 5.8.2 for plots of the solution \(y\) for several different \(t\text{.}\)
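The computation above is easy to automate. Here is a minimal numerical sketch (the function names are mine) that builds the odd \(2\)-periodic extension \(F\) of the impulse \(f\) and evaluates the d'Alembert solution:

```python
# Numerical sketch of the example above: y(x,t) = (F(x-t) + F(x+t)) / 2,
# where F is the odd 2-periodic extension of the impulse f on [0, 1].

def f(x):
    # The triangular impulse of height 1 centered at x = 0.5.
    if 0.45 <= x < 0.5:
        return 20 * (x - 0.45)
    if 0.5 <= x < 0.55:
        return 20 * (0.55 - x)
    return 0.0

def F(x):
    # Odd 2-periodic extension: reduce x into [-1, 1), then use oddness.
    x = (x + 1) % 2 - 1
    return f(x) if x >= 0 else -f(-x)

def y(x, t):
    return (F(x - t) + F(x + t)) / 2

print(round(y(0.1, 0.6), 6))  # -0.5, matching the hand computation
```

The reduction `(x + 1) % 2 - 1` maps any real \(x\) into \([-1,1)\), where \(F\) is determined by \(f\) and oddness; the rounding only hides floating-point noise.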

Figure 5.8.2 Plot of the d'Alembert solution for \(t=0\text{,}\) \(t=0.2\text{,}\) \(t=0.4\text{,}\) and \(t=0.6\text{.}\)

Subsection 4.8.3 Another way to solve for the side conditions

It is perhaps easier and more useful to memorize the procedure rather than the formula itself. The important thing to remember is that a solution to the wave equation is a superposition of two waves traveling in opposite directions. That is,

\begin{equation*} y(x,t) = A(x-at) + B(x+at) . \end{equation*}

If you think about it, the exact formulas for \(A\) and \(B\) are not hard to guess once you realize what kind of side conditions \(y(x,t)\) is supposed to satisfy. Let us give the formula again, but slightly differently. The best approach is to do this in stages. When \(g(x) = 0\) (and hence \(G(x) = 0\)), we have the solution

\begin{equation*} \frac{ F(x-at) + F(x+at) }{2} . \end{equation*}

On the other hand, when \(f(x) = 0\) (and hence \(F(x) = 0\)), we let

\begin{equation*} H(x) = \int_0^x G(s) ~ds . \end{equation*}

The solution in this case is

\begin{equation*} \frac{1}{2a} \int_{x-at}^{x+at} G(s) ~ds = \frac{ -H(x-at) + H(x+at) }{2a} . \end{equation*}

By superposition we get a solution for the general side conditions (15) (when neither \(f(x)\) nor \(g(x)\) is identically zero):

\begin{equation} y(x,t) = \frac{ F(x-at) + F(x+at) }{2} + \frac{ -H(x-at) + H(x+at) }{2a} .\label{dalemb_altform}\tag{17} \end{equation}

Do note the minus sign before the \(H\text{,}\) and the \(a\) in the second denominator.
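As a concrete spot check of (17), take sample data of my own choosing: \(L=\pi\text{,}\) \(a\) arbitrary, \(f(x)=\sin x\) and \(g(x)=\sin 2x\text{,}\) which are already their own odd \(2\pi\)-periodic extensions. Then the equation and both initial conditions can be verified symbolically:

```python
# Spot check of formula (17) with F(x) = sin x, G(x) = sin 2x, and
# H(x) = int_0^x G(s) ds (sample data, not from the text).
import sympy as sp

x, t, s, a = sp.symbols('x t s a')
F = sp.sin(x)
G = sp.sin(2*x)
H = sp.integrate(G.subs(x, s), (s, 0, x))  # H(x) = (1 - cos 2x)/2

y = (F.subs(x, x - a*t) + F.subs(x, x + a*t)) / 2 \
    + (-H.subs(x, x - a*t) + H.subs(x, x + a*t)) / (2*a)

# The wave equation and both initial conditions hold.
assert sp.simplify(sp.diff(y, t, 2) - a**2 * sp.diff(y, x, 2)) == 0
assert sp.simplify(y.subs(t, 0) - F) == 0
assert sp.simplify(sp.diff(y, t).subs(t, 0) - G) == 0
print("formula (17) checks out for this sample")
```

This is only a single example, of course; the general verification is the exercise below.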


Check that the new formula (17) satisfies the side conditions (15).

Warning: Make sure you use the odd periodic extensions \(F(x)\) and \(G(x)\text{,}\) when you have formulas for \(f(x)\) and \(g(x)\text{.}\) The thing is, those formulas in general hold only for \(0 < x < L\text{,}\) and are not usually equal to \(F(x)\) and \(G(x)\) for other \(x\text{.}\)



Using the d'Alembert solution, solve \(y_{tt} = 4y_{xx}\text{,}\) \(0 < x < \pi\text{,}\) \(t > 0\text{,}\) \(y(0,t) = y(\pi, t) = 0\text{,}\) \(y(x,0) = \sin x\text{,}\) and \(y_t(x,0) = \sin x\text{.}\) Hint: Note that \(\sin x\) is the odd periodic extension of \(y(x,0)\) and \(y_t(x,0)\text{.}\)


Using the d'Alembert solution, solve \(y_{tt} = 2y_{xx}\text{,}\) \(0 < x < 1\text{,}\) \(t > 0\text{,}\) \(y(0,t) = y(1, t) = 0\text{,}\) \(y(x,0) = \sin^5 (\pi x)\text{,}\) and \(y_t(x,0) = \sin^3 (\pi x)\text{.}\)


Take \(y_{tt} = 4y_{xx}\text{,}\) \(0 < x < \pi\text{,}\) \(t > 0\text{,}\) \(y(0,t) = y(\pi, t) = 0\text{,}\) \(y(x,0) = x(\pi-x)\text{,}\) and \(y_t(x,0) = 0\text{.}\) a) Solve using the d'Alembert formula. Hint: You can use the sine series for \(y(x,0)\text{.}\) b) Find the solution as a function of \(x\) for a fixed \(t=0.5\text{,}\) \(t=1\text{,}\) and \(t=2\text{.}\) Do not use the sine series here.


Derive the d'Alembert solution for \(y_{tt} = a^2 y_{xx}\text{,}\) \(0 < x < \pi\text{,}\) \(t > 0\text{,}\) \(y(0,t) = y(\pi, t) = 0\text{,}\) \(y(x,0) = f(x)\text{,}\) and \(y_t(x,0) = 0\text{,}\) using the Fourier series solution of the wave equation, by applying an appropriate trigonometric identity.


The d'Alembert solution still works if there are no boundary conditions and the initial condition is defined on the whole real line. Suppose that \(y_{tt} = y_{xx}\) (for all \(x\) on the real line and \(t \geq 0\)), \(y(x,0) = f(x)\text{,}\) and \(y_t(x,0) = 0\text{,}\) where

\begin{equation*} f(x) = \begin{cases} 0 & \text{if } \; \phantom{{-1} \leq {} }x < -1, \\ x+1 & \text{if } \; {-1} \leq x < 0, \\ -x+1 & \text{if } \; \phantom{-}0 \leq x < 1, \\ 0 & \text{if } \; \phantom{-}1 \leq x . \end{cases} \end{equation*}

Solve using the d'Alembert solution. That is, write down a piecewise definition for the solution. Then sketch the solution for \(t=0\text{,}\) \(t=\nicefrac{1}{2}\text{,}\) \(t=1\text{,}\) and \(t=2\text{.}\)


Using the d'Alembert solution, solve \(y_{tt} = 9y_{xx}\text{,}\) \(0 < x < 1\text{,}\) \(t > 0\text{,}\) \(y(0,t) = y(1, t) = 0\text{,}\) \(y(x,0) = \sin (2 \pi x)\text{,}\) and \(y_t(x,0) = \sin (3 \pi x)\text{.}\)


\(y(x,t)= \frac{\sin(2 \pi (x-3 t))+\sin(2 \pi (3 t+x))}{2} + \frac{\cos(3 \pi (x-3 t))-\cos(3 \pi (3 t+x))}{18\pi}\)


Take \(y_{tt} = 4y_{xx}\text{,}\) \(0 < x < 1\text{,}\) \(t > 0\text{,}\) \(y(0,t) = y(1, t) = 0\text{,}\) \(y(x,0) = x-x^2\text{,}\) and \(y_t(x,0) = 0\text{.}\) Using the d'Alembert solution, find the solution at a) \(t=0.1\text{,}\) b) \(t=\nicefrac{1}{2}\text{,}\) c) \(t=1\text{.}\) You may have to split your answer up by cases.


a) \(y(x,0.1) = \begin{cases} x-x^2-0.04 & \text{if } \; 0.2 \leq x \leq 0.8 \\ 0.6x & \text{if } \; x \leq 0.2 \\ 0.6-0.6x & \text{if } \; x \geq 0.8 \\ \end{cases}\)
b) \(y(x,\nicefrac{1}{2}) = -x+x^2\)     c) \(y(x,1) = x-x^2\)


Take \(y_{tt} = 100y_{xx}\text{,}\) \(0 < x < 4\text{,}\) \(t > 0\text{,}\) \(y(0,t) = y(4, t) = 0\text{,}\) \(y(x,0) = F(x)\text{,}\) and \(y_t(x,0) = 0\text{.}\) Suppose that \(F(0)=0\text{,}\) \(F(1)=2\text{,}\) \(F(2)=3\text{,}\) \(F(3)=1\text{.}\) Using the d'Alembert solution, find a) \(y(1,1)\text{,}\) b) \(y(4,3)\text{,}\) c) \(y(3,9)\text{.}\)


a) \(y(1,1) = -\nicefrac{1}{2}\)     b) \(y(4,3) = 0\)     c) \(y(3,9) = \nicefrac{1}{2}\)
