
Section 6.4 Dirac delta and impulse response

Note: 1 or 1.5 lectures, §7.6 in [EP], §6.5 in [BD]

Subsection 6.4.1 Rectangular pulse

Often we study a physical system by putting in a short pulse and then seeing what the system does. The resulting behavior is called the impulse response, and understanding it tells us how the system responds to any input. Let us see what we mean by a pulse. The simplest kind of pulse is a rectangular pulse defined by

\begin{equation*} \varphi(t) = \begin{cases} 0 & \text{if } \; \phantom{a \leq {}} t < a , \\ M & \text{if } \; a \leq t < b , \\ 0 & \text{if } \; b \leq t . \end{cases} \end{equation*}

See Figure 6.3 for a graph.

Figure 6.3. Sample square pulse with \(a=0.5\text{,}\) \(b=1\text{,}\) and \(M = 2\text{.}\)

Notice that

\begin{equation*} \varphi(t) = M \bigl( u(t-a) - u(t-b) \bigr) , \end{equation*}

where \(u(t)\) is the unit step function.

Let us take the Laplace transform of a square pulse,

\begin{equation*} \begin{split} {\mathcal{L}} \bigl\{ \varphi(t) \bigr\} & = {\mathcal{L}} \bigl\{ M \bigl( u(t-a) - u(t-b) \bigr) \bigr\} \\ & = M \frac{e^{-as} - e^{-bs}}{s} . \end{split} \end{equation*}

For simplicity, let \(a=0\text{.}\) It is also convenient to set \(M = \nicefrac{1}{b}\) so that

\begin{equation*} \int_0^\infty \varphi(t) \,dt = 1 . \end{equation*}

That is, so that the pulse has “unit mass.” For such a pulse,

\begin{equation*} {\mathcal{L}} \bigl\{ \varphi(t) \bigr\} = {\mathcal{L}} \left\{ \frac{u(t) - u(t-b)}{b} \right\} = \frac{1 - e^{-bs}}{bs} . \end{equation*}

We want \(b\) to be very small; we wish to have the pulse be very short and very tall. By letting \(b\) go to zero we arrive at the concept of the Dirac delta function.
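This limit can be checked symbolically. The following sketch (using sympy, which is of course not part of the text; any computer algebra system would do) computes the limit of the pulse's transform as \(b \to 0^+\text{:}\)

```python
# A sketch verifying that the Laplace transform of the unit-mass pulse
# (u(t) - u(t-b))/b tends to 1 as b -> 0+.
import sympy as sp

b, s = sp.symbols('b s', positive=True)

pulse_transform = (1 - sp.exp(-b*s)) / (b*s)  # transform computed above
print(sp.limit(pulse_transform, b, 0, '+'))   # 1
```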

Subsection 6.4.2 The delta function

The Dirac delta function 1  is not exactly a function; it is sometimes called a generalized function. We avoid unnecessary details and simply say that it is an object that does not really make sense unless we integrate it. The motivation is that we would like a “function” \(\delta(t)\) such that for any continuous function \(f(t)\) we have

\begin{equation*} \mybxbg{~~ \int_{-\infty}^\infty \delta(t) f(t) \,dt = f(0) . ~~} \end{equation*}

The formula should hold if we integrate over any interval that contains 0, not just \((-\infty,\infty)\text{.}\) So \(\delta(t)\) is a “function” with all its “mass” at the single point \(t=0\text{.}\) In other words, for any interval \([c,d]\)

\begin{equation*} \int_c^d \delta(t) \,dt = \begin{cases} 1 & \text{if the interval $[c,d]$ contains 0, i.e.\ } c \leq 0 \leq d, \\ 0 & \text{otherwise.} \end{cases} \end{equation*}

Unfortunately there is no such function in the classical sense. You could informally think that \(\delta(t)\) is zero for \(t\not=0\) and somehow infinite at \(t=0\text{.}\)

A good way to think about \(\delta(t)\) is as a limit of short pulses whose integral is \(1\text{.}\) For example, suppose that we have a square pulse \(\varphi(t)\) as above with \(a=0\text{,}\) \(M=\nicefrac{1}{b}\text{,}\) that is \(\varphi(t) = \frac{u(t) - u(t-b)}{b}\text{.}\) Compute

\begin{equation*} \int_{-\infty}^\infty \varphi(t) f(t) \,dt = \int_{-\infty}^\infty \frac{u(t) - u(t-b)}{b} f(t) \,dt = \frac{1}{b} \int_{0}^b f(t) \,dt . \end{equation*}

If \(f(t)\) is continuous at \(t=0\text{,}\) then for very small \(b\text{,}\) the function \(f(t)\) is approximately equal to \(f(0)\) on the interval \([0,b]\text{.}\) We approximate the integral

\begin{equation*} \frac{1}{b} \int_{0}^b f(t) \,dt \approx \frac{1}{b} \int_{0}^b f(0) \,dt = f(0) . \end{equation*}

Hence,

\begin{equation*} \lim_{b\to 0} \int_{-\infty}^\infty \varphi(t) f(t) \,dt = \lim_{b\to 0} \frac{1}{b} \int_{0}^b f(t) \,dt = f(0) . \end{equation*}
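For a concrete instance, the same computation can be carried out in sympy (a sketch; the choice \(f(t) = \cos(t)\) is an arbitrary continuous function picked for illustration):

```python
# Sketch: the average (1/b) * int_0^b f(t) dt tends to f(0) as b -> 0+.
import sympy as sp

t, b = sp.symbols('t b', positive=True)
f = sp.cos(t)  # arbitrary continuous function chosen for illustration

avg = sp.integrate(f, (t, 0, b)) / b   # equals sin(b)/b here
print(sp.limit(avg, b, 0, '+'))        # 1, which is f(0) = cos(0)
```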

Let us therefore accept \(\delta(t)\) as an object that is possible to integrate. We often want to shift \(\delta\) to another point, for example \(\delta(t-a)\text{.}\) In that case,

\begin{equation*} \int_{-\infty}^\infty \delta(t-a) f(t) \,dt = f(a) . \end{equation*}

Note that \(\delta(a-t)\) is the same object as \(\delta(t-a)\text{.}\) Combining this symmetry with the shift formula above shows that the convolution of \(\delta(t)\) with \(f(t)\) is again \(f(t)\text{,}\)

\begin{equation*} (f * \delta) (t) = \int_{0}^t \delta(t-s) f(s) \,ds = f(t) . \end{equation*}

As we can integrate \(\delta(t)\text{,}\) we compute its Laplace transform:

\begin{equation*} \mybxbg{~~ {\mathcal{L}} \bigl\{ \delta(t-a) \bigr\} = \int_{0}^\infty e^{-st} \delta(t-a) \,dt = e^{-as} . ~~} \end{equation*}

In particular,

\begin{equation*} {\mathcal{L}} \bigl\{ \delta(t) \bigr\} = 1 . \end{equation*}
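These transforms can be confirmed by evaluating the defining integral directly; the delta picks out the value of the integrand at \(t = a\text{.}\) A sketch in sympy, using the arbitrary sample shift \(a = \nicefrac{3}{2}\text{:}\)

```python
# Sketch: compute L{delta(t-a)} from the defining integral for a sample a > 0.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.Rational(3, 2)  # arbitrary sample shift, a > 0

F = sp.integrate(sp.exp(-s*t) * sp.DiracDelta(t - a), (t, 0, sp.oo))
print(F)  # exp(-3*s/2), that is, e^{-as}
```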

Remark 6.4.1.

The Laplace transform of \(\delta(t-a)\) would be the Laplace transform of the derivative of the Heaviside function \(u(t-a)\text{,}\) if the Heaviside function had a derivative. First,

\begin{equation*} {\mathcal{L}} \bigl\{ u(t-a) \bigr\} = \frac{e^{-as}}{s}. \end{equation*}

To obtain what the Laplace transform of the derivative would be we multiply by \(s\text{,}\) to obtain \(e^{-as}\text{,}\) which is the Laplace transform of \(\delta(t-a)\text{.}\) We see the same thing using integration,

\begin{equation*} \int_0^t \delta(s-a)\,ds = u(t-a) . \end{equation*}

So in a certain sense

\begin{equation*} \text{"} \quad \frac{d}{dt} \Bigl[ u(t-a) \Bigr] = \delta(t-a) . \quad \text{"} \end{equation*}

This line of reasoning allows us to talk about derivatives of functions with jump discontinuities. We can think of the derivative of the Heaviside function \(u(t-a)\) as being somehow infinite at \(a\text{,}\) which is precisely our intuitive understanding of the delta function.
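Computer algebra systems adopt this same convention. A sketch in sympy (not part of the text):

```python
# Sketch: sympy differentiates the Heaviside function to the Dirac delta.
import sympy as sp

t, a = sp.symbols('t a', real=True)
print(sp.diff(sp.Heaviside(t - a), t))  # DiracDelta(t - a)
```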

Example 6.4.1.

Let us compute \({\mathcal{L}}^{-1} \left\{ \frac{s+1}{s} \right\}\text{.}\) So far we only computed the inverse transform of proper rational functions in the \(s\) variable. That is, the numerator was of lower degree than the denominator. Not so with \(\frac{s+1}{s}\text{.}\) We can use the delta function to compute,

\begin{equation*} {\mathcal{L}}^{-1} \left\{ \frac{s+1}{s} \right\} = {\mathcal{L}}^{-1} \left\{ 1 + \frac{1}{s} \right\} = {\mathcal{L}}^{-1} \{ 1 \} + {\mathcal{L}}^{-1} \left\{ \frac{1}{s} \right\} = \delta(t) + 1 . \end{equation*}

The resulting object is a generalized function and only makes sense when put underneath an integral.
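If one trusts a computer algebra system with generalized functions, the same computation can be sketched in sympy (the Heaviside factor it returns equals \(1\) for \(t > 0\)):

```python
# Sketch: inverse Laplace transform of the improper rational function (s+1)/s.
import sympy as sp

t, s = sp.symbols('t s')
res = sp.inverse_laplace_transform((s + 1)/s, s, t)
print(res)  # the delta function plus the unit step
```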

Subsection 6.4.3 Impulse response

As we said before, in the differential equation \(L x = f(t)\text{,}\) we think of \(f(t)\) as input, and \(x(t)\) as the output. We think of the delta function as an impulse, and so to find the response to an impulse, we use the delta function in place of \(f(t)\text{.}\) The solution to

\begin{equation*} L x = \delta(t) \end{equation*}

is called the impulse response.

Example 6.4.2.

Solve (find the impulse response)

\begin{equation} x'' + \omega_0^2 x = \delta(t) , \quad x(0) = 0, \quad x'(0) = 0 .\tag{6.3} \end{equation}

We first apply the Laplace transform to the equation. Denote the transform of \(x(t)\) by \(X(s)\text{.}\)

\begin{equation*} s^2 X(s) + \omega_0^2 X(s) = 1 , \qquad \text{and so} \qquad X(s) = \frac{1}{s^2+ \omega_0^2} . \end{equation*}

The inverse Laplace transform produces

\begin{equation*} x(t) = \frac{\sin (\omega_0 t)}{\omega_0} . \end{equation*}
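This inverse transform can be reproduced in sympy (a sketch; the Heaviside factor that sympy attaches equals \(1\) for \(t > 0\)):

```python
# Sketch: recover the impulse response sin(omega0*t)/omega0 from 1/(s^2+omega0^2).
import sympy as sp

t, s = sp.symbols('t s')
w0 = sp.symbols('omega0', positive=True)

x = sp.inverse_laplace_transform(1/(s**2 + w0**2), s, t)
print(sp.simplify(x))  # sin(omega0*t)/omega0 times a unit step in t
```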

Let us notice something about the example above. In Example 6.3.4, we found that when the input is \(f(t)\text{,}\) the solution to \(Lx = f(t)\) is given by

\begin{equation*} x(t) = \int_0^t f(\tau) \frac{\sin \bigl( \omega_0 (t-\tau) \bigr)}{\omega_0} ~ d\tau . \end{equation*}

That is, the solution for an arbitrary input is given as convolution with the impulse response. Let us see why. The key is to notice that for functions \(x(t)\) and \(f(t)\text{,}\)

\begin{equation*} (x * f)''(t) = \frac{d^2}{dt^2}\left[ \int_0^t f(\tau) x(t-\tau) ~ d\tau \right] = \int_0^t f(\tau) x''(t-\tau) ~ d\tau = (x'' * f)(t) . \end{equation*}

We simply differentiate twice under the integral 3 ; the details are left as an exercise. If we convolve both sides of equation (6.3) with \(f\text{,}\) the left-hand side becomes

\begin{equation*} (x'' + \omega_0^2 x) * f = (x'' * f) + \omega_0^2 (x * f) = (x * f)'' + \omega_0^2 (x * f) . \end{equation*}

The right-hand side becomes

\begin{equation*} (\delta * f)(t) = f(t). \end{equation*}

Therefore \(y(t) = (x * f)(t)\) is the solution to

\begin{equation*} y'' + \omega_0^2 y = f(t) . \end{equation*}

This procedure works in general for other linear equations \(Lx = f(t)\text{.}\) If you determine the impulse response, you also know how to obtain the output \(x(t)\) for any input \(f(t)\) by simply convolving the impulse response and the input \(f(t)\text{.}\)
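A quick numerical sanity check of this principle (a sketch with numpy; we take \(\omega_0 = 1\) and the arbitrary input \(f(t) = 1\text{,}\) for which the exact solution with zero initial conditions is \(1 - \cos t\)):

```python
# Sketch: convolving the impulse response sin(t) with the input f(t) = 1
# numerically reproduces the exact solution 1 - cos(t) of x'' + x = 1.
import numpy as np

t = np.linspace(0, 10, 2001)
dt = t[1] - t[0]

impulse_response = np.sin(t)   # impulse response for omega0 = 1
f = np.ones_like(t)            # arbitrary input chosen for illustration

# Discrete approximation of (x * f)(t) = int_0^t sin(t - tau) f(tau) dtau
x = np.convolve(impulse_response, f)[:len(t)] * dt

exact = 1.0 - np.cos(t)
print(np.max(np.abs(x - exact)))  # small; shrinks with the grid spacing
```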

Subsection 6.4.4 Three-point beam bending

Let us give another quite different example where the delta function turns up: representing point loads on a steel beam. Consider a beam of length \(L\text{,}\) resting on two simple supports at the ends. Let \(x\) denote the position on the beam, and let \(y(x)\) denote the deflection of the beam in the vertical direction. The deflection \(y(x)\) satisfies the Euler–Bernoulli equation 4 ,

\begin{equation*} EI \frac{d^4 y}{dx^4} = F(x) , \end{equation*}

where \(E\) and \(I\) are constants 8  and \(F(x)\) is the force applied per unit length at position \(x\text{.}\) The situation we are interested in is when the force is applied at a single point as in Figure 6.4.

Figure 6.4. Three-point bending.

The equation becomes

\begin{equation*} EI \frac{d^4 y}{dx^4} = -F \delta(x-a) , \end{equation*}

where \(x=a\) is the point where the force is applied. The constant \(F\) is the magnitude of the force, and the minus sign indicates that the force is downward, that is, in the negative \(y\) direction. The endpoints of the beam satisfy the conditions,

\begin{equation*} \begin{aligned} & y(0) = 0, \qquad y''(0) = 0, \\ & y(L) = 0, \qquad y''(L) = 0. \end{aligned} \end{equation*}

See Section 5.2 for further information about endpoint conditions applied to beams.

Example 6.4.3.

Suppose that the length of the beam is 2, and that \(EI=1\) for simplicity. Further suppose that the force \(F=1\) is applied at \(x=1\text{.}\) That is, we have the equation

\begin{equation*} \frac{d^4 y}{dx^4} = -\delta(x-1) , \end{equation*}

and the endpoint conditions are

\begin{equation*} y(0) = 0, \qquad y''(0) = 0, \qquad y(2) = 0, \qquad y''(2) = 0. \end{equation*}

We could integrate, but using the Laplace transform is even easier. We apply the transform in the \(x\) variable rather than the \(t\) variable. Let us again denote the transform of \(y(x)\) as \(Y(s)\text{.}\)

\begin{equation*} s^4Y(s)-s^3y(0)-s^2y'(0)-sy''(0)-y'''(0) = -e^{-s}. \end{equation*}

We notice that \(y(0) = 0\) and \(y''(0) = 0\text{.}\) Let us call \(C_1 = y'(0)\) and \(C_2=y'''(0)\text{.}\) We solve for \(Y(s)\text{,}\)

\begin{equation*} Y(s) = \frac{-e^{-s}}{s^4} + \frac{C_1}{s^2}+ \frac{C_2}{s^4} . \end{equation*}

We take the inverse Laplace transform utilizing the second shifting property (6.1) to take the inverse of the first term.

\begin{equation*} y(x) = \frac{-{(x-1)}^3}{6} u(x-1) + C_1 x + \frac{C_2}{6} x^3 . \end{equation*}

We still need to apply two of the endpoint conditions. As the conditions are at \(x=2\text{,}\) where \(u(x-1) = 1\text{,}\) we can simply replace \(u(x-1)\) by \(1\) when taking the derivatives. Therefore,

\begin{equation*} 0 = y(2) = \frac{-{(2-1)}^3}{6} + C_1 (2) + \frac{C_2}{6} 2^3 = \frac{-1}{6} + 2 C_1 + \frac{4}{3} C_2 , \end{equation*}

and

\begin{equation*} 0 = y''(2) = \frac{-3\cdot 2 \cdot (2-1)}{6} + \frac{C_2}{6} 3\cdot 2 \cdot 2 = -1 + 2 C_2 . \end{equation*}

Hence \(C_2 = \frac{1}{2}\) and solving for \(C_1\) using the first equation we obtain \(C_1 = \frac{-1}{4}\text{.}\) Our solution for the beam deflection is

\begin{equation*} y(x) = \frac{-{(x-1)}^3}{6} u(x-1) - \frac{x}{4} + \frac{x^3}{12} . \end{equation*}
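One can verify this solution with sympy (a sketch): the four endpoint conditions hold, and the deflection at the loaded point is \(y(1) = \nicefrac{-1}{6}\text{.}\)

```python
# Sketch: check the endpoint conditions and the deflection at x = 1.
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.Heaviside

y = -(x - 1)**3/6 * u(x - 1) - x/4 + x**3/12

print(y.subs(x, 0), sp.diff(y, x, 2).subs(x, 0))  # both 0
print(y.subs(x, 2), sp.diff(y, x, 2).subs(x, 2))  # both 0
print(y.subs(x, 1))                               # -1/6
```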

Subsection 6.4.5 Exercises

Exercise 6.4.1.

Solve (find the impulse response) \(x'' + x' + x = \delta(t)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0)=0\text{.}\)

Exercise 6.4.2.

Solve (find the impulse response) \(x'' + 2 x' + x = \delta(t)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0)=0\text{.}\)

Exercise 6.4.3.

A pulse can come later and can be bigger. Solve \(x'' + 4 x = 4\delta(t-1)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0)=0\text{.}\)

Exercise 6.4.4.

Suppose that \(f(t)\) and \(g(t)\) are differentiable functions and suppose that \(f(t) = g(t) = 0\) for all \(t \leq 0\text{.}\) Show that

\begin{equation*} (f * g)'(t) = (f' * g)(t) = (f * g')(t) . \end{equation*}

Exercise 6.4.5.

Suppose that \(L x = \delta(t)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0) = 0\text{,}\) has the solution \(x = e^{-t}\) for \(t > 0\text{.}\) Find the solution to \(Lx = t^2\text{,}\) \(x(0) = 0\text{,}\) \(x'(0) = 0\) for \(t > 0\text{.}\)

Exercise 6.4.6.

Compute \({\mathcal{L}}^{-1} \left\{ \frac{s^2+s+1}{s^2} \right\}\text{.}\)

Exercise 6.4.7.

(challenging)   Solve Example 6.4.3 by integrating 4 times in the \(x\) variable.

Exercise 6.4.8.

Suppose we have a beam of length \(1\) simply supported at the ends and suppose that force \(F=1\) is applied at \(x=\frac{3}{4}\) in the downward direction. Suppose that \(EI=1\) for simplicity. Find the beam deflection \(y(x)\text{.}\)

Exercise 6.4.101.

Solve (find the impulse response) \(x'' = \delta(t)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0)=0\text{.}\)

Answer.

\(x(t) = t\)

Exercise 6.4.102.

Solve (find the impulse response) \(x' + a x = \delta(t)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0)=0\text{.}\)

Answer.

\(x(t) = e^{-at}\)

Exercise 6.4.103.

Suppose that \(L x = \delta(t)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0) = 0\text{,}\) has the solution \(x(t) = \cos(t)\) for \(t > 0\text{.}\) Find (in closed form) the solution to \(Lx = \sin(t)\text{,}\) \(x(0) = 0\text{,}\) \(x'(0) = 0\) for \(t > 0\text{.}\)

Answer.

\(x(t) = (\cos * \sin)(t) = \frac{1}{2} t \sin(t)\)

Exercise 6.4.104.

Compute \({\mathcal{L}}^{-1} \left\{ \frac{s^2}{s^2+1} \right\}\text{.}\)

Answer.

\(\delta(t) - \sin(t)\)

Exercise 6.4.105.

Compute \({\mathcal{L}}^{-1} \left\{ \frac{3 s^2 e^{-s} + 2}{s^2} \right\}\text{.}\)

Answer.

\(3 \delta(t-1) + 2 t\)

Named after the English physicist and mathematician Paul Adrien Maurice Dirac 2  (1902–1984).
https://en.wikipedia.org/wiki/Paul_Dirac
You should really think of the integral going over \((-\infty,\infty)\) rather than over \([0,t]\) and simply assume that \(f(t)\) and \(x(t)\) are continuous and zero for negative \(t\text{.}\)
Named for the Swiss mathematicians Jacob Bernoulli 5  (1654–1705), Daniel Bernoulli 6  (1700–1782), the nephew of Jacob, and Leonhard Paul Euler 7  (1707–1783).
https://en.wikipedia.org/wiki/Jacob_Bernoulli
https://en.wikipedia.org/wiki/Daniel_Bernoulli
https://en.wikipedia.org/wiki/Euler
\(E\) is the elastic modulus and \(I\) is the second moment of area. Let us not worry about the details and simply think of these as some given constants.