Lecture 24

Laplace transforms (cont’d)

The “Dirac delta function” (cont’d)

We now return to the substance X problem examined in the previous lecture where an amount

A of substance X is added to a beaker instantaneously at a time a > 0. After our introduction

of the Dirac delta function, we claim that the “function” which models the above instantaneous

addition is

f(t) = Aδ(t− a). (1)

As a result, the differential equation for x(t) modelling the instantaneous addition of A to the

beaker at time a is

dx/dt = −kx + Aδ(t − a). (2)

Let’s once again derive the solution of this DE, but starting at the beginning. We’ll consider

the more general DE,

dx/dt = −kx + f(t). (3)

Taking LT’s of both sides (it’s only a first order DE), we obtain

sX(s)− x0 = −kX(s) + F (s) . (4)

Now isolate X(s) on the LHS,

X(s) = x0/(s + k) + [1/(s + k)] F(s). (5)

We now formally take inverse Laplace transforms of both sides,

x(t) = x0 L−1[1/(s + k)] + L−1[(1/(s + k)) F(s)]

     = x0 e^{−kt} + L−1[(1/(s + k)) F(s)]. (6)

The second term on the RHS involves finding the inverse Laplace transform of a product of

two Laplace transforms. From the Convolution Theorem, and the fact, which we have already


used, that

L[e^{−kt}] = 1/(s + k), (7)

Eq. (6) becomes,

x(t) = x0 e^{−kt} + e^{−kt} ∗ f(t). (8)

From (1) we now replace f(t) with Aδ(t− a), i.e.,

x(t) = x0 e^{−kt} + Aδ(t − a) ∗ e^{−kt}. (9)

We now compute the convolution involving the Dirac delta function as follows,

Aδ(t − a) ∗ e^{−kt} = e^{−kt} ∗ Aδ(t − a) (10)

                    = A ∫_0^t e^{−k(t−τ)} δ(τ − a) dτ.

Note that to obtain the function on the left at time t ≥ 0, we must integrate over the variable

τ from 0 to t. The only contribution to this integral comes at the point τ = a. As such, the

integral is zero for 0 ≤ t < a. For t ≥ a, this integral is nonzero and becomes

A ∫_0^t e^{−k(t−τ)} δ(τ − a) dτ = A e^{−k(t−a)}, t ≥ a. (11)

Therefore, the solution x(t) becomes

x(t) = x0 e^{−kt} + A e^{−k(t−a)} H(t − a). (12)

This is in agreement with the result obtained at the beginning of the previous lecture by

considering the behaviour of x(t) for t < a, i.e., before the instantaneous addition of A units

of X, and for t ≥ a, after the addition. As such, we have shown that the term Aδ(t − a) correctly

models the instantaneous addition of A units of substance X at time t = a.
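As a quick numerical check of (12), the following Python sketch approximates Aδ(t − a) by a tall, narrow pulse of area A (in the spirit of the function f∆ from the previous lecture), integrates the DE with a simple Euler step, and compares the result with the closed-form solution. The parameter values below are arbitrary choices.

import numpy as np

# Arbitrary parameter choices; any k > 0, a > 0 behave the same way.
k, A, a, x0 = 0.5, 3.0, 2.0, 1.0
width = 1e-2                 # width of the pulse approximating the delta function
dt, T = 1e-4, 6.0            # Euler time step and final time

x = x0
for t in np.arange(0.0, T, dt):
    pulse = A / width if a <= t < a + width else 0.0   # area A, concentrated near t = a
    x += dt * (-k * x + pulse)                         # Euler step for dx/dt = -kx + f(t)

exact = x0 * np.exp(-k * T) + A * np.exp(-k * (T - a))  # Eq. (12), with H(T - a) = 1 since T > a
print(f"numerical x(T) = {x:.5f},  exact x(T) = {exact:.5f}")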


Application of the Dirac delta function to Newtonian mechanics

In what follows, we consider the motion of a particle of mass m in one dimension, and we let x(t) denote its position on the x-axis. If a force F(x) acts on the mass, then it will

move according to Newton’s second law

ma = m dv/dt = F(x(t)). (13)

If we integrate the above equation from t = t1 to t = t2 = t1 +∆t, then

m v(t2) − m v(t1) = m∆v = ∫_{t1}^{t2} F(x(t)) dt. (14)

Recall that the momentum of the particle is given by p = mv. Since the mass of the object is

assumed to remain constant, the left side of the above equation is the change in momentum of

the mass over the time interval [t1, t2]. The right side is referred to as the impulse. The above

equation states that

change in momentum = impulse.

If the force F is constant over the time interval, then we have

∆p = F∆t = I, (15)

where I = F∆t will denote the impulse associated with the interaction.

Now consider the following situation. We assume that a nonzero force of constant magnitude

F acts on the particle over the time interval [a, a+∆t], where a > 0 and ∆t > 0. At all other

times, the force acting on the particle is assumed to be zero. A sketch of the graph of F is

shown below.

Let us now examine the velocity of the particle. Suppose that its initial velocity is v(0) =

v0 > 0. Because no force acts on the particle over the time interval [0, a], its velocity will remain

unchanged, i.e., v(t) = v0 for 0 ≤ t ≤ a.

Over the time interval [a, a+∆t], Newton’s equation becomes

m dv/dt = F    or    dv/dt = F/m. (16)


[Figure: graph of F(t) vs. t, showing a constant force F over the interval [a, a + ∆t] and zero elsewhere.]

Integrating this DE from t = a to a time t ∈ [a, a + ∆t] gives

v(t) − v0 = (F/m)(t − a)    or    v(t) = v0 + (F/m)(t − a). (17)

As expected, v(t) increases linearly in time over this interval. At the end of this interval,

t = a+∆t, the velocity is

v1 = v0 + (F/m)∆t. (18)

And for t > a+∆t, v(t) = v1 since there is no force acting on the mass. A graph of v(t) vs. t

is sketched below.

[Figure: graph of v(t) vs. t, constant at v0 for 0 ≤ t ≤ a, increasing linearly to v1 over [a, a + ∆t], then constant at v1.]

The important feature of this graph is that the net change in the velocity due to the action

of the force F over the time interval ∆t is

∆v = v1 − v0 = (F/m)∆t = I/m. (19)

Now suppose that we decrease the time interval ∆t but simultaneously increase the magnitude F of the force so that the impulse I = F∆t – the area under the nonzero part of the

graph of F – remains constant. (This is analogous to the function f∆(t) used to model the

addition of A units of substance X over a time interval of length ∆.) This implies that the


same change of velocity ∆v = v1 − v0 would be produced over a smaller time interval. In the

limit ∆t → 0, the velocity function v(t) will become discontinuous at t = a, with a jump of

∆v = v1 − v0 = I/m. The result is sketched below.

[Figure: graph of v(t) vs. t in the limit ∆t → 0, with a jump from v0 to v1 at t = a.]

Once again, the discontinuous jump in velocity at t = a is due to an idealized impulse of

strength I = m(v1 − v0) = m∆v applied at t = a. Mathematically, this can be expressed by

the DE,

m dv/dt = I δ(t − a), (20)

or

dv/dt = (I/m) δ(t − a) = ∆v δ(t − a). (21)

If we integrate this DE from time t = 0 to a time t > 0, then we obtain

v(t) = v0 + (v1 − v0) ∫_0^t δ(τ − a) dτ (22)

     = v0 + (v1 − v0) H(t − a), t ≥ 0

     = { v0, 0 ≤ t < a,
         v1, t ≥ a.

The graph of v(t) coincides with the graph sketched above.
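The limit described above is easy to see numerically. The following minimal Python sketch (all values arbitrary) integrates m dv/dt = F over shrinking intervals ∆t while holding the impulse I = F∆t fixed; the velocity gained by the particle is I/m every time.

# Arbitrary values: mass, initial velocity and impulse.
m, v0, I = 2.0, 1.0, 4.0

for dt in [1.0, 0.1, 0.01, 0.001]:       # duration over which the force acts
    F = I / dt                           # magnitude chosen so that the impulse F*dt stays fixed
    steps = 1000
    h = dt / steps
    v = v0                               # velocity is v0 up to t = a (no force before the kick)
    for _ in range(steps):               # Euler integration of m dv/dt = F over the force interval
        v += h * F / m
    print(f"dt = {dt:6.3f}  ->  v after the kick = {v:.6f}   (v0 + I/m = {v0 + I/m})")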

We can integrate this discontinuous velocity-time graph to obtain the position x(t). The

graph of x(t) will have two components: (1) a straight line of slope v0 for 0 ≤ t ≤ a, (2) a

straight line of slope v1 for t > a. The qualitative behaviour of this graph is sketched below.

The most noteworthy feature of this graph is that it is not discontinuous – there is no jump

in the position x(t) at t = a. Of course, there cannot be such a jump for it would imply that

the velocity at that time would be infinite, which it is not.


[Figure: graph of x(t) vs. t, starting at x0 with slope v0 for 0 ≤ t ≤ a and slope v1 for t > a; the graph is continuous, with a kink at t = a.]

Newton’s equation in (20) can also be expressed in terms of the function x(t) as

m d²x/dt² = I δ(t − a) = m∆v δ(t − a). (23)


Lecture 25

Laplace Transforms (cont’d)

Return to “Substance X problem”: Periodic application of Dirac delta function

Let us once again return to our “substance X problem.” Left alone in a beaker, substance X

decays according to the rate law

dx/dt = −kx, (24)

where x(t) represents the concentration of X . We then saw how the instantaneous addition of

an amount A to the beaker at time t = a can be represented by the Dirac delta function – the

resulting evolution equation for x(t) becomes

dx/dt = −kx + Aδ(t − a), x(0) = x0. (25)

The solution to this problem was obtained using Laplace transforms:

x(t) = x0 e^{−kt} + A e^{−k(t−a)} H(t − a). (26)

As a side remark, the solution to the DE in (25) can also be found using the method of

integrating factors for first-order linear DEs. We leave this as an exercise for the reader.

Not surprisingly, the above solution x(t) → 0 as t → ∞. Unless we continue to add more

X to the beaker, the concentration of X will decay to zero. So let us consider adding A units

of X to the beaker at regular intervals, say at times tn = nT for n = 1, 2, · · · , where T > 0

is the time interval between additions. There is now no opportunity for x(t) to decay to zero.

We wonder, however, whether it is possible that x(t) approaches some kind of “equilibrium

situation.” Obviously this equilibrium situation could not be a constant solution, since we

are adding an amount A at each time step tn = nT . But after each amount A is added, the

concentration x(t) decreases. Is it possible that after A is added, and we have an amount y in

the beaker, the system will decrease to an amount y−A after time T so that when an amount

A is added, we start with y again, only to repeat the cycle? The situation is pictured below.

We can easily solve for the unknown y for which such a periodic solution would exist.

If, at the start of each cycle, the concentration of X at a time tn = nT is y, then at time


[Figure: sawtooth-like graph of the periodic solution, jumping up to y at the times T, 2T, 3T, . . . and decaying to y − A just before each addition.]

tn+1 = (n + 1)T, i.e., after a time lapse T, the concentration in the beaker will be y e^{−kT}.

Periodicity of the solution requires that

y e^{−kT} + A = y, (27)

which is satisfied by

y = A/(1 − e^{−kT}). (28)

Our periodic solution would then be given as follows: For n = 1, 2, · · · ,

x(t) = [A/(1 − e^{−kT})] e^{−k(t−nT)}, for nT ≤ t < (n + 1)T. (29)

Let us see if this periodic solution is in fact approached as t → ∞.
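Before doing so, a quick numerical experiment already suggests the answer: iterate “decay for a time T, then add A” and watch the level just after each addition. (A minimal Python sketch; the values of k, T, A and x0 are arbitrary.)

import numpy as np

k, T, A, x0 = 0.7, 1.0, 2.0, 0.0     # arbitrary decay rate, period, dose and initial amount

x = x0
for n in range(1, 21):
    x = x * np.exp(-k * T) + A       # decay over one period, then instantaneous addition of A
    if n % 5 == 0:
        print(f"after addition {n:2d}: x = {x:.6f}")

print("predicted level y = A/(1 - e^{-kT}) =", A / (1 - np.exp(-k * T)))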

We can model this regular addition of X by means of an infinite sum of Dirac delta functions

that are positioned at the times tn = nT , i.e.,

dx/dt = −kx + f(t), (30)

where

f(t) = Σ_{n=1}^{∞} A δ(t − nT). (31)

We can solve this DE using LTs in the same way as we did for the DE in (25). Just to recall,

take LTs to obtain

sX(s)− x0 = −kX(s) + F (s), (32)

where F (s) = L[f ]. Then solve for X(s):

X(s) = x0/(k + s) + F(s)/(k + s). (33)


Then take inverse LTs to give

x(t) = x0 e^{−kt} + (g ∗ f)(t) (34)

     = xh(t) + xp(t),

where

g(t) = L−1[1/(s + k)] = e^{−kt}. (35)

Note that xh(t) = x0 e^{−kt} → 0 as t → ∞. We now examine the behaviour of xp(t) = (g ∗ f)(t)

to see whether it approaches the periodic solution constructed above. Let us first construct

xp(t):

xp(t) = (g ∗ f)(t) = ∫_0^t g(t − τ) f(τ) dτ (36)

      = A Σ_{n=1}^{∞} ∫_0^t e^{−k(t−τ)} δ(τ − nT) dτ

      = A e^{−kt} Σ_{n=1}^{∞} ∫_0^t e^{kτ} δ(τ − nT) dτ.

As t increases from 0, the integral

∫_0^t e^{kτ} δ(τ − nT) dτ (37)

contributes only when t crosses the value nT: its contribution is e^{knT}. Let us examine a couple

of cases:

1. For T ≤ t < 2T, xp(t) = A e^{−kt} e^{kT}.

2. For 2T ≤ t < 3T, xp(t) = A e^{−kt}[e^{kT} + e^{2kT}].

Let us now consider the general case nT ≤ t < (n + 1)T . We shall write t = nT + u, where

0 ≤ u < T . In this case

xp(nT + u) = A e^{−k(nT+u)}[e^{kT} + e^{2kT} + · · · + e^{knT}] (38)

           = A e^{−ku}[1 + e^{−kT} + e^{−2kT} + · · · + e^{−(n−1)kT}].


The expression in the square brackets is the partial sum of a geometric series with ratio r =

e^{−kT} < 1. The sum of this convergent geometric series is

1/(1 − r) = 1/(1 − e^{−kT}). (39)

Therefore, as n → ∞, the solution xp in (38) approaches the periodic solution that we constructed in Eq. (29):

xp(nT + u) → [A/(1 − e^{−kT})] e^{−ku}, as n → ∞. (40)
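The convergence of the partial sums in (38) to the limit (40) can also be checked numerically; in the following sketch the values of k, T, A and u are arbitrary.

import numpy as np

k, T, A, u = 0.7, 1.0, 2.0, 0.3      # arbitrary; u is the time elapsed since the last kick

for n in [1, 5, 10, 20, 40]:
    partial = sum(np.exp(-m * k * T) for m in range(n))   # bracketed sum in Eq. (38)
    print(f"n = {n:3d}:  x_p(nT + u) = {A * np.exp(-k * u) * partial:.6f}")

print("limit from Eq. (40):", A * np.exp(-k * u) / (1 - np.exp(-k * T)))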


Lecture 26

Laplace Transforms (cont’d)

Damped oscillator with “kicking”

Consider the following problem,

x′′ + 3x′ + 2x = Aδ(t− 1), x(0) = x0, x′(0) = v0, A > 0. (41)

This can be viewed as the “kicking” of a damped oscillator at time t = 1. The left-side of

this equation models an overdamped oscillator, since the general solution of the homogeneous

equation

x′′ + 3x′ + 2x = 0, (42)

is

x(t) = C1 e^{−2t} + C2 e^{−t}. (43)

The right side of Eq. (41) imparts an impulse of strength A to the system. From our earlier discussion, we expect this impulse to produce a discontinuity in the velocity function of

magnitude A at time t = 1, i.e.

x′(1+)− x′(1−) = A. (44)

Let us solve this problem using Laplace Transforms, but leaving the RHS in the more general

form,

x′′ + 3x′ + 2x = f(t) , x(0) = x0, x′(0) = v0, A > 0. (45)

where

f(t) = Aδ(t− 1) . (46)

Taking LTs of both sides of (45) gives

(s² + 3s + 2)X(s) − sx0 − v0 − 3x0 = F(s), (47)

Now solve for X(s):

X(s) = (x0 s + v0 + 3x0)/(s² + 3s + 2) + F(s)/(s² + 3s + 2), (48)


which we shall also write as

X(s) = (x0s+ v0 + 3x0)G(s) + G(s)F (s) , (49)

where

G(s) = 1/(s² + 3s + 2) (50)

is the transfer function associated with the DE in (45).

Taking inverse LTs of both sides of (48) at least formally gives

x(t) = L−1[(x0 s + v0 + 3x0)/(s² + 3s + 2)] + L−1[F(s)/(s² + 3s + 2)] (51)

     = xh(t) + xp(t)

The function xh(t) will be the solution to the homogeneous DE (42) and is rather easily found,

using techniques that you probably saw in AMATH 250/251 or equivalent, to be

xh(t) = −(x0 + v0) e^{−2t} + (2x0 + v0) e^{−t}. (52)

Of greater interest is the particular solution

xp(t) = L−1[F(s)/(s² + 3s + 2)], (53)

which will be the response of the system due to the input function Aδ(t − 1). We may compute

this inverse LT in at least two ways, e.g., (i) by convolution, (ii) by a shift theorem.

1. Method No. 1 - convolution: We shall write the right-side as

L−1[G(s)F (s)] (54)

where, once again,

G(s) = 1/(s² + 3s + 2), (55)

and

F (s) = L[f(t)] = AL[δ(t− 1)] . (56)


We’ll use the convolution theorem to compute the above inverse LT, i.e.,

L−1[G(s)F (s)] = (g ∗ f)(t) , (57)

where g(t) and f(t) are the inverse LTs of G(s) and F (s), respectively.

We can easily find the inverse LT of G(s), the transfer function, using partial fractions:

1/(s² + 3s + 2) = −1/(s + 2) + 1/(s + 1), (58)

implying that

g(t) = −e^{−2t} + e^{−t}. (59)

For future reference, note that

g(0) = 0, g′(0) = 1. (60)

The inverse transform of F (s) is the original kicking term f(t) = Aδ(t− 1) – that’s how

we got F (s) in the first place!

From these results, we compute the inverse transform as follows,

L−1[G(s)F(s)] = (g ∗ f)(t) (61)

              = A ∫_0^t g(t − τ) δ(τ − 1) dτ.

Because of the Dirac delta function δ(τ −1) in the integrand, the only contribution to the

integral is at the point τ = 1. Therefore this integral is zero for all t < 1: since τ cannot

reach the value 1, the integral will be zero. When t ≥ 1, the contribution to the integral

is g(t− 1). Therefore the final result is

xp(t) = L−1[G(s)F(s)] = A g(t − 1) H(t − 1), (62)

where the Heaviside step function ensures that there is no contribution until t = 1.

2. Method No. 2 - Shift Theorem: We shall once again consider Eq. (54) but with the

explicit form of the Laplace transform F (s). Recall that

F (s) = L[f(t)] = AL[δ(t− 1)] . (63)


We can easily compute this Laplace transform using the definition of the LT:

L[δ(t − 1)] = ∫_0^∞ e^{−st} δ(t − 1) dt

            = e^{−s}, (64)

so that

F(s) = A e^{−s}. (65)

This implies that

xp(t) = A L−1[e^{−s} G(s)]. (66)

We now recall the following Shift Theorem:

L[g(t − p) H(t − p)] = e^{−ps} G(s), (67)

which implies that

L−1[e^{−ps} G(s)] = g(t − p) H(t − p). (68)

Therefore, we find, using p = 1,

xp(t) = Ag(t− 1)H(t− 1), (69)

in agreement with the result of Method No. 1.
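Both methods can be checked symbolically with sympy; the following sketch is my own, not part of the notes. It computes the partial fractions (58), the inverse transform g(t), and the full response xp(t); depending on the sympy version, the results may carry explicit Heaviside factors or need some extra simplification.

import sympy as sp

s = sp.Symbol('s')
t = sp.Symbol('t', positive=True)
A = sp.Symbol('A', positive=True)

G = 1 / (s**2 + 3*s + 2)                       # transfer function, Eq. (50)
print(sp.apart(G, s))                          # partial fractions: 1/(s + 1) - 1/(s + 2)

g = sp.inverse_laplace_transform(G, s, t)      # expect -exp(-2t) + exp(-t), Eq. (59)
print(sp.simplify(g))

# Response to F(s) = A e^{-s}; expect A*g(t - 1)*Heaviside(t - 1), Eqs. (62)/(69)
xp = sp.inverse_laplace_transform(A * sp.exp(-s) * G, s, t)
print(sp.simplify(xp))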

In summary, the solution to our kicked oscillator problem is

x(t) = xh(t) + Ag(t− 1)H(t− 1). (70)

This solution is simply a linear combination of the homogeneous solution xh(t) and the response

Ag(t − 1)H(t − 1). But the response xp(t) is not added until t = 1. Recall that we found

g′(0) = 1. This means that the sudden addition of xp(t) to xh(t) at t = 1 will produce an

instantaneous increase in the slope x′(t) by A at t = 1.

We illustrate with an example. Let the initial conditions be x0 = 1, v0 = 0 and the strength of

the impulse be A = 2. From (52), the solution x(t) for 0 ≤ t < 1 will be

x(t) = xh(t) = −e^{−2t} + 2e^{−t}, 0 ≤ t < 1. (71)


At t = 1, when the oscillator is kicked, the response xp(t) is added and we have

x(t) = xh(t) + xp(t) (72)

     = −e^{−2t} + 2e^{−t} + 2 g(t − 1) H(t − 1)

     = −e^{−2t} + 2e^{−t} + [−2e^{−2(t−1)} + 2e^{−(t−1)}] H(t − 1), t ≥ 1.

A plot of the solution x(t) is shown in the figure below. At t = 1 its derivative x′(t) undergoes

a jump of 2. As a result x(t) increases for a while before it begins a monotonic decrease toward

zero. Had the impulse not been applied, the solution x(t) would have continued along the lower

curve.

[Figure: plot of x(t) vs. t for 0 ≤ t ≤ 5 with x0 = 1, v0 = 0 and A = 2; the kick at t = 1 produces a kink, after which x(t) rises briefly before decaying toward zero, while the unkicked solution continues along the lower curve.]
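The figure can be reproduced with a few lines of Python; the following sketch simply plots Eq. (72) together with the unkicked solution xh(t).

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 5.0, 1000)
H = np.heaviside(t - 1.0, 1.0)                    # step at the kick time t = 1

xh = -np.exp(-2*t) + 2*np.exp(-t)                 # homogeneous solution for x0 = 1, v0 = 0
g_shifted = -np.exp(-2*(t - 1)) + np.exp(-(t - 1))
x = xh + 2.0 * g_shifted * H                      # Eq. (72) with A = 2

plt.plot(t, x, label="kicked solution x(t)")
plt.plot(t, xh, "--", label="unkicked solution x_h(t)")
plt.xlabel("t"); plt.ylabel("x(t)"); plt.legend(); plt.show()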

Appendix: Periodic “kicking” of damped oscillator

The following section was not discussed in class, but is presented as additional

information for those interested.

In spite of being “kicked” at time t = 1, all solutions of the above kicked oscillator problem

decay to zero as t → ∞. However, what would happen if we kept kicking the oscillator in

order not to give it a chance to decay to rest? Let us examine the following general form of a

periodically kicked oscillator,

x′′ + 3x′ + 2x = A Σ_{n=1}^{∞} δ(t − nT), (73)


where A > 0 is the strength of the impulse that is applied at regular intervals of T > 0. From

our earlier experience with the “substance X” problem, we know that solutions to (73) will

behave qualitatively as follows:

1. Between kicks, i.e., for nT < t < (n + 1)T, n = 1, 2, · · · , solutions will decrease in time.

2. Immediately after each kick at t = nT , n = 1, 2, · · · , an additional velocity of A will be

imparted to the solution. As a result, the velocity function x′(t) will have discontinuities

at times t = nT , n = 1, 2, · · · . The position function x(t) will remain continuous, but

“kinks” such as those observed for the single-kicked case will be evident in the graph of

x(t) vs. t.

One could work out the functional form of the solution to this problem using Laplace

Transforms and convolution. The general form of the solution will be, as before,

x(t) = xh(t) + xp(t). (74)

Here xh(t) is once again the solution to the homogeneous equation satisfying the conditions

xh(0) = x0 and x′h(0) = v0. And xp(t) is the response to the kicking, given by the convolution

xp(t) = (g ∗ f)(t) (75)

      = A Σ_{n=1}^{∞} ∫_0^t g(t − τ) δ(τ − nT) dτ,

where, you will recall,

g(t) = −e^{−2t} + e^{−t} (76)

is the inverse LT of the transfer function

G(s) = 1/(s² + 3s + 2). (77)

But we shall not go into any more details here, leaving them as an exercise for the reader.

Our main concern here is the long term behaviour of solutions (74). We claim – and also

leave it as an exercise – that all nontrivial solutions to the kicked oscillator problem approach

a unique periodic solution x̄(t) as t → ∞. The results of two numerical


calculations are presented below; in each of these examples, the solution x(t) is observed to approach a periodic solution.

[Figure: two numerically computed solutions of x′′ + 3x′ + 2x = 2 Σ_{n=1}^{∞} δ(t − nT), plotted as x(t) vs. t for 0 ≤ t ≤ 20. Top: x(0) = 1, x′(0) = 0. Bottom: x(0) = 2, x′(0) = 0. In both cases x(t) is observed to approach a periodic solution x̄(t).]

One can work hard to determine the nature of this solution by examining the exact solution

x(t). But we can also determine this periodic solution x̄(t) in an easier manner – by simply

constructing a periodic solution that demonstrates the following properties:

1. x̄(t) is periodic with period T , i.e. x̄(t+ T ) = x̄(t).

2. Between kicks, i.e., when the right side of Eq. (73) is zero, the solution will behave as

x̄(t) = c1 e^{−2t} + c2 e^{−t}, nT < t < (n + 1)T, n = 1, 2, · · · . (78)

3. The difference between the velocity v̄(t) = x̄′(t) immediately before and immediately after

a kick is equal to A, the strength of the impulse.

This qualitative behaviour can be observed in the two numerical solutions that were shown

above.

To construct such a periodic solution, we begin with the following form,

x̄(t) = c1 e^{−2t} + c2 e^{−t}. (79)

It remains to find the coefficients c1 and c2 that determine x̄(t) between kicks. To make

calculations easier, we can focus on the interval [0, T ]. If we impose the condition of periodicity

at the endpoints, then

x̄(0) = x̄(T) implies that c1 + c2 = c1 e^{−2T} + c2 e^{−T}. (A) (80)

We now impose the condition of discontinuity of the derivative at the points of kicking t = nT .

This can be done conveniently by requiring that

x̄′(0) = x̄′(T) + A, implying that −2c1 − c2 = −2c1 e^{−2T} − c2 e^{−T} + A. (B) (81)

Conditions (A) and (B) above represent two linear equations in the unknowns c1 and c2. The

solution to this system is (exercise):

c1 = −A/(1 − e^{−2T}), c2 = A/(1 − e^{−T}). (82)


We have therefore determined our periodic solution x̄(t). As a test, we return to the examples

presented in the figures, where T = 1. The numerical calculations indicate that the value of

x̄(t) at the kicking points t = nT is approximately 0.85092. According to (82), the theoretical

value is

x̄(0) = c1 + c2 ≈ 0.850918128. (83)
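A short numerical check of (82) and (83), with A = 2 and T = 1 as in the figures (the snippet is a sketch of mine, not part of the notes):

import numpy as np

A, T = 2.0, 1.0
c1 = -A / (1 - np.exp(-2*T))                      # Eq. (82)
c2 =  A / (1 - np.exp(-T))
print("c1 =", c1, "  c2 =", c2)
print("xbar(0) = c1 + c2 =", c1 + c2)             # expect about 0.850918128, Eq. (83)

# Verify that conditions (A) and (B) are satisfied
xbar  = lambda t: c1*np.exp(-2*t) + c2*np.exp(-t)
dxbar = lambda t: -2*c1*np.exp(-2*t) - c2*np.exp(-t)
print("condition (A):", np.isclose(xbar(0.0), xbar(T)))
print("condition (B):", np.isclose(dxbar(0.0), dxbar(T) + A))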


Laplace transforms of linear time-invariant systems

We now consider the use of Laplace transforms to solve linear time-invariant systems of the

form,

x′ = Ax+ f(t) , x(0) = x0 . (84)

Here, x(t) and f(t) are n-vectors and A is an n× n constant matrix.

Brief review of scalar case

It is instructive to review the scalar case, i.e., the one-dimensional version of Eq. (84), a DE

in the real-valued function x(t),

dx/dt = ax + f(t), x(0) = x0. (85)

First take Laplace transforms (LTs) of both sides,

sX(s)− x0 = aX(s) + F (s) , (86)

and rearrange as follows,

(s− a)X(s) = x0 + F (s) . (87)

Now solve for X(s),

X(s) = x0/(s − a) + [1/(s − a)] F(s). (88)

We’ll rewrite this equation as

X(s) = x0G(s) +G(s)F (s) (89)

where

G(s) = 1/(s − a) =⇒ g(t) = L−1[G(s)] = e^{at}. (90)

Taking inverse LTs of both sides of (89), we obtain

x(t) = x0 e^{at} + e^{at} ∗ f(t)

     = x0 e^{at} + ∫_0^t e^{a(t−τ)} f(τ) dτ. (91)
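As a sanity check of (91), the following Python sketch compares the formula against a direct numerical integration of the DE; the choices a = −1, f(t) = sin t and x0 = 1 are arbitrary.

import numpy as np
from scipy.integrate import solve_ivp, quad

a, x0, t_end = -1.0, 1.0, 3.0
f = np.sin                                        # arbitrary forcing term

# Right side of Eq. (91): x0*e^{at} + integral of e^{a(t-tau)} f(tau) from 0 to t
integral, _ = quad(lambda tau: np.exp(a * (t_end - tau)) * f(tau), 0.0, t_end)
x_formula = x0 * np.exp(a * t_end) + integral

# Direct numerical solution of dx/dt = a x + f(t), x(0) = x0
sol = solve_ivp(lambda t, x: a * x + f(t), (0.0, t_end), [x0], rtol=1e-10, atol=1e-12)
print("formula (91):", x_formula, "   solve_ivp:", sol.y[0, -1])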


Return to the vector case

We now return to the vector linear system,

x′ = Ax+ f(t) , x(0) = x0 . (92)

Formally, let us take Laplace transforms – whatever this means – of both sides of the above

equation, i.e.,

L[x′] = L[Ax] + L[f(t)] . (93)

We must now face the question: What is the Laplace transform of a vector, i.e., what is

L[x(t)] = L[(x1(t), x2(t), . . . , xn(t))^T]? (94)

It would seem quite natural that the above Laplace transform is the n-vector composed of the

LTs of each of the individual functions, i.e.,

L[x(t)] = (L[x1(t)], L[x2(t)], . . . , L[xn(t)])^T

        = (X1(s), X2(s), . . . , Xn(s))^T =: X(s). (95)

If we adopt this definition, then the other standard formulas for LTs will apply. For example,

without writing the vectors in component form, we have

L[x′(t)] = sX(s)− x0 . (96)

Finally, we must determine

L[Ax] (97)

in Eq. (92). First of all, let

y(t) = Ax(t) . (98)


y(t) is an n-vector. The LT of this n-vector is the n-vector of the LTs of the components yi(t) of y(t).

From (98), the i-th component of this vector is given by

yi(t) = Σ_{j=1}^{n} aij xj(t). (99)

The LT of yi(t) is, by linearity,

Yi(s) = Σ_{j=1}^{n} aij Xj(s). (100)

In other words, taking the LT commutes with the action of the constant matrix A: the LT of the n-vector Ax(t) is A applied to the n-vector X(s). As such,

Y(s) = L[y(t)] = L[Ax(t)] = A L[x(t)] = A X(s). (101)

We use all of these results to rewrite Eq. (93) as follows,

sX(s)− x0 = AX(s) + F(s) . (102)

This is the LT of Eq. (92). It is also the vector-valued version of Eq. (86). We now rearrange

this equation to solve for X(s),

(sI−A)X(s) = x0 + F(s) . (103)

This is the vector-valued version of Eq. (87). We must now solve for X(s):

X(s) = (sI−A)−1x0 + (sI−A)−1F(s) . (104)

This is the vector-valued version of Eq. (88). Note that we cannot take the reciprocal of the

matrix sI−A: we must take its inverse.

We now rewrite the above equation in X(s) as follows,

X(s) = G(s)x0 +G(s)F(s) . (105)

Here, G(s),

G(s) = (sI−A)−1 , (106)

an n × n matrix, is the transfer matrix of the linear system in (92). Eq. (105) is the vector/matrix

version of Eq. (89).


The homogeneous DE

Let us now consider the special case f(t) = 0, i.e., the homogeneous linear system,

x′ = Ax , x(0) = x0 . (107)

We’ll use the LT derived in Eq. (105), however, setting F(s) = 0, so that

X(s) = G(s)x0 . (108)

Taking inverse LTs of both sides yields the solution to the homogeneous system,

x(t) = L−1[G(s)]x0 . (109)

But from our previous studies of this DE, we know that this solution may be expressed as

x(t) = e^{tA} x0. (110)

If we take LTs of this equation,

X(s) = L[e^{tA}] x0, (111)

and compare this result with Eq. (108), we obtain the remarkable result,

L[e^{tA}] = G(s), (112)

or

L[e^{tA}] = (sI − A)−1. (113)

At first sight, this result may seem rather peculiar, but let’s recall the scalar-valued result,

L[e^{at}] = 1/(s − a), Re(s) > Re(a). (114)

In the scalar case, we can take reciprocals but in the vector/matrix case, we must take inverses.

Furthermore, the scalars a and s are replaced with, respectively, the matrices A and sI.


Note: There is one technical matter regarding Eq. (113). The matrix sI−A must be invertible.

This is guaranteed if none of its eigenvalues are zero, which in turn holds if

Re(s) > Re(λ) (115)

for all eigenvalues λ of A.

Finally, we take inverse LTs of both sides of Eqs. (112) and (113) to obtain the important

results,

e^{tA} = L−1[G(s)] = L−1[(sI − A)−1]. (116)
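The identity (116) can be verified symbolically for a concrete matrix. In the following sympy sketch (mine, not from the notes), the 2 × 2 matrix A is an arbitrary example, namely the companion matrix of s² + 3s + 2; each entry of (sI − A)−1 is inverted and the result is compared with e^{tA}.

import sympy as sp

s = sp.Symbol('s')
t = sp.Symbol('t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])                     # arbitrary 2x2 example

G = (s * sp.eye(2) - A).inv()                         # transfer matrix (sI - A)^{-1}
lhs = G.applyfunc(lambda entry: sp.inverse_laplace_transform(entry, s, t))
rhs = (A * t).exp()                                   # matrix exponential e^{tA}

print((lhs - rhs).applyfunc(sp.simplify))             # expect the zero matrix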

The inhomogeneous DE

We now return to Eq. (105), the LT of the inhomogeneous DE, i.e.,

X(s) = G(s)x0 +G(s)F(s) . (117)

Now take inverse LTs of both sides of this equation,

x(t) = L−1[G(s)]x0 + L−1[G(s)F(s)] . (118)

From (116) and the convolution theorem, we have

x(t) = e^{tA} x0 + e^{tA} ∗ f(t). (119)

Writing out the convolution, we obtain the result,

x(t) = e^{tA} x0 + ∫_0^t e^{(t−τ)A} f(τ) dτ. (120)

This is the solution to the inhomogeneous system,

x′(t) = Ax+ f(t) , (121)

which satisfies the initial condition,

x(0) = x0 . (122)

Eq. (120) agrees with the result obtained by “variation of parameters” in Lecture 20 (Week 8).
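As a final check, the following Python sketch evaluates (120) for a small example and compares it with a direct numerical solution of the system; the matrix A, the forcing f(t) and the initial condition are arbitrary choices of mine.

import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])              # arbitrary 2x2 example
x0 = np.array([1.0, 0.0])
f = lambda t: np.array([0.0, np.cos(t)])              # arbitrary forcing term
t_end = 4.0

# Eq. (120): e^{tA} x0 + integral_0^t e^{(t-tau)A} f(tau) dtau, integral via the trapezoid rule
taus = np.linspace(0.0, t_end, 4001)
integrand = np.array([expm((t_end - tau) * A) @ f(tau) for tau in taus])
x_formula = expm(t_end * A) @ x0 + trapezoid(integrand, taus, axis=0)

# Direct numerical solution of x' = Ax + f(t), x(0) = x0
sol = solve_ivp(lambda t, x: A @ x + f(t), (0.0, t_end), x0, rtol=1e-10, atol=1e-12)
print("formula (120):", x_formula)
print("solve_ivp    :", sol.y[:, -1])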
