
CHAPTER - I

LAPLACE TRANSFORM

1.1 INTRODUCTION

The analysis and design of many physical systems are based upon the solution of an ordinary linear differential equation with constant coefficients. This is, in fact, an idealization of the actual process. Nevertheless, in a defined operating region, many systems can be described by an ordinary linear differential equation. The procedure is illustrated in Fig. 1.1.

In general, physical laws are applied to the physical system to obtain its mathematical description. Since the operating region of many systems covers a small range, this mathematical description can, therefore, be linearized to give ordinary linear differential equations. Initial conditions, if any, can then be added and the solution obtained to give information for design or analysis. The solution of such equations can be obtained either by the classical methods for ordinary linear differential equations or by transform methods. The former method, substituting an assumed solution into the differential equation and then finding the values of the constants, is quite laborious. In the latter, the Laplace transform changes the differential equation with time as the independent variable into an algebraic equation with s as the independent variable. The initial conditions are included automatically during the transformation, and the time solution is found by inverting the transformed equations. This method is also applicable to partial differential equations.

1.2 DEFINITION

Let f(t) be a function of t with the following properties:

1. f(t) is identically zero for t < 0;
2. f(t) is continuous from the right at t = 0.

Mathematically,

1. f(t) = 0 for t < 0;
2. \lim_{t \to 0^{+}} f(t) = f(0).

Then the Laplace transform of f(t), denoted by L[f(t)], is defined as

L[f(t)] = F(s) = \int_0^\infty e^{-st} f(t)\, dt        (1.1)

where s = σ + jω is a complex number. The Laplace transform in Eq. (1.1) exists if and only if the integral converges for some values of σ.
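The defining integral (1.1) can be evaluated directly with a computer algebra system. The following short Python (SymPy) sketch is only an illustration; the choice f(t) = e^{-at} is an assumption made for the example, not part of the definition.

import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)
f = sp.exp(-a*t)                                   # an assumed test function

# Evaluate F(s) = integral from 0 to infinity of e^(-s t) f(t) dt, Eq. (1.1)
F = sp.integrate(sp.exp(-s*t)*f, (t, 0, sp.oo))
print(sp.simplify(F))                              # prints 1/(a + s)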

1.3 SUFFICIENT CONDITIONS FOR EXISTENCE OF LAPLACE TRANSFORMS

THEOREM 1.1: If f(t) is sectionally continuous in every finite interval 0 < t < N and of exponential order for t > N, then its Laplace transform F(s) exists.

PROOF: In order to prove the theorem, let us first define a function of exponential order.

Definition 1.1: If real constants M > 0 and γ exist such that for all t > N

|e^{-γt} f(t)| < M   or   |f(t)| < M e^{γt}        (1.2)

we say that f(t) is a function of exponential order γ as t → ∞.

Now, for any positive number N,

\int_0^\infty e^{-st} f(t)\, dt = \int_0^N e^{-st} f(t)\, dt + \int_N^\infty e^{-st} f(t)\, dt        (1.3)

Since f(t) is sectionally continuous in every finite interval 0 ≤ t ≤ N, the first integral on the right exists. The second integral on the right also exists, since f(t) is of exponential order for t > N. To see this, observe that

\left| \int_N^\infty e^{-st} f(t)\, dt \right| \le \int_N^\infty |e^{-st} f(t)|\, dt        (1.4)
\le \int_0^\infty e^{-σt} |f(t)|\, dt,   s = σ + jω
< \int_0^\infty e^{-σt} M e^{γt}\, dt = \frac{M}{σ - γ},   σ > γ        (1.5)

Thus the Laplace transform exists for Re{s} > γ.

If this sufficient condition is not satisfied, the Laplace transform of f(t) may or may not exist.

1.4 LAPLACE TRANSFORM OF FUNCTIONS

We consider here some elementary functions which are used in engineering problems.

Example 1.1 Unit Impulse Function

The unit impulse function, or delta function, is denoted by the symbol δ(t) (Fig. 1.2) and is defined by the relationships

δ(t) = 0,   t ≠ 0

and

\int_a^b δ(t)\, dt = \begin{cases} 1, & a \le 0 \le b \\ 0, & \text{otherwise} \end{cases}

Hence δ(t - t_0) = 0 for t ≠ t_0, and it has the property that

\int_a^b δ(t - t_0)\, dt = \begin{cases} 1, & a \le t_0 \le b \\ 0, & \text{otherwise} \end{cases}        (1.6)

which implies that δ(t - t_0) has infinite magnitude at t = t_0.

In addition, for any function f(t),

\int_a^b f(t) δ(t - t_0)\, dt = \begin{cases} f(t_0), & a \le t_0 \le b \\ 0, & \text{otherwise} \end{cases}        (1.7)

δ(t) is transformable for every s, since by Eq. (1.7)

\int_0^\infty e^{-st} δ(t)\, dt = 1        (1.8)

Hence,

L[δ(t)] = 1        (1.9)

Example 1.2 Unit Step Function

Define

u(t) = \begin{cases} 1, & t \ge 0 \\ 0, & t < 0 \end{cases}

The graph of the function is shown in Fig. 1.3. Then

U(s) = \int_0^\infty 1 \cdot e^{-st}\, dt = \left. -\frac{1}{s} e^{-st} \right|_0^\infty = \frac{1}{s}

Hence,

L[u(t)] = \frac{1}{s},   Re{s} > 0        (1.10)

Example 1.3 Unit Ramp Function

The function is shown in Fig. 1.4 and defined as

f(t) = t,   t ≥ 0

Then

F(s) = \int_0^\infty t e^{-st}\, dt = \left. -\left[ \frac{t e^{-st}}{s} + \frac{e^{-st}}{s^2} \right] \right|_0^\infty        (1.11)
= \frac{1}{s^2},   Re{s} > 0        (1.12)

Therefore, L[f(t)] = 1/s^2.

Example 1.4 Exponential Function

The graph of the function is shown in Fig. 1.5. Let

f(t) = e^{-at},   t ≥ 0

F(s) = \int_0^\infty e^{-at} e^{-st}\, dt        (1.13)
= \int_0^\infty e^{-(s+a)t}\, dt = \left. -\frac{1}{s+a} e^{-(s+a)t} \right|_0^\infty = \frac{1}{s+a},   Re{s} > -a        (1.14)

Example 1.5 Sinusoidal Function

The function f(t) = sin(at) can be written as

f(t) = \sin(at) = \frac{e^{jat} - e^{-jat}}{2j}

Then

F(s) = \frac{1}{2j} \int_0^\infty (e^{jat} - e^{-jat}) e^{-st}\, dt        (1.15)
= \frac{1}{2j} \left[ \int_0^\infty e^{-(s-ja)t}\, dt - \int_0^\infty e^{-(s+ja)t}\, dt \right]
= \frac{1}{2j} \left[ \frac{1}{s-ja} - \frac{1}{s+ja} \right] = \frac{a}{s^2 + a^2},   Re{s} > 0
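The transform pairs of Examples 1.2-1.5 can be confirmed with SymPy's built-in laplace_transform. This is a minimal verification sketch; the symbol a plays the role of the constant in each pair.

import sympy as sp

t = sp.symbols('t')
s, a = sp.symbols('s a', positive=True)

for f in (sp.Heaviside(t), t, sp.exp(-a*t), sp.sin(a*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', F)
# expected: 1/s, 1/s**2, 1/(a + s), a/(a**2 + s**2)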


Example 1.6 Find the Laplace transform of f(t) if

f(t) = \begin{cases} 4, & 0 \le t < 2 \\ 0, & t \ge 2 \end{cases}

By definition,

L[f(t)] = \int_0^\infty f(t) e^{-st}\, dt = \int_0^2 4 e^{-st}\, dt + \int_2^\infty 0 \cdot e^{-st}\, dt        (1.16)
= \left. \frac{4 e^{-st}}{-s} \right|_0^2 = \frac{4}{s}\left[ 1 - e^{-2s} \right],   for all s

Example 1.7

Let f(t) = t^2. Integrating by parts, we have

F(s) = \int_0^\infty t^2 e^{-st}\, dt = \left[ -\frac{t^2 e^{-st}}{s} - \frac{2t e^{-st}}{s^2} - \frac{2 e^{-st}}{s^3} \right]_0^\infty        (1.17)
= \frac{2}{s^3},   Re{s} > 0

The reader should verify that \lim_{t \to \infty} t^2 e^{-st} = 0.

1.5 LAPLACE TRANSFORM OF DERIVATIVES

Let us now proceed to find the Laplace transform of derivatives. Define L[y(t)] = Y(s). We wish to find

L[\dot{y}(t)] = \int_0^\infty \dot{y}(t) e^{-st}\, dt        (1.18)

(In this text, we will frequently use \dot{y} to represent dy/dt.) We reduce this equation by integrating by parts:

L[\dot{y}(t)] = \left. y(t) e^{-st} \right|_0^\infty - \int_0^\infty y(t)(-s e^{-st})\, dt = -y(0) + s Y(s)        (1.19)

The only restriction is that y(t) must be such that

\lim_{t \to \infty} y(t) e^{-st} = 0

The Laplace transforms of higher derivatives are easily deduced from Eq. (1.19) by letting

y(t) = \frac{dz}{dt}        (1.20)

Thus

L\left[ \frac{d^2 z}{dt^2} \right] = \int_0^\infty \frac{d^2 z}{dt^2} e^{-st}\, dt = s L\left[ \frac{dz}{dt} \right] - \frac{dz}{dt}(0)        (1.21)

Also, L[dz/dt] = s Z(s) - z(0). Combining the above equations,

L\left[ \frac{d^2 z}{dt^2} \right] = s^2 Z(s) - s z(0) - \frac{dz}{dt}(0)        (1.22)

Some such results are given in Table 1.2; other properties are the subject of Chapter IV.

1.6 LAPLACE TRANSFORM APPLIED TO ORDINARY DIFFERENTIAL EQUATIONS

Example 1.8 Consider the circuit of Fig. 1.6.

The equation describing the electric current i(t) is

L_0 \frac{di}{dt} + R i = E        (1.23)

where L_0 is the inductance, R is the resistance and E is a constant voltage source. Transforming Eq. (1.23),

L\left[ L_0 \frac{di}{dt} + R i \right] = L[E]

L_0 L\left[ \frac{di}{dt} \right] + R L[i] = L[E]        (1.24)

L_0 [s I(s) - i(0)] + R I(s) = \frac{E}{s}        (1.25)

If we assume that the initial current i(0) in the inductor is zero, then

L_0 s I(s) + R I(s) = \frac{E}{s}        (1.26)

or

(s L_0 + R) I(s) = \frac{E}{s}

I(s) = \frac{E}{s(s L_0 + R)} = \frac{E/L_0}{s(s + R/L_0)}

Separating by the partial-fraction method,

I(s) = \frac{E}{R} \left[ \frac{1}{s} - \frac{1}{s + R/L_0} \right]        (1.27)

Inverting I(s) into the time domain, we get

i(t) = \frac{E}{R} \left( 1 - e^{-(R/L_0)t} \right),   t ≥ 0
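Example 1.8 can be checked symbolically. The sketch below inverts I(s) = E/[s(L_0 s + R)] with SymPy; keeping E, R and L_0 as positive symbols is the only assumption made.

import sympy as sp

t, s = sp.symbols('t s', positive=True)
E, R, L0 = sp.symbols('E R L_0', positive=True)

I = E / (s*(L0*s + R))                       # Eq. (1.27) before expansion
i = sp.inverse_laplace_transform(I, s, t)
print(sp.simplify(i))                        # expect E*(1 - exp(-R*t/L_0))/R for t >= 0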

Example 1.9 Consider the differential equation

\frac{d^2 y}{dt^2} + 4\frac{dy}{dt} + 3y = \sin t

L\left[ \frac{d^2 y}{dt^2} + 4\frac{dy}{dt} + 3y \right] = L[\sin t]        (1.28)

s^2 Y(s) - s y(0) - \frac{dy}{dt}(0) + 4s Y(s) - 4y(0) + 3Y(s) = \frac{1}{s^2 + 1}

If y(0) = 0 and dy/dt(0) = 0, then

Y(s) = \frac{1}{(s^2 + 1)(s^2 + 4s + 3)} = \frac{1}{(s+1)(s+3)(s^2+1)}        (1.29)

The solution of Eq. (1.29) in the time domain can be obtained and is given below:

y(t) = \frac{1}{4} e^{-t} - \frac{1}{20} e^{-3t} - \frac{1}{\sqrt{20}} \cos(t + 0.4636),   t ≥ 0        (1.30)


PROBLEMS

1.1 Find the Laplace transforms of y(t) in the following differential equations.

a) \frac{d^2 y}{dt^2} - \frac{dy}{dt} - 2y = 0,   y(0) = 1, \frac{dy}{dt}(0) = 2

b) \frac{d^4 y}{dt^4} - 2\frac{d^2 y}{dt^2} + y = 0,   \frac{d^3 y}{dt^3}(0) = 0, \frac{d^2 y}{dt^2}(0) = 1, \frac{dy}{dt}(0) = 0, y(0) = 1

c) \frac{d^3 y}{dt^3} + 7\frac{d^2 y}{dt^2} + 12\frac{dy}{dt} = (1 + t)e^{-3t},   \frac{d^2 y}{dt^2}(0) = 0, \frac{dy}{dt}(0) = 0, y(0) = 1

1.2 Find the Laplace transforms of the following functions of time.

a) f(t) = t^n
b) f(t) = \sin^2(ωt)
c) f(t) = t e^{-at}
d) f(t) = e^{-at} \sin(ωt)
e) f(t) = t^2 \sin t

CHAPTER - II

APPLICATIONS TO PHYSICAL SYSTEMS

2.1 INTRODUCTION

The aim of this chapter is to give an appreciation of the equations of linear physical systems and their formulation in the Laplace domain. These transformed equations can then be analyzed by the s-plane analysis techniques discussed later.

2.2 MECHANICAL SYSTEMS

The differential equations for mechanical systems are written by using Newton's law, which states that for a translational system the sum of the forces acting on a body is equal to the mass times the linear acceleration of the body. If the forces on the body are balanced, i.e. the force in the positive direction is equal to the force in the negative direction, the mass will not move; in this case the resultant of the forces is zero, hence the acceleration is zero. For rotational systems, the law states that the sum of the torques acting on a body is equal to the moment of inertia times the angular acceleration of the body.

Example 2.1 Translational System

Consider the translational mechanical system in Fig. 2.1, whose differential equation for an applied force F(t) is

M \frac{d^2 x}{dt^2} = -kx - f \frac{dx}{dt} + F(t)        (2.1)

or

M \frac{d^2 x}{dt^2} + f \frac{dx}{dt} + kx = F(t)        (2.2)

where M is the mass of the body, k the stiffness coefficient, f the friction coefficient, and x the displacement.

Transforming Eq. (2.2), assuming that all initial conditions are zero and that F(t) is a unit step function, we obtain

s^2 M X(s) + s f X(s) + k X(s) = \frac{1}{s}

so that

X(s) = \frac{1}{s(M s^2 + f s + k)}        (2.3)
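A quick numerical illustration of Eq. (2.3): the unit-step response of the mass-spring-damper can be computed with scipy.signal. The parameter values M = 1, f = 0.5, k = 4 are arbitrary assumptions chosen only to show the shape of x(t).

from scipy import signal

M, f, k = 1.0, 0.5, 4.0
sys = signal.TransferFunction([1.0], [M, f, k])   # X(s)/F(s) = 1/(M s^2 + f s + k)
t, x = signal.step(sys)                           # response to F(t) = u(t)
print(x[-1])                                      # settles near the static value 1/k = 0.25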

Example 2.2 Linear Rotational System

Consider the rotational system of Fig. 2.2. By Newton's law,

J \frac{d^2 θ}{dt^2} = -B \frac{dθ}{dt} + T - kθ        (2.4)

J \frac{d^2 θ}{dt^2} + B \frac{dθ}{dt} + kθ = T

where J is the moment of inertia, B the friction coefficient, T the applied torque, and θ the angular displacement.

Transforming Eq. (2.4) and assuming that θ(0) = 0 and \dot{θ}(0) = 0, we obtain

θ(s) = \frac{T(s)}{J s^2 + B s + k}        (2.5)

Example 2.3 Coupled Translational System

Consider Fig. 2.3, whose differential equations are given below:

M_1 \frac{d^2 x_1}{dt^2} + (k_1 + k_2) x_1 - k_2 x_2 + f \frac{dx_1}{dt} = 0

M_2 \frac{d^2 x_2}{dt^2} + k_2 x_2 - k_2 x_1 = F(t)

Transforming (with zero initial conditions),

M_1 s^2 X_1(s) + f s X_1(s) + (k_1 + k_2) X_1(s) - k_2 X_2(s) = 0

M_2 s^2 X_2(s) + k_2 X_2(s) - k_2 X_1(s) = F(s)        (2.6)

Eq. (2.6) can now be written in matrix form as follows:

\begin{bmatrix} M_1 s^2 + f s + k_1 + k_2 & -k_2 \\ -k_2 & M_2 s^2 + k_2 \end{bmatrix}
\begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} =
\begin{bmatrix} 0 \\ F(s) \end{bmatrix}        (2.7)

2.3 ELECTRIC CIRCUITS

Consider the simple electric circuit shown in Fig. 2.4, which is made up of an inductive coil, a resistor, and a capacitor. A time-varying voltage is applied across terminals 1 and 2. This excitation produces the current i(t) and the voltage e_2(t) across terminals 3 and 4.

The analysis of electrical circuits is based on two fundamental laws: Kirchhoff's voltage law and Kirchhoff's current law. The first law states that the sum of all voltages around any closed path of a circuit is zero. The second law states that the sum of all currents entering a node is zero. Both laws apply to instantaneous values of voltages and currents. As the current i(t) flows through the elements of the circuit, it causes voltage drops that oppose the direction of current flow. The magnitudes of these voltage drops are:

1. across the inductor: e_L(t) = L \frac{di(t)}{dt}
2. across the resistor: e_R(t) = R i(t)
3. across the capacitor: e_C(t) = \frac{1}{C} \int_0^t i(u)\, du + e_C(0)

where the voltage e_C(0) is caused by charges already on the capacitor at time t = 0. Applying Kirchhoff's voltage law to the closed path,

e_1(t) - e_L(t) - e_R(t) - e_C(t) = 0

Here, voltages which oppose the clockwise current flow carry a negative sign. Substitution of the voltage drops yields

L \frac{di(t)}{dt} + R i(t) + \frac{1}{C} \int_0^t i(u)\, du + e_C(0) = e_1(t)        (2.8)

This is an integro-differential equation defining the unknown function i(t). However, we wish to solve the system for e_2(t). The voltage across terminals 3 and 4 is equal to the voltage across the capacitor, namely

\frac{1}{C} \int_0^t i(u)\, du + e_C(0) = e_2(t)        (2.9)

Equations (2.8) and (2.9) can now be Laplace transformed.
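Transforming Eqs. (2.8) and (2.9) with zero initial conditions (e_C(0) = 0, i(0) = 0) and eliminating I(s) gives the voltage ratio E_2(s)/E_1(s). The SymPy sketch below carries out this elimination; the symbol names are assumptions of the example, not part of the circuit description.

import sympy as sp

s, L, R, C = sp.symbols('s L R C', positive=True)
I1, E1, E2 = sp.symbols('I1 E1 E2')

eqs = [sp.Eq(L*s*I1 + R*I1 + I1/(C*s), E1),   # transform of Eq. (2.8) with e_C(0) = 0
       sp.Eq(I1/(C*s), E2)]                   # transform of Eq. (2.9)
sol = sp.solve(eqs, (I1, E2), dict=True)[0]
print(sp.simplify(sol[E2]/E1))                # -> 1/(C*L*s**2 + C*R*s + 1)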

Example 2.4 Consider the passive network of Figs. 2.5 and 2.6, whose equations are to be derived by the mesh and nodal methods. The mesh and nodal circuits are given in Figs. 2.5 and 2.6, respectively.

The mesh equations for the network can be written as follows:

V(s) = \left( R_1 + \frac{1}{C_1 s} \right) I_1(s) - \frac{1}{C_1 s} I_2(s)

0 = -\frac{1}{C_1 s} I_1(s) + \left( \frac{1}{C_1 s} + \frac{1}{C_2 s} + s L_1 \right) I_2(s) - \frac{1}{C_2 s} I_3(s)        (2.10)

0 = -\frac{1}{C_2 s} I_2(s) + \left( R_2 + \frac{1}{C_2 s} \right) I_3(s)

Similarly, a set of nodal equations can be written from Fig. 2.6 as given below:

\frac{V(s)}{R_1} = \frac{V_1(s)}{R_1} + s C_1 V_1(s) + \frac{1}{s L_1} V_1(s) - \frac{1}{s L_1} V_2(s)

0 = -\frac{1}{s L_1} V_1(s) + s C_2 V_2(s) + \frac{1}{s L_1} V_2(s) + \frac{1}{R_2} V_2(s)        (2.11)

The above sets of equations can now be solved for the unknown variables.

Example 2.5 An equivalent circuit for an active device is shown in Fig. 2.7. We derive its mesh equations in the Laplace domain.

The following mesh equations for the circuit can easily be derived with the help of Kirchhoff's voltage law:

V_s(s) = (R_s + R_g) I_1(s) - R_g I_4(s)

-μ V_g(s) = (R_p + R_L) I_2(s) - R_L I_3(s)

0 = -R_L I_2(s) + \left( R_L + R_2 + \frac{1}{sC} \right) I_3(s) - K R_2 I_4(s)        (2.12)

0 = -R_g I_1(s) - K R_2 I_3(s) + \left[ R_g + K R_2 + \frac{1}{s C_{gk}} \right] I_4(s)

Also, from Fig. 2.7,

V_g(s) = \frac{1}{s C_{gk}} I_4(s)        (2.13)

Using Eq. (2.13), the matrix equation can be written as follows; the unknown variables can then be determined by Cramer's rule, for example:

\begin{bmatrix}
R_s + R_g & 0 & 0 & -R_g \\
0 & R_p + R_L & -R_L & \dfrac{μ}{s C_{gk}} \\
0 & -R_L & R_L + R_2 + \dfrac{1}{sC} & -K R_2 \\
-R_g & 0 & -K R_2 & R_g + K R_2 + \dfrac{1}{s C_{gk}}
\end{bmatrix}
\begin{bmatrix} I_1(s) \\ I_2(s) \\ I_3(s) \\ I_4(s) \end{bmatrix}
=
\begin{bmatrix} V_s \\ 0 \\ 0 \\ 0 \end{bmatrix}        (2.14)

2.4 A SIMPLE THERMAL SYSTEM

Consider a large bath whose water temperature is µ degrees, and let T be the lag constant of a thermometer immersed in the bath which indicates θ degrees. Newton's law of cooling states that the rate of change of the measured temperature is proportional to the difference between the bath temperature and the measured temperature (Fig. 2.8). The equation is

\dot{θ}(t) = \frac{1}{T} \left( µ - θ(t) \right)        (2.15)

Transforming Eq. (2.15),

T \left[ s θ(s) - θ(0) \right] + θ(s) = \frac{µ}{s}        (2.16)

θ(s) = \frac{µ/T}{s(s + 1/T)} + \frac{θ(0)}{s + 1/T}        (2.17)

2.5 A SIMPLE HYDRAULIC SYSTEM

Consider the system shown in Fig. 2.9, which consists of a tank of cross-sectional area A to which is attached a flow resistance R, such as a pipe. Assume that q_0, the volumetric flow rate through the resistance, is related to the head h by the linear relationship

q_0 = \frac{h}{R}        (2.18)

Liquid of constant density ρ enters the tank with volumetric flow q(t). Determine the transfer function which relates head to flow.

The mass balance around the tank is

mass flow in - mass flow out = rate of accumulation of mass in the tank

so that

ρ q(t) - ρ q_0(t) = \frac{d}{dt} \left( ρ A h(t) \right)        (2.19)

q(t) - q_0(t) = A \frac{dh}{dt}        (2.20)

Combining Eqs. (2.18) and (2.20) to eliminate q_0(t) gives

q - \frac{h}{R} = A \frac{dh}{dt}        (2.21)

Assuming zero initial conditions and transforming Eq. (2.21),

Q(s) = \frac{H(s)}{R} + A s H(s)        (2.22)

Q(s) = \left( \frac{1}{R} + A s \right) H(s)

H(s) = \frac{R Q(s)}{1 + τ s}        (2.23)

where τ = AR.

Equations for hydraulic systems with more than one tank can easily be obtained by following the above method.

2.6 A MODEL OF A SINGLE COMMODITY MARKET

Fig. 2.10 shows a diagram for the market of a single commodity. Three groups are involved in the market: the suppliers, the merchants, and the consumers. Each group is aggregated into one function. The merchant sets the price, purchases from the supplier, sells to the consumer, and maintains a stock of the commodity. The variables of the market are assumed to be continuous functions of time. They are:

1. The rate of flow of supply, s(t), measured in units of commodity per unit time.
2. The rate of flow of demand, d(t), measured in units of commodity per unit time.
3. The stock level, q(t), measured in units of commodity.
4. The price per unit of commodity, p(t).

In modelling this market, assumptions need to be made about the relationships among the above variables. First of all, any excess of supply over demand goes into stock. This means that

q(t) = q(0) + \int_0^t [s(u) - d(u)]\, du        (2.24)

An excess of demand over supply is filled from stock. Negative values of q(t) will be interpreted as the amount of the commodity which has been sold, but not delivered because of shortage. The second assumption is that the merchant sets the price p(t) at each instant of time so as to make the rate of increase of p(t) proportional to the amount by which the actual stock q(t) deviates from an ideal stock level q_i. Mathematically,

\frac{dp(t)}{dt} = A [q_i - q(t)]        (2.25)

where A is a positive constant. The third assumption is that both supply and demand depend on the price. In the simplest model, the supply s(t) increases linearly with price and the demand d(t) decreases linearly with price. Thus the excess of supply over demand is a linearly increasing function of p,

s(t) - d(t) = B [p(t) - P_e]        (2.26)

where B is a positive constant and P_e is the equilibrium price for which supply equals demand. This function is shown in Fig. 2.11. A more elaborate model considers the anticipatory nature of the supplier as well as the consumer reaction. This provides for an increase in supply or demand proportional to the rate of change of price. With such a term added, Eq. (2.26) becomes

s(t) - d(t) = B [p(t) - P_e] + C \frac{dp(t)}{dt}        (2.27)

The value of C may be positive or negative. For example, if it is assumed that the supplier increases his production because prices are on the rise and because he wishes to benefit from a greater profit margin, then C > 0. On the other hand, if the consumer purchases more than he needs because prices are rising and he wishes to buy before they are too high, then C < 0. If both of these effects are present, C may be either positive or negative.

Substituting Eq. (2.24) in Eq. (2.25) and differentiating the resulting equation, we obtain

\frac{d^2 p(t)}{dt^2} = -A [s(t) - d(t)]        (2.28)

Substitution of Eq. (2.27) yields the following second-order differential equation,

\frac{d^2 p(t)}{dt^2} + A C \frac{dp(t)}{dt} + A B p(t) = A B P_e        (2.29)

which can be Laplace transformed.

2.7 A MODEL FOR CAR FOLLOWING

Consider the positions of two cars moving on a single-lane road where passing is not possible. The leading car has the position x(t) and the second car has the position y(t). Both x(t) and y(t) are measured from the same reference point and both increase with time. It follows that x(t) > y(t), and that x(t) = y(t) implies a collision of the cars. Now let x(t) be an arbitrary function of time. Assume that the initial distance is x(0) - y(0) = d and that the driver of the second car attempts to hold the distance d at all times. If the distance increases (decreases), the driver will accelerate (decelerate) in proportion to the deviation from d. Moreover, if the leading car has a higher (lower) speed, the driver will accelerate (decelerate) in proportion to the difference in speed. The actual acceleration d^2 y(t)/dt^2 is the result of both effects, that is,

\frac{d^2 y(t)}{dt^2} = A [x(t) - y(t) - d] + B \left[ \frac{dx(t)}{dt} - \frac{dy(t)}{dt} \right]        (2.30)

where A and B are positive constants. Separation of x(t) and y(t) in this equation yields

\frac{d^2 y(t)}{dt^2} + B \frac{dy(t)}{dt} + A y(t) = A x(t) + B \frac{dx(t)}{dt} - A d        (2.31)

Define f(t) = A x(t) + B \dot{x}(t) - A d. Then Eq. (2.31) is a typical second-order differential equation.

PROBLEMS

2.1 Write the Laplace-transformed differential equations for the three mechanical systems in Fig. 2.12.

2.2 Write the Laplace-transformed differential equation for the electric circuit in Fig. 2.13. Find the transfer function E_2(s)/E_1(s).

2.3 For the circuit in Fig. 2.14(a), let the circuit be initially inert and let

e(t) = \begin{cases} t, & 0 < t < 1 \\ 0, & t \ge 1 \end{cases}

Find the charge q(t) on the capacitor.

2.4 For the circuit in Fig. 2.14(b), let e(t) = cos(ωt). Find i_C(t), the current through the capacitor, for the initially inert circuit, and write the equation in Laplace-transform form.

2.5 In the circuit shown in Fig. 2.14(c), the switch is closed at t = 0. Find i_L(t) and write the Laplace-transformed equations.

CHAPTER - III

THE INVERSE LAPLACE TRANSFORM

3.1 INTRODUCTION

In the Laplace transform method, the behaviour of a physical system is determined via the following steps:

1. Derive the ordinary differential equations describing the system.

2. Obtain the initial conditions.

3. Laplace transform the differential equations including the initial conditions.

4. Manipulate the algebraic equations and solve for the desired dependent variable.

5. Find the inverse Laplace transform to obtain the solution in time domain.

We have, so far, reached the stage where we can obtain the desired variables. In this chapter, we consider the problem (step 5 above) of finding the original function f(t) from its image function F(s). In order that the transform calculus be useful, we must require uniqueness, that is,

L^{-1}[L[f(t)]] = f(t)

Uniqueness means that if two functions f(t) and g(t) have the same transform F(s), then f(t) and g(t) are identical functions. There is a theorem which states that two functions f(t) and g(t) that have the same Laplace transform can differ only by a null function n(t). A null function has the property

\int_0^t n(τ)\, dτ = 0,   for all t > 0

An example of a null function is

n(t) = \begin{cases} 1, & t = 1, 2, 3, \ldots \\ 0, & \text{otherwise} \end{cases}

Null functions are highly artificial functions and are of no significance in applications. We can, therefore, say that the inverse Laplace transform of F(s) is essentially unique. In order to obtain the solution in the time domain, we have to invert the Laplace transform. There are three methods of obtaining the time-domain solution:

1. Laplace transform tables
2. Partial fractions
3. The inversion integral

The following discussion is limited to systems of the form

Y(s) = \frac{a_n s^n + a_{n-1} s^{n-1} + \ldots + a_1 s + a_0}{b_m s^m + b_{m-1} s^{m-1} + \ldots + b_1 s + b_0}        (3.1)

The coefficients a_i and b_i are real and m, n = 0, 1, 2, \ldots The most obvious method of finding the inverse is to use Laplace transform tables, which can take care of a wide range of problems.

3.2 PARTIAL FRACTION METHOD

The rational fraction in Eq. (3.1) can be reduced to a sum of simpler terms, each of whose inverse Laplace transforms is available from the tables. As an example, consider the following.

Example 3.1

Consider

Y(s) = \frac{s^3 + 8s^2 + 26s + 22}{s^3 + 7s^2 + 14s + 8}        (3.2)

Step 1

Eq. (3.2) cannot be expanded into partial fractions directly because the degrees of numerator and denominator are equal; it should therefore first be divided, so that

Y(s) = 1 + \frac{s^2 + 12s + 14}{s^3 + 7s^2 + 14s + 8} = 1 + Y_1(s)        (3.3)

Now Y_1(s) can be expanded into partial fractions.

Step 2

Factor the polynomial in the denominator of Y_1(s), so that

s^3 + 7s^2 + 14s + 8 = (s+1)(s+2)(s+4)

Step 3

Y_1(s) = \frac{s^2 + 12s + 14}{s^3 + 7s^2 + 14s + 8} = \frac{A}{s+1} + \frac{B}{s+2} + \frac{C}{s+4}        (3.4)

The partial fraction expansion is complete when we have evaluated the constants A, B, and C. To evaluate A, multiply both sides by (s + 1), so that

\frac{(s+1)(s^2 + 12s + 14)}{(s+1)(s+2)(s+4)} = A + \frac{B(s+1)}{s+2} + \frac{C(s+1)}{s+4}        (3.5)

Since Eq. (3.5) holds for all values of s, we may let s = -1 and solve for A. Then

A = \frac{(-1)^2 + 12(-1) + 14}{(-1+2)(-1+4)} = \frac{3}{3} = 1

Similarly, letting s = -2,

B = \frac{(-2)^2 + 12(-2) + 14}{(-2+1)(-2+4)} = \frac{-6}{-2} = 3

and letting s = -4,

C = \frac{(-4)^2 + 12(-4) + 14}{(-4+1)(-4+2)} = \frac{-18}{6} = -3

so that

Y_1(s) = \frac{s^2 + 12s + 14}{s^3 + 7s^2 + 14s + 8} = \frac{1}{s+1} + \frac{3}{s+2} - \frac{3}{s+4}        (3.6)

and

Y(s) = 1 + \frac{1}{s+1} + \frac{3}{s+2} - \frac{3}{s+4}        (3.7)

whose inverse transform (from the tables) is

y(t) = δ(t) + e^{-t} + 3e^{-2t} - 3e^{-4t},   t ≥ 0        (3.8)
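The arithmetic of Example 3.1 (the division, the factoring and the partial-fraction constants) can be reproduced with SymPy's apart, and the inverse transform recovered directly. This is a verification sketch only.

import sympy as sp

s, t = sp.symbols('s t', positive=True)
Y = (s**3 + 8*s**2 + 26*s + 22) / (s**3 + 7*s**2 + 14*s + 8)

print(sp.apart(Y, s))
# -> 1 + 1/(s + 1) + 3/(s + 2) - 3/(s + 4), matching Eq. (3.7)
print(sp.inverse_laplace_transform(Y, s, t))
# -> DiracDelta(t) plus the exponential terms of Eq. (3.8)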

Example 3.2

Consider another example,

Y(s) = \frac{s^2 + 3s + 1}{s(s+1)^3}        (3.9)

Because of the third-order pole at s = -1, we write the expansion

Y(s) = \frac{s^2 + 3s + 1}{s(s+1)^3} = \frac{A}{s} + \frac{B_1}{s+1} + \frac{B_2}{(s+1)^2} + \frac{B_3}{(s+1)^3}        (3.10)

The two constants A and B_3 can be determined as before:

A = \left. \frac{s(s^2 + 3s + 1)}{s(s+1)^3} \right|_{s=0} = 1

B_3 = \left. \frac{(s+1)^3 (s^2 + 3s + 1)}{s(s+1)^3} \right|_{s=-1} = 1

The other coefficients are found by differentiating. To find B_2, we differentiate once:

\frac{d}{ds}\left[ (s+1)^3 Y(s) \right] = \frac{d}{ds}\left[ \frac{(s+1)^3 (s^2 + 3s + 1)}{s(s+1)^3} \right] = \frac{d}{ds}\left[ \frac{s^2 + 3s + 1}{s} \right] = \left. \frac{s(2s+3) - (s^2 + 3s + 1)}{s^2} \right|_{s=-1} = \left. \frac{s^2 - 1}{s^2} \right|_{s=-1} = 0

and

\left[ A \frac{d}{ds}\left( \frac{(s+1)^3}{s} \right) + B_1 \frac{d}{ds}(s+1)^2 + B_2 \frac{d}{ds}(s+1) + \frac{d}{ds}B_3 \right]_{s=-1} = B_2

so that B_2 = 0.

For B_1,

\frac{d^2}{ds^2}\left[ (s+1)^3 Y(s) \right] = \frac{d}{ds}\left[ \frac{s^2 - 1}{s^2} \right] = \left. \frac{s^2(2s) - (s^2 - 1)2s}{s^4} \right|_{s=-1} = -2

and

\left[ A \frac{d^2}{ds^2}\left( \frac{(s+1)^3}{s} \right) + B_1 \frac{d^2}{ds^2}(s+1)^2 + B_2 \frac{d^2}{ds^2}(s+1) + \frac{d^2}{ds^2}B_3 \right]_{s=-1} = 2B_1

so that B_1 = -1. Therefore

Y(s) = \frac{1}{s} - \frac{1}{s+1} + \frac{1}{(s+1)^3}        (3.11)

and from the transform tables,

y(t) = 1 - e^{-t} + \frac{t^2}{2} e^{-t},   t ≥ 0        (3.12)

Example 3.3

Consider another system, with complex poles:

Y(s) = \frac{s+1}{(s+2)\left[ (s+2)^2 + 2^2 \right]} = \frac{A}{s+2} + \frac{Bs + C}{(s+2)^2 + 4}        (3.13)

The first constant A is easily found:

A = \left. \frac{s+1}{(s+2)^2 + 4} \right|_{s=-2} = -\frac{1}{4}

To evaluate B and C, we follow the method outlined in Eqs. (3.16)-(3.20). In that notation,

Y_1(s) = \frac{s+1}{s+2}        (3.14)

with α = 2 and ω = 2. Using Eq. (3.20),

-\mathrm{Re}\left[ \frac{1}{j2} Y_1(-2 - j2)(s + 2 - j2) \right] = -\mathrm{Re}\left[ \frac{1}{j2} \cdot \frac{-2 - j2 + 1}{-2 - j2 + 2}\,(s + 2 - j2) \right] = \frac{1}{4}(s + 6)        (3.15)

Thus

Y(s) = -\frac{1}{4(s+2)} + \frac{1}{4} \cdot \frac{s+6}{(s+2)^2 + 2^2}

This function is inverted directly from the tables as follows:

y(t) = L^{-1}[Y(s)] = L^{-1}\left[ \frac{-1}{4(s+2)} \right] + \frac{1}{4} L^{-1}\left[ \frac{s+2}{(s+2)^2 + 2^2} \right] + \frac{1}{4} L^{-1}\left[ \frac{4}{(s+2)^2 + 2^2} \right]

y(t) = -\frac{1}{4} e^{-2t} + \frac{1}{4} e^{-2t}\cos(2t) + \frac{1}{2} e^{-2t}\sin(2t),   t ≥ 0

Method: Let

Y(s) = \frac{Y_1(s)}{(s+α)^2 + ω^2}        (3.16)

where Y_1(s) is a rational fraction which contains the remaining terms of Y(s). Then

Y(s) = \frac{Y_1(s)}{(s+α)^2 + ω^2} = \frac{A}{s + (α + jω)} + \frac{B}{s + (α - jω)} + \ldots        (3.17)

The first two terms can be combined as

\frac{A[s + (α - jω)] + B[s + (α + jω)]}{(s+α)^2 + ω^2}        (3.18)

But since Y(s) has real coefficients, A and B must be complex conjugates of each other and their sum must be real. In fact, the sum of a complex number (x + jy) and its conjugate (x - jy) is twice the real part, 2x = 2 Re(x + jy). Since A[s + (α - jω)] is the complex conjugate of B[s + (α + jω)], we can write

Y(s) = \frac{2\,\mathrm{Re}\{ A[s + (α - jω)] \}}{(s+α)^2 + ω^2} + \ldots        (3.19)

Evaluating the coefficient A,

A = \left. [s + (α + jω)]\, Y(s) \right|_{s=-α-jω} = \frac{Y_1(-α - jω)}{-α - jω + α - jω} = \frac{Y_1(-α - jω)}{-2jω}

so that

Y(s) = \frac{-\mathrm{Re}\left[ \frac{1}{jω} Y_1(-α - jω)\, [s + α - jω] \right]}{(s+α)^2 + ω^2} + \ldots        (3.20)

Example 3.4

Find the inverse transform of

Y(s) = -\frac{s^2 + s - 10}{(s+1)^2 (s^2 + 2^2)}        (3.21)

Y(s) is first expanded into partial fractions:

\frac{-(s^2 + s - 10)}{(s+1)^2 (s^2 + 2^2)} = \frac{A}{(s+1)^2} + \frac{B}{s+1} + \frac{Cs + D}{s^2 + 2^2}        (3.22)

The coefficients are then evaluated as follows:

A = \left. \frac{-(s^2 + s - 10)}{s^2 + 2^2} \right|_{s=-1} = 2

The second coefficient, at s = -1,

B = \left. \frac{d}{ds}\left[ \frac{-s^2 - s + 10}{s^2 + 2^2} \right] \right|_{s=-1} = 1

Now obtain C and D. With

Y_1(s) = \frac{-(s^2 + s - 10)}{(s+1)^2}

Eq. (3.20) (here α = 0, ω = 2) gives

\frac{-\mathrm{Re}\left[ \frac{1}{j2} \cdot \frac{-(-4 - j2 - 10)}{(1 - j2)^2}\,(s - j2) \right]}{s^2 + 2^2} + \ldots = \frac{-s - 2}{s^2 + 2^2} + \ldots        (3.23)

Hence

Y(s) = \frac{2}{(s+1)^2} + \frac{1}{s+1} + \frac{-s - 2}{s^2 + 2^2}        (3.24)

y(t) = 2te^{-t} + e^{-t} - \cos(2t) - \sin(2t),   t ≥ 0        (3.25)

Finally, one may use the method familiar from high-school algebra, in which one writes simultaneous algebraic equations in the unknown constants by equating coefficients of like powers of s.

3.3 INVERSION INTEGRAL METHOD

A method of finding inverse transforms which is distinct from the partial fraction method is the method of residues. Associated with each pole of a function of a complex variable is a particular coefficient in the series expansion of the function around the pole. The method of residues states:

If F(s) is a rational fraction, then

L^{-1}[F(s)] = \sum_{\text{all poles}} \left[ \text{residues of } F(s)e^{st} \right]        (3.26)

where the residue at an nth-order pole at s = s_1 is given by

R_{s_1} = \frac{1}{(n-1)!} \left[ \frac{d^{n-1}}{ds^{n-1}} (s - s_1)^n F(s) e^{st} \right]_{s=s_1}        (3.27)

Example 3.5

Find the inverse Laplace transform of

F(s) = \frac{2s^2 + 3s + 3}{(s+1)(s+2)(s+3)}        (3.28)

The residues of F(s)e^{st} are

R_{-1} = \left[ \frac{(2s^2 + 3s + 3)e^{st}}{(s+2)(s+3)} \right]_{s=-1} = e^{-t}

R_{-2} = \left[ \frac{(2s^2 + 3s + 3)e^{st}}{(s+1)(s+3)} \right]_{s=-2} = -5e^{-2t}

R_{-3} = \left[ \frac{(2s^2 + 3s + 3)e^{st}}{(s+1)(s+2)} \right]_{s=-3} = 6e^{-3t}

f(t) = L^{-1}[F(s)] = e^{-t} - 5e^{-2t} + 6e^{-3t}
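The residues of Example 3.5 can also be obtained numerically; scipy.signal.residue expands N(s)/D(s) into a sum of terms r_i/(s - p_i). A small sketch (the printed ordering of the poles may differ):

from scipy.signal import residue

b = [2, 3, 3]          # numerator 2s^2 + 3s + 3
a = [1, 6, 11, 6]      # denominator (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6
r, p, k = residue(b, a)
print(r)               # residues {1, -5, 6}
print(p)               # poles {-1, -2, -3}, giving f(t) = e^-t - 5e^-2t + 6e^-3t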

Example 3.6

Find the inverse Laplace transform of

F(s) = \frac{1}{(s+α)^4}        (3.29)

L^{-1}\left[ \frac{1}{(s+α)^4} \right] = \frac{1}{3!} \left[ \frac{d^3}{ds^3} e^{st} \right]_{s=-α} = \frac{t}{3!} \left. \frac{d^2}{ds^2} e^{st} \right|_{s=-α} = \frac{t^2}{3!} \left. \frac{d}{ds} e^{st} \right|_{s=-α}

f(t) = \frac{t^3}{3!} \left. e^{st} \right|_{s=-α} = \frac{t^3}{3!} e^{-αt}        (3.30)

3.4 POLES AND ZEROS

A rational algebraic function is one whose numerator and denominator can be factorized as below:

\frac{(s - z_1)(s - z_2)\ldots(s - z_n)}{(s - p_1)(s - p_2)\ldots(s - p_m)}        (3.31)

The values z_1, z_2, \ldots, z_n at which the numerator vanishes are called the zeros of the function, and the values p_1, p_2, \ldots, p_m at which the denominator vanishes are called the poles of the function.

Consider

Y(s) = \frac{s^2 + 2s + 1}{(s+1)(s+2)^2(s+3)}        (3.32)

We have previously observed that y(t) will contain terms like e^{-t}, e^{-2t}, e^{-3t}, which depend upon the nature of the poles. Consider also

\frac{s+1}{s(s+2)(s+3)}

The location of the poles gives an idea of the nature of the behaviour of the solution. The response component due to a pole in the left-half plane dies out after some time. A pole on the imaginary axis (other than at the origin) gives an oscillatory response component. A pole in the right-half plane results in an ever-increasing response. The contribution of the zeros to the solution is in the form of amplitude and phase shift.

3.5 THE SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS

Example 3.7

Consider the differential equation

\frac{d^2 x}{dt^2} + 3\frac{dx}{dt} + 2x = 0,   x(0) = -2, \dot{x}(0) = 1        (3.33)

Transforming,

s^2 X(s) - s x(0) - \frac{dx}{dt}(0) + 3s X(s) - 3x(0) + 2X(s) = 0

s^2 X(s) + 2s - 1 + 3s X(s) + 6 + 2X(s) = 0        (3.34)

(s^2 + 3s + 2) X(s) = -(2s + 5)

X(s) = \frac{-(2s+5)}{s^2 + 3s + 2} = \frac{-(2s+5)}{(s+1)(s+2)} = \frac{1}{s+2} - \frac{3}{s+1}        (3.35)

L^{-1}[X(s)] = x(t) = e^{-2t} - 3e^{-t},   t ≥ 0        (3.36)

Example 3.8

Find the solution of the following differential equation:

\frac{d^2 x}{dt^2} + 4x = 0        (3.37)

for t > 0, with x(0) = -2 and \dot{x}(0) = 4.

s^2 X(s) - s x(0) - \frac{dx}{dt}(0) + 4X(s) = 0        (3.38)

s^2 X(s) + 2s - 4 + 4X(s) = 0        (3.39)

X(s) = \frac{-2s + 4}{s^2 + 4}        (3.40)

X(s) = \frac{-2s}{s^2 + 4} + \frac{4}{s^2 + 4}        (3.41)

x(t) = -2\cos(2t) + 2\sin(2t),   t ≥ 0        (3.42)

Example 3.9

\frac{d^2 x}{dt^2} + 4\frac{dx}{dt} + 5x = 0,   for t > 0,   x(0) = 0, \frac{dx}{dt}(0) = 2

s^2 X(s) - s x(0) - \frac{dx}{dt}(0) + 4s X(s) - 4x(0) + 5X(s) = 0

s^2 X(s) - 2 + 4s X(s) + 5X(s) = 0

X(s) = \frac{2}{s^2 + 4s + 5} = \frac{2}{(s+2)^2 + 1}

x(t) = 2e^{-2t}\sin t,   t ≥ 0        (3.43)
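As a cross-check, Example 3.9 can be solved with SymPy's ODE solver instead of transforms; the sketch below is only a verification of Eq. (3.43).

import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

ode = sp.Eq(x(t).diff(t, 2) + 4*x(t).diff(t) + 5*x(t), 0)
sol = sp.dsolve(ode, x(t), ics={x(0): 0, x(t).diff(t).subs(t, 0): 2})
print(sol)    # -> Eq(x(t), 2*exp(-2*t)*sin(t)), as in Eq. (3.43)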

PROBLEMS

3.1 Find the inverse Laplace transforms of the following functions by partial fractions.

a) \frac{1}{s(s+a)(s+b)}
b) \frac{1}{s^2(s+a)}
c) \frac{1}{s[(s+a)^2 + b^2]}
d) \frac{s+a}{s^2(s^2 + b^2)}
e) \frac{s^2 + 3s + 1}{s^2 + 5s + 1}
f) \frac{s^2 + 3s + 1}{s(s+1)[(s+1)^2 + 1]}

3.2 Find the inverse Laplace transforms of the following using the inversion integral.

a) \frac{s+2}{s(s+b)}
b) \frac{s+1}{s(s^2+1)^2(s+2)}
c) \frac{s+a}{s^2(s^2 + a^2)}
d) \frac{1}{(s+1)^5}

3.3 Find the solution of each of the following differential equations.

a) \frac{dx}{dt} + a_1 x = a_2 + a_3 t,   x(0) = x_0
b) \frac{dx}{dt} + a_1 x = a_2 δ(t),   x(0) = x_0

3.4 Solve the following equations.

a) \frac{d^2 x}{dt^2} + 5\frac{dx}{dt} + 6x = 0,   for t > 0,   x(0) = 0, \frac{dx}{dt}(0) = 1
b) \frac{d^2 x}{dt^2} + ω^2 x = 0,   for t > 0,   x(0) = a, \frac{dx}{dt}(0) = b
c) \frac{d^4 x}{dt^4} + 6\frac{d^3 x}{dt^3} + 13\frac{d^2 x}{dt^2} + 12\frac{dx}{dt} + 4x = δ(t),   for t > 0
d) \frac{d^3 x}{dt^3} + 2\frac{d^2 x}{dt^2} + 4\frac{dx}{dt} + 4x = δ(t),   for t > 0

with all initial conditions zero for (c) and (d).

3.5 Find the time solution of the following differential equation,

\frac{d^2 x}{dt^2} + 4\frac{dx}{dt} + 4x = f(t),   for t > 0

where (i) f(t) = a u(t), (ii) f(t) = e^{at} u(t).

3.6 Find the solution of the following integro-differential equations,

a) \frac{dx}{dt} + 4x + 4\int x\, dt = at
b) \frac{d^2 x}{dt^2} + 2\frac{dx}{dt} - x - 2\int x\, dt = u(t)

where all initial conditions are zero except x(0) = x_0.

CHAPTER - IV

LAPLACE TRANSFORM THEOREMS

In this chapter we present important theorems which help in obtaining the solution of practical problems.

4.1 LINEARITY

Theorem 4.1: If a and b are constants and f(t) and g(t) are transformable functions, then

L[a f(t) + b g(t)] = a L[f(t)] + b L[g(t)]        (4.1)

This theorem expresses the linearity of the Laplace transformation.

Proof:

\int_0^\infty [a f(t) + b g(t)] e^{-st}\, dt = a \int_0^\infty f(t) e^{-st}\, dt + b \int_0^\infty g(t) e^{-st}\, dt = a F(s) + b G(s)        (4.2)

Example 4.1

L[3t + 5] = L[3t] + L[5] = \frac{3}{s^2} + \frac{5}{s} = \frac{5s + 3}{s^2}        (4.3)

Example 4.2

L[\sinh(bt)] = L\left[ \frac{e^{bt} - e^{-bt}}{2} \right]        (4.4)
= \frac{1}{2} L[e^{bt}] - \frac{1}{2} L[e^{-bt}] = \frac{1}{2}\cdot\frac{1}{s-b} - \frac{1}{2}\cdot\frac{1}{s+b} = \frac{b}{s^2 - b^2}        (4.5)

4.2 COMPLEX TRANSLATION

Theorem 4.2: If f(t) is transformable with transform F(s), then e^{at} f(t) is also transformable, where a is a real constant, and has the transform F(s - a), and conversely.

Proof: By definition,

F(s) = \int_0^\infty f(t) e^{-st}\, dt,   Re{s} > c (say)        (4.6)

so that, by comparison,

\int_0^\infty e^{at} f(t) e^{-st}\, dt = \int_0^\infty e^{-(s-a)t} f(t)\, dt = F(s - a),   Re{s} > c + a        (4.7)

Example 4.3

L[\cos(ωt)] = \frac{s}{s^2 + ω^2}

From the theorem,

L[e^{-at} \cos(ωt)] = \frac{s + a}{(s+a)^2 + ω^2}

Example 4.4

L[u(t)] = \frac{1}{s}

From the theorem,

L[e^{bt}] = \frac{1}{s - b}

Example 4.5

L[t] = \frac{1}{s^2}        (4.8)

From the theorem,

L[t e^{-at}] = \frac{1}{(s+a)^2}

4.3 REAL TRANSLATION

Theorem 4.3a: Real translation (right): If f(t) is a transformable function with transform F(s) and a is a non-negative real number, then f(t - a), or more correctly f(t - a)u(t - a), is a transformable function with transform e^{-as} F(s), and conversely.

This theorem states that translation in the real domain corresponds to multiplication by an exponential in the transform domain.

Proof: Let g(t) = f(t - a)u(t - a), a > 0. Then

G(s) = \int_0^\infty e^{-st} g(t)\, dt = \int_0^\infty e^{-st} f(t - a)u(t - a)\, dt        (4.9)

Now let v = t - a:

G(s) = \int_0^\infty e^{-s(v+a)} f(v)u(v)\, dv = e^{-as} \int_0^\infty e^{-sv} f(v)\, dv = e^{-as} F(s)        (4.10)

Example 4.6

f(t) = u(t - a),   a > 0

Now L[u(t)] = 1/s, so from the theorem

L[u(t - a)] = \frac{e^{-as}}{s},   i.e.   L[f(t)] = \frac{e^{-as}}{s}        (4.11)

Theorem 4.3b: Real translation (left): If f(t) is a transformable function with transform F(s) and a is a non-negative real number for which f(t + a) = 0 for t < 0, then f(t + a) is transformable with transform e^{as} F(s).

4.4 REAL DIFFERENTIATION

Theorem 4.4: If f(t) and its derivative df(t)/dt are both transformable functions, then their transforms are related by the equation

L\left[ \frac{d}{dt} f(t) \right] = s L[f(t)] - f(0)        (4.12)

Hence, differentiation in the time domain corresponds to multiplication by s in the s domain.

Proof:

L\left[ \frac{df(t)}{dt} \right] = \int_0^\infty e^{-st} \frac{df}{dt}\, dt        (4.13)

Integrating by parts,

= \left. e^{-st} f(t) \right|_0^\infty - \int_0^\infty (-s e^{-st}) f(t)\, dt        (4.14)

Since f(t) is transformable, the first term on the right vanishes at the upper limit and we obtain

L\left[ \frac{df}{dt} \right] = -f(0) + s F(s)        (4.15)

Example 4.7

Let f(t) = e^{at} for t > 0. Then df/dt = a e^{at} for t > 0 and f(0) = 1, so that

L\left[ \frac{df}{dt} \right] = s L[e^{at}] - 1 = \frac{s}{s - a} - 1        (4.16)

Theorem 4.4 can be extended to higher-order derivatives. For example, with

L\left[ \frac{dg}{dt} \right] = s L[g(t)] - g(0)   and   g(t) = \frac{df(t)}{dt}

we obtain

L\left[ \frac{d^2 f}{dt^2} \right] = s L\left[ \frac{df}{dt} \right] - \left. \frac{df}{dt} \right|_{t=0}        (4.17)
= s[s F(s) - f(0)] - \left. \frac{df}{dt} \right|_{t=0} = s^2 F(s) - s f(0) - \left. \frac{df}{dt} \right|_{t=0}        (4.18)

So that, if f(t) and its first n - 1 derivatives exist and the nth derivative is transformable, then the kth derivatives are also transformable (k = 0, 1, 2, \ldots, n - 1) and the following relation holds, where f^{(n)}(t) = d^n f / dt^n:

L\left[ f^{(n)}(t) \right] = s^n L[f(t)] - s^{n-1} f(0) - \ldots - f^{(n-1)}(0)
= s^n L[f(t)] - s^{n-1} f(0) - \sum_{k=1}^{n-1} s^{n-k-1} f^{(k)}(0)        (4.19)

4.5 REAL INTEGRATION

Theorem 4.5: Real integration (definite): If f(t) is transformable, its definite integral \int_0^t f(u)\, du is transformable, and the transforms are related by the equation

L\left[ \int_0^t f(u)\, du \right] = \frac{1}{s} F(s)        (4.20)

Proof: Let g(t) = \int_0^t f(u)\, du. Note that g(0) = 0 and dg/dt = f(t). From Theorem 4.4,

L\left[ \frac{dg}{dt} \right] = s L[g(t)] - g(0)

F(s) = s G(s)

and

G(s) = \frac{1}{s} F(s)        (4.21)

This states that integration in the real domain corresponds to division by s in the transform domain.

Example 4.8

Since

1 - \cos ωt = ω \int_0^t \sin(ωu)\, du        (4.22)

it follows from Theorem 4.5 that

L[1 - \cos ωt] = \frac{ω}{s} L[\sin ωt] = \frac{ω^2}{s(s^2 + ω^2)}

Also, directly,

L[1 - \cos ωt] = \frac{1}{s} - \frac{s}{s^2 + ω^2} = \frac{ω^2}{s(s^2 + ω^2)}

Example 4.9

Let f(t) have the transform F(s). Then, by Theorem 4.5, F(s)/s^2 is the transform of the function

g(t) = \int_0^t \left[ \int_0^τ f(u)\, du \right] dτ

Changing the order of integration,

g(t) = \int_0^t f(u) \left[ \int_u^t dτ \right] du = \int_0^t (t - u) f(u)\, du        (4.23)

Further, F(s)/s^3 is the transform of the function

b(t) = \int_0^t g(τ)\, dτ = \int_0^t \left[ \int_0^τ (τ - u) f(u)\, du \right] dτ
= \int_0^t f(u) \int_u^t (τ - u)\, dτ\, du = \int_0^t f(u) \left[ \frac{(τ - u)^2}{2!} \right]_u^t du = \int_0^t \frac{(t - u)^2}{2!} f(u)\, du        (4.24)

If the results of the above two examples are tabulated as transform pairs, then extension of these results gives the following table.

Transform              Function
F(s)                   f(t)
(1/s) F(s)             \int_0^t f(u)\, du
(1/s^2) F(s)           \int_0^t (t - u) f(u)\, du
...                    ...
(1/s^{n+1}) F(s)       \int_0^t \frac{(t - u)^n}{n!} f(u)\, du

Frequently, it is more desirable to deal with indefinite integrals, such as

g(t) = \int_{-\infty}^t f(u)\, du = \int_0^t f(u)\, du + \int_{-\infty}^0 f(u)\, du = \int_0^t f(u)\, du + g(0)

L[g(t)] = \frac{1}{s} L[f(t)] + \frac{g(0)}{s}        (4.25)

Summarizing the above: if f(t) is transformable, its integral f^{(-1)}(t) = \int_{-\infty}^t f(u)\, du is transformable, and the transforms are related by the equation

L\left[ f^{(-1)}(t) \right] = L\left[ \int_{-\infty}^t f(u)\, du \right] = \frac{1}{s} L[f(t)] + \frac{f^{(-1)}(0)}{s}        (4.26)

4.6 COMPLEX DIFFERENTIATION

Theorem 4.6: If f(t) is transformable with transform F(s), then t f(t) is also transformable and has the transform -\frac{d}{ds} F(s).

Proof: By definition,

F(s) = \int_0^\infty e^{-st} f(t)\, dt        (4.27)

\frac{dF}{ds} = \frac{d}{ds} \int_0^\infty e^{-st} f(t)\, dt = \int_0^\infty \frac{d}{ds}(e^{-st}) f(t)\, dt = \int_0^\infty -t e^{-st} f(t)\, dt = \int_0^\infty e^{-st} [-t f(t)]\, dt

or

L[t f(t)] = -\frac{dF}{ds}        (4.28)

Example 4.10 Using the result of Example 1.2,

L[u(t)] = \frac{1}{s}

It follows that

L[t] = -\frac{d}{ds}\left[ \frac{1}{s} \right] = \frac{1}{s^2}

L[t^2] = -\frac{d}{ds}\left[ \frac{1}{s^2} \right] = \frac{2!}{s^3}

and

L[t^n] = (-1)^n \frac{d^n}{ds^n}\left[ \frac{1}{s} \right] = \frac{n!}{s^{n+1}}        (4.29)

Example 4.11

L[\sin ωt] = \frac{ω}{s^2 + ω^2}

L[t \sin ωt] = -\frac{d}{ds}\left[ \frac{ω}{s^2 + ω^2} \right] = \frac{2ωs}{(s^2 + ω^2)^2}        (4.30)

Example 4.12

L[t e^{-αt}] = -\frac{d}{ds}\left[ \frac{1}{s + α} \right] = \frac{1}{(s + α)^2}        (4.31)

4.7 COMPLEX INTEGRATION

Theorem 4.7: If both f(t) and f(t)/t are transformable functions and the transform of f(t) is F(s), then the transform of f(t)/t is related to it by the equation

L\left[ \frac{f(t)}{t} \right] = \int_s^\infty F(ξ)\, dξ        (4.32)

Hence division by t in the real domain is equivalent to integration in the transform domain.

Proof: By definition,

F(s) = \int_0^\infty e^{-st} f(t)\, dt

\int_s^\infty F(ξ)\, dξ = \int_s^\infty \left[ \int_0^\infty e^{-ξt} f(t)\, dt \right] dξ

Assuming the validity of changing the order of integration,

= \int_0^\infty f(t) \left[ \int_s^\infty e^{-ξt}\, dξ \right] dt = \int_0^\infty f(t) \left[ \frac{e^{-ξt}}{-t} \right]_s^\infty dt = \int_0^\infty \frac{f(t) e^{-st}}{t}\, dt = L\left[ \frac{f(t)}{t} \right]        (4.33)

Example 4.13

L[\sin ωt] = \frac{ω}{s^2 + ω^2}

It follows that

L\left[ \frac{\sin ωt}{t} \right] = \int_s^\infty \frac{ω}{ξ^2 + ω^2}\, dξ = \left. \tan^{-1}\frac{ξ}{ω} \right|_s^\infty = \frac{π}{2} - \tan^{-1}\frac{s}{ω} = \cot^{-1}\frac{s}{ω} = \tan^{-1}\frac{ω}{s}        (4.34)

4.8 SECOND INDEPENDENT VARIABLE

Suppose that, for a particular α, f(t, α) is transformable; the transform will also (in general) be a function of that parameter. That is,

F(s, α) = \int_0^\infty e^{-st} f(t, α)\, dt        (4.35)

If α is made to vary, then under suitable conditions

\lim_{α \to α_0} F(s, α) = \lim_{α \to α_0} \int_0^\infty e^{-st} f(t, α)\, dt = \int_0^\infty e^{-st} \lim_{α \to α_0} f(t, α)\, dt

\frac{∂}{∂α} F(s, α) = \frac{∂}{∂α} \int_0^\infty e^{-st} f(t, α)\, dt = \int_0^\infty e^{-st} \frac{∂f(t, α)}{∂α}\, dt

and

\int_{α_0}^{α} F(s, α)\, dα = \int_{α_0}^{α} \int_0^\infty e^{-st} f(t, α)\, dt\, dα = \int_0^\infty e^{-st} \left[ \int_{α_0}^{α} f(t, α)\, dα \right] dt        (4.36)

Hence the following theorem.

Theorem 4.8: If f(t, α) is a transformable function with respect to t, with α a second independent variable, then under suitable conditions the following relations hold:

\lim_{α \to α_0} L[f(t, α)] = L\left[ \lim_{α \to α_0} f(t, α) \right]

\frac{∂}{∂α} L[f(t, α)] = L\left[ \frac{∂}{∂α} f(t, α) \right]

\int_{α_0}^{α_1} L[f(t, α)]\, dα = L\left[ \int_{α_0}^{α_1} f(t, α)\, dα \right]        (4.37)

Example 4.14

Since

L[e^{-αt}] = \frac{1}{s + α}

it follows from Theorem 4.8 that

L[u(t)] = L\left[ \lim_{α \to 0} e^{-αt} \right] = \lim_{α \to 0} \frac{1}{s + α} = \frac{1}{s}   (refer to Example 1.2)        (4.38)

L[t e^{-αt}] = L\left[ -\frac{∂}{∂α} e^{-αt} \right] = -\frac{∂}{∂α}\cdot\frac{1}{s + α} = \frac{1}{(s + α)^2}        (4.39)

L\left[ \frac{1 - e^{-αt}}{t} \right] = L\left[ \int_0^{α} e^{-βt}\, dβ \right] = \int_0^{α} \frac{1}{s + β}\, dβ = \left. \log(s + β) \right|_0^{α} = \log\left[ \frac{s + α}{s} \right]        (4.40)

4.9 PERIODIC FUNCTIONS

A function f(t) is periodic with period T if

f(t + T) = f(t)        (4.41)

for every t. Since f(t + 2T) = f(t + T + T) = f(t + T) = f(t), it follows that if f(t) has period T,

f(t + kT) = f(t),   k = 1, 2, 3, \ldots        (4.42)

Theorem 4.9: If f(t) is a transformable periodic function of period T, then its transform may be found by integration over the first period according to the formula

L[f(t)] = \frac{\int_0^T e^{-st} f(t)\, dt}{1 - e^{-sT}}        (4.43)

Proof:

\int_0^\infty e^{-st} f(t)\, dt = \int_0^T e^{-st} f(t)\, dt + \int_T^{2T} e^{-st} f(t)\, dt + \ldots + \int_{(k-1)T}^{kT} e^{-st} f(t)\, dt + \ldots = \sum_{k=0}^{\infty} \int_{kT}^{(k+1)T} e^{-st} f(t)\, dt

Making the change of variable t = u + kT, dt = du, so that

\int_0^\infty e^{-st} f(t)\, dt = \sum_{k=0}^{\infty} \int_0^T e^{-s(u+kT)} f(u + kT)\, du        (4.44)
= \int_0^T e^{-su} f(u)\, du \left( \sum_{k=0}^{\infty} e^{-skT} \right) = \frac{\int_0^T e^{-su} f(u)\, du}{1 - e^{-sT}}        (4.45)

Example 4.15

Consider the periodic function of Fig. 4.1. Let f(t) be the square wave

f(t) = \begin{cases} 1, & 2n \le t < 2n+1 \\ 0, & 2n+1 \le t < 2n+2 \end{cases}   n = 0, 1, 2, \ldots

L[f(t)] = \frac{\int_0^2 f(t) e^{-st}\, dt}{1 - e^{-2s}} = \frac{\int_0^1 e^{-st}\, dt}{1 - e^{-2s}} = \frac{\left. -\frac{1}{s} e^{-st} \right|_0^1}{1 - e^{-2s}} = \frac{1 - e^{-s}}{s(1 - e^{-s})(1 + e^{-s})} = \frac{1}{s(1 + e^{-s})}        (4.46)

Example 4.16

Consider the waveform of Fig. 4.2 and transform it into the Laplace domain.

f(t) = u(t) - 2u(t-1) + u(t-2)

L[f(t)] = \frac{1}{s} - \frac{2e^{-s}}{s} + \frac{e^{-2s}}{s} = \frac{(1 - e^{-s})^2}{s}

Define the square wave as f_1(t) = \sum_{k=0}^{\infty} f(t - 2k), with period T = 2, so that by Theorem 4.9

L[f_1(t)] = F_1(s) = \frac{(1 - e^{-s})^2}{s(1 - e^{-2s})}

Example 4.17

The function f(t) = |\sin t| is a rectified sine wave and is periodic with period T = π. A single period of f(t) corresponds to a half period of the function \sin t. Using the symmetry of \sin t, we obtain the first period by shifting \sin t to the right by π units and adding the result to \sin t, so that

f_1(t) = \sin t + \sin(t - π)\, u(t - π)

Then

L[f_1(t)] = F_1(s) = \frac{1 + e^{-πs}}{s^2 + 1}

and

L[|\sin t|] = \frac{1 + e^{-πs}}{(s^2 + 1)(1 - e^{-πs})}

4.10 CHANGE OF SCALE

Theorem 4.10: If the function f(t) is transformable with L[f(t)] = F(s), and a is a positive constant (or a variable independent of t and s), then

L\left[ f\!\left( \frac{t}{a} \right) \right] = a F(as)        (4.47)

Hence division of the variable by a constant in the real domain results in multiplication of both the transform F(s) and the transform variable s by the same constant.

Proof: We make the change of variable τ = t/a in the integral definition of the transform,

F(s) = \int_0^\infty f(τ) e^{-τs}\, dτ        (4.48)

After changing the variable, we obtain

F(as) = \int_0^\infty f\!\left( \frac{t}{a} \right) e^{-as(t/a)}\, d\!\left( \frac{t}{a} \right)        (4.49)

Rearranging,

a F(as) = \int_0^\infty f\!\left( \frac{t}{a} \right) e^{-st}\, dt        (4.50)

Example 4.18

Given the transform pair

\frac{s + 50}{(s + 50)^2 + 10^4} = L[e^{-50t} \cos 100t]

in which t is in seconds, suppose we wish to find the transform pair in which t is measured in milliseconds. This is accomplished by letting a = 10^3:

L\left[ e^{-0.05t} \cos(0.1 t) \right] = 1000 \left[ \frac{1000s + 50}{(1000s + 50)^2 + 10^4} \right] = \frac{s + 0.05}{(s + 0.05)^2 + 10^{-2}}        (4.51)

4.11 REAL CONVOLUTION

Suppose f(t) and g(t) have transforms F(s) and G(s) respectively. Is there a function h(t) with transform F(s)G(s), and if so, how is it related to f(t) and g(t)?

Theorem 4.11: If f(t) and g(t) have transforms F(s) and G(s) respectively, the product F(s)G(s) is the transform of

h(t) = \int_0^t f(v) g(t - v)\, dv        (4.52)

h(t) is termed the real convolution of the functions f(t) and g(t) and is usually written f(t) * g(t):

h(t) = f(t) * g(t) = g(t) * f(t)        (4.53)

since the convolution is symmetric. This theorem states that the product of two functions of s is the Laplace transform of the convolution of the two respective functions of time t.

Proof:

F(s) G(s) = \int_0^\infty e^{-sv} f(v)\, dv \int_0^\infty e^{-su} g(u)\, du = \int_0^\infty \int_0^\infty e^{-s(u+v)} f(v) g(u)\, du\, dv        (4.54)

For a fixed v let t = u + v, dt = du, with t = v at u = 0 and t = ∞ at u = ∞, so that

F(s) G(s) = \int_0^\infty \int_v^\infty e^{-st} f(v) g(t - v)\, dt\, dv
= \int_0^\infty \int_0^\infty e^{-st} f(v) g(t - v)\, dv\, dt,   since g(t - v) = 0 for t < v
= \int_0^\infty e^{-st} \left[ \int_0^\infty f(v) g(t - v)\, dv \right] dt = \int_0^\infty e^{-st} h(t)\, dt

where

h(t) = \int_0^\infty f(v) g(t - v)\, dv = \int_0^t f(v) g(t - v)\, dv,   since g(t) = 0 for t < 0        (4.55)

Example 4.19

L[t] = \frac{1}{s^2}   and   L[\sin ωt] = \frac{ω}{s^2 + ω^2}

so the convolution

h(t) = \int_0^t (t - u) \sin ωu\, du = \int_0^t u \sin ω(t - u)\, du

has the transform

H(s) = \frac{ω}{s^2 (s^2 + ω^2)}        (4.56)

Indeed, integrating by parts,

h(t) = \left. \frac{u}{ω} \cos ω(t - u) \right|_0^t - \int_0^t \frac{\cos ω(t - u)}{ω}\, du = \frac{t}{ω} + \left. \frac{\sin ω(t - u)}{ω^2} \right|_0^t = \frac{t}{ω} - \frac{\sin ωt}{ω^2}

H(s) = \frac{1}{ω s^2} - \frac{1}{ω(s^2 + ω^2)} = \frac{ω}{s^2(s^2 + ω^2)}        (4.57)

Example 4.20

We are given the transform

F(s) = \frac{s}{(s + a)(s^2 + 1)}

and are to find the inverse transform f(t). From the known pairs

\frac{1}{s + a} = L[e^{-at}],   \frac{s}{s^2 + 1} = L[\cos t]

we deduce that

f(t) = e^{-at} * \cos t = \int_0^t e^{-a(t-u)} \cos u\, du = e^{-at} \int_0^t e^{au} \cos u\, du = \frac{1}{a^2 + 1}\left[ a \cos t + \sin t - a e^{-at} \right]        (4.58)
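The convolution of Example 4.20 is easy to verify numerically: a Riemann-sum approximation of the convolution integral should agree with the closed form (4.58). The value a = 2 below is an arbitrary assumption for the check.

import numpy as np

a, dt = 2.0, 1e-3
t = np.arange(0, 5, dt)
conv = np.convolve(np.exp(-a*t), np.cos(t))[:t.size] * dt   # Riemann sum of the convolution integral
closed = (a*np.cos(t) + np.sin(t) - a*np.exp(-a*t)) / (a**2 + 1)
print(np.max(np.abs(conv - closed)))                        # small, of order dt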

4.12 FINAL VALUE THEOREM

Theorem 4.12: If f(t) and its first derivative are transformable, f(t) has the transform F(s), and all the singularities of sF(s) lie in the left half plane, then

\lim_{s \to 0} s F(s) = \lim_{t \to \infty} f(t)

This theorem states that the behaviour of sF(s) near the origin of the s-plane corresponds to the behaviour of f(t) for large t (t → ∞).

Proof: From Eq. (4.13),

s F(s) = \int_0^\infty e^{-st} \frac{df}{dt}\, dt + f(0)

so

\lim_{s \to 0} s F(s) = \int_0^\infty \lim_{s \to 0} e^{-st} \frac{df}{dt}\, dt + f(0) = \int_0^\infty \frac{df}{dt}\, dt + f(0) = \left. f(t) \right|_0^\infty + f(0) = \lim_{t \to \infty} f(t) - f(0) + f(0)

\lim_{s \to 0} s F(s) = \lim_{t \to \infty} f(t)        (4.59)

4.13 INITIAL VALUE THEOREM

Theorem 4.13: This theorem states that the behaviour of sF(s) near the point at infinity in the s-plane corresponds to the behaviour of f(t) near t = 0:

\lim_{s \to \infty} s F(s) = f(0)

Proof:

s F(s) = \int_0^\infty e^{-st} \frac{df}{dt}\, dt + f(0)        (4.60)

\lim_{s \to \infty} s F(s) = \lim_{s \to \infty} \int_0^\infty e^{-st} \frac{df}{dt}\, dt + f(0) = \int_0^\infty \lim_{s \to \infty} e^{-st} \frac{df}{dt}\, dt + f(0)

\lim_{s \to \infty} s F(s) = f(0)        (4.61)

Example 4.21

Let

F(s) = \frac{1}{s(s + α)}

Then

s F(s) = \frac{1}{s + α}

\lim_{s \to 0} s F(s) = \frac{1}{α} = \lim_{t \to \infty} f(t)        (4.62)
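Both limits of Example 4.21 can be checked symbolically; the sketch below computes lim_{s→0} sF(s) and the limit of the inverted f(t) as t → ∞.

import sympy as sp

s, t, alpha = sp.symbols('s t alpha', positive=True)
F = 1 / (s*(s + alpha))

print(sp.limit(s*F, s, 0))                    # -> 1/alpha
f = sp.inverse_laplace_transform(F, s, t)     # (1 - exp(-alpha*t))/alpha
print(sp.limit(f, t, sp.oo))                  # -> 1/alpha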

Example 4.22

F(s) = \frac{ω}{s^2 + ω^2},   s F(s) = \frac{ωs}{s^2 + ω^2}

This does not satisfy the conditions of Theorem 4.12, since its singularities lie on the imaginary axis, so the final value theorem cannot be applied.

4.14 EXTENSION TO COMPLEX FUNCTIONS

Theorem 4.14: If f(t) is a complex-valued transformable function, then its real and imaginary parts are transformable, and the operations of transforming and of taking real and imaginary parts commute.

It follows from linearity that

L[\mathrm{Re}\, f(t)] = \mathrm{Re}\, L[f(t)],   L[\mathrm{Im}\, f(t)] = \mathrm{Im}\, L[f(t)]        (4.63)

Example 4.23

Consider the transform pair

L[e^{jωt}] = \frac{1}{s - jω} = \frac{s + jω}{s^2 + ω^2},   ω real

But by Theorem 4.14,

L[e^{jωt}] = L[\cos ωt + j \sin ωt] = L[\cos ωt] + j L[\sin ωt]

Since

L[\cos ωt] = \frac{s}{s^2 + ω^2},   L[\sin ωt] = \frac{ω}{s^2 + ω^2}

therefore

L[e^{jωt}] = \frac{s}{s^2 + ω^2} + j\frac{ω}{s^2 + ω^2}        (4.64)

PROBLEMS

4.1 Use Theorem 4.1 to find the Laplace transform of the function a + bt + ct^2.

4.2 From the knowledge that L[3t + 5] = (5s + 3)/s^2, use Theorem 4.2 to find L[(3t + 5)e^{-t}].

4.3 Find the Laplace transform of the ramp function translated a units (a > 0) to the right, (t - a)u(t - a).

4.4 Since L[\cos ωt] = s/(s^2 + ω^2), use Theorem 4.6 to derive L[t \cos ωt].

4.5 Use Theorems 4.2 and 4.6 to obtain L[t e^{-at} \sin ωt] from L[\sin ωt] = ω/(s^2 + ω^2).

4.6 Starting with the transform pair

L^{-1}\left[ \frac{1}{s[(s+α)^2 + β^2]} \right] = \frac{1}{α^2 + β^2} + \frac{e^{-αt}}{β\sqrt{α^2 + β^2}} \sin\left( βt - \tan^{-1}\frac{β}{-α} \right)

use Theorem 4.8 to derive

L^{-1}\left[ \frac{1}{s(s+α)^2} \right]

4.7 Starting with the transform pair L^{-1}[1/(s + α)] = e^{-αt}, use Theorem 4.8 to derive the following.

a) L^{-1}[1/s]
b) L^{-1}[1/(s+α)^2]
c) L^{-1}[1/s^n]

4.8 Use Theorem 4.9 to find the Laplace transform of the function shown in Fig. 4.3.

4.9 Use the convolution integral to find the following inverse transforms.

a) L^{-1}\left[ \frac{1}{s}\cdot\frac{1}{s + α} \right] = \int_0^t u(τ) e^{-α(t-τ)}\, dτ

b) L^{-1}\left[ \frac{1}{s + jβ}\cdot\frac{1}{s - jβ} \right] = \int_0^t e^{-jβτ} e^{jβ(t-τ)}\, dτ

4.10 Assuming all poles lie to the left of the imaginary axis, find the final and initial values of the time function whose transform is

F(s) = \frac{k(s + d)}{s^3 + a_2 s^2 + a_1 s + a_0}

4.11 If L[f(t)] = F(s), find the transforms of the following functions.

a) f(t)\, u(t - α)
b) f(t)[u(t - α) - u(t - β)]

Assume that α and β are positive real numbers.

4.12 Find the inverse Laplace transforms of the following functions.

a) \frac{(1 - e^{-s})^2}{s}
b) \frac{1 - e^{-s}}{s^2(1 + e^{-s})}

4.13 The Laplace transform of the output voltage of an amplifier is given by

L[e_0(t)] = \frac{a_3 s^3 + a_2 s^2 + a_1 s + a_0}{s^2(s + α)[(s + α)^2 + β^2]}

The input is a unit impulse. All constants (a_0, a_1, a_2, a_3, α, β) are positive and real. Without carrying out the inverse transform, find:

a) the form of each term
b) the initial value of e_0
c) the initial values of the first two derivatives of e_0
d) the final value of e_0

CHAPTER - V

S-PLANE ANALYSIS

5.1 INTRODUCTION

The Laplace transform offers a method of finding the solution of linear differential equations with constant coefficients, and the method automatically includes the initial conditions. In s-plane analysis, many problems can be solved without the labor of inverting the Laplace-transformed equations. The necessary design information can be obtained by locating the roots of the characteristic equation in the s-plane and by observing how these roots vary as some parameter is changed.

5.2 SOLUTION OF A SECOND ORDER SYSTEM

M \frac{d^2 x}{dt^2} + B \frac{dx}{dt} + K x = f(t)        (5.1)

Let ξ be the damping ratio and ω_n the undamped natural frequency. The linear differential equation can be written as

\frac{d^2 x}{dt^2} + 2ξω_n \frac{dx}{dt} + ω_n^2 x = f_1(t) = \frac{f(t)}{M}        (5.2)

Comparing the coefficients of Eq. (5.1) and Eq. (5.2),

2ξω_n = \frac{B}{M},   ω_n^2 = \frac{K}{M}        (5.3)

giving

ω_n = \sqrt{\frac{K}{M}}   and   ξ = \frac{B}{2\sqrt{KM}}

The two parameters ξ and ω_n are sufficient to describe the second-order equation. Assuming x(0) = 0 and \dot{x}(0) = 0, the Laplace-transformed equation is

(s^2 + 2ξω_n s + ω_n^2) X(s) = F_1(s)        (5.4)

The ratio X(s)/F(s), defined as the transfer function of the system, is

\frac{X(s)}{F(s)} = \frac{1/M}{s^2 + 2ξω_n s + ω_n^2}        (5.5)

For F(s) = 1 (f(t) = δ(t)), the right-hand side of Eq. (5.5) is the transform of the impulse response of the system. For ξ < 1, inverting gives

x(t) = \frac{1}{Mω_n\sqrt{1 - ξ^2}}\, e^{-ξω_n t} \sin\!\left( ω_n\sqrt{1 - ξ^2}\, t \right),   t ≥ 0        (5.6)

The impulse response is plotted in Figs. 5.1(a)-(c) for ω_n = 10 and ξ = 0.1, 1, 2.
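Figures 5.1(a)-(c) can be reproduced numerically with scipy.signal. The sketch below computes the impulse response of 1/[M(s^2 + 2ξω_n s + ω_n^2)] for ω_n = 10 and ξ = 0.1, 1, 2, taking M = 1 as an assumption.

import numpy as np
from scipy import signal

wn, M = 10.0, 1.0
for xi in (0.1, 1.0, 2.0):
    sys = signal.TransferFunction([1.0/M], [1.0, 2*xi*wn, wn**2])
    t, x = signal.impulse(sys, T=np.linspace(0, 3, 600))
    print(xi, x.max())   # oscillatory for xi < 1, monotone decay for xi >= 1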

The characteristic equation is s^2 + 2ξω_n s + ω_n^2 = 0, and the quantity ξ is the ratio of the damping that exists in the second-order system to the critical damping. Critical damping in the second-order system is defined as the value of damping which produces two equal roots of the characteristic equation and separates the sub-critical and super-critical regions.

For the system of Fig. 5.2(a), the characteristic equation can be written

s^2 + \frac{B}{M} s + \frac{K}{M} = 0        (5.7)

and the roots are

s_i = -\frac{B}{2M} \pm \left( \frac{B^2}{4M^2} - \frac{K}{M} \right)^{1/2}        (5.8)

Critical damping occurs for the value B = B_c which makes the term under the radical of Eq. (5.8) equal to zero, and is found from the expression

\frac{B_c^2}{4M^2} - \frac{K}{M} = 0        (5.9)

B_c = 2\sqrt{KM}        (5.10)

Fig. 5.1(a) Impulse response of the vibration table: undamped natural frequency ω_n = 10, damping ratio ξ = 0.1 (underdamped). Roots of the characteristic equation at s = -1 ± j9.95.

Fig. 5.1(b) Impulse response of the vibration table: ω_n = 10, ξ = 1 (critically damped). Double root at s = -10.

Fig. 5.1(c) Impulse response of the vibration table: ω_n = 10, ξ = 2 (overdamped). Roots at s = -2.68, -37.32.

For this value of B there exists a double real root at -B_c/2M. From the definition, we find the damping ratio to be

ξ = \frac{B/2M}{B_c/2M} = \frac{B}{2\sqrt{KM}}        (5.11)

The undamped natural frequency is the frequency of oscillation that occurs with zero damping. The roots of Eq. (5.7) with B = 0 are

s_i = \pm\sqrt{-\frac{K}{M}} = \pm j\sqrt{\frac{K}{M}}        (5.12)

so the undamped natural frequency is ω_n = \sqrt{K/M}. The roots of the second-order equation are located in the s-plane as shown in Fig. 5.2(b). For constant ω_n, as ξ is varied from 0 to 1, the roots move along a semicircle: at ξ = 0 the roots lie on the jω axis, and at ξ = 1 they lie on the real axis. The radius is the natural frequency ω_n, and the damping ratio is given by cos θ = ξ. For ξ > 1 (overdamped), both roots are real.

In higher-order systems, where there are more than two roots, the system response is often dominated by the two 'least damped' roots, and the results for the second-order system can be extended to approximate the higher-order systems.

5.3 ROUTH HURWITZ STABILITY CRITERION

This method provides a simple and direct means for determining the number of roots ofcharacteristic equation with positive real parts (i.e. roots which lie in the right half s-plane).Although we cannot actually locate the roots, we can determine without factorizing thecharacteristic equation, if any of the roots lie in the right half plane and hence give rise toan unstable system.

The transfer function of a linear system is a ratio of polynomials in s and can be written:

H(s) =N(s)

D(s)=

N (s)

sn + an−1sn−1 + an−2sn−2 + . . . + a1s + a0(5.13)

For a stable system, all the a0s are positive real constants and all powers of s in the denomi-nator polynomial are present. If any power is missing or any coefficient is negative, we knowimmediately that D(s) has roots in the right half plane or on the jω− axis, and the systemis unstable or marginally stable (necessary condition). In this case, it is not necessary tocontinue unless we wish to determine the actual number of roots in the right half plane.

The Routh-Hurwitz method centers about an array which is formed as follows:

57

Page 58: Linear Transforms

sn an an−2 an−4 an−6 . . .

sn−1 an−1 an−3 an−5 an−7 . . .

sn−2 b1 b2 b3 b4 . . . . . .

sn−3 c1 c2 c3 . . . . . .

s1 i1 . . . . . . . . . . . .

s0 j1 . . . . . . . . . . . .

where the constants in the third row are formed by cross multiplying as follows:

b1 =an−1an−2 − anan−3

an−1

b2 =an−1an−2 − anan−5

an−1

b3 =an−1an−6 − anan−7

an−1b4 = . . . . . . . . . . . . (5.15)

We continue the pattern until the remaining b0s are equal to zero. The next row is formedby cross multiplying, using the sn−1 and sn−2 rows. These constants are evaluated as follows:

c1 =b1an−3 − b2an−1

b1

c2 =b1an−5 − b3an−1

b1

c3 =b1an−7 − b4an−1

b1

c4 = . . . . . . . . . . . . (5.16)

This is continued until all remaining c0is are zero. The remainder of rows down to s is formedin a similar fashion. Each of the last two rows contain only one non-zero term.

Having formed the array, we can determine the number of roots in the r.h.p. from the RouthHurwitz Criterion:

The number of roots of the characteristic equation with positive real parts is equal to thenumber of changes of sign of the coefficients in the first column. Hence, if all the terms ofthe column have the same sign, the system is stable.

Example 5.1

Consider the following polynomial

D1(s) = s5 + s4 + 3s3 +9s2 + 16s + 10 (5.17)

58

Page 59: Linear Transforms

We form the Routh Hurwitz array

s^5    1                                  3                            16
s^4    1                                  9                            10
s^3    [(1)(3) − (1)(9)]/1 = −6           [(1)(16) − (1)(10)]/1 = +6
s^2    [(−6)(9) − (1)(6)]/(−6) = 10       [(−6)(10) − 0]/(−6) = 10
s^1    [(10)(6) − (10)(−6)]/10 = 12
s^0    10

There are two changes of sign in the first column, from +1 to −6 and from −6 to +10. Therefore, we conclude that there are two roots in the right half plane.
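The array of this example is easy to check by machine. The following Python sketch builds the Routh array for a general polynomial (without the special-case handling discussed below) and counts the sign changes in the first column; the helper name routh_array is ours, and the code is a minimal illustration rather than a general-purpose implementation.

```python
import numpy as np

def routh_array(coeffs):
    """Build the Routh array for a polynomial given by its coefficients
    (highest power first). No zero-pivot special cases are handled."""
    n = len(coeffs) - 1                     # polynomial order
    cols = (n // 2) + 1
    rows = [np.zeros(cols) for _ in range(n + 1)]
    rows[0][: len(coeffs[0::2])] = coeffs[0::2]   # s^n row
    rows[1][: len(coeffs[1::2])] = coeffs[1::2]   # s^(n-1) row
    for i in range(2, n + 1):
        for j in range(cols - 1):
            rows[i][j] = (rows[i - 1][0] * rows[i - 2][j + 1]
                          - rows[i - 2][0] * rows[i - 1][j + 1]) / rows[i - 1][0]
    return np.array(rows)

# Example 5.1: D1(s) = s^5 + s^4 + 3s^3 + 9s^2 + 16s + 10
coeffs = [1, 1, 3, 9, 16, 10]
first_col = routh_array(coeffs)[:, 0]
print(first_col)                                   # 1, 1, -6, 10, 12, 10
print(np.sum(np.diff(np.sign(first_col)) != 0))    # 2 sign changes -> 2 RHP roots
print(np.roots(coeffs))                            # direct check with the actual roots
```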

In order to avoid labor in the calculations, the coefficients of any row may be multiplied or divided by a positive number without changing the sign of the first column.

Two special cases may occur:

1. A zero in the first column.

2. A row in which all coefficients are zero.

Special Case 1:

When the first term in a row is zero, but other terms in the row are not zero, one of the following two methods can be used to obviate the difficulty.

1. Replace the zero with a small positive number ε and proceed to compute the remaining terms in the array.

2. In the original polynomial, substitute 1/x for s and find the number of roots of x which have positive real parts. This is also the number of roots of s with positive real parts.

Example 5.2

We illustrate the above methods on the polynomial

D(s) = s^5 + 3s^4 + 4s^3 + 12s^2 + 35s + 25        (5.18)

Method 1:

The array is as follows


s^5    1                 4        35
s^4    3                 12       25
s^3    0 (→ ε)           80/3
s^2    12 − 80/ε         25
s^1    [(12 − 80/ε)(80/3) − 25ε] / [12 − 80/ε]
s^0    25

The first term in the s^2 row is negative as ε → 0, and the first term in the s^1 row becomes approximately +80/3. The signs of the first column are

s^5    +
s^4    +
s^3    +
s^2    −
s^1    +
s^0    +

There are two changes of sign, hence, there are two roots in the right half plane.

Method 2:

We replace s by 1/x in the original polynomial

D(1/x) = (1/x)^5 + 3(1/x)^4 + 4(1/x)^3 + 12(1/x)^2 + 35(1/x) + 25        (5.19)

       = (25x^5 + 35x^4 + 12x^3 + 4x^2 + 3x + 1) · (1/x^5)

and form the array

x^5    25         12        3
x^4    35         4         1
x^3    320/35     80/35              (multiply by 35/80)
x^3    4          1
x^2    −19/4      1
x^1    35/19
x^0    1


Since there are two changes of sign, there are two roots of x with positive real parts. Hence, there are two roots of s with positive real parts.

Special Case 2:

When all coefficients in any one row are zero, we make use of the following procedure:

1. The coefficient array is formed in the usual manner until an all-zero row appears.

2. The all-zero row is replaced by the coefficients obtained by differentiating an auxiliary equation which is formed from the previous row. The roots of the auxiliary equation, which are also roots of the original equation, occur in pairs and are the negatives of each other. The occurrence of an all-zero row usually means that two roots lie on the jω axis. This condition occurs, however, any time the polynomial has two equal roots with opposite signs or two pairs of complex conjugate roots.

Example 5.3

Consider the following polynomial:

F(s) = s^6 + 4s^5 + 11s^4 + 22s^3 + 30s^2 + 24s + 8        (5.20)

The coefficient array is written

s^6    1         11        30      8
s^5    4         22        24
s^4    11/2      24        8
s^3    50/11     200/11            (multiply by 11/50)
s^3    1         4
s^2    2         8                 (divide by 2)
s^2    1         4
s^1    0
       {2}  ......  replaced
s^0    4

The existence of the all-zero s^1 row indicates the presence of equal roots of opposite sign. The auxiliary equation is

s^2 + 4 = 0        (5.21)

Differentiating Eq. (5.21),

2s + 0 = 0        (5.22)


So the coefficient in the s^1 row is 2.

Since there are no changes of sign in the first column, there is no root which has a positive real part. The roots are

s = ±2j        (5.23)

These roots give rise to an undamped sinusoidal response.

Example 5.4

As a practical example of the use of the Routh Hurwitz method, consider the servomechanism of Fig. 5.3, whose block diagram is given in Fig. 5.4. The overall transfer function of the system is

C(s)/R(s) = A Ks Km / [τa τm s^3 + (τa + τm) s^2 + s + A Ks Km]        (5.24)

where
A    = the amplifier gain
Ks   = potentiometer sensitivity, volts/radian
Km   = motor constant, radians/volt/sec
τm   = motor time constant
τa   = amplifier time constant, sec
C(s) = Laplace transformed output position
R(s) = Laplace transformed input position

Suppose we wish to determine the effect of the amplifier time constant τa upon the system stability. To do this we shall find the relationship between the variables for marginal stability (i.e. two equal imaginary roots). The Routh Hurwitz array is established:

s^3    τa τm                                   1
s^2    (τa + τm)                               A Ks Km
s^1    [(τa + τm) − A Ks Km τa τm] / (τa + τm)

We know that if all the coefficients in a row are zero, we have two equal roots of opposite sign. We can obtain all zeros in a single row by setting the first term in the s^1 row equal to zero, with the result

1/A = [τa τm / (τa + τm)] Ks Km        (5.25)

Since Eq. (5.25) gives us the relation between A and τa, we need not continue further. Eq. (5.25) is plotted in Fig. 5.5, which shows the effect of τa upon system stability.

Note that the larger the amplifier time constant the smaller the amplifier gain (for stability).
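A small numeric sketch of the boundary in Eq. (5.25) is given below. The parameter values Ks, Km and τm are assumed purely for illustration; they are not taken from the figure.

```python
import numpy as np

Ks, Km, tau_m = 1.0, 2.0, 0.5            # assumed sensitivities / motor constant
for tau_a in (0.01, 0.05, 0.1, 0.5):
    # Gain at the stability boundary, from 1/A = Ks*Km*tau_a*tau_m/(tau_a + tau_m)
    A_max = (tau_a + tau_m) / (Ks * Km * tau_a * tau_m)
    print(tau_a, round(A_max, 2))        # A_max shrinks as tau_a grows
```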


5.4 ROOT LOCUS ANALYSIS

The root locus method is based upon knowledge of the location of the roots of the system (Fig. 5.6) with the feedback loop opened, H(s) = 0. In most cases, these are easily determined from the loop transfer function, which for the root locus method is written as KGH = KG(s)H(s), where K is the constant portion of the loop gain, G(s) is the forward loop transfer function, and H(s) is the feedback loop transfer function. The root locus traces the location of the poles of the closed-loop transfer function C(s)/R(s) (zeros of 1 + KG(s)H(s) = 0) in the s-plane, as K is varied from 0 to ∞. The loop transfer function KGH is written as a ratio of factored polynomials, e.g.,

KGH = K_1 (sτ_1 + 1)(sτ_3 + 1) / [s^n (sτ_2 + 1)(sτ_4 + 1)]        (5.26)

which may be rewritten in the form

KGH = K_1 τ_1 τ_3 (s + 1/τ_1)(s + 1/τ_3) / [τ_2 τ_4 s^n (s + 1/τ_2)(s + 1/τ_4)]        (5.27)

Each term in KGH is a complex number and may be written in polar form as

s + 1/τ_1 = A_1 e^{jφ_1}        (5.28)

Hence the entire KGH function is a complex quantity and is written in polar form as

KGH = K (A_1 e^{jφ_1})(A_3 e^{jφ_3}) / [(A_0^n e^{jnφ_0})(A_2 e^{jφ_2})(A_4 e^{jφ_4})]        (5.29)

    = [K A_1 A_3 / (A_0^n A_2 A_4)] e^{j[(φ_1 + φ_3) − (nφ_0 + φ_2 + φ_4)]}        (5.30)

    = A e^{jφ}        (5.31)

where K = K_1 τ_1 τ_3 / (τ_2 τ_4). The algebraic equation from which the roots are determined is

1 + KGH = 1 + A e^{jφ} = 0        (5.32)

which furnishes the two expressions

Angle of KGH:   arg KGH = φ = (2k + 1) 180°,   k = 0, 1, 2, 3, ...        (5.33)

and

Magnitude of KGH:   |KGH| = A = 1        (5.34)

Eqns. (5.33) and (5.34) are the result of setting 1 + KGH = 0. The locus is plotted by finding all points s which satisfy Eq. (5.33). With the locus plotted, Eq. (5.34) is used to determine the gain K at points along the locus.


R(s) = reference input
C(s) = output (controlled variable)
B(s) = feedback signal
E(s) = R(s) − C(s) = error signal
A(s) = actuating signal
G(s) = C(s)/A(s) = open-loop transfer function
C(s)/R(s) = G(s)/[1 + G(s)H(s)] = G(s)/[1 + KG(s)H(s)] = closed-loop transfer function
H(s) = feedback path transfer function
KG(s)H(s) = loop transfer function


The rules that are stated and demonstrated in this section enable the engineer to sketch the locus diagram rapidly. The following loop transfer function is used to demonstrate the method.

KGH = K(s + 10) / [s(s + 5)(s + 2 + j15)(s + 2 − j15)]        (5.35)

(The poles and zeros of Eq. (5.35) are shown in Fig. 5.7.)

Rule 1. Continuous curves which comprise the locus start at each pole of KGH, for K = 0. The branches of the locus, which are single valued functions of gain, terminate on the zeros of KGH for K = ∞. In Eq. (5.35) there exist four branches, starting from the poles located at s = 0, s = −5, and s = −2 ± j15 (see Fig. 5.11). Since there is only one finite zero at s = −10, three of the branches must terminate at infinity. The rule can be expanded to read that the locus starts at poles and terminates on either a finite zero or zeros located at s = ∞. The gain K is usually positive and varies from 0 to ∞.

Rule 2. The locus exists at any point along the real axis where an odd number of poles plus zeros of KGH are found to the right of the point.

In Fig. 5.7 the locus exists along the real axis from the origin to the pole at s = −5, and from s = −10 to minus infinity (see Fig. 5.11). Any complex poles and zeros are ignored while applying this rule.

Rule 3. For large values of gain, the branches of the locus are asymptotic to the angles

(2k + 1) 180° / (p − z),   k = 0, 1, 2, ...

where p is the number of poles and z is the number of zeros. If the number of poles exceeds the number of zeros, some of the branches will terminate on zeros that are located at infinity. Fig. 5.8 illustrates Rule 3 for the above example. Here p = 4, z = 1, and the asymptotic angles are computed as follows:

θ_k = (2k + 1) 180° / (p − z),   k = 0, 1, 2

θ_0 = 60°,   θ_1 = 180°,   θ_2 = −60°        (5.36)
(k = 0)      (k = 1)       (k = 2)

Rule 4. The starting point for the asymptotic lines is given by

CG = (Σ poles − Σ zeros) / (p − z)

which is termed the centre of gravity of the roots. The angular directions to which the loci are asymptotic are given by Rule 3. For the example,

Σ poles = (−5) + (−2 + j15) + (−2 − j15) + 0 = −9,   Σ zeros = −10,   p = 4,   z = 1


CG = [(−9) − (−10)] / 3 = 1/3        (5.37)

The asymptotic lines found in Eq. (5.36) start from the centre of gravity. These lines are placed on the s-plane as shown in Fig. 5.9.
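Rules 3 and 4 for this example are easy to check with a few lines of Python; the pole and zero locations below are those of Eq. (5.35).

```python
import numpy as np

poles = [0, -5, -2 + 15j, -2 - 15j]
zeros = [-10]
p, z = len(poles), len(zeros)
cg = (sum(poles) - sum(zeros)).real / (p - z)        # centre of gravity, Eq. (5.37)
angles = [(2 * k + 1) * 180 / (p - z) for k in range(p - z)]
print(round(cg, 3))    # 0.333 = 1/3
print(angles)          # [60.0, 180.0, 300.0]  (300 deg is the same as -60 deg)
```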

Since the complex poles and zeros always appear in conjugate pairs, i.e., at equal vertical distances from the real axis, the centre of gravity always lies on the real axis.

Rule 5. The breakaway point σ_b for real axis roots of the loop transfer function of Eq. (5.27) is found from the equation

n/σ_b + 1/(σ_b + 1/τ_2) + 1/(σ_b + 1/τ_4) = 1/(σ_b + 1/τ_1) + 1/(σ_b + 1/τ_3)        (5.38)

where σ_b + 1/τ_1 is the magnitude of the distance between the assumed breakaway point σ_b and the zero at −1/τ_1. In the example, σ_b = −2.954, −14.05.

Rule 6. Two roots leave (or strike) the axis at the breakaway point at an angle of ±90°. If n root loci are involved at a breakaway point, they will be 180/n degrees apart.

Rule 7. The angle of departure from the complex poles and the angle of arrival to complex zeros is found by using Eq. (5.33).

The initial angle of departure of the roots from complex poles is helpful in sketching root locus diagrams. The pole-zero configuration of Fig. 5.7 is redrawn in Fig. 5.10 for the purpose of finding the angle of departure from the complex pole at s = −2 + j15. The angles subtended by the poles and zeros to the pole in question are added (positive for zeros and negative for poles):

−(θ_1 + θ_2 + θ_3 + θ_5) + θ_4 = 180°        (5.39)

or

−(97.6° + 90° + 78.7° + θ_5) + 62° = 180°

The root locus procedure is based upon the location of the poles and zeros of KGH in the s-plane. These points do not move. They are merely the terminal points for the loci of the roots of 1 + KGH. If the locus crosses the imaginary axis for some gain K, the system becomes unstable at this value of K. The degree of stability is determined largely by the roots near the imaginary axis. The root locus sketch for the system of Eq. (5.35) is shown in Fig. 5.11.

The rules for sketching the locus are based on the angle criterion of Eq. (5.33), which is repeated here:

arg KGH = (2k + 1) 180°

After the locus is sketched and certain points located more accurately, the values of gain that occur at certain points along the locus must be found. The gain K at a point s_1 is evaluated from the magnitude criterion of Eq. (5.34):

K = 1 / |G(s_1)H(s_1)|


where |GH| is the product of the magnitudes of the distances from the zeros to the point at which the gain is to be evaluated, divided by the product of the distances to the poles. Thus

K = (product of pole distances) / (product of zero distances)        (5.40)

The magnitudes are measured directly from the root locus plot. If no zeros are present in the transfer function, the product of the zero distances is taken equal to unity. In the example, K ≈ 194 at the breakaway point s = −2.95. At the point where the locus crosses the imaginary axis, K ≈ 751. This value of K (on the verge of instability) can also be determined from the Routh Hurwitz criterion.
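The crossing gain can also be checked numerically by scanning K until the closed-loop roots reach the imaginary axis. The sketch below does this by bisection on the characteristic polynomial of Eq. (5.35); since the figure quoted above is read from a sketch, the computed value may differ somewhat from it.

```python
import numpy as np

def max_real_part(K):
    # 1 + K*GH = 0 with GH = (s+10) / [s(s+5)(s+2+j15)(s+2-j15)]
    den = np.polymul([1, 0], np.polymul([1, 5], [1, 4, 229]))   # s(s+5)(s^2+4s+229)
    char = np.polyadd(den, K * np.array([1, 10]))               # den + K(s+10)
    return max(np.roots(char).real)

lo, hi = 1.0, 2000.0            # stable at lo, unstable at hi
for _ in range(60):             # bisection for max real part = 0
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_real_part(mid) < 0 else (lo, mid)
print(round(0.5 * (lo + hi), 1))   # jw-axis crossing gain (graphical estimate: ~751)
```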


PROBLEMS

5.1. Consider the system equation ẏ(t) + a y(t) = f(t). Find Y(s) for f(t) = 0, y(0) = 1, and obtain y(t) by inverse Laplace transforming Y(s). Plot y(t) for i) a > 0, ii) a < 0. Locate the poles of Y(s) in the s-plane. Discuss stability.

5.2. By means of the Routh Hurwitz stability criterion, determine the stability of the systems which have the following characteristic equations:

a) s^3 + 20s^2 + 9s + 100 = 0

b) s^4 + 2s^3 + 6s^2 + 8s + 8 = 0

c) s^6 + 2s^5 + 8s^4 + 12s^3 + 20s^2 + 16s + 16 = 0

In each case, determine the number of roots in the right half plane, the left half plane, and on the jω axis.

5.3. The characteristic equations for certain systems are given below.

a) s^4 + 22s^3 + 2s + K = 0

b) s^3 + (K + 0.5)s^2 + 4Ks + 50 = 0

In each case, determine the values of K which correspond to a stable system.

5.4. For each of the following transfer functions locate the zeros and poles. Draw root locus sketches for the closed-loop system G(s)/[1 + G(s)]. Discuss the stability of each case.

a) G(s) = K / [s(2s + 1)]

b) G(s) = K(s + 1) / (s^2 + s + 10)

c) G(s) = K / [s(s + 1)(s^2 + s + 10)]

5.5. Draw the root locus of a unity-feedback system (H(s) = 1 in Fig. 5.6) with the open-loop transfer function

G(s) = K / [s(s + 1)(s + 3.5)(s + 3 + j2)(s + 3 − j2)]

5.6. Sketch the root-locus diagrams of the system in Fig. 5.6. Discuss stability. The quantities KG and H are defined below.

a) KG = K / (s^2 + 2s + 100),   H = 1/s

b) KG = K(s + 2) / [s(s + 20)],   H = (s + 4)/s^2


CHAPTER - VI

FOURIER SERIES

INTRODUCTION

In the early years of the 19th century the French mathematician J.B.J. Fourier was led to the discovery of certain trigonometric series during his research on heat conduction, which now bear his name. Since that time Fourier series, and the generalization to Fourier integrals and orthogonal series, have become an essential part of the background of scientists, engineers and mathematicians from both an applied and a theoretical point of view. This trigonometric series is now required in the treatment of many physical problems, such as in the theory of sound, heat conduction, electromagnetic waves, electric circuits and mechanical vibrations.

6.1 EULER-FOURIER FORMULAS

A function f(x) can be represented by a trigonometric series as follows:

f(x) = (1/2) a_0 + Σ (a_n cos nx + b_n sin nx)        (6.1)

Let us assume that f(x) is known on the interval (−π, π) and the coefficients a_n and b_n are to be found. It is convenient to assume that the series is uniformly convergent, so that it can be integrated term by term from −π to π. Since

∫_{−π}^{π} cos nx dx = ∫_{−π}^{π} sin nx dx = 0   for n = 1, 2, ...        (6.2)

the calculation yields

∫_{−π}^{π} f(x) dx = a_0 π        (6.3)

The coefficient a_n is determined similarly. Thus, if we multiply Eq. (6.1) by cos nx, there results

f(x) cos nx = (1/2) a_0 cos nx + ... + a_n cos² nx        (6.4)

where the missing terms are products of the form sin mx · cos nx, or of the form cos nx · cos mx with m ≠ n.

It is easily verified that for integral values of m and n

∫_{−π}^{π} sin mx cos nx dx = 0,   in general        (6.5)

and

∫_{−π}^{π} cos mx cos nx dx = 0,   when m ≠ ±n        (6.6)


and hence integration of Eq. (6.4) yields

∫_{−π}^{π} f(x) cos nx dx = a_n ∫_{−π}^{π} cos² nx dx = a_n π        (6.7)

so that

a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx        (6.8)

In Eq. (6.8), if n = 0,

a_0 = (1/π) ∫_{−π}^{π} f(x) dx        (6.9)

(That is the reason for writing the constant term as (1/2)a_0 rather than a_0.)

Similarly, multiplying Eq. (6.1) by sin nx and integrating yields

b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx        (6.10)

The formulas of Eq. (6.8) and Eq. (6.10) are called the Euler-Fourier formulas, and the series in Eq. (6.1) which results when a_n and b_n are determined is called the Fourier series of f(x).
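When the integrals are awkward by hand, the Euler-Fourier formulas can be evaluated numerically. The sketch below is a minimal example using scipy quadrature; the helper name fourier_coefficients is ours, not the text's, and the check against Example 6.1 assumes f(x) = x.

```python
import numpy as np
from scipy.integrate import quad

def fourier_coefficients(f, N):
    """Numerically evaluate the Euler-Fourier formulas (6.8)-(6.10) on (-pi, pi)."""
    a0 = quad(f, -np.pi, np.pi)[0] / np.pi
    a = [quad(lambda x, n=n: f(x) * np.cos(n * x), -np.pi, np.pi)[0] / np.pi
         for n in range(1, N + 1)]
    b = [quad(lambda x, n=n: f(x) * np.sin(n * x), -np.pi, np.pi)[0] / np.pi
         for n in range(1, N + 1)]
    return a0, a, b

# Check against Example 6.1, f(x) = x:  a_n = 0,  b_n = 2(-1)^(n+1)/n
a0, a, b = fourier_coefficients(lambda x: x, 5)
print(np.round(a, 4))   # all ~0
print(np.round(b, 4))   # [ 2. -1.  0.6667 -0.5  0.4 ]
```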

Example 6.1

Represent the function f(x) = x by Fourier series over the interval (−π, π)

a_n = (1/π) ∫_{−π}^{π} x cos nx dx = 0        (6.11)

b_n = (1/π) ∫_{−π}^{π} x sin nx dx        (6.12)
    = −(2/n) cos nπ
    = (2/n)(−1)^{n+1}

Substituting in Eq. (6.1),

f(x) = 2(sin x − sin 2x/2 + sin 3x/3 − ...)        (6.13)

6.1.1 PERIODIC FUNCTIONS

A function f(x) is said to be periodic if f(x + p) = f(x) for all values of x, where p is a nonzero constant. Any number p with this property is a period of f(x). For instance, sin nx has the periods 2π, −2π, 4π, ... Now each term of Eq. (6.13) has a period of 2π and hence the sum also has a period of 2π; the sum is equal to x on the interval −π < x < π and not on the whole interval −∞ < x < ∞. It remains to be seen what happens at the points x = ±π, ±3π, ..., where the sum of the series exhibits an abrupt jump from −π to π. Upon setting x = ±π, ±3π, ... in Eq. (6.13), we see that every term is zero. Hence the sum is zero.


The term a_n cos nx + b_n sin nx is sometimes called the nth harmonic, and a_0 is called the fundamental or d-c term of the Fourier series.

Example 6.2

Find the Fourier series of the function defined by

f(x) = 0   if −π ≤ x < 0
f(x) = π   if 0 ≤ x ≤ π        (6.14)

a_0 = (1/π) [∫_{−π}^{0} 0 dx + ∫_{0}^{π} π dx] = π        (6.15)

a_n = (1/π) ∫_{0}^{π} π cos nx dx = 0,   for n ≥ 1        (6.16)

b_n = (1/π) ∫_{0}^{π} π sin nx dx        (6.17)
    = (1/n)(1 − cos nπ)

The factor (1 − cos nπ) assumes the following values as n increases:

n             1   2   3   4   5   6
1 − cos nπ    2   0   2   0   2   0

Determining b_n by this table, we obtain the required Fourier series

f(x) = π/2 + 2(sin x/1 + sin 3x/3 + sin 5x/5 + ...)        (6.18)

The successive partial sums are

y_0 = π/2,   y_1 = π/2 + 2 sin x,   y_2 = π/2 + 2 sin x + (2 sin 3x)/3,   etc.        (6.19)
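The behaviour of these partial sums is easy to see numerically. The following sketch evaluates y_N(x) of Eq. (6.18) for increasing N and shows the sums approaching the step function of Eq. (6.14), while equalling π/2 at the jump points.

```python
import numpy as np

def partial_sum(x, N):
    """y_N(x) = pi/2 + 2*sum over odd n <= N of sin(nx)/n, from Eq. (6.18)."""
    s = np.pi / 2 * np.ones_like(x)
    for n in range(1, N + 1, 2):          # odd harmonics only
        s += 2 * np.sin(n * x) / n
    return s

x = np.linspace(-np.pi, np.pi, 7)
for N in (1, 5, 25, 101):
    print(N, np.round(partial_sum(x, N), 3))
# As N grows, the sums approach 0 on (-pi, 0) and pi on (0, pi),
# and equal pi/2 at the jump points x = 0 and x = +/-pi.
```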

Example 6.3

Find the Fourier series of the function defined by

f(x) = −π,   −π < x < 0
f(x) = π,    0 < x < π        (6.20)

Proceeding as before, the series

f(x) = π/2 + 2(sin 2x/2 + sin 4x/4 + sin 6x/6 + ...)        (6.21)

is obtained.


6.2 REMARKS ON CONVERGENCE

In Eq. (6.1) each term has a period of 2π, and hence if f(x) is to be represented by the sum, f(x) must also have a period of 2π. Whenever we consider a series such as Eq. (6.1), we shall suppose that f(x) is defined on the interval (−π, π) and that outside this interval f(x) is determined by the periodicity condition

f(x + 2π) = f(x)        (6.22)

The term simple discontinuity is used to describe the situation that arises when the function suffers a finite jump at a point x = x_0 (Fig. 6.2).

Analytically this means that the two limiting values of f(x) as x approaches x_0 from the right and left hand sides exist, but are unequal, that is lim f(x_0+) ≠ lim f(x_0−). A function f(x) is said to be bounded if |f(x)| < M holds for some constant M and for all x under consideration. For example, sin x is bounded, but the function f(x) = x^{−1} for x ≠ 0 (f(0) = ∞) is not, even though the latter is well defined for every value of x. It can be shown that if a bounded function has a finite number of maxima and minima and only a finite number of discontinuities, all its discontinuities are simple. That is, f(x+) and f(x−) exist at every value of x. The function sin(1/x) has infinitely many maxima near x = 0 and the discontinuity at x = 0 is not simple. The function defined by

f(x) = x² sin(1/x),   x ≠ 0,   f(0) = 0

also has infinitely many maxima near x = 0, although it is continuous and differentiable for every value of x. The behaviour of these two functions is illustrated graphically in Fig. 6.3 and Fig. 6.4.

6.3 DIRICHLET’S THEOREM

Suppose f(x) is defined on the interval (−π, π), is bounded, has only a finite number of maxima and minima and only a finite number of discontinuities. Let f(x) be defined for other values of x by the periodicity condition f(x + 2π) = f(x). Then the Fourier series of f(x) converges to

(1/2)[f(x+) + f(x−)]        (6.23)

at every value of x, and hence it converges to f(x) at points where f(x) is continuous.

The conditions imposed on f(x) are called Dirichlet’s conditions, after the mathematician Dirichlet who discovered the theorem.

Example 6.4

Consider the Fourier series of a periodic function defined by

f(x) = −π   for −π < x < 0
f(x) = x    for 0 < x < π        (6.24)


a_n = (1/π) [∫_{−π}^{0} (−π) cos nx dx + ∫_{0}^{π} x cos nx dx] = (1/π) [(cos nπ − 1)/n²]        (6.25)

For n = 0, a_0 = −π/2. Similarly,

b_n = (1/π) [∫_{−π}^{0} (−π) sin nx dx + ∫_{0}^{π} x sin nx dx] = (1/n)[1 − 2 cos nπ]        (6.26)

Therefore, the Fourier series is

f(x) = −π/4 − (2/π) cos x − (2/π)(cos 3x/3²) − (2/π)(cos 5x/5²) − ...
       + 3 sin x − sin 2x/2 + 3 sin 3x/3 − sin 4x/4 + 3 sin 5x/5 − ...        (6.27)

By Dirichlet’s theorem, equality holds at all points of continuity, since f(x) has been defined to be periodic. At the points of discontinuity x = 0 and x = π, the series converges to

[f(0+) + f(0−)]/2 = −π/2,   and   [f(π+) + f(π−)]/2 = 0        (6.28)

respectively. Either condition leads to the interesting expansion

π²/8 = 1/1² + 1/3² + 1/5² + 1/7² + ...        (6.29)

as is seen by making substitution in Eq. (6.27).

6.4 EVEN AND ODD FUNCTIONS

For many functions, the Fourier sine and cosine coefficients can be determined by inspection. A function f(x) is said to be even if

f(−x) = f(x) (6.30)

and the function f (x) is odd if

f(−x) = −f(x) (6.31)

For example, cos x and x2 are even and x and sin x are odd.

76

Page 77: Linear Transforms

Also,

∫_{−α}^{α} f(x) dx = 2 ∫_{0}^{α} f(x) dx   if f(x) is even        (6.32)

and

∫_{−α}^{α} f(x) dx = 0   if f(x) is odd        (6.33)

Products of even and odd functions obey the rules

(even) (even) = even

(even) (odd) = odd

(odd) (odd) = even

The product of sin nx and cos mx is odd, and

∫_{−α}^{α} sin nx cos mx dx = 0        (6.34)

Theorem 6.1: If f(x), defined in the interval −π < x < π, is even, the Fourier series has cosine terms only and the coefficients are given by

a_n = (2/π) ∫_{0}^{π} f(x) cos nx dx,   b_n = 0        (6.35)

If f(x) is odd, the series has sine terms only and the coefficients are given by

b_n = (2/π) ∫_{0}^{π} f(x) sin nx dx,   a_n = 0        (6.36)

In order to see this, let f(x) be even. Then f(x) cos nx is the product of even functions. Therefore,

a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx        (6.37)
    = (2/π) ∫_{0}^{π} f(x) cos nx dx

On the other hand, f(x) sin nx is an odd function, so that

b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx = 0        (6.38)

Example 6.5

Consider the function in Fig.6.5 where, f(x) = x, −π < x < π.


Since the function is odd, the Fourier series reduces to a sine series

b_n = (2/π) ∫_{0}^{π} x sin nx dx        (6.39)

    = (2/π) [−x cos nx/n |₀^π + (1/n) ∫_{0}^{π} cos nx dx]        (6.40)

    = (2/π) [−x cos nx/n |₀^π + sin nx/n² |₀^π] = (2/n)(−1)^{n+1}        (6.41)

x = 2(sin x − sin 2x/2 + sin 3x/3 − sin 4x/4 + ...),   −π < x < π        (6.42)

Example 6.6

Consider the function in Fig. 6.6 and write its Fourier series.

f(x) = |x|   for −π ≤ x ≤ π

The function is even, hence

a_n = (1/π) ∫_{−π}^{π} |x| cos nx dx        (6.43)

    = (2/π) ∫_{0}^{π} x cos nx dx        (6.44)

a_0 = (2/π) ∫_{0}^{π} x dx = π        (6.45)

a_n = (2/π) ∫_{0}^{π} x cos nx dx
    = (2/π) [x sin nx/n |₀^π − ∫_{0}^{π} (sin nx/n) dx]        (6.46)
    = (2/(n²π)) [(−1)^n − 1]

|x| = π/2 − (4/π) [cos x/1² + cos 3x/3² + cos 5x/5² + ...],   −π ≤ x < π        (6.47)

Since |x| = x for x ≥ 0, the series in Eqs. (6.42) and (6.47) converge to the same function x when 0 ≤ x < π. The first expansion, Eq. (6.42), is called the Fourier sine series for x, and Eq. (6.47) is the Fourier cosine series. Any function f(x) defined in (0, π) which satisfies the Dirichlet conditions can be expanded in a sine series and in a cosine series on 0 < x < π. To obtain the sine series, we extend f(x) over the interval −π < x < 0 in such a way that the extended function is odd.

The Fourier series for f(x) consists of sine terms only since f(x) is odd.


Example 6.7

Obtain a cosine series and also a sine series for sin x

For the cosine series

a_n = (2/π) ∫_{0}^{π} sin x cos nx dx        (6.48)

    = 2(1 + cos nπ) / [π(1 − n²)],   n ≠ 1        (6.49)

For n = 1 the result of integration is zero, hence

sin x = 2/π − (4/π) [cos 2x/(2² − 1) + cos 4x/(4² − 1) + cos 6x/(6² − 1) + ...]

when 0 < x < π. Since the sum of the series is an even function, it converges to |sin x| rather than sin x when −π < x < 0.

To obtain a sine series, a_n = 0 and

b_n = (2/π) ∫_{0}^{π} sin x sin nx dx = 0,   n ≥ 2
                                     = 1,   n = 1        (6.50)

Hence the Fourier sine series for sin x is sin x. That is not just a coincidence, as shown by the following.

6.4.1 UNIQUENESS THEOREM:

If two trigonometric series of the form of Equation (6.1) converge to the same sum for all values of x, then corresponding coefficients are equal.

6.5 EXTENSION OF INTERVAL

The methods developed up to this point restrict the interval of expansion to (−π, π). In many problems it is desired to develop f(x) in a Fourier series that will be valid over a wider interval. By letting the length of the interval increase indefinitely, one may get an expansion valid for all x.

To obtain an expansion valid on the interval (−T, T), change the variable from x to z, where x = Tz/π.

If f(x) satisfies the Dirichlet conditions on (−T, T), the function f(Tz/π) can be developed in a Fourier series in z:

f(Tz/π) = a_0/2 + Σ_{n=1}^{∞} a_n cos nz + Σ_{n=1}^{∞} b_n sin nz        (6.51)

for −π ≤ z < π. Since z = πx/T, the series in Eq. (6.51) becomes


f(x) = a_0/2 + Σ_{n=1}^{∞} a_n cos(nπx/T) + Σ_{n=1}^{∞} b_n sin(nπx/T)        (6.52)

By applying Eq. (6.8) to Eq. (6.51)

a_n = (1/π) ∫_{−π}^{π} f(Tz/π) cos nz dz

    = (1/T) ∫_{−T}^{T} f(x) cos(nπx/T) dx        (6.53)

and

b_n = (1/T) ∫_{−T}^{T} f(x) sin(nπx/T) dx        (6.54)

Example 6.8

Let f(x) = 0 for −2 < x < 0
    f(x) = 1 for 0 < x < 2        (6.55)

a_0 = (1/2) [∫_{−2}^{0} 0 dx + ∫_{0}^{2} 1 dx] = 1        (6.56)

a_n = (1/2) [∫_{−2}^{0} 0 · cos(nπx/2) dx + ∫_{0}^{2} 1 · cos(nπx/2) dx]        (6.57)

    = (1/nπ) sin(nπx/2) |₀² = 0

b_n = (1/2) [∫_{−2}^{0} 0 · sin(nπx/2) dx + ∫_{0}^{2} 1 · sin(nπx/2) dx]        (6.58)

    = (1/nπ)(1 − cos nπ)

f(x) = 1/2 + (2/π) [sin(πx/2) + (1/3) sin(3πx/2) + (1/5) sin(5πx/2) + ...]        (6.59)

Subject to the Dirichlet conditions, the function can be chosen arbitrarily on the interval (−T, T), and it is natural to enquire if a representation for an arbitrary function on (−∞, ∞) might be obtained by letting T → ∞. We shall see that such a representation is possible. The process leads to the Fourier Integral Theorem, which has many practical applications. Assume that f(x) satisfies the Dirichlet conditions in every interval (−T, T), no matter how large, and that the integral


M = ∫_{−∞}^{∞} |f(x)| dx        (6.60)

converges. As we have just seen f(x) is given by

f(x) = a_0/2 + Σ_{n=1}^{∞} a_n cos(nπx/T) + Σ_{n=1}^{∞} b_n sin(nπx/T)        (6.61)

where

a_n = (1/T) ∫_{−T}^{T} f(t) cos(nπt/T) dt,   b_n = (1/T) ∫_{−T}^{T} f(t) sin(nπt/T) dt        (6.62)

Substituting these values of coefficients in Eq. (6.61)

f(x) = (1/2T) ∫_{−T}^{T} f(t) dt + (1/T) Σ_{n=1}^{∞} ∫_{−T}^{T} f(t) cos[nπ(t − x)/T] dt        (6.63)

since

cos(nπt/T) cos(nπx/T) + sin(nπt/T) sin(nπx/T) = cos[nπ(t − x)/T]        (6.64)

Moreover, ∫_{−∞}^{∞} |f(x)| dx is assumed to be convergent, so

|(1/2T) ∫_{−T}^{T} f(t) dt| ≤ (1/2T) ∫_{−T}^{T} |f(t)| dt ≤ M/(2T)        (6.65)

which obviously tends to zero as T is allowed to increase indefinitely. Also, if the interval (−T, T) is made large enough, the quantity π/T, which appears in the integrands of the sum, can be made as small as desired. Therefore, the sum in Eq. (6.63) can be written as

(1/π) Σ_{n=1}^{∞} [∫_{−T}^{T} f(t) cos n∆ω(t − x) dt] ∆ω        (6.66)

where ∆ω = π/T is very small.

The sum suggests the definition of the definite integral of the function

F(ω) = ∫_{−T}^{T} f(t) cos ω(t − x) dt

in which the values of the function F(ω) are calculated at the points n∆ω. For large values of T,


∫_{−T}^{T} f(t) cos ω(t − x) dt        (6.67)

differs little from

∫_{−∞}^{∞} f(t) cos ω(t − x) dt        (6.68)

and it appears plausible that as T increases indefinitely, the sum will approach the limit

(1/π) ∫_{0}^{∞} dω ∫_{−∞}^{∞} f(t) cos ω(t − x) dt        (6.69)

If such is the case, then Eq. (6.63) can be written as

f(x) = (1/π) ∫_{0}^{∞} dω ∫_{−∞}^{∞} f(t) cos ω(t − x) dt        (6.70)

The foregoing discussion is heuristic and cannot be regarded as a rigorous proof. However, the validity of the formula can be established rigorously if the function f(x) satisfies the above conditions. This formula assumes a simpler form if f(x) is an even or an odd function. Expanding the integrand of the integral:

(1/π) ∫_{0}^{∞} [∫_{−∞}^{∞} f(t) cos ωt cos ωx dt + ∫_{−∞}^{∞} f(t) sin ωt sin ωx dt] dω        (6.71)

If f(t) is odd, then f(t) cos ωt is an odd function. Similarly, f(t) sin ωt is even. So that

f(x) = (2/π) ∫_{0}^{∞} ∫_{0}^{∞} f(t) sin ωt sin ωx dt dω        (6.72)

when f(x) is odd. A similar argument shows that if f(x) is even, then

f(x) = (2/π) ∫_{0}^{∞} ∫_{0}^{∞} f(t) cos ωt cos ωx dt dω        (6.73)

If f(x) is defined in (0, ∞), then either of the above integrals may be used.

Since the Fourier series converges to (1/2)[f(x+) + f(x−)] at points of discontinuity, the Fourier integral also does. In particular, for an odd function the integral converges to zero at x = 0; this fact is verified by setting x = 0 in

f(x) = (2/π) ∫_{0}^{∞} ∫_{0}^{∞} f(t) sin ωt sin ωx dt dω


Example 6.9

By

f(x) = (2/π) ∫_{0}^{∞} ∫_{0}^{∞} f(t) cos ωt cos ωx dt dω        (6.74)

obtain the formula

∫_{0}^{∞} [sin ω cos ωx / ω] dω = π/2   if 0 ≤ x < 1        (6.75)
                               = π/4   if x = 1
                               = 0     if x > 1

We choose f(x) = 1 for 0 ≤ x < 1 and f(x) = 0 for x > 1. Then

∫_{0}^{1} f(t) cos ωt dt = ∫_{0}^{1} cos ωt dt = sin ω/ω,   ω ≠ 0        (6.76)

Substituting in Eq. (6.74),

∫_{0}^{∞} (sin ω/ω) cos ωx dω = (π/2) f(x)        (6.77)

Upon recalling the definition of f(x), we see that the desired result is obtained for 0 ≤ x < 1. The fact that the integral is π/4 when x = 1 follows from

1/2 = [f(1−) + f(1+)]/2        (6.78)

6.6 COMPLEX FOURIER SERIES - FOURIER TRANSFORM

The Fourier series

f(x) = a_0/2 + Σ_{n=1}^{∞} (a_n cos nx + b_n sin nx)

with

a_n = (1/π) ∫_{−π}^{π} f(t) cos nt dt,   b_n = (1/π) ∫_{−π}^{π} f(t) sin nt dt

can be written, with the aid of the Euler formula

e^{jµ} = cos µ + j sin µ        (6.79)


in an equivalent form, namely

f(x) = Σ_{n=−∞}^{∞} C_n e^{jnx}        (6.80)

The coefficients C_n are defined by the equation

C_n = (1/2π) ∫_{−π}^{π} f(t) e^{−jnt} dt        (6.81)

and the limit is interpreted by taking the sum from −n to +n and letting n → ∞. Thus, the index n runs through all positive and negative integral values including zero. This can be shown as below. If the series

f(x) = Σ_{n=−∞}^{∞} C_n e^{jnx}

is uniformly convergent, we can obtain the above formula for C_n. Replace x by t and the dummy index n by m, so that

f(t) = Σ_{m=−∞}^{∞} C_m e^{jmt}        (6.82)

Multiplying by e^{−jnt},

f(t) e^{−jnt} = Σ_{m=−∞}^{∞} C_m e^{j(m−n)t}        (6.83)

If we now integrate from −π to π, the terms with m ≠ n integrate to zero and the term with m = n gives 2πC_n, giving

C_n = (1/2π) ∫_{−π}^{π} f(t) e^{−jnt} dt        (6.84)

Example 6.10

Consider the function f (x) = eαx on (−π, π)

Hence,

2πC_n = ∫_{−π}^{π} e^{αt} e^{−jnt} dt        (6.85)

      = ∫_{−π}^{π} e^{(α−jn)t} dt        (6.86)


C_n = (−1)^n (e^{απ} − e^{−απ}) / [2π(α − jn)] = (sinh απ / π) · (−1)^n (α + jn) / (α² + n²)        (6.87)

Hence,

e^{αx} = (sinh πα / π) Σ_{n=−∞}^{∞} [(−1)^n / (α² + n²)] (α + jn) e^{jnx}        (6.88)

Example 6.11

Consider the rectangular pulse train shown in Fig. 6.7 and draw its amplitude spectra.

The pulse width is τ and the period is T . Therefore,

C_n = (1/T) ∫_{−τ/2}^{τ/2} A e^{−jnωt} dt,   ω = 2π/T

Since the pulse is an even function,

C_n = (2/T) ∫_{0}^{τ/2} A cos nωt dt

    = (2A/T) [sin nωt / (nω)] |₀^{τ/2} = (2A/(nωT)) sin(nωτ/2)

    = (Aτ/T) sinc(nfτ),   f = ω/(2π) = 1/T


where sinc x = sin(πx)/(πx).

Therefore, the Fourier series is

f(t) = (Aτ/T) Σ_{n=−∞}^{∞} sinc(nfτ) e^{jnωt}

The amplitude spectrum then is drawn below
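The line spectrum of the pulse train can be generated directly from the coefficient formula. In the sketch below the values of A, τ and T are illustrative assumptions, and numpy's sinc is the normalized sinc used above.

```python
import numpy as np

A, tau, T = 1.0, 1.0, 5.0                   # assumed pulse height, width and period
n = np.arange(-20, 21)
cn = (A * tau / T) * np.sinc(n * tau / T)   # np.sinc(x) = sin(pi x)/(pi x)
print(np.round(cn[20:26], 4))               # c0 ... c5; zeros fall at multiples of T/tau
# A stem plot of |cn| against n*f = n/T reproduces the sinc-shaped amplitude spectrum.
```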

Let us now write the Fourier Integral Theorem as

f(x) = lim_{A→∞} (1/2π) ∫_{−A}^{A} dω ∫_{−∞}^{∞} f(t) e^{jω(x−t)} dt        (6.89)

when f(x) satisfies the Dirichlet conditions. If we define

g(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−jωt} dt        (6.90)

then

f(t) = lim_{A→∞} (1/√(2π)) ∫_{−A}^{A} g(ω) e^{jωt} dω        (6.91)

The transform T defined by

T(f) = (1/√(2π)) ∫_{−∞}^{∞} e^{−jωt} f(t) dt        (6.92)

is called the Fourier transform. It is one of the most powerful tools in modern analysis.


Although the formulas of Eq. (6.91) and Eq. (6.92) are similar, the conditions on the functions f and g are quite different. A more symmetric theory can be based on a type of convergence known as mean convergence. Let g_A(t) be an integrable function of t on each finite interval for each value of the parameter A. It is said that g_A(t) converges in mean to g(t), and we write

g(t) = lim_{A→∞} g_A(t)        (6.93)

if it is true that

lim_{A→∞} ∫_{−∞}^{∞} |g(t) − g_A(t)|² dt = 0        (6.94)

As an illustration,

g(t) = lim_{A→∞} ∫_{−A}^{A} e^{−jωt} f(t) dt        (6.95)

means that Eq. (6.94) holds with g_A(t) replaced by the integral on the right of Eq. (6.95). One can write g(t) in Eq. (6.95) as an integral from −∞ to +∞, if it is stated that the equation holds in the sense of mean convergence.

6.6.1 PLANCHEREL’S THEOREM: Let f(t) and g(ω) be integrable on every finite interval, and suppose that

∫_{−∞}^{∞} |f(t)|² dt   or   ∫_{−∞}^{∞} |g(ω)|² dω        (6.96)

is finite. Then if either of the equations

g(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−jωt} dt,   f(t) = (1/√(2π)) ∫_{−∞}^{∞} g(ω) e^{jωt} dω        (6.97)

holds in the sense of mean convergence, so does the other, and the two integrals of Eq. (6.96) are equal. This is in the sense of the ordinary Riemann integral.

6.7 ORTHOGONAL FUNCTIONS

A sequence of functions θ_n(x) is said to be orthogonal on the interval (a, b) if

∫_a^b θ_m(x) θ_n(x) dx = 0   for m ≠ n
                       ≠ 0   for m = n        (6.98)

For example, the sequence

θ_1(x) = sin x,   θ_2(x) = sin 2x,   ...,   θ_n(x) = sin nx

is orthogonal on (0, π) because


∫_0^π θ_m(x) θ_n(x) dx = ∫_0^π sin mx sin nx dx = 0     for m ≠ n
                                                = π/2   for m = n        (6.99)

The sequence

1, sin x, cos x, sin 2x, cos 2x, ...

is orthogonal on (0, 2π), though not on (0, π).

The formula for the Fourier coefficients is especially simple if the integral has the value 1 for m = n. The functions θ_n(x) are then said to be normalized, and {θ_n(x)} is called an orthonormal set, if

∫_a^b [θ_n(x)]² dx = 1        (6.100)

In other words,

∫_a^b φ_m(x) φ_n(x) dx = 0   for m ≠ n        (6.101)
                       = 1   for m = n

For example, since

∫_0^{2π} 1 dx = 2π,   ∫_0^{2π} sin² nx dx = π,   ∫_0^{2π} cos² nx dx = π

for n ≥ 1, the orthonormal set is

(2π)^{−1/2},  π^{−1/2} sin x,  π^{−1/2} cos x,  ...,  π^{−1/2} sin nx,  π^{−1/2} cos nx

The product of two different functions in this set gives zero, but the square of each function gives 1 when integrated from 0 to 2π. Let {φ_n(x)} be an orthonormal set of functions on (a, b) and suppose that another function f(x) is to be expanded in the form

f(x) = c1φ1(x) + c2φ2(x) + . . . + cnφn(x) (6.102)

To determine the coefficient cn, we multiply by φn(x) getting

f (x)φn(x) = c1φ1(x)φn(x) + c2φ2(x)φn(x) + . . . + cn(φn(x))2 (6.103)

If we formally integrate from a to b, the cross-product terms disappear, and hence


∫_a^b f(x) φ_n(x) dx = c_n ∫_a^b [φ_n(x)]² dx = c_n        (6.104)

The term by term integration is justified when the series is uniformly convergent and the functions are continuous. The foregoing procedure shows that if f(x) has an expansion of the desired type, the coefficients c_n must be given by Eq. (6.104). Eq. (6.104) is called the Euler-Fourier formula, the coefficients c_n are called the Fourier coefficients of f(x) with respect to {φ_n(x)}, and the resulting series

f(x) = c1φ1(x) + c2φ2(x) + . . . + cnφn(x) (6.105)

is called the Fourier series with respect to {φn(x)}.

6.8 MEAN CONVERGENCE OF FOURIER SERIES

If we try to approximate a function f(x) by another function pn(x), the quantity

|f(x) − p_n(x)|   or   [f(x) − p_n(x)]²        (6.106)

gives a measure of the error in the approximation. The sequence p_n(x) converges to f(x) whenever the expression of Eq. (6.106) approaches zero as n → ∞.

These measures of error are appropriate for discussing convergence at any fixed point x. But it is often useful to have a measure of error which applies simultaneously to a whole interval of x values, a ≤ x ≤ b. Such a measure is easily found if we integrate Eq. (6.106) from a to b:

∫_a^b |f(x) − p_n(x)| dx   or   ∫_a^b [f(x) − p_n(x)]² dx        (6.107)

These expressions are called the mean error and the mean-square error respectively. If either expression of Eq. (6.107) approaches zero as n → ∞, the sequence p_n(x) is said to converge in mean to f(x), and the term mean convergence is used.

The terminology is appropriate because if the integrals of Eq. (6.107) are multiplied by 1/(b − a), the result is precisely the mean value of the corresponding expression of Eq. (6.106). Even though Eq. (6.107) involves an integration that is not present in Eq. (6.106), for Fourier series it is much easier to discuss the mean square error and the corresponding mean convergence than the ordinary convergence. In the following discussion, we use f and φ_n as abbreviations for f(x) and φ_n(x) respectively, and assume that f and φ_n are integrable on a < x < b. If the integrals are improper, the convergence of ∫_a^b f² dx and ∫_a^b φ_n² dx is required.

Let {φ_n(x)} be a set of orthonormal functions on a ≤ x ≤ b, so that as in the preceding section

∫_a^b φ_n(x) φ_m(x) dx = 0   for m ≠ n        (6.108)
                       = 1   for m = n


We seek to approximate f(x) by a linear combination of the φ_n(x), p_n(x) = a_1φ_1(x) + a_2φ_2(x) + ... + a_nφ_n(x), in such a way that the mean square error of Eq. (6.107) is a minimum:

E = ∫_a^b [f − (a_1φ_1 + a_2φ_2 + ... + a_nφ_n)]² dx = min        (6.109)

Upon expanding the term in brackets, we see that Eq. (6.109) yields.

E = ∫_a^b f² dx − 2 ∫_a^b (a_1φ_1 + a_2φ_2 + ... + a_nφ_n) f dx + ∫_a^b (a_1φ_1 + a_2φ_2 + ... + a_nφ_n)² dx        (6.110)

If the Fourier coefficients of f relative to φk are denoted by ck.

c_k = ∫_a^b φ_k f dx        (6.111)

The second integral in Eq. (6.110) is

∫_a^b (a_1φ_1 + a_2φ_2 + ... + a_nφ_n) f dx = a_1c_1 + a_2c_2 + ... + a_nc_n        (6.112)

The third integral in Eq. (6.110) can be written as

∫_a^b (a_1φ_1 + a_2φ_2 + ... + a_nφ_n)(a_1φ_1 + a_2φ_2 + ... + a_nφ_n) dx        (6.113)

= ∫_a^b (a_1²φ_1² + a_2²φ_2² + ... + a_n²φ_n²) dx        (6.114)

= a_1² + a_2² + ... + a_n²        (6.115)

where the second group of terms involves cross products φ_iφ_j with i ≠ j, and such terms integrate to zero. Hence Eq. (6.110) yields

E = ∫_a^b f² dx − 2 Σ_{k=1}^{n} a_k c_k + Σ_{k=1}^{n} a_k²        (6.116)

for the mean square error in the approximation. Inasmuch as

−2a_k c_k + a_k² = −c_k² + (a_k − c_k)²

The error E in Eq. (6.116) is also equal to

E = ∫_a^b f² dx − Σ_{k=1}^{n} c_k² + Σ_{k=1}^{n} (a_k − c_k)²        (6.117)


Theorem 6.2: If {φ_n(x)} is a set of orthonormal functions, the mean square error of Eq. (6.109) can be written in the form of Eq. (6.117), where the c_k are the Fourier coefficients of f relative to φ_k.

From the two expressions Eq. (6.116) and Eq. (6.117), we obtain a number of interesting and significant theorems. In the first place, the terms (a_k − c_k)² in Eq. (6.117) are positive unless a_k = c_k, in which case they are zero. Hence the choice of a_k that makes E a minimum is obviously a_k = c_k, and we have the following.

Corollary 1: The partial sum of the Fourier series

c_1φ_1 + c_2φ_2 + ... + c_nφ_n,   c_k = ∫_a^b f φ_k dx        (6.118)

gives a smaller mean square error ∫_a^b [f − (c_1φ_1 + ... + c_nφ_n)]² dx than is given by any other linear combination a_1φ_1 + a_2φ_2 + ... + a_nφ_n. Upon setting a_k = c_k in Eq. (6.117), we see that the minimum value of the error is

min E = ∫_a^b f² dx − Σ_{k=1}^{n} c_k²        (6.119)

Now, the expression of Eq. (6.109) shows that E ≥ 0, because the integrand in Eq. (6.109), being a square, is not negative. Since E ≥ 0 for all choices of a_k, it is clear that the minimum of E (which arises when a_k = c_k) is also greater than or equal to zero. Hence,

∫_a^b f² dx − Σ_{k=1}^{n} c_k² ≥ 0,   or   Σ_{k=1}^{n} c_k² ≤ ∫_a^b f² dx        (6.120)

Upon letting n→∞, we obtain by the principle of monotone convergence.

Corollary 2: If c_k = ∫_a^b f φ_k dx are the Fourier coefficients of f relative to the orthonormal set {φ_k}, then the series Σ_{k=1}^{∞} c_k² converges and satisfies the Bessel inequality

Σ_{k=1}^{∞} c_k² ≤ ∫_a^b [f(x)]² dx        (6.121)

Since the general term of a convergent series must approach zero, we deduce the following from Corollary 2.

Corollary 3: The Fourier coefficients c_n = ∫_a^b f φ_n dx tend to zero as n → ∞. For applications, it is important to know whether or not the mean square error approaches zero as n → ∞. Evidently, the error approaches zero for some choice of the a's only if the minimum error in Eq. (6.117) does so. Letting n → ∞ in Eq. (6.119), we get the Parseval equality


∫_a^b f² dx − Σ_{k=1}^{∞} c_k² = 0        (6.122)

as the condition for zero error.

Corollary 4: If f is approximated by the partial sum of its Fourier series, the mean square error approaches zero as n → ∞ if and only if the Bessel inequality becomes the Parseval equality

Σ_{k=1}^{∞} c_k² = ∫_a^b [f(x)]² dx        (6.123)

In other words, the Fourier series converges to f in the mean square sense if and only if Eq. (6.123) holds. If this happens for every choice of f, the set {φ_n(x)} is said to be closed. A closed set, then, is a set which can be used for mean square approximation of arbitrary functions. It can be shown that the set of trigonometric functions cos nx, sin nx is closed on 0 < x < 2π.
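As a numerical illustration of the Bessel inequality approaching the Parseval equality, the sketch below expands f(x) = x on (0, 2π) in the orthonormal trigonometric set given earlier and compares Σ c_k² with ∫ f² dx as more terms are kept. The choice of f is an assumption made only for the demonstration.

```python
import numpy as np
from scipy.integrate import quad

# f(x) = x on (0, 2*pi); orthonormal set (2*pi)^(-1/2), cos(nx)/sqrt(pi), sin(nx)/sqrt(pi)
f = lambda x: x
total = quad(lambda x: f(x) ** 2, 0, 2 * np.pi)[0]       # integral of f^2 = 8*pi^3/3

def c_squared_sum(N):
    s = quad(lambda x: f(x) / np.sqrt(2 * np.pi), 0, 2 * np.pi)[0] ** 2
    for n in range(1, N + 1):
        s += quad(lambda x: f(x) * np.cos(n * x) / np.sqrt(np.pi), 0, 2 * np.pi)[0] ** 2
        s += quad(lambda x: f(x) * np.sin(n * x) / np.sqrt(np.pi), 0, 2 * np.pi)[0] ** 2
    return s

for N in (1, 5, 50):
    print(N, round(c_squared_sum(N), 2), round(total, 2))
# Bessel: the partial sums never exceed the integral, and they approach it as N grows.
```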

A set {φ_n(x)} is said to be complete if there is no nontrivial function f(x) which is orthogonal to all the φ_n's. That is, the set is complete if

c_k = ∫_a^b f(x) φ_k(x) dx = 0   for k = 1, 2, 3, ...        (6.124)

implies that

∫_a^b [f(x)]² dx = 0        (6.125)

Now, whenever Eq. (6.123) holds, Eq. (6.124) yields Eq. (6.125) at once, hence we have,

Corollary 5: Every closed set is complete. The converse is also true. This converse,however requires a more general integral than the Reimann. The generalized integral knownis Lebesgue integral. The notions of closure and completeness have simple analogs in theelementary theory of vectors. Thus a set of vectors v1, v2, v3 is said to be closed if everyvector V can be written in the form.

V = c1v1 + c2v2 + c3v3 (6.126)

for some choice of constants ck . The set of vectors v1, v2, v3 is said to be complete if there isno nontrivial vector orthogonal to all of them; that is the set is complete if the condition.

v.vk = 0 k = 1, 2, 3 . . . (6.127)

In this case, it is obvious that closure and completeness are equivalent, for both conditionssimply state that the three vectors v1, v2, v3 are not coplanar.

6.9 POWER IN A SIGNAL

Consider two voltage sources connected in series across a 1 ohm resistance. Let one source have an emf of 10 cos 2πt and the other an emf of 5 cos 20t. These two voltages do not make a periodic function.


If the power dissipated in the resistance at any moment is to be calculated, we have

p(t) = v²(t)/R = v²(t) = (10 cos 2πt + 5 cos 20t)²        (6.128)

     = 100 cos² 2πt + 100 cos 2πt cos 20t + 25 cos² 20t

     = 50 + 12.5 + 50 cos 4πt + 12.5 cos 40t + 50 cos(2π + 20)t + 50 cos(2π − 20)t        (6.129)

From Eq. (6.129) it is clear that 50 is the average power that would be dissipated in the load if the 1 Hz source acted alone, and 12.5 is the average power if the 10/π Hz source acted alone. The total average power when both sources are present is the sum of the averages for both sources acting alone. The instantaneous power is given by Eq. (6.129).

6.9.1 AVERAGE POWER IN A SIGNAL

Applying the Parseval equality, the average power is

P_av = Σ_{n=−∞}^{∞} c_n c_{−n} = Σ_{n=−∞}^{∞} |c_n|²        (6.130)

and the root mean square value of f(t) is

r.m.s. = √( Σ_{n=−∞}^{∞} |c_n|² )        (6.131)

The expressions of Eq. (6.130) and Eq. (6.131) are for a two-sided spectrum.

For the positive frequency line spectrum

P = c_0² + Σ_{n=1}^{∞} (1/2)|2c_n|² = c_0² + 2 Σ_{n=1}^{∞} |c_n|²        (6.132)

The Fourier series for the rectangular pulse train in Example 6.11 was

f(t) = (Aτ/T) Σ_{n=−∞}^{∞} sinc(nfτ) e^{jnωt}

where

c_n = (Aτ/T) sinc(nfτ),   c_0 = Aτ/T

The ratio τ/T is called the duty cycle d.


Thus, c_n = dA sinc(nd).

Then the average power is

P_av = Σ_{n=−∞}^{∞} (dA)² sinc²(nd)
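This two-sided sum can be checked against the time-domain average power of the pulse train, which is A²d. The amplitude and duty cycle in the sketch below are assumed for illustration.

```python
import numpy as np

A, d = 1.0, 0.25                      # assumed amplitude and duty cycle d = tau/T
n = np.arange(-5000, 5001)
P_spectral = np.sum((d * A) ** 2 * np.sinc(n * d) ** 2)   # truncated two-sided sum
P_time = A ** 2 * d                                       # time-domain average power
print(round(P_spectral, 5), P_time)                       # the two agree closely (Parseval)
```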

Example 6.12

Consider the train of sinusoidal pulses in Fig. 6.9. Draw its amplitude spectrum and write the expression for the average power.

c_n = (1/T) ∫_{−τ/2}^{τ/2} A cos ω_c t cos nωt dt

    = (Aτ/2T) [sinc((f_c − nf)τ) + sinc((f_c + nf)τ)]

The average power P_av is

P_av = Σ_{n=−∞}^{∞} c_n²

and the amplitude spectrum is given in Fig. 6.10.

Example 6.13

The triangular wave is shown in Fig. 6.11 along with its Fourier series. Draw the power spectrum of the function.

The Fourier series is

f(t) = (1/T) Σ_{n=−∞}^{∞} c_n e^{j2πnt/T}

where we define c_n as

c_n = ∫_0^T f(t) e^{−j2πnt/T} dt

Here c_0 = 0, since the wave has no average value, and

c_n = [sin(πn/2) / (πn/2)]²   for n ≠ 0

Since all the c_k are real and T = 2, the series can be written in terms of cosines:

f (t) = c1 cos πt + c2 cos 2πt+ c3 cos 3πt+ . . .


Moreover, sin²(nπ/2) is zero when n is even and unity when n is odd. c_n can therefore be written as

c_n = 4/(πn)²   for n = 1, 3, 5, ...

Then the sinusoidal form of the Fourier series is

f(t) = (4/π²) [cos πt + (1/9) cos 3πt + (1/25) cos 5πt + (1/49) cos 7πt + ...]

The power spectrum (also called line spectrum) is obtained by

P_avn = |c_n|²/T² = 16 / [(nπ)⁴ (2)²] = 4/(π⁴n⁴) watts

The line spectrum is plotted in Fig. 6.12.

6.10 PERIODIC SIGNAL AND LINEAR SYSTEMS

If the input to a stable linear network or system is periodic, the steady state output signal is also periodic with the same period. That this is true can be easily demonstrated by the use of transfer functions.

If the system is linear, the transform of the output signal is related to the transform of the input signal by the equation

F_0(s) = H(s) F_in(s)        (6.133)

where H(s) is the transfer function of the system. Strictly speaking, f_in(t) cannot be periodic if it is to have a Laplace transform, but we can define

f_in(t) = (u(t)/T) Σ_{n=−∞}^{∞} c_n e^{j2πnft}

c_n = ∫_0^T f(t) e^{−j2πnt/T} dt

as the input signal. Its transform is

F_in(s) = (1/T) Σ_{n=−∞}^{∞} c_n / (s − j2πnf)        (6.134)

With H(s) the transfer function of a linear system,

F_0(s) = (1/T) Σ_{n=−∞}^{∞} c_n H(s) / (s − j2πnf)        (6.135)

F_0(s) will have poles in the left half plane because of the poles of H(s). These will lead to transient terms. If we wish only the inverse transform of the steady state, we need only the inverse transform of the jω-axis poles. If f_0(t) is the periodic portion only of the inverse transform, then

f_0(t) = (1/T) Σ_{n=−∞}^{∞} c_n H(j2πnf) e^{j2πnft}        (6.136)

and the only effect the system has on the series is to alter the amount of each frequency by the transfer function evaluated at that frequency.

The power spectrum of the output signal is given by

P_avn = |c_n|² |H(j2πnf)|² / T²        (6.137)

Eqs. (6.136) and (6.137) represent the principal reasons for the use of Fourier series in signal analysis. The steady state effect of a filter on a signal can be seen if we compare the power spectrum of the signal with the frequency response of the filter. Multiplication of the two will produce the spectrum of the output signal.

Example 6.14

The rectangular wave of Fig. 6.13 is the input signal (the current i_s) of the RLC tank circuit in Fig. 6.14. Find the Fourier series of the input and output signals and their spectra.

The Laplace transform of one period of input signal is

P(s) = (e^{sT/4} − e^{−sT/4}) / s

Setting s = j2πn/T, we have

c_n = (e^{jπn/2} − e^{−jπn/2}) / (j2πn/T) = (T/nπ) sin(nπ/2)

If we divide by T, the Fourier series of the input signal is

i_s(t) = Σ_{n=−∞}^{∞} [sin(nπ/2)/(nπ)] e^{j2πnt/T}

Note that for small angles sin x ≈ x, so as n → 0, sin(nπ/2) ≈ nπ/2, making the d-c term (n = 0) equal to 1/2.

The transfer function of the network is

H(s) = (s/C) / [s² + s/(RC) + 1/(LC)]

Q_0 = R√(C/L) = 1000 √(1.77×10⁻⁶ / 17.7×10⁻³) = 10

so the roots of the denominator are very nearly

s = (1/√(LC)) (−1/20 ± j) = 5.65×10³ (−0.05 ± j)

If the frequency response is plotted against frequency, a sharp resonance will be seen at f = 900 Hz. At this frequency |H| = R = 1000. Since the fundamental frequency of the periodic wave is 1/T = 100 Hz, the response of the network will be large at the ninth harmonic. Substitution of even harmonics (n even) will yield zero, so the input signal consists only of the frequencies 100, 300, 500, 700, 900, 1100, ... Hz.

If the numerator and denominator of H(s) are multiplied by RC/s, H(s) can be written

H(s) = R / [1 + (RC/s)(s² + 1/(LC))]

and with

s = j2πn/T = j200πn

H(j200πn) = 1000 / [1 + (j10/9)(n − 81/n)]

Substituting n = 1, 3, 5, 7, 9, 11 and computing only the magnitude of H, we obtain 11.25, 37.5, 80.5, 193, 1000 and 240 respectively. The circuit is not an ideal band pass filter, because frequencies other than 900 Hz get through, but it certainly shows a preference for 900 Hz.


The Fourier series of the output is

f_0(t) = v_0(t) = Σ_{n=−∞}^{∞} {1000 sin(nπ/2) / [1 + (j10/9)(n − 81/n)]} · (1/nπ) · e^{j200πnt}

The power spectrum of the input signal is simply a set of lines with height 1/4 at f = 0 and 1/(πn)² at the odd harmonics. This is shown in Fig. 6.15. The dashed line in Fig. 6.15 is the magnitude of H(s) squared. Fig. 6.16 shows the power spectrum of the output signal. Note that it has no d-c term and that the line at n = 1 is 1/π² times the square of the magnitude of H at n = 1; that is,

(11.25)²/π² = 12.8

Calculating the others in a similar way, we have P_av3 = 15.8, P_av5 = 26.3, P_av7 = 76.7, P_av9 = 1250, P_av11 = 48.1. If the sum of the powers in these harmonics is calculated, the total is 1429.7 watts per ohm (the actual power is one thousandth of this, since R = 1000 and P = v²/R), and so about 87.5 percent of the power is in the ninth harmonic. Actually, something less than this value is in the ninth harmonic, since the power in the 13th, 15th, ... harmonics would have to be calculated to obtain the total output power. Since the output is a voltage and the ninth harmonic is dominant, the output voltage should be nearly a 900 Hz sinusoid with peak amplitude 2×1000/(9π) = 70.7 volts.

The RLC circuit is approximately a band pass filter, and the assumption that only the ninth harmonic is passed leads to the result that

v_0(t) = 70.7 cos 1800πt

in the steady state.
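The magnitudes of H and the output power lines quoted in this example are easy to reproduce. The sketch below uses the exact H(s) with the component values of the example rather than the approximate form, so the numbers agree with the example to within the rounding used there.

```python
import numpy as np

R, L, C = 1000.0, 17.7e-3, 1.77e-6           # values used in the example
f0 = 100.0                                   # fundamental frequency, Hz

def H(s):
    return (s / C) / (s ** 2 + s / (R * C) + 1 / (L * C))

for n in (1, 3, 5, 7, 9, 11):
    Hn = H(1j * 2 * np.pi * n * f0)
    Pn = abs(Hn) ** 2 / (np.pi * n) ** 2     # output power line, as in Fig. 6.16
    print(n, round(abs(Hn), 1), round(Pn, 1))
# |H| comes out near 11.3, 37.5, 80, 194, 1000 and 239, and the ninth-harmonic
# line dominates the output power spectrum, as claimed in the example.
```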


PROBLEMS

6.1 Evaluate ∫_{−π}^{π} cos mx cos nx dx for integral m and n by use of the identity

2 cos A cos B = cos(A + B) + cos(A − B)

6.2 Find the Fourier series for f(x) if

f(x) = π,   for −π < x < π/2
     = 0,   for π/2 < x < π

6.3 Find the Fourier series for the function defined by

f(x) = 0,       for −π < x < 0
     = sin x,   for 0 < x < π

6.4 If

f(x) = −x,   for −π < x < 0
     = 0,    for 0 < x < π

show that the corresponding Fourier series is

π/4 − (2/π) Σ_{n=1}^{∞} cos(2n − 1)x / (2n − 1)² + Σ_{n=1}^{∞} (−1)^n sin nx / n

6.5 Classify the following functions as even, odd or neither

x²,   x sin x,   x³ cos nx,   x⁴,   e^x,   x² sin² x

6.6 Show that if

f(x) = x,       for 0 < x < π/2
     = π − x,   for π/2 < x < π

then

f(x) = π/4 − (2/π) [cos 2x/1² + cos 6x/3² + cos 10x/5² + ...]
6.7 If f(x) is an odd function on (−T, T), show that the Fourier series takes the form

f(x) = Σ_{n=1}^{∞} b_n sin(nπx/T),   b_n = (2/T) ∫_0^T f(x) sin(nπx/T) dx

6.8 Find the Fourier series for the following function:

f(x) = 8,    for 0 < x < 2
     = −8,   for 2 < x < 4

6.9 Write down the Fourier series for the waveforms shown in Figs. 6.17 (a)-(b)


6.10 For a one port network, it is given that

i = 10 cos t + 5 cos(2t − 45°)

v = 2 cos(t + 45°) + cos(2t + 45°) + cos(3t − 60°)

a) What is the average power delivered to the network?

b) Plot the power spectrum.

6.11 By using the following equations

e^{jµ} = cos µ + j sin µ

c_n = (1/2π) ∫_{−π}^{π} f(t) e^{−jnt} dt

show that 2c_n = a_n − jb_n, 2c_0 = a_0, 2c_{−n} = a_n + jb_n.

6.12 Determine whether f(t) is periodic. If it is, find its period and its fundamental frequency. Whether it is periodic or not, write the function in exponential form and list all frequencies contained within the function.

f(t) = 5 + 7 cos(20πt + 35°) + 2 cos(200πt − 30°)

6.13 The current source in the circuit shown in Fig. 6.18 is a square wave whose Fourier series is

i_s(t) = Σ_{n=−∞}^{∞} c_n e^{j2πnt/T}

with c_n = 0 for n even, and

c_n = A sin(nπ/2) / (nπ/2)   for n odd

a) Sketch i_s(t) and v_0(t).

b) Find the Fourier series of v_0(t).

c) Write the first five nonzero terms of the cosine series for v_0(t).

d) Calculate P_av for v_0(t) if R = 100. Plot the power spectrum.

e) Since the square wave can be thought of as a succession of steps, the steady state term v_0(t) must be a succession of step responses. Without using the Laplace transform, determine the waveshape of the steady state.


CHAPTER - VII

THE FOURIER TRANSFORMS

INTRODUCTION

In the preceding chapter on Fourier series, we have shown that the period can be extended for non-periodic signals, and the resulting equations are called Fourier transform pairs. These transform pairs are extremely useful in dealing with electromagnetic radiation, signal transmission and filtering. The practicality of the Fourier transform is supported by the fact that no practical signal is mathematically periodic, since all signals (speech, music and audio signals) have both beginnings and ends. Such signals may be strictly time limited, so that f(t) is identically zero outside a specified interval, or asymptotically time limited, so that f(t) → 0 as t → ∞. If f(t) is square integrable over all time, that is,

lim_{T→∞} ∫_{−T}^{T} |f(t)|² dt < ∞        (7.1)

then the frequency domain description is provided by the transforms

F(f) = F[f(t)]

F(f) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt        (7.2)

and

f(t) = ∫_{−∞}^{∞} F(f) e^{jωt} df        (7.3)

Eqs.(7.2) and (7.3) are called the Fourier transform pairs.

7.1 AVERAGE VALUE AND ENERGY IN A NON-PERIODIC SIGNAL

Since the transforms are for non-periodic functions, the average of the signal is defined as

<f(t)> = lim_{T→∞} (1/T) ∫_0^T f(t) dt        (7.4)

and the power as

P = <f²(t)> = lim_{T→∞} (1/T) ∫_0^T f²(t) dt        (7.5)

For a time limited signal, the integral in Eq. (7.5) remains finite as T → ∞, so that P → 0. Since time limited signals must have zero average power when averaged over infinite time, average power is therefore not useful, and we turn to energy.

By definition, the total energy E is the integral of the instantaneous power. Assuming that f(t) is applied to a one ohm resistor,


E = ∫_{−∞}^{∞} f²(t) dt        (7.6)

Using Parseval’s theorem

E = ∫_{−∞}^{∞} f(t) [∫_{−∞}^{∞} F(f) e^{jωt} df] dt        (7.7)

  = ∫_{−∞}^{∞} F(f) [∫_{−∞}^{∞} f(t) e^{jωt} dt] df        (7.8)

  = ∫_{−∞}^{∞} F(f) F(−f) df

  = ∫_{−∞}^{∞} |F(f)|² df        (7.9)

Eq. (7.9) is called Rayleigh’s Energy Theorem. If f(t) is a voltage waveform, then F(f) has the dimensions of volts per unit frequency and describes the distribution or density of the signal voltage in frequency. By like reasoning, |F(f)|² is the density of energy in the frequency domain. Define S(f) as the energy spectral density:

S(f) = |F(f)|²        (7.10)

S(f) is positive and real. Moreover, if f(t) is real, F(f) is Hermitian and S(f) is an even function of frequency. The total energy is therefore

E = ∫_{−∞}^{∞} S(f) df

  = 2 ∫_0^{∞} S(f) df        (7.11)

7.2 LINE SPECTRA VS. CONTINUOUS SPECTRA

Consider a narrow frequency interval ∆f centered at f_1, that is f_1 − (1/2)∆f < |f| < f_1 + (1/2)∆f, and suppose that this interval includes the mth harmonic of a periodic signal, f_1 = m f_0. The frequency component of the periodic signal is

c_m e^{jω_1 t} + c_{−m} e^{−jω_1 t} = 2|c_m| cos(ω_1 t + φ_m)

so that the average power is 2|c_m|².

For a non-periodic signal, the frequency component represented by the interval is

F(f_1) e^{jω_1 t} ∆f + F(−f_1) e^{−jω_1 t} ∆f = 2|F(f_1)| ∆f cos[ω_1 t + Arg F(f_1)]

This interval contains energy approximately equal to

2|F(f_1)|² ∆f

Therefore, a line spectrum represents a signal that can be constructed from a sum of discrete frequency components, and the signal power is concentrated at specific frequencies. On the other hand, a continuous spectrum represents a signal that is constructed by integrating over a continuum of frequency components, and the signal energy is distributed continuously in frequency.

Example 7.1

Consider the time limited pulse of Fig. 7.1, whose amplitude is A between −τ/2 and τ/2 and zero otherwise. Draw its amplitude spectrum and energy spectral density.

F(f) = ∫_{−τ/2}^{τ/2} A e^{−jωt} dt

     = Aτ · sin(ωτ/2)/(ωτ/2) = Aτ sinc(fτ)        (7.12)

S(f) = A²τ² sinc²(fτ)        (7.13)

The graphs of F(f) and S(f) are given in Fig. 7.2 and Fig. 7.3.

In this example, 1/τ can be taken as the measure of spectral width. If the pulse width is increased, the spectral width is decreased, and vice versa; this phenomenon is called “reciprocal spreading”.

Let us see the percentage of the total energy contained in |f| < 1/τ. Using the energy spectral density,

E_{|f|<1/τ} = 2 ∫_0^{1/τ} S(f) df

            = 2(Aτ)² ∫_0^{1/τ} sinc²(fτ) df        (7.14)

            ≈ 0.90 A²τ = 0.90 E

Thus about 90 percent of the signal energy is contained in |f| < 1/τ.
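The main-lobe energy fraction can be checked by numerical integration of the spectral density; a minimal sketch (with the substitution u = fτ) is shown below.

```python
import numpy as np
from scipy.integrate import quad

# Fraction of the rectangular pulse energy inside |f| < 1/tau (total energy = A^2 * tau)
frac = 2 * quad(lambda u: np.sinc(u) ** 2, 0, 1)[0]
print(round(frac, 3))   # about 0.90: roughly 90 percent of the energy is in the main lobe
```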

Example 7.2

Find and draw the amplitude spectrum of the Gaussian pulse of Fig. 7.4,

f(t) = A e^{−π(t/τ)²}

F(f) = ∫_{−∞}^{∞} A e^{−π(t/τ)²} cos ωt dt        (7.15)

     = 2A ∫_0^{∞} e^{−π(t/τ)²} cos ωt dt

     = Aτ e^{−π(fτ)²}        (7.16)

Eq. (7.16) is a Gaussian pulse in frequency as is shown in Fig. 7.5.

7.3 FOURIER TRANSFORM THEOREMS

Almost all Laplace transform theorems are also Fourier transform theorems. Those that are the same will not be proved.

7.3.1 Multiplication by a scalar: If F(f) is the Fourier transform of f(t), αF(f) is the Fourier transform of αf(t).

7.3.2 If G(f) and F(f) are the Fourier transforms respectively of g(t) and f(t), the Fourier transform of [g(t) + f(t)] is [G(f) + F(f)].

7.3.3 If F(f) is the Fourier transform of f(t), the Fourier transform of e^{j2πf_0 t} f(t) is F(f − f_0). This is the complex translation theorem of Laplace transforms with s = j2πf_0; f_0 may be positive or negative real.

7.3.4 Real Translation: If F(f) is the Fourier transform of f(t), the Fourier transform of f(t − t_0) is e^{−j2πf t_0} F(f), where t_0 may be positive or negative real.

7.3.5 If F(f) is the Fourier transform of f(t), the Fourier transform of df(t)/dt is j2πf F(f). Note that this assumes that F(f) behaves like 1/f^n, n ≥ 1, as f becomes large. Actually F(f) can approach a constant as f becomes infinite.

7.3.6 If F(f) is the Fourier transform of f(t), the Fourier transform of ∫_{−∞}^{t} f(x) dx is F(f)/(j2πf), provided this division by f does not produce a pole at f = 0.

Thus F(f) must vanish at f = 0 at least as rapidly as f. More exactly,

lim_{f→0} |F(f)/f| < ∞

7.3.7 If F(f) is the Fourier transform of f(t), the Fourier transform of tf(t) is (j/2π) dF(f)/df.

7.3.8 If F(f) is the Fourier transform of f(t), then (1/a)F(f/a) is the Fourier transform of f(at).

7.3.9 If G(f) is the Fourier transform of g(t), then g(−f) is the Fourier transform of G(t). This theorem is peculiar to the Fourier transform and for the first time confuses the use of capital letters for transforms and lower-case letters for functions of time.

Proof: We know that

G(f) = ∫_{−∞}^{∞} g(x) e^{−j2πfx} dx   (7.17)

and

g(t) = ∫_{−∞}^{∞} G(y) e^{j2πty} dy   (7.18)

If in Eq. (7.18) t is replaced by −f,

g(−f) = ∫_{−∞}^{∞} G(y) e^{−j2πfy} dy

which is seen to be the definition of the Fourier transform of G(t).

Example 7.3

a) Since

F[p(t)] = A sin(πfb)/(πf)

where p(t) is a rectangular pulse of height A and width b centered on the origin, replacing f by t in the transform and t by −f in p(t) gives

F[A sin(πbt)/(πt)] = p(−f) = p(f)

since the pulse p(t) is symmetric about the t = 0 axis. Thus the Fourier transform of A sin(πbt)/(πt) is A for |f| < b/2 and 0 for |f| > b/2.

b) Since

F[e^{−at}u(t)] = 1/(j2πf + a)

then

F[1/(j2πt + a)] = e^{af} u(−f)

The theorem actually works both ways, since either t is replaced by −f and f by t, or t is replaced by f and f by −t. For example, in the last pair above, put t for f on the right and obtain e^{at}u(−t). Now place −f for t on the left and get the pair

F[e^{at}u(−t)] = 1/(−j2πf + a)

7.3.10 If F(f) is the Fourier transform of f(t), the Fourier transform of f(−t) is F(−f). Note that if f(t) is real, the conjugate of F(f), F*(f), is equal to F(−f).

Proof: Since

F(f) = ∫_{−∞}^{∞} f(x) e^{−j2πfx} dx

then

F(−f) = ∫_{−∞}^{∞} f(x) e^{j2πfx} dx

If the dummy variable of integration is changed to y = −x, then dx = −dy; for x = −∞, y = +∞, and for x = ∞, y = −∞. Thus

F(−f) = ∫_{∞}^{−∞} f(−y) e^{−j2πfy} (−dy)

The sign of the last integral can be changed if the limits of integration are reversed, so

F(−f) = ∫_{−∞}^{∞} f(−y) e^{−j2πfy} dy   (7.19)

But Eq. (7.19) is, by definition, the Fourier transform of f(−t).

Example 7.4

a) Since the Fourier transform of e^{−αt}u(t) is 1/(j2πf + α), the Fourier transform of e^{αt}u(−t) must be 1/(−j2πf + α).

b) The Fourier transform of e^{−αt} cos βt u(t) is

[(s + α)/((s + α)² + β²)]|_{s=j2πf} = (j2πf + α)/[(j2πf + α)² + β²]

and the transform of

e^{αt} cos(−βt) u(−t) = e^{αt} cos βt u(−t)

is simply

(−j2πf + α)/[(−j2πf + α)² + β²]

7.3.11 If f(t) is a real and even function of t, that is, if f(t) = f(−t), and if f(t) is transformable, its Fourier transform F(f) is real and is an even function of f.

Proof: Let the Fourier transform of f(t)u(t) be F′(f). Then, by Theorem 7.3.10, the transform of f(−t)u(−t) is F′(−f). But f(t) = f(t)u(t) + f(t)u(−t), since u(t) + u(−t) = 1. The even property of f(t) permits the second f(t) in the last equation to be replaced by f(−t), so

f(t) = f(t)u(t) + f(−t)u(−t)

and

F(f) = F′(f) + F′(−f)

Clearly, F(f) is even, because if f is replaced by −f the equation does not change. But since f(t)u(t) is real, F′(−f) = F′*(f) by Theorem 7.3.10. Then

F(f) = F′(f) + F′*(f)

But the sum of any complex number and its conjugate is twice the real part of the number, so

F(f) = 2 Re[F′(f)]

7.3.12 If f(t) is a real and odd function of time, that is, if f(t) = −f(−t), the transform of f(t), if it exists, is imaginary and an odd function of f.

Proof: Evidently

f(t) = f(t)u(t) + f(t)u(−t)

and because f(t) is odd,

f(t) = f(t)u(t) − f(−t)u(−t)

If F″(f) is the transform of f(t)u(t), the transform of f(t) is

F(f) = F″(f) − F″(−f)

which is seen to be odd. The fact that f(t)u(t) is real means that F″(−f) = F″*(f), so

F(f) = F″(f) − F″*(f) = 2j Im[F″(f)]

Again there follows a corollary: if f(t) is an odd function of time and F(f) is its transform, the imaginary part of the transform of f(t)u(t) is −jF(f)/2.

Note that any function can be expressed as the sum of an even and an odd function, since for f(t) neither even nor odd

f(t) = f_e(t) + f_o(t)   (7.20)

with

f_e(t) = [f(t) + f(−t)]/2   (7.21)

and

f_o(t) = [f(t) − f(−t)]/2   (7.22)

Then Theorems 7.3.11 and 7.3.12 imply that if F(f) is the transform of f(t),

F[f_e(t)] = Re[F(f)]   (7.23)

F[f_o(t)] = j Im[F(f)]   (7.24)

7.4 SUMMARY OF FOURIER TRANSFORM THEOREMS

All the theorems in the preceding section are summarized by the following equations.

Let f(t) and g(t) be transformable functions with transforms F(f) and G(f) respectively. Then

F[af(t)] = aF(f)   (7.25)

F[f(t) + g(t)] = F(f) + G(f)   (7.26)

F[e^{j2πf₀t} f(t)] = F(f − f₀)   (7.27)

F[f(t − t₀)] = e^{−j2πft₀} F(f)   (7.28)

F[df(t)/dt] = j2πf F(f)   (7.29)

provided that fF(f) is bounded as f → ∞,

F[∫_{−∞}^{t} f(x) dx] = F(f)/(j2πf)   (7.30)

provided that F(f)/f is bounded at f = 0,

F[tf(t)] = (j/2π) dF(f)/df   (7.31)

F[f(at)] = F(f/a)/a   (7.32)

F[G(t)] = g(−f)   (7.33)

F[G(−t)] = g(f)   (7.34)

F[f(−t)] = F(−f)   (7.35)

F[f_e(t)] = Re[F(f)]   (7.36)

F[f_o(t)] = j Im[F(f)]   (7.37)

7.5 THE INVERSE FOURIER TRANSFORM OF A RATIONAL FUNCTION

The inverse transform of a rational function of f can be found by using a procedure almost exactly like that used for the Laplace transform.

If we use p = j2πf instead of s, so that f = −jp/(2π), the Fourier transform can be written as a ratio of polynomials in p. The inverse transform of a rational function of p is then found by a partial-fraction expansion exactly like that used to obtain the inverse Laplace transform of a function of s. The only new aspects of the procedure are as follows:

1. If there are poles on the j-axis of the p-plane, the function comes under the category of special Fourier transforms and caution should be exercised.

2. The inverse transforms of the terms in the partial-fraction expansion that have left-half p-plane poles are exactly the same as those in the Laplace transform.

3. The inverse transforms of the terms in the partial-fraction expansion that have right-half p-plane poles are also the same functions as those in the Laplace transform, but instead of multiplying them by u(t), we multiply by −u(−t).

Example 7.5

Find the inverse Fourier transform of

F(f) = 1/(2 + 4π²f² + j2πf)

Setting f = −jp/(2π),

F = 1/(2 + p − p²) = −1/[(p + 1)(p − 2)] = (1/3)/(p + 1) − (1/3)/(p − 2)

f(t) = (1/3)[e^{−t} u(t) + e^{2t} u(−t)]
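The partial-fraction step above is easy to verify with a computer algebra system. The following sketch is an illustration only (it assumes sympy is available); it expands −1/[(p + 1)(p − 2)] and reproduces the two terms used in Example 7.5.

    # Symbolic check of the partial-fraction expansion in Example 7.5.
    from sympy import symbols, apart

    p = symbols('p')
    F = -1 / ((p + 1) * (p - 2))      # F written in the variable p = j*2*pi*f
    print(apart(F, p))                # -> 1/(3*(p + 1)) - 1/(3*(p - 2)), up to ordering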

Example 7.6

Find the inverse Fourier transform of

F(f) = A/(f⁴ + a⁴)

where A and a are real positive constants.

Replace f by −jp/(2π). Then

F = A/[(−jp/2π)⁴ + a⁴] = (2π)⁴A/[p⁴ + (2πa)⁴]

The roots of the denominator satisfy

p⁴ = −(2πa)⁴,   p² = ±j(2πa)²

Since the square roots of +j are ±(1 + j)/√2 and the square roots of −j are ±(1 − j)/√2, the four roots are

√2 πa(1 + j),  √2 πa(1 − j),  √2 πa(−1 + j),  √2 πa(−1 − j)

In factored form, we have

F = 16π⁴A / [(p + √2πa − j√2πa)(p − √2πa − j√2πa)(p + √2πa + j√2πa)(p − √2πa + j√2πa)]

If this is expanded in partial fractions, only the terms with −j in the factors are required, the others being their conjugates:

F = 16π⁴A [ (1∠−45°)/(32π³a³)/(p + √2πa − j√2πa) + (1∠−135°)/(32π³a³)/(p − √2πa − j√2πa) + conjugate ]

  = (πA/2a³) [ (1∠−45°)/(p + √2πa − j√2πa) − (1∠45°)/(p − √2πa − j√2πa) + conjugate ]

so

f(t) = (πA/a³) [ e^{−√2πat} cos(√2πat − 45°) u(t) + e^{√2πat} cos(√2πat + 45°) u(−t) ]

For negative t, the cosine in the second term can be written as

cos(−√2πa|t| + 45°) = cos(√2πa|t| − 45°)

since the cosine is an even function. Then f(t) may be written as

f(t) = (πA/a³) e^{−√2πa|t|} cos(√2πa|t| − 45°)

and it is seen to be an even function of t, as it should be with F(f) real.

7.6 CONVOLUTION

Suppose that a linear system is excited by an input signal f_in(t); then the output f_0(t) will be some function that depends upon the transfer function of the linear system as well as upon the input signal. As already discussed in the treatment of the Laplace transform,

F_0(s) = F_in(s) H(s)   (7.38)

and

f_0(t) = ∫_{−∞}^{∞} f_in(τ) h(t − τ) dτ   (7.39)

In a similar manner, convolution for the Fourier transform can be defined.

Theorem 7.1: Let f₁(t) and f₂(t) be Fourier transformable with transforms F₁(f) and F₂(f) respectively. The Fourier transform of

y(t) = ∫_{−∞}^{∞} f₁(τ) f₂(t − τ) dτ   is   F₁(f) F₂(f)

Proof: By definition, the Fourier transform of y(t) is

Y(f) = ∫_{−∞}^{∞} { ∫_{−∞}^{∞} f₁(τ) f₂(x − τ) dτ } e^{−j2πfx} dx

If the iterated integral is expressed as a double integral,

Y(f) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f₁(τ) f₂(x − τ) e^{−j2πfx} dx dτ

If the new variable ξ = x − τ is substituted for x, then dx = dξ and the limits on the x integration become −∞ − τ and ∞ − τ for the ξ integration. But for fixed τ this is the same as −∞ and ∞, so

Y(f) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f₁(τ) f₂(ξ) e^{−j2πf(ξ+τ)} dξ dτ

But the integrand can now be separated into two functions, one in τ alone and one in ξ alone:

Y(f) = { ∫_{−∞}^{∞} f₁(τ) e^{−j2πfτ} dτ } { ∫_{−∞}^{∞} f₂(ξ) e^{−j2πfξ} dξ }

But this is the product of the two Fourier transforms F₁ and F₂, so

Y(f) = F₁(f) F₂(f)

Since F₁(f)F₂(f) and F₂(f)F₁(f) must be equal, the order in which the functions are chosen is immaterial:

∫_{−∞}^{∞} f₁(τ) f₂(t − τ) dτ = ∫_{−∞}^{∞} f₂(τ) f₁(t − τ) dτ

Theorem 7.2: If f₁(t) and f₂(t) are Fourier transformable, with transforms F₁(f) and F₂(f) respectively, the Fourier transform of f₁(t)f₂(t) is given by

F[f₁(t) f₂(t)] = ∫_{−∞}^{∞} F₁(y) F₂(f − y) dy   (7.40)

Example 7.7

Let f₁(t) = e^{−t}u(t) and f₂(t) = e^{−2t}u(t + 2). Convolve f₁ and f₂. Let y(t) be the result of convolving f₁ and f₂. Then

y(t) = ∫_{−∞}^{∞} e^{−τ} u(τ) e^{−2(t−τ)} u(t − τ + 2) dτ
     = ∫_{−∞}^{∞} e^{−τ} e^{−2t} e^{2τ} u(τ) u(t − τ + 2) dτ
     = e^{−2t} ∫_{−∞}^{∞} e^{τ} u(τ) u(t − τ + 2) dτ

Let y = τ − 2, or τ = y + 2. Then

y(t) = e²e^{−2t} ∫_{−∞}^{∞} e^{y} u(y + 2) u(t − y) dy

But the integrand is zero for y > t, so u(t − y) can be dropped if the upper limit is changed to t. Then

y(t) = e²e^{−2t} ∫_{−∞}^{t} e^{y} u(y + 2) dy

The lower limit may now be changed to −2, and

y(t) = e²e^{−2t} u(t + 2) ∫_{−2}^{t} e^{y} dy = [e²e^{−t} − e^{−2t}] u(t + 2)
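The closed-form result can be checked by convolving sampled versions of f₁ and f₂ numerically. The sketch below is not from the text (numpy is assumed, and the time grid and step are arbitrary choices); it compares the Riemann-sum convolution with [e²e^{−t} − e^{−2t}]u(t + 2).

    # Numerical check of Example 7.7: grid convolution vs. the closed form.
    import numpy as np

    dt = 1e-3
    t = np.arange(-5, 10, dt)
    f1 = np.exp(-t) * (t >= 0)                  # e^{-t} u(t)
    f2 = np.exp(-2*t) * (t >= -2)               # e^{-2t} u(t+2)

    y_num = np.convolve(f1, f2) * dt            # Riemann-sum convolution
    t_y = np.arange(len(y_num)) * dt + 2*t[0]   # time axis of the result

    y_exact = (np.exp(2)*np.exp(-t_y) - np.exp(-2*t_y)) * (t_y >= -2)
    print(np.max(np.abs(y_num - y_exact)))      # small (discretization error only)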

7.7 SOME SPECIAL FOURIER TRANSFORMS

7.7.1 THE UNIT IMPULSE

The unit impulse function δ(t) presents some difficulties. If the function is transformed directly,

F[δ(t)] = ∫_{−∞}^{∞} δ(x) e^{−j2πfx} dx = 1   (7.41)

as in the case of the Laplace transform. The difficulty arises when we attempt to inverse transform:

δ(t) = ∫_{−∞}^{∞} e^{j2πty} dy

This can only be demonstrated indirectly, since the integral does not exist in the ordinary sense. If 1 is the Fourier transform of δ(t), then according to Eq. (7.34) δ(f) is the transform of 1. It is convenient to define δ(t) as the limiting form of any function of time whose transform approaches unity. There are many such functions; a few are listed here.

1. The tall rectangular pulse of height 1/ε and width ε. Its transform is known to be (sin πεf)/(πεf), which certainly approaches unity for any f if ε is made sufficiently small.

2. The tall triangular pulse of height 1/ε and width 2ε, which has a transform of (sin² πεf)/(πεf)². This approaches unity for any fixed f so long as ε is made sufficiently small.

3. The function

e^{−t²/2σ²}/(√(2π) σ)

has a transform e^{−2π²σ²f²} and can be made to approach unity for any value of f, however large, if σ is made small enough. Then

δ(t) = lim_{σ→0} e^{−t²/2σ²}/(√(2π) σ)   (7.42)

is another choice for the definition of the delta function.

4. Since the delta function is real and even, any transform that is real and even and that approaches unity in a limit can be used for the transform of δ(t). Consider the function [u(f + f₁) − u(f − f₁)], which is unity in the region −f₁ < f < f₁; this transform approaches unity for all f as f₁ becomes infinite.

For any finite f₁, the inverse transform is

∫_{−f₁}^{f₁} e^{j2πty} dy = sin(2πf₁t)/(πt)

Thus we might define

δ(t) = lim_{f₁→∞} sin(2πf₁t)/(πt)

Although this is a useful definition mathematically, it is somewhat awkward to visualize as a function of time. If the numerator and denominator are multiplied by 2f₁, the relation reads

δ(t) = lim_{f₁→∞} 2f₁ [sin(2πf₁t)/(2πf₁t)]   (7.43)

which is known to have the value 2f₁ at t = 0. Furthermore, the area under the function can be shown to be unity. Thus the function becomes infinitely high at t = 0 and it does have the proper area under it. However, the envelope of the function does not go to zero as f₁ becomes infinite.

5. Another possibility is the transform pictured in Fig. 7.6. Again, as f₁ becomes infinite, the triangle approaches unity for all f, though it never gets there for any f but zero. We need transform only the positive portion; then we take twice the real part, since the transform is real and an even function of f. In the positive region the function is given by

(1 − f/f₁)[u(f) − u(f − f₁)]

and its inverse transform is the same as the transform of

(1 − t/f₁)[u(t) − u(t − f₁)]

with f replaced by t. The Laplace transform of the triangle is

1/s − 1/(f₁s²) + e^{−sf₁}/(f₁s²)

With s = j2πf, the Fourier transform is

1/(j2πf) − (1/f₁)[−1/(2πf)²][1 − e^{−j2πf₁f}]

Only the real part is needed, and thus 1/(j2πf) can be dropped. Since

1 − e^{−j2πf₁f} = 1 − cos(2πf₁f) + j sin(2πf₁f)

the real part of the transform is simply

[1 − cos(2πf₁f)]/[(2πf)²f₁] = 2 sin²(πf₁f)/[(2πf)²f₁]

Replacing f by −t and doubling the result gives

4 sin²(πf₁t)/[(2πt)²f₁]

as the inverse transform of the function of Fig. 7.6.

6. There are many more definitions that could be construed for the delta function. For instance,

e^{−|f/f₁|}

is an even function of f and approaches unity as f₁ becomes infinite. Its inverse transform can be found by noting that the Fourier transform of e^{−α|t|} is 2α/(α² + 4π²f²). We write −t for f and 1/f₁ for α to obtain the inverse transform of e^{−|f/f₁|}. This inverse transform is then

(2/f₁)/[(1/f₁)² + 4π²t²]

Since f₁ is to be made large, we might let α = 1/(2πf₁); as f₁ becomes large, α becomes small. Then the last expression becomes

2(2πα)/(4π²α² + 4π²t²) = (α/π)/(α² + t²)

Then

δ(t) = lim_{α→0} (α/π)/(α² + t²)   (7.44)

This definition agrees well with the initial concept of the impulse, since at t = 0 it has the value 1/(πα), which becomes large as α → 0. Also, for t ≠ 0, the function approaches α/(πt²), or zero, as α → 0.

7.7.2 THE STEP FUNCTION

Since the Fourier transform of unity has been defined as δ(f), it should be possible to define the Fourier transform of the step function. Before we begin, note that the even part of the step function is

(1/2)[u(t) + u(−t)] = 1/2

so the real part of the Fourier transform of u(t) is δ(f)/2. The transform of u(t) does not exist in the strict sense. The transform of e^{−αt}u(t) does exist; it is 1/(j2πf + α). This function approaches the step function as α approaches zero, so the transform of the step function might be expected to be 1/(j2πf). But this has no real part, yet it has already been established that the real part is δ(f)/2.

If what has been done is correct,

F[u(t)] = lim_{α→0} 1/(j2πf + α)

Multiplying numerator and denominator by (α − j2πf), we have

F[u(t)] = lim_{α→0} { α/(α² + (2πf)²) − j2πf/(α² + (2πf)²) }

The limit of the second (imaginary) term, as indicated before, is 1/(j2πf). For the real part, divide numerator and denominator by 4π² and set a = α/(2π); then a → 0 as α → 0, and

Re{F[u(t)]} = lim_{a→0} (a/2π)/(a² + f²)

But this is exactly one half the expression for δ(f) given in Eq. (7.44), with t replaced by f. The real part of the transform is indeed δ(f)/2, and therefore

F[u(t)] = δ(f)/2 + 1/(j2πf)   (7.45)

7.7.3 THE SIGNUM FUNCTION

The function "signum of t", abbreviated sgn(t), is equal to +1 when t is positive, −1 when t is negative, and 0 when t = 0. Thus

sgn(t) = u(t) − u(−t)   (7.46)

This function is Fourier transformable in the sense that a constant and a step function are transformable, so that

F[sgn(t)] = δ(f)/2 + 1/(j2πf) − [δ(−f)/2 − 1/(j2πf)] = 1/(jπf)   (7.47)

Poles on the j axis in the p plane are transformable, then, with p = j2πf:

F[sgn(t)/2] = 1/p   (7.48)

F[e^{j2πf₀t} sgn(t)/2] = 1/(p − j2πf₀)   (7.49)

Example 7.8

Find the inverse Fourier transform of

(4p⁴ − 2p³ + 6p² − 66p − 18) / [p(p + 1)(p − 2)(p² + 9)]

Writing the partial-fraction expansion of the above expression,

1/p + 2/(p + 1) − 1/(p − 2) + (1 − j)/(p − 3j) + conjugate

1. The left-half-plane poles have the same inverse transform as in the Laplace transform; hence

F⁻¹[2/(p + 1)] = 2e^{−t} u(t)

2. The right-half p-plane poles have the same inverse transform as in the Laplace transform, but with u(t) replaced by −u(−t):

F⁻¹[−1/(p − 2)] = e^{2t} u(−t)

3. The j-axis p-plane poles have the same inverse Laplace transform, but with u(t) replaced by sgn(t)/2. Hence

F⁻¹[1/p + (1 − j)/(p − 3j) + conjugate] = [1 + 2√2 cos(3t − 45°)] sgn(t)/2

7.7.4 PERIODIC FUNCTION

Since the transform of e^{j2πf₀t} is understood to be δ(f − f₀), any periodic or almost periodic function that is expressible in an exponential series has a Fourier transform. Thus, with

f(t) = Σ_{k=−∞}^{∞} B_k e^{j2πf_k t}   (7.50)

F[f(t)] = F(f) = Σ_{k=−∞}^{∞} B_k δ(f − f_k)   (7.51)


PROBLEMS

7.1 Find the Fourier transform of the functions pictured in Fig. 7.7.

7.2 Find the Fourier transforms of

a) e^{−a|t|}

b) sin(βt)[u(t + 2πn/β) − u(t − 2πn/β)], n an integer

7.3 Using the transform of the triangular pulse in Fig. 7.7(a) and Eq. (7.33), find the Fourier transform of (sin² at)/t².

7.4 Find the Fourier transforms of

a) e^{−at}u(t) − e^{at}u(−t)

b) 1/(a² + t²)

7.5 Find the Fourier transforms of

a) e^{−at}u(t + b), a and b positive

b) e^{at}[u(t) − u(t − b)]

7.6 Find the inverse Fourier transforms of the following functions, where B and b are real and p = j2πf.

a) B/(f² + a²)

b) jBf/(f² + b²)

c) (p² − 4)/[(p + 1)(p + 2)(p − 3)]

7.7 Find the inverse Fourier transforms of

a) (1 − π²f²)/(f⁴ + 1.25f²/π² + 0.25/π⁴)

b) 1/(πf + j)²

7.8 Find the inverse Fourier transforms of

a) [(a/2π)² − f²] / [f² + (a/2π)²]²

b) (−jaf/π) / [f² + (a/2π)²]²

7.9 Convolve by direct integration, and check by transforms, the functions

a) e^{−2t}u(t) and e^{3t}u(t)

b) e^{−t}u(t) and sin 2t u(t)

c) e^{t}u(−t) and e^{−t}u(t)

7.10 Let f(t) be Laplace transformable with transform F(s) and having σ_e = c. Show that, regardless of the sign of c, e^{−bt}f(t) has a Fourier transform provided that b > c. Show that this Fourier transform is F(j2πf + b).

7.11 Use the theorem on "complex translation" to demonstrate that

L[f₁(t)f₂(t)] = (1/2πj) ∫_{b−j∞}^{b+j∞} F₁(ω) F₂(s − ω) dω

provided that b is greater than the abscissa of exponential order of either f₁ or f₂.

7.12 The autocorrelation function φ(τ) of a function f(t) is defined by

φ(τ) = ∫_{−∞}^{∞} f(x) f(x − τ) dx

a) Show that φ(τ) is an even function of τ.

b) Show that Φ(f), the Fourier transform of φ(τ), is |F(f)|².

c) Demonstrate that several functions can have the same correlation function. Hint: it is necessary to show only that several functions have different transforms but the same |F(f)|².

7.13 Given that the Fourier transform of unity is δ(f), find the Fourier transforms of cos(2πf₀t) and sin(2πf₀t).

7.14 Find the inverse Fourier transforms of

a) p/(p² + β²)

b) β/(p² + β²)

c) 2/[p(p + 1)]


CHAPTER - VIII

APPLICATIONS OF THE FOURIER TRANSFORM

8.1 MODULATION

Let S(t) be a signal whose spectrum lies in the vicinity of a carrier frequency f_c, a frequency high enough that radiation is economical and practical. The most general signal that satisfies this requirement is the function

S(t) = A(t) cos[2πf_c t + θ(t)]   (8.1)

where A(t) is some arbitrary function of time whose spectrum does not exceed the frequency f_c (it is usually small compared to f_c) and θ(t) is a signal whose maximum derivative does not exceed in magnitude the value 4πf_c/3 and whose spectrum does not exceed f_c/2.

If an audio or video signal S_m(t) is to be carried by the function of Eq. (8.1), either A(t) must be some function of S_m(t) or θ(t) must be some function of S_m(t).

If A(t) is a function of S_m(t) and θ(t) is a constant, the result is called amplitude modulation. If θ(t) is a function of S_m(t) and A(t) is a constant, the result is called angle modulation. The two most common examples of angle modulation are phase modulation (PM) and frequency modulation (FM).

In the following subsections, the Fourier spectrum will be used interchangeably with the line spectrum or the energy spectrum. It will simply be the transform of the signal, plotted graphically in magnitude, or the square root of the energy spectrum. In the case of sinusoids, the Fourier spectrum will be a pair of impulses.

8.1.1 Amplitude Modulation

An arbitrary message x(t) can represent the ensemble of all probable messages from a given source. Assume that the messages are bandlimited to a bandwidth W, above which spectral content is negligible and unnecessary:

X(f) = 0 for |f| > W   (8.2)

Also, let the message be scaled to have a magnitude not exceeding unity,

|x(t)| ≤ 1   (8.3)

so that

x²(t) ≤ 1   (8.4)

The ensemble average then also satisfies

<x²(t)> ≤ 1   (8.5)
The envelope of the modulated carrier has the same shape as the message waveform. The modulated signal is

X_c(t) = A_c[cos ω_c t + m x(t) cos ω_c t] = A_c[1 + m x(t)] cos ω_c t   (8.6)

where A_c cos ω_c t is the unmodulated carrier and m is the modulation index. The modulated amplitude is

A_c(t) = A_c[1 + m x(t)],   with f_c >> W and m ≤ 1

When m = 1, 100 percent modulation takes place and the amplitude varies from 0 to 2A_c. If m > 1, overmodulation takes place, which results in carrier phase reversal and envelope distortion. The message signal and the modulated signal are shown in Fig. 8.1.

The Fourier transform of Eq. (8.6) is

X_c(f) = (A_c/2)[δ(f − f_c) + δ(f + f_c)] + (mA_c/2)[X(f − f_c) + X(f + f_c)]   (8.7)

The spectrum of the modulated signal is shown in Fig. 8.2.

The properties of X_c(f) in Eq. (8.7) are as follows:

1. It is symmetric about the carrier frequency, with the amplitude an even function and the phase an odd function.

2. The transmission bandwidth B_T required for an AM signal is exactly twice the message bandwidth.

The average transmitted power P_T is

P_T = <X_c²(t)> = A_c² <[1 + m x(t)]² cos² ω_c t>   (8.8)
    = (A_c²/2) {<1 + 2m x(t) + m² x²(t)> + <[1 + m x(t)]² cos 2ω_c t>}

Since f_c >> W, the second term averages to zero. If the d-c component of the message is also zero, then

P_T = [1 + m² <x²(t)>] A_c²/2   (8.9)

If the message source is ergodic,

P_T = [1 + m² <x²>] A_c²/2 = P_c + 2P_SB
where <x²> is the ensemble average and the carrier power P_c is A_c²/2. The power in each sideband is

P_SB = m² <x²> A_c²/4 = (1/2) m² <x²> P_c ≤ (1/2) P_c

This implies that at least 50 percent of the total power resides in the carrier, which is wasted.

The maximum voltage is X_c,max = 2A_c and, therefore, the peak instantaneous power is proportional to 4A_c².

Example 8.1

If x(t) = A_m cos 2πf_m t and the carrier is A_c cos ω_c t, find and draw the modulated signal.

x_c(t) = A_c(1 + m A_m cos ω_m t) cos ω_c t
       = A_c cos ω_c t + (m A_m A_c/2)[cos(ω_c − ω_m)t + cos(ω_c + ω_m)t]

The spectrum of the modulated signal is given in Fig. 8.3.
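The three-line spectrum of Example 8.1 can be reproduced numerically. The sketch below is an illustration only; the sample rate, tone frequencies and modulation index are assumed values, not taken from the text. It builds the tone-modulated AM wave of Eq. (8.6) and lists the frequencies at which its FFT has significant lines.

    # Tone-modulated AM wave and its line spectrum (assumed parameter values).
    import numpy as np

    fs, T = 100_000.0, 1.0                 # sample rate (Hz) and duration (s)
    fc, fm = 10_000.0, 1_000.0             # carrier and modulating frequencies
    Ac, Am, m = 1.0, 1.0, 0.5              # amplitudes and modulation index

    t = np.arange(0, T, 1/fs)
    xc = Ac * (1 + m*Am*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)

    X = np.fft.rfft(xc) / len(t)           # one-sided spectrum (scaled)
    f = np.fft.rfftfreq(len(t), 1/fs)
    print(f[np.abs(X) > 0.1])              # -> lines at fc - fm, fc, fc + fm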


8.1.2 Double Sideband Suppressed-Carrier Modulation (DSB)

The carrier-frequency component is independent of the message and represents wasted power; therefore, it can be eliminated from the modulated wave without losing any information. Consider Eq. (8.6),

X_c(t) = A_c cos ω_c t + m x(t) A_c cos ω_c t

Dropping the first term and m in the above equation, we get

X_c(t) = x(t) A_c cos ω_c t   (8.10)

so that X_c(t) = 0 when x(t) = 0. The average transmitted power for Eq. (8.10) is then

P_T = 2P_SB = <x²> A_c²/2

and the peak power is proportional to A_c². The transform of Eq. (8.10) is

X_c(f) = (A_c/2)[X(f − f_c) + X(f + f_c)]   (8.11)

The spectrum of the modulated signal is shown in Fig. 8.4.

It can be seen that the bandwidth remains unchanged, i.e. B_T = 2W, and that AM and DSB are quite similar in the frequency domain. However, they are quite different in the time domain.

8.1.3 Balanced Modulator for DSB

The DSB signal is obtained by using two AM modulators arranged in a balanced configuration to cancel out the carrier. The arrangement is shown in Fig. 8.5.

Assuming that the AM modulators are identical save for the reversed sign of one input, the outputs are

A_c[1 + (1/2)x(t)] cos ω_c t   and   A_c[1 − (1/2)x(t)] cos ω_c t

Subtracting the two components, we get

X_c(t) = x(t) A_c cos ω_c t

8.1.4 Single Sideband Modulation

The upper and lower sidebands of AM and DSB are uniquely related by symmetry; given the amplitude and phase of one, we can always construct the other. Therefore, transmitting both sidebands is a waste of bandwidth. Total elimination of the carrier and one sideband from the AM spectrum produces SSB, for which

B_T = W   and   P_T = P_SB = <x²> A_c²/4

The arrangement for obtaining SSB is shown in Fig. 8.6 and the spectrum is shown in Fig. 8.7.

SSB is widely used for transoceanic radio-telephone circuits and wire communication. A balanced SSB modulator is shown in Fig. 8.8.

Example 8.2

A modulating signal x(t) = cos 440πt + cos 880πt is multiplied by a carrier cos 2π×10⁶t. The resulting DSB−SC signal is filtered so that only the frequencies less than 1 MHz are retained. What will the output be when this signal is multiplied by cos 2π(10⁶ − 110)t and the frequencies above the audio range are filtered out? This will indicate what happens when music is received by a receiver with a drifting oscillator.

Solution:

The DSB signal is

(cos 440πt + cos 880πt) cos 2π×10⁶t
 = (1/2)[cos 2π(10⁶ + 220)t + cos 2π(10⁶ − 220)t] + (1/2)[cos 2π(10⁶ + 440)t + cos 2π(10⁶ − 440)t]

Filtering so that only the frequencies below 1 MHz are retained, we get

(1/2)[cos 2π(10⁶ − 220)t + cos 2π(10⁶ − 440)t]   (8.12)

Multiplying Eq. (8.12) by cos 2π(10⁶ − 110)t and discarding the high-frequency terms, we get

X_c(t) = (1/4)[cos 2π(110)t + cos 2π(330)t] = (1/4)[cos 220πt + cos 660πt]

This indicates the deterioration in SSB-type reception caused by a carrier that is in error by 110 Hz.

8.1.5 Demodulation or Detection

The process of separating a modulating signal from a modulated carrier is called detection. This is the inverse of modulation and requires time-varying or nonlinear devices.

In normal AM detection, the modulating signal is recovered by applying X_c(t) to a half-wave rectifier. The output is then filtered to provide the desired modulating signal. A scheme of demodulation is shown in Fig. 8.9.

The diode in Fig. 8.9 can be treated as a piecewise-linear device whose switching takes place at the carrier frequency f_c. Thus

X_c(t) = k[1 + m x(t)] cos ω_c t   (8.13)

and

X′_c(t) = k[1 + m x(t)] cos ω_c t · S(t)   (8.14)

where S(t) is the switching function. The filtering is carried out by the low-pass filter with time constant R₁C₁; the time constant is much larger than 1/f_c and smaller than 1/W. R₂C₂ acts as a d.c. block to remove the bias of the unmodulated carrier component. It can be shown that the output of Fig. 8.9 contains a component proportional to x(t) plus higher-frequency terms. The capacitor serves to filter out the higher-frequency terms.

In the case of SSB detection, the carrier must be supplied at the receiver before detection can take place. The sum of the signal and a locally generated carrier could be rectified to select the components corresponding to the desired modulating signal. It is more common in practice to use the carrier to shift the SSB signal to the required audio band by using a frequency converter. The problem of providing a carrier at the receiver of exactly the right frequency has been a block to the widespread use of SSB.

8.1.6 Frequency Modulation.

In this type of modulation, the frequency of the carrier is caused to vary according to the modulating signal x(t). Thus the frequency of the carrier is ω_c + kx(t).

Strictly speaking, we can talk only of sine (cosine) waves for understanding this type of modulation. If the angle varies linearly with time, the frequency can be expressed as the derivative of the angle. Thus

f_c(t) = cos θ(t) = cos(ω_c t + θ₀)   (8.15)

When θ(t) does not vary linearly, we can obviate this difficulty by defining the instantaneous radian frequency ω_i to be the derivative of the angle as a function of time,

ω_i = dθ(t)/dt   (8.16)

If θ(t) is now made to vary in some manner with a modulating signal x(t), we call the resulting form of modulation angle modulation.

In particular, if

θ(t) = ω_c t + θ₀ + k₁x(t)   (8.17)

where k₁ is a constant of the system, the phase of the carrier varies linearly with the modulating signal. Now let the instantaneous frequency vary linearly with x(t):

ω_i = ω_c + k₂x(t)

θ(t) = ∫ ω_i dt = ω_c t + θ₀ + k₂ ∫ x(t) dt   (8.18)

This gives rise to the FM system. Both phase and frequency modulation are special cases of angle modulation.

Since FM is a nonlinear process, new frequencies are generated by the modulating process.

As the simplest example, consider a sinusoidal modulating signal at f_m,

x(t) = a cos ω_m t

The instantaneous radian frequency is

ω_i = ω_c + ∆ω cos ω_m t,   ∆ω << ω_c

where ∆ω is a constant depending on the amplitude 'a' of the modulating signal and on the circuit converting variations in signal amplitude to corresponding variations in carrier frequency. Thus ω_i varies around ω_c at the rate ω_m and with maximum deviation ∆ω; ∆f = ∆ω/2π gives the maximum deviation in frequency, called the frequency deviation.

The phase variation θ(t) for this special case is

θ(t) = ∫ ω_i dt = ω_c t + (∆ω/ω_m) sin ω_m t + θ₀

Here θ₀ may be taken as zero by referring to an appropriate phase reference, so that

X_c(t) = cos(ω_c t + β sin ω_m t),   β = ∆ω/ω_m = ∆f/f_m   (8.19)

β is called the modulation index and represents the maximum phase shift of the carrier. Thus the bandwidth of FM depends on β. The average power associated with the FM carrier is independent of the modulating signal and is the same as the average power of the unmodulated carrier.

The average power over a cycle, on a 1-ohm basis, is

(1/T) ∫_0^T X_c²(t) dt = (1/T) ∫_0^T cos²(ω_c t + β sin ω_m t) dt   (8.20)

where T = 1/f_m. Thus

(1/T) ∫_0^T [1 + cos(2ω_c t + 2β sin ω_m t)]/2 dt = 1/2 watt

If the amplitude of the carrier is A_c, the average power is A_c²/2. This result is true for the general form of signals.

8.1.7 Narrowband FM

In this case β << π/2. The equations for narrowband FM appear in the form of a product modulator, as in AM, and give rise to sideband frequencies equally displaced about the carrier. In this case β is usually smaller than 0.2. So

X_c(t) = cos(ω_c t + β sin ω_m t)
       = cos ω_c t cos(β sin ω_m t) − sin ω_c t sin(β sin ω_m t)   (8.21)

For β << π/2,

cos(β sin ω_m t) ≈ 1   and   sin(β sin ω_m t) ≈ β sin ω_m t

Eq. (8.21) can now be written as

X_c(t) = cos ω_c t − β sin ω_m t sin ω_c t
          (carrier)     (sideband frequencies)

Thus the bandwidth of narrowband FM is 2f_m.

For a general signal,

ω_i = ω_c + k₂x(t)

θ(t) = ∫ ω_i dt = ω_c t + θ₀ + k₂ ∫ x(t) dt

Taking θ₀ = 0 and ∫ x(t) dt = g(t),

X_c(t) = cos[ω_c t + k₂ g(t)]

If k₂ and the amplitude of g(t) are small, so that |k₂ g(t)| << π/2, then

X_c(t) = cos ω_c t − k₂ g(t) sin ω_c t   (8.22)

The bandwidth is again 2f_m, where f_m is the highest frequency component of either g(t) or its derivative x(t). In FM, the carrier and sideband terms are in phase quadrature, whereas in AM the carrier and sidebands are in phase. This is demonstrated in Fig. 8.10. For narrowband FM,

X_c(t) = cos ω_c t − β sin ω_m t sin ω_c t
       = cos ω_c t − (β/2)[cos(ω_c − ω_m)t − cos(ω_c + ω_m)t]
       = Re{ e^{jω_c t} [1 − (β/2)e^{−jω_m t} + (β/2)e^{jω_m t}] }

while for AM,

X_c(t) = cos ω_c t + m x(t) cos ω_c t
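The size of the error made by the narrowband approximation of Eq. (8.22) is easy to check numerically; it is of the order of β²/2. The sketch below is an illustration with assumed values of f_c, f_m and β (none taken from the text); it compares the exact tone-modulated FM wave with its narrowband form.

    # Narrowband-FM approximation check: Xc ~ cos(wc t) - beta*sin(wm t)*sin(wc t).
    import numpy as np

    fc, fm, beta = 10_000.0, 1_000.0, 0.1
    t = np.linspace(0.0, 1e-2, 100_000)
    wc, wm = 2*np.pi*fc, 2*np.pi*fm

    exact = np.cos(wc*t + beta*np.sin(wm*t))
    approx = np.cos(wc*t) - beta*np.sin(wm*t)*np.sin(wc*t)
    print(np.max(np.abs(exact - approx)))   # on the order of beta**2/2 ~ 0.005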

8.1.8 Wideband FM

The advantage of noise and interference reduction of FM over AM becomes significant for β >> π/2. The bandwidth required to pass such a signal becomes correspondingly large. Consider β > π/2 but β² << 6, so that

cos(β sin ω_m t) ≈ 1 − (β²/2) sin² ω_m t

If we assume β << √6 and retain just the first two terms, we get the additional term sin² ω_m t cos ω_c t in X_c(t), so that

X_c(t) = (1 − β²/4) cos ω_c t − (β/2)[cos(ω_c − ω_m)t − cos(ω_c + ω_m)t]
         + (β²/8)[cos(ω_c + 2ω_m)t + cos(ω_c − 2ω_m)t]   (8.23)

The amplitude spectrum of Eq. (8.23) is shown in Fig. 8.11.

Note that the carrier term has decreased somewhat with increasing β. For a fixed modulating frequency, β is proportional to the amplitude of the modulating signal. Since the average power is constant, the increase in bandwidth and sidebands is accompanied by a decrease in the power in the carrier, and hence the amplitude of the carrier decreases.

As β increases further, we require more terms in the power-series expansions of both cos(β sin ω_m t) and sin(β sin ω_m t), and the bandwidth begins to increase with β.

Consider

X_c(t) = cos(ω_c t + β sin ω_m t) = cos ω_c t cos(β sin ω_m t) − sin(β sin ω_m t) sin ω_c t

Both cos(β sin ω_m t) and sin(β sin ω_m t) are periodic functions of ω_m t, and each may be expanded in a Fourier series of period 2π/ω_m.

Each expansion will have terms in ω_m and all its harmonics, and each harmonic multiplied by cos ω_c t or sin ω_c t gives rise to two sidebands symmetrically situated about ω_c. The sidebands from sin ω_c t will be in phase quadrature, whereas the sidebands from cos ω_c t will be in phase. Let us consider cos(β sin ω_m t):

1. For β < 0.5, the curve can be represented by a d.c. component plus a small component at twice the fundamental frequency.

2. For β < π/2, the function remains positive and appears as a d.c. component with some ripple superimposed. If we multiply this by cos ω_c t, the carrier term decreases with increasing β and the sidebands increase.

3. For β > π/2, the function takes on negative values, and as β increases the positive and negative excursions become more rapid. So at β = π/2 there is a transition from a more or less slowly varying periodic time function, with most of its spectral energy in the carrier, to a rapidly varying function with the spectral energy spread over a wide range of frequencies.

Consider sin(β sin ω_m t): its frequency components are all odd integral multiples of ω_m, so they give rise to odd-order sidebands about the carrier. We can decrease the carrier power (wasted power) considerably by increasing β.

Consider the periodic complex exponential

V(t) = e^{jβ sin ω_m t},   −T/2 < t < T/2   (8.24)

The real part of Eq. (8.24) gives the cosine terms and the imaginary part gives the sine terms. Its Fourier coefficients are

C_n = (1/T) ∫_{−T/2}^{T/2} e^{j(β sin ω_m t − ω_n t)} dt   (8.25)

with ω_m = 2π/T and ω_n = nω_m = 2πn/T. Substituting x = ω_m t,

C_n = (1/2π) ∫_{−π}^{π} e^{j(β sin x − nx)} dx   (8.26)

The integral in Eq. (8.26) can be evaluated only as an infinite series; it is called the Bessel function of the first kind and is denoted by

J_n(β) = (1/2π) ∫_{−π}^{π} e^{j(β sin x − nx)} dx
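The coefficients C_n of Eq. (8.26) are therefore the FM sideband amplitudes J_n(β). As a check, the sketch below (assuming scipy is available; β = 2 is an arbitrary choice) evaluates the integral of Eq. (8.26) numerically and compares it with scipy's Bessel function of the first kind.

    # Eq. (8.26) evaluated numerically and compared with Jn(beta).
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import jv

    beta = 2.0
    for n in range(4):
        # imaginary part of the integrand is odd, so only the cosine part survives
        integrand = lambda x: np.cos(beta*np.sin(x) - n*x)
        Cn, _ = quad(integrand, -np.pi, np.pi)
        Cn /= 2*np.pi
        print(n, Cn, jv(n, beta))      # the two columns agree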

A circuit for direct FM modulation is given in Fig. 8.12.

8.1.9 Frequency Demodulation

The demodulation process must provide an output voltage (or current) whose amplitude is linearly proportional to the frequency of the input FM signal. This device is called a frequency discriminator. A circuit of a discriminator is shown in Fig. 8.13.

8.1.10 Pulse Modulation

Consider a periodic function δ_p(t) consisting of impulses occurring every T_s sec and having an area of T_s units. This function is shown in Fig. 8.14.

The Fourier series of δ_p(t) can be obtained by finding the Laplace transform of one period and letting s = j2πk/T_s. But the Laplace transform of one period is simply T_s, so C_k = (1/T_s)·T_s = 1 and the exponential series is

δ_p(t) = Σ_{k=−∞}^{∞} e^{j2πkt/T_s}   (8.27)

If a function s_m(t) is multiplied by δ_p(t), the product will be a series of impulses separated by T_s sec, but whose areas are now T_s times the amplitude of s_m(t) evaluated at the time of occurrence of each impulse. The result is a sampling of s_m(t) at the sampling instants. Mathematically,

s_0(t) = s_m(t) δ_p(t)   (8.28)

but

δ_p(t) = Σ_{r=−∞}^{∞} T_s δ(t − rT_s)   (8.29)

so

s_0(t) = Σ_{r=−∞}^{∞} T_s s_m(rT_s) δ(t − rT_s)   (8.30)

Using Eq. (8.27),

s_0(t) = Σ_{k=−∞}^{∞} s_m(t) e^{j2πkt/T_s}   (8.31)

The Fourier transform of Eq. (8.31) is

S_0(f) = Σ_{k=−∞}^{∞} S_m(f − k/T_s)   (8.32)

Thus the spectrum of the sampled signal is the same as S_m(f) translated to the right and left by 1/T_s, 2/T_s, 3/T_s, etc., in addition to S_m(f) itself when k = 0. Typical spectra of S_m(f) and of s_0(t) = s_m(t)δ_p(t) are shown in Fig. 8.15.

Note that 1/T_s ≥ 2f_m, where f_m is the maximum frequency component of S_m(f). This is essential and follows from the sampling theorem, which states: "if the maximum frequency component in s_m(t) is f_m, then s_m(t) can be sampled at any rate greater than or equal to 2f_m and the original signal can be recovered from its samples by filtering."

Note that although a low-pass filter will restore the original signal, a bandpass filter centered at f = 1/T_s, 2/T_s, . . ., and 2f_m wide will produce the original AM−SC signal. Thus amplitude modulation can be produced by pulse modulation and filtering as well as by using nonlinear devices.
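The replicated spectrum of Eq. (8.32) can be seen directly in a small simulation. In the sketch below (an illustration only; the message frequency, sampling rate and simulation grid are assumed values) a 100 Hz tone is sampled by an impulse train of period T_s = 1 ms, each impulse having area T_s, and the resulting spectral lines appear at f_m and at k/T_s ± f_m.

    # Spectrum of an ideally sampled tone, illustrating Eq. (8.32).
    import numpy as np

    fm, fs = 100.0, 1_000.0        # message frequency and sampling rate 1/Ts, in Hz
    Ts = 1.0/fs
    dt = Ts/10.0                   # fine simulation grid: 10 points per sample period
    N = 10_000                     # 1 second of data -> 1 Hz frequency resolution
    t = np.arange(N)*dt
    sm = np.cos(2*np.pi*fm*t)

    s0 = np.zeros_like(sm)
    s0[::10] = sm[::10] * (Ts/dt)  # each impulse of area Ts becomes one tall sample

    F = np.abs(np.fft.rfft(s0))/N
    f = np.fft.rfftfreq(N, dt)
    print(f[F > 0.1])              # lines at 100, 900, 1100, 1900, 2100, ... Hz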

Since impulses are impossible to produce, pulses of suitable width can be used for this purpose. In general, it can be shown that, if the pulse width is ∆t, the Fourier transform of the pulse is equal to the area under the pulse for all frequencies less than 1/(4π∆t), provided that the pulse is symmetric about its centre. Even if the condition of symmetry is not met, there is still a pulse length for which the spectrum is very nearly constant up to some maximum frequency. For symmetric pulses it is easy to prove:

Let p(t) be a pulse of length ∆t and let p(t) be an even function of t. Then

P(f) = ∫_{−∆t/2}^{∆t/2} p(t) e^{−j2πft} dt   (8.33)
     = ∫_{−∆t/2}^{∆t/2} p(t) cos(2πft) dt − j ∫_{−∆t/2}^{∆t/2} p(t) sin(2πft) dt

Since p(t) is even, its transform is real and the second integral vanishes. Then

P(f) = ∫_{−∆t/2}^{∆t/2} p(t) cos(2πft) dt

But for f < 1/(4π∆t), 2πf < 1/(2∆t), so even at the upper limit of integration cos 2πft ≥ cos(1/4 rad) = cos 14.3° = 0.969. The integrand is therefore very nearly equal to p(t) for all t within the range of integration, so, very nearly,

P(f) = ∫_{−∆t/2}^{∆t/2} p(t) dt = area of p(t),   for f < 1/(4π∆t)

Now, if the objective of sampling is to recover s_m(t) by ultimately using a low-pass filter, it is only necessary that ∆t < 1/(4πf_m), where f_m is the maximum frequency contained in S_m(f). If the sampling is to be used to produce AM, the first repetition of the spectrum is the one that is used, so the carrier frequency and the sampling frequency are the same. The pulse then has to be about 8 percent of the carrier period; with 360° to a cycle, the pulse should not be wider than about 28.6°.

The plate modulator operates on this principle and is shown in Fig. 8.16. The modulating signal is placed in series with the d-c supply of the circuit, so the plate voltage on the triode is very nearly E_bb + s_m(t). The L−C resonant circuit provides a low-impedance path for the modulating signal and a high impedance to the carrier. The resonant circuit is also the filter that removes all but those frequencies near the carrier frequency.

The triode is biased well below cutoff, so when no carrier signal is applied to its grid, the triode does not conduct at all. The carrier is applied to the control grid with an amplitude that ensures that the tube conducts only when the crest of the carrier is reached. When it conducts, for all intents and purposes the current can be considered an impulse as far as the band of frequencies near f_c is concerned. Furthermore, the amount of current produced will be proportional to the plate voltage at the moment of conduction, so the pulse will have an area under it proportional to E_bb + s_m(t). The current pulses produced then pass through the bandpass filter represented by the L−C circuit, and the output voltage will be proportional to

[E_bb + s_m(t)] cos 2πf_c t

If the maximum amplitude of s_m(t) never exceeds E_bb, this is an ordinary AM signal. The plate modulator has the added advantage that nearly 100 percent modulation can be achieved with relatively little distortion.


Another practical application of pulse modulation is time multiplexing. If it is possible to send a message by sending only its samples, is there not some use that can be made of the time between the samples? Consider two signals s_m1(t) and s_m2(t). These might be two voice signals on a telephone line. Since 2.5 kHz is the highest frequency needed or used in voice transmission, put both signals through a 2.5 kHz low-pass filter and then sample each at a 5 kHz rate. Now stagger the sampling pulses, so that those belonging to one message alternate with those of the other message. Two such signals appear in Fig. 8.17(a) and the alternating sample pulses appear in Fig. 8.17(b).

This pulse train (note that its fundamental frequency is 10 kHz) is transmitted, and at the other end of the line the pulses are separated out by some type of synchronized switching device or commutator; each set is then passed through a 2.5 kHz low-pass filter, and both signals are thus recovered simultaneously. Observe that the spectrum of the pulses that carry the sampled information must be transmitted by this channel without distortion, so the channel bandwidth must be determined not by the signals being modulated but by the pulses. To use this bandwidth efficiently, it is necessary to send more than two messages simultaneously.

8.2. FILTERS

Filters are an essential part of the design of linear systems and are used to modify the signal or to eliminate an unwanted frequency band. In Sec. 8.1 we used filters in communication systems; indeed, any communication system involves filters. The Fourier transform is not used to design the filters, but rather to establish the design criteria for them. It tells us what is possible and what is not possible, and it explains some characteristics of practical filters.

A physically realizable filter is one whose impulse response is necessarily zero for t less than zero. However, in the frequency domain it is not easy to specify criteria for physical realizability. For example, if h(t) is the impulse response of a realizable filter, then h(t) = 0 for t < 0. Its transform H(f) can be expressed as

H(f) = R(f) + jI(f)   (8.34)

and

H(f) = |H(f)| e^{jθ(f)}   (8.35)

where R(f) and I(f) are the real and imaginary parts respectively. Eq. (8.35) is the form used in filter theory. For example, a bandpass filter should have constant magnitude over the passband and zero magnitude outside the band; in this case it is the magnitude of the transfer function that must be considered.

Suppose a signal band-limited to |f| < f_c is put through a filter whose magnitude is constant over this frequency range but may be anything at all at other frequencies. Let the transform of this signal be X(f) and the magnitude of the transfer function be A. Then the transform of the output signal will be

X_0(f) = A X(f) e^{jθ(f)}   (8.36)

If the device is to reproduce the signal without distortion, then X_0(f) should be proportional to X(f). This means that θ(f) = 0. But θ(f) = −2πft₀ would not be objectionable either, since

X_0(f) = A X(f) e^{−j2πft₀}   (8.37)

has an inverse transform which is the input signal changed in amplitude by A and delayed by t₀ sec. Assuming that a time delay of t₀ seconds is not objectionable, the criterion for distortionless filtering is a phase function that is linear with frequency.


The ideal bandpass filter then has the following properties:

1. The magnitude of the transfer function is constant in the passband and zero otherwise.

2. The phase function θ(f) must be a linear function of frequency in the passband, with negative slope −2πt₀.

As with all ideals, the ideal filter is unattainable. However, it can be approached arbitrarily closely if one is willing to increase the delay time t₀.

The criterion that the amplitude of the transfer function must meet to ensure that the impulse response is zero for negative t is called the Paley-Wiener condition. It states that |H(f)| may be the magnitude of the Fourier transform of a function which is zero for t less than some finite time if and only if the integral

∫_{−∞}^{∞} ln|H(f)|/(1 + f²) df   (8.38)

converges, that is, if it is less than infinity. If the integral converges, there exists a phase function, not necessarily linear, that can be associated with |H(f)| so that its inverse transform is zero for negative t.

8.2.1 The Ideal Low-Pass Filter

The transfer function of the ideal low-pass filter is

H(f) = A[u(f + f_c) − u(f − f_c)] e^{−j2πft₀}   (8.39)

where f_c is the cutoff frequency and t₀ is the delay time.

The inverse transform of Eq. (8.39) is

h(t) = (A/π) sin[2πf_c(t − t₀)]/(t − t₀)   (8.40)

This function is shown in Fig. 8.18; it starts wiggling before t = 0 and hence is not realizable. The unit step response of this filter is

(A/π) ∫_{−∞}^{t} sin[2πf_c(t′ − t₀)]/(t′ − t₀) dt′ = A ξ[2πf_c(t − t₀)]   (8.41)

This function resembles a step function and is shown in Fig. 8.19. If f_c is made large, the frequency of the wiggles is large. Notice, however, that no matter how large f_c is made, there will always be an overshoot and an undershoot at the discontinuity. Even though the filter above is not realizable, it will be shown that realizable filters demonstrate this overshoot if the magnitude of the transfer function falls rapidly in the vicinity of f_c. This peculiarity is known as the Gibbs phenomenon. Observe that the rise time of the step response is very nearly equal to the reciprocal of the slope of the argument 2πf_c(t − t₀) at t = t₀; the rise time of the output is then approximately π/(2πf_c) = 1/(2f_c). If the input were a rectangular pulse instead of a step function, the output could be viewed as a positive step followed by a negative step. If the output is to look anything like a pulse, the step-function rise time should not exceed one half the pulse length. This leads to the rule of thumb: the bandwidth of a filter must be at least the reciprocal of the pulse length if the pulse is not to be seriously altered in amplitude. If the delay time t₀ is large, then

sin[2πf_c(t − t₀)]/(t − t₀)

will be small for negative t. Therefore, we should be able to approximate the ideal filter by making the impulse response of a filter

(A/π) sin[2πf_c(t − t₀)]/(t − t₀),   t ≥ 0

Unfortunately, this function is not symmetrical about the time t = t₀, so when it is transformed the result will not have linear phase. Symmetry can be achieved, however, by chopping off the response of the ideal filter for t > 2t₀ as well. The resulting response is shown in Fig. 8.20.

The transform of this function can be approached in two ways. The impulse response above can be viewed as the original impulse response multiplied by [u(t) − u(t − 2t₀)]. The transform of this rectangular function is

[sin(2πft₀)/(πf)] e^{−j2πft₀}

and this can be convolved with the transform of the ideal filter to find the transfer function of the almost-ideal filter. If H_ai(f) is the transfer function of this realizable approximation to the ideal filter, then

H_ai(f) = ∫_{−f_c}^{f_c} A e^{−j2πyt₀} e^{−j2π(f−y)t₀} sin[2π(f − y)t₀]/[π(f − y)] dy
        = ∫_{−f_c}^{f_c} A sin[2π(f − y)t₀]/[π(f − y)] e^{−j2πft₀} dy

Since the exponential and the constant under the integral sign are independent of y, they may be brought outside the integral sign. If the new variable x = f − y is used, with dx = −dy and the limits changed to f + f_c and f − f_c,

H_ai(f) = (A/π) e^{−j2πft₀} ∫_{f+f_c}^{f−f_c} [−sin(2πt₀x)/x] dx
        = (A/π) e^{−j2πft₀} ∫_{f−f_c}^{f+f_c} sin(2πt₀x)/x dx   (8.42)

H_ai(f) = A{ξ[2πt₀(f + f_c)] − ξ[2πt₀(f − f_c)]} e^{−j2πft₀}   (8.43)

The magnitude and phase of Eq. (8.43) are plotted as a function of frequency in Fig. 8.21. Notice that, since the function becomes negative in alternate intervals below −f_c and above f_c, the phase function has discontinuities of magnitude π every time a change in sign occurs. This in no way detracts from the linearity requirement of the phase.


We can now draw two conclusions from this result:

1. The delay time of such a filter is a function of the slope of the frequency response at the cutoff frequency; the larger this slope, the larger the delay time. The price for a good low-pass amplitude characteristic thus appears as a delay in the response.

2. The response of such a filter will be accompanied by about 9 percent overshoot and undershoot at points where the input has a discontinuity.
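Both conclusions can be illustrated with the ideal filter itself. Interpreting the function ξ of Eq. (8.41) as the running integral of the sinc impulse response, ξ(x) = 1/2 + Si(x)/π (an assumption about the notation, not stated explicitly in the text), the sketch below evaluates the step response with scipy's sine integral, using normalized A, f_c and t₀, and shows the familiar 9 percent Gibbs overshoot.

    # Step response of the ideal low-pass filter, Eq. (8.41), via the sine integral.
    import numpy as np
    from scipy.special import sici

    A, fc, t0 = 1.0, 1.0, 0.0              # normalized amplitude, cutoff, delay
    t = np.linspace(-5, 5, 100_001)
    Si, _ = sici(2*np.pi*fc*(t - t0))
    step = A*(0.5 + Si/np.pi)              # assumed form of A*xi[2*pi*fc*(t - t0)]
    print(step.max())                      # ~1.089, i.e. about 9 percent overshoot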

8.2.2 The High-Pass Filter

A filter with transfer function Ae^{−j2πft₀} for all frequencies is physically realizable. Therefore, if we subtract the transfer function of the almost-ideal low-pass filter from this function, the resulting transfer function will also be physically realizable. Thus

H_hp(f) = Ae^{−j2πft₀} {1 − ξ[2πt₀(f + f_c)] + ξ[2πt₀(f − f_c)]}

will have an impulse response equal to an impulse of value A at t = t₀ minus the impulse response of the low-pass filter with the same cutoff frequency. This appears in Fig. 8.22(a). Its integral is the step response, which is an upside-down version of the low-pass step response but with a positive discontinuity of A at t = t₀, as shown in Fig. 8.22(b). The magnitude and phase functions for this filter are shown in Fig. 8.23.

8.2.3 The Bandpass Filter

The bandpass filter may be thought of as the result of subtracting from a constant both a high-pass and a low-pass filter, the cutoff frequency of the low-pass filter being less than that of the high-pass filter. Alternatively, a bandpass filter can be visualised as a low-pass filter translated to the left and to the right by f₀ Hz, where f₀ is the centre frequency of the filter. This means that the impulse response of the bandpass filter is the same as that of the low-pass filter multiplied by cos 2πf₀t. In fact, this result can be made to apply approximately to any signal applied to a bandpass filter.

8.2.4 Practical Filters

The first filter designs were used in audio work, so there was little concern for linear phase functions. If a bandpass or high-pass filter is to be designed using lumped elements R, L, and C, it is necessary only to design the equivalent low-pass filter with a cutoff frequency of one rad/sec and a one-ohm impedance level. We proceed as follows.

Suppose we have been given a low-pass filter constructed of R, L and C elements, and we know its circuit diagram. What would happen if all the inductances and capacitances were removed and replaced by inductances and capacitances half as large? The filter would have the same characteristics, but the bandwidth would be doubled. Therefore, if the L and C values are reduced in size by a factor 'a', the transfer function of the network will have the same amplitude and phase variation, but with the frequency axis multiplied by the constant 'a'. If we are satisfied with the cutoff frequency but dissatisfied with the impedance level, the impedance level can be raised by a factor 'b'; this means that the R and L values are multiplied by 'b' and the C values are divided by 'b'.

If, then, we design a low-pass filter with cutoff frequency 1/(2π) Hz and a one-ohm impedance level, then to convert it to a low-pass filter with cutoff frequency f_c and impedance level R₀ we simply

multiply all resistances by R₀,
multiply all inductances by R₀/(2πf_c),
divide all capacitances by 2πf_c R₀.

If we wish to design a bandpass filter with impedance level R₀ and bandwidth f_c, it is necessary to design the corresponding low-pass filter with the same impedance level and cutoff frequency f_c and then place (1) in series with every inductance a capacitance that is series resonant with it at the desired centre frequency f₀, and (2) in parallel with every capacitance of the low-pass filter an inductance that is parallel resonant with it at the centre frequency f₀. This means that the impedance of an inductance, jωL = j2πfL, is replaced by j2πfL − j/(2πfC), where C is related to L by

C = 1/(4π²f₀²L)

so that the pair is resonant at f₀. This amounts to replacing j2πfL by j2πL(f − f₀²/f). It can be shown in a similar way that placing an L in parallel with each capacitance is equivalent to replacing the admittance j2πfC by j2πC(f − f₀²/f), where the inductance in each case is related to C by

L = 1/(4π²f₀²C)

in order that each pair will be resonant at the frequency f₀. Both of these operations can be expressed mathematically by saying that the frequency f is replaced by

f − f₀²/f = (f − f₀)(f + f₀)/f

This is not quite equivalent to a translation to the left or right for bandpass filters. For frequencies near f₀, however, the function (f − f₀)(f + f₀)/f behaves like 2(f − f₀), and so if the cutoff frequency of the original low-pass filter is f_c, the upper cutoff frequency of the bandpass filter will be about f₀ + f_c/2 and the lower cutoff frequency near f₀ − f_c/2. The transfer function has thus shrunk in size, but the resulting bandwidth is the same. Actually, the new cutoff frequencies are related to each other by

f_H − f_L = f_c   and   f_H f_L = f₀²

so the bandwidth is exactly f_c, but the centre frequency is the geometric mean of the upper and lower cutoff frequencies.

Finally, if it is required to design a high-pass filter with cutoff frequency f_c and impedance level R₀, we need only design a low-pass filter with the same level and cutoff frequency and then replace all C elements by L elements that are resonant with them at the cutoff frequency f_c, and replace all L elements by C elements that are resonant with the L elements at the cutoff frequency. The phase angle will change sign, however. On either side of the frequency f_c the variation of the impedance of the new elements with frequency will be just the opposite of those they replaced, and so the opposite transfer function will be obtained. It is as though the frequency f were replaced by f_c²/f. Since high-pass and bandpass filters can be obtained from low-pass filters, we will consider only the design of low-pass filters and compare them with the ideal filter.
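A small numerical sketch of these element transformations is given below. It is not part of the original text; it assumes Python with the math module, and it uses the prototype element values that appear in Example 8.4 later in this section. It scales a 1-ohm, 1/(2π)-Hz prototype to a given impedance level and cutoff frequency, and then computes the resonating elements for the corresponding bandpass filter.

    # Low-pass scaling and band-pass resonating elements (illustrative values).
    import math

    def lowpass_scale(L, C, R0, fc):
        """Scale prototype L (H) and C (F) to impedance R0 (ohm) and cutoff fc (Hz)."""
        return L*R0/(2*math.pi*fc), C/(R0*2*math.pi*fc)

    def bandpass_resonators(L, C, f0):
        """Series C for each L and parallel L for each C, resonant at f0 (Hz)."""
        w0 = 2*math.pi*f0
        return 1.0/(w0**2 * L), 1.0/(w0**2 * C)   # (C_series, L_parallel)

    L, C = lowpass_scale(4/3, 1/2, R0=1e4, fc=15e3)
    print(L, C)                 # ~4/(9*pi) H and ~1e-8/(6*pi) F, as in Example 8.4
    Cs, Lp = bandpass_resonators(L, C, f0=1e5)
    print(Cs, Lp)               # resonating series C and parallel L for a 100-kHz centre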


8.2.5 Butterworth Filters

The low-pass Butterworth filter of order n has a transfer function with magnitude

|H_Bn(jω)| = 1/(1 + ω^{2n})^{1/2}   (8.44)

It is seen that the magnitude of the transfer function is 1/√2 at ω = 1 rad per sec, and this is called its cutoff frequency. It is also well known that

|H_Bn(jω)|² = H_Bn(jω) H*_Bn(jω) = 1/(1 + ω^{2n})   (8.45)

But the conjugate of H_Bn(jω) is H_Bn(−jω). Eq. (8.45) can therefore be written as

H_Bn(jω) H_Bn(−jω) = 1/(1 + ω^{2n})   (8.46)

If we now go backwards and put s = jω, or ω = −js, Eq. (8.46) reads

H_Bn(s) H_Bn(−s) = 1/[1 + (−js)^{2n}] = 1/[1 + (−1)^n s^{2n}]   (8.47)

Now, if H_Bn(s) is the transfer function of a realizable filter, all its poles must be in the left half plane; H_Bn(−s) must then have all its poles in the right half plane. It is therefore necessary only to factor the denominator of Eq. (8.47), keep the left-half s-plane poles, and throw the others away. The roots of the denominator are the 2n-th roots of −1 or +1, depending on whether n is even or odd. Thus the roots lie on the unit circle in the s-plane, and it can be shown that those lying in the left half plane are

s_k = −sin[π(1 + 2k)/(2n)] + j cos[π(1 + 2k)/(2n)]   (8.48)

for k = 0, 1, 2, . . . , n − 1.

If these roots are put in the appropriate factors, then

H_Bn(s) = 1/[(s − s₀)(s − s₁) . . . (s − s_{n−1})]   (8.49)

Example 8.3

Find the transfer function and step response of a third-order Butterworth filter.

Solution: Since n = 3, the poles of H_B3(s) are

s₀ = −sin(π/6) + j cos(π/6) = −1/2 + j√3/2

s₁ = −sin(3π/6) + j cos(3π/6) = −1

s₂ = −sin(5π/6) + j cos(5π/6) = −1/2 − j√3/2

Then the denominator polynomial is

(s + 1/2 − j√3/2)(s + 1/2 + j√3/2)(s + 1) = (s² + s + 1)(s + 1)

and

H_B3(s) = 1/(s³ + 2s² + 2s + 1)

The unit step response will be the inverse Laplace transform of H_B3(s)/s; that is, of

1/{s(s + 1)[s + 1/2 − j√3/2][s + 1/2 + j√3/2]}
 = 1/s − 1/(s + 1) + (1/√3)∠90°/(s + 1/2 − j√3/2) + conjugate of the last term

This makes the response

1 − e^{−t} + (2/√3) e^{−t/2} cos(√3t/2 + 90°) = 1 − e^{−t} − (2/√3) e^{−t/2} sin(√3t/2)

This response is shown in Fig. 8.24 along with the response of the ideal filter. The low-frequency group delay of the Butterworth filter can be calculated by noting that, with s = jω, the transfer function is

H_B3(jω) = 1/{(1 + jω)[1/2 + j(ω − √3/2)][1/2 + j(ω + √3/2)]}

for which the angle is

θ(ω) = −tan⁻¹ω − tan⁻¹2(ω − √3/2) − tan⁻¹2(ω + √3/2)

The negative derivative of this with respect to ω yields the group delay t_g. That is,

t_g = −dθ(ω)/dω = 1/(1 + ω²) + 2/[1 + 4(ω − √3/2)²] + 2/[1 + 4(ω + √3/2)²]

At ω = 0 this is 2 sec, and so the response of the ideal filter is drawn with this delay. The overshoot of the Butterworth filter is about 8 percent, but there is no undershoot. Since the transfer function falls off like 1/f³ for large frequencies, the attenuation increases by a factor of 2³ per octave, or 18 dB per octave.
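The pole locations, the step response and the roughly 8 percent overshoot quoted above can all be confirmed with scipy. The following sketch is an illustration only (scipy.signal is assumed to be available); it builds the normalized third-order Butterworth filter and compares its computed step response with the closed-form expression derived in Example 8.3.

    # Third-order Butterworth: poles, step response and overshoot.
    import numpy as np
    from scipy import signal

    b, a = signal.butter(3, 1.0, analog=True)     # H(s) = 1/(s^3 + 2s^2 + 2s + 1)
    print(np.roots(a))                            # -1, -0.5 +/- j*sqrt(3)/2

    t = np.linspace(0, 15, 1500)
    t, y = signal.step(signal.lti(b, a), T=t)
    y_text = 1 - np.exp(-t) - (2/np.sqrt(3))*np.exp(-t/2)*np.sin(np.sqrt(3)*t/2)
    print(np.max(np.abs(y - y_text)), y.max())    # closed form matches; peak ~1.08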

Example 8.4

1. Show that the transfer function V₀(s)/I_s(s) for the circuit shown in Fig. 8.25 is a third-order Butterworth filter. Plot the magnitude and phase of its transfer function.

2. Use the filter given to design a low-pass filter with an impedance level of 10 kΩ and a cutoff frequency of 15 kHz.

3. Use the circuit of (2) to design a bandpass filter with a bandwidth of 15 kHz but centered at 100 kHz, with a 10 kΩ impedance level.

4. Use the circuit of (2) to design a high-pass filter with a cutoff frequency of 15 kHz and an impedance level of 10 kΩ.

Solution:

1. By circuit analysis, it can be shown that

I_s(s)/V₀(s) = 1 + s/2 + (3s/2)[1 + 4s(1 + s/2)/3]

so that

V₀(s)/I_s(s) = 1/(s³ + 2s² + 2s + 1)

which is H_B3(s). The magnitude and phase functions are

|H_B3(jω)| = 1/√(1 + ω⁶)

and

θ(ω) = −tan⁻¹ω − tan⁻¹2(ω − √3/2) − tan⁻¹2(ω + √3/2)

Since

H_B3(s) = 1/[(s + 1)(s² + s + 1)]

then

H_B3(jω) = 1/[(1 + jω)(1 − ω² + jω)]

and θ(ω) could also be written as

θ(ω) = −tan⁻¹ω − tan⁻¹[ω/(1 − ω²)]

The magnitude and phase functions are shown in Fig. 8.26.

2. Since the level is to be raised to 10k = 104 multiply R and L by 104 , divide C by 104 . Thefrequency level is to go to 15.103Hz from 1

R = 1 to R = 10

L =4

3to L =

4.104

3.2π.15.103=4

9πh

C =1

2to C =

1/2

104.2π.15.103=10−8

6π=10−2

6πµf

and c = 3

2goes to three times the latter, or C = 10−2

2πµf .

The circuit diagram is shown in Fig. 8.27.

3. For a bandpass filter at f0 = 105 ; in series with 4

9πh, put a C such that

C =1

4π2f 20L=900

16πµf

In Parallel with the 10−2/(6π)µf capacitance put an

L =1

4π2f20=

3

200πh

Finally, put an inductance of 1/3 of last value in parallel with the 10−2/(2π)µf capacitanceand obtain the circuit in Fig. 8.28.

4. For the highpass filter with cutoff at 15kHz replace L and C terms by elements resonantwith them at 15kHz. L = 4

πh is replaced by

C =1

4π2f20L

=10−2

4πµf

C = 10−2

(2π)µf is replaced by L = 2

(9π)h, and the 10−2

(6π)µf capacitance becomes L = 2

3πh. The

circuit is shown is in Fig. 8.29.

8.2.6 Chebyshev Filters

The Chebyshev polynomials are defined by

Cn(x) = cos(n cos−1 x) (8.50)

161

Page 162: Linear Transforms
Page 163: Linear Transforms
Page 164: Linear Transforms

It can be shown that these polynomials satisfy the recurrence formula

Cn = 2xCn−1(x) − Cn−2(x) (8.51)

and so if the first two can be obtained then, the others also can.

Letting n = 0 in Eq. (8.50)

C0(x) = cos(0) = 1 (8.52)

Letting n = 1 gives

C1(x) = cos(cos−1 x) = x (8.53)

Now C2 can be found by Eq. (8.51)

C2(x) = 2x(x) − 1 = 2x2 − 1 (8.54)

andC3(x) = 2x(2x

2 − 1)− x = 4x3 − 3x (8.55)

These polynomials are useful because in the interval −1 ≤ x ≤ 1 the polynomials oscillateback and forth from +1 to −1, and are always equal to +1 at x = 1 and ±1 at x = −1. Thepolynomials are odd if n is odd and even if n is even. The nth-order Chebyshev filter hasthe general form

|Hcn(jω)| =1p

1 + 2C 2n(ω)

(8.56)

Where is commonly chosen to be less than or equal to one. A device where = 1 leads to a3 db variation in the transfer function in the passband, and this is usually considered large.

As is in the case of Butterworth filters the magnitude squared is formed, with ω set equalto −js, then the roots of the resulting denominator that lie in the left half s-plane can beshown to be at

Sk =(a− 1)/a

2sin

½π(1 + 2k)

2n

¾+ j

a+1

a

2cos

½π(1 + 2k)

2n

¾(8.57)

where k = 0, 1, . . . , n − 1 and

a =

(∙1 +

12

¸1/2+1)1/n

(8.58)

These poles lie on an ellipse whose semi-major axis lies on the j axis and whose length is(a +1/a)/2 and whose semi minor axis lies on the real axis is (a− 1/a)/2 in length.

164

Page 165: Linear Transforms

Example 8.5

Choose = 1/2 and determine the magnitude phase and step response of a third orderChebyshev low-pass filter.

Solution: Since C3(ω) = 4ω3 − 3ω, then C2

3 (ω)/4 is 4ω6 − 6ω4 +9ω2/4, and so the transfer

function is

|Hc3(jω)| =1

(1 + 9ω2/4 − 6ω4 + 4ω6)1/2

with = 1/2. Then from Eq. (8.58)

a = {√5 + 2}1/3 =

√5 + 1

2

making a− 1/a = 1 and a +1/a =√5. Then the roots in the left half plane are at

s0 = −1

2sin

π

6+ j

√5

2cos

π

6

= − 14+ j

√15

4

s1 = −1

2sin

π

2= −1

2

s2 = s∗0 = −1

4− j

√15

4

Then the denominator of the transfer function is

(s+1

2)(s+

1

4− j

√15

4)(s +

1

4+ j

√15

4)

= (s +1

2)(s2 +

s

2+ 1) = s3 + s2 +

5s

4+1

2

Then H(s) has to be

H(s) =2

2s3 + 2s2 + 5s4 + 1

where the denominator polynomial had to be multiplied by two to make the constant termunity.

Since θ(f) is the negative of the angle of the denominator polynomial, then

θ(f ) = −½tan−1 2ω + tan−1

ω

2(1− ω2)

¾If θ(f ) is differentiated with respect to ω and ω set equal to zero., then the delay time for lowfrequencies can be shown to be 2.5 sec. Fig. 8.30 shows the magnitude and phase functionand Fig. 8.31 shows the step response of the ideal filter having 2.5 sec. delay.

165

Page 166: Linear Transforms
Page 167: Linear Transforms

PROBLEMS

8.1 Let x(t) = cos 220πt+cos 440πt. Multiply this signal by cos(2πx106t) to produce AM −SC ,then multiply by cos{106 − 110)t} to detect the AM − SC wave. Assuming that all but theaudio frequencies are filtered out in the last step, what is the output signal? This will indicatewhat happens when music is received by an AM − SC receiver with drifting oscillator.

8.2 Suppose that in Prob. 8.1 only the lower sideband is retained when x(t) = cos 220πt +cos 440πt is multiplied by cos(2πx106t). Thus only these frequencies whose magnitude areless than 1MHz are retained. What will the output be now when signal is multiplied bycos{2π(106 − 110)t} and the frequencies above audio range are filtered out.

8.3 The signal Xm(t) in Fig. 8.32 is phase modulated and the output is

x(t) = Acos{6.73x107t+ bxm(t)}

(a) What is the smallest bandwidth the phase modulated signal can have assuming that b, butnothing else can be altered at will {b is not equal to 0 of course } and xm(t) is 1KHz sinusoid.

(b) If Xm(t) = sin377t and b = 1000, what is the maximum instantaneous frequency deviation?What is the carrier frequency? What is the bandwidth occupied by the signal?

8.4 The concerned regulatory agency has decreed that the maximum frequency deviation forFM stations will be 75KHz.

a) If the maximum and minimum modulation frequencies it is desired to transmit are 15KHzand 20Hz. then what range will β have?

b) What bandwidth will this require?

c) If an AM − SSB with vestigial carrier were used instead of FM for the same signals asin (a) what bandwidth will be required? What is the ratio of the FM bandwidth to theAM − SSB bandwidth?

8.5 For the low-pass to bandpass conversion let us suppose that the lowpass filter has a cutofffrequency at fc. Since the alteration of the circuit is equivalent to replacing f by f − f 20 /fthe behaviour of the new circuit at f0 will be same as that of the old at f = 0, and thebehaviour of the new circuit at the frequency f given by

f − f 20f

± fc

must be the same as the behaviour of the old at fc = ±f. Let fH be the positive solution of

f − f20f

= fc

and fL the positive solution

f − f20f

= −fc

a) Find fH and fL in terms of f0 and fc

b) Show that fH − fL = fc

167

Page 168: Linear Transforms

c) Show that fH − fL = f20

8.6 The constant k low-pass filter is shown in Fig. 8.33. This has a nominal impedence level ofone ohm and a cutoff frequency of 1/(2π)Hz

a) Design a high pass constant k filter with impedence level 5K and cutoff frequency of 30Hz

b) Design a bandpass filter with bandwidth 10KHz, centre frequency 80KHz and 10K impe-dence level.

8.7 If the constant k filter of Fig. 8.34 is terminated in one Ohm and driven by a source witha one Ohm internal impedence, as shown in Fig. 8.34 with R = 1, then find the transferfunction of the filter. Show in particular that with this resistance level the filter is a thirdorder Butterworth.

8.8 Repeat Problem 8.9 but this time R = 2 in Fig. 8.34 and show that this is now a third orderChebyshev filter with = 1/2

8.9 Show that the circuit of Fig. 8.35 is a second order Butterworth filter. Find and Plotcarefully its response. Does it have overshoot?

8.10 Show that the circuit appearing in Fig. 8.36 is Chebyshev second order filter with = 3/4,Find its step response. Does it have overshoot?

168

Page 169: Linear Transforms
Page 170: Linear Transforms
Page 171: Linear Transforms

CHAPTER - IX

Z-TRANSFORM

9.1 INTRODUCTION

Digital signal processing has become an established method of dealing with electrical wave-forms, and the associated theory of discrete time systems can often be employed in a numberof science and technology disciplines. Typical applications of this technique are analysis ofbiomedical signals, vibration analysis, picture processing, analysis of seismic signals, speechanalysis and sampled data control systems. The signals in sampled data system may be ofthe form of a periodic or an aperiodic pulse train with no information transmitted betweentwo consecutive pulses. This train of pulses may be natural or man made through somesampling process.

A simple but adequate model of the sampling process is one which considers a continuousinput signal, x(t), to be sampled by a switch closing periodically for a short time, τ seconds,with a sampling interval T seconds (Fig. 9.1). Referring to Fig. 9.1, it is seen that theswitch output is a train of finite width pulses. However, if the pulse width, τ , is negligiblecompared with the interval between successive samples, T , the output of the sampler can beconsidered to be a train of impulses with their height proportional to x(t) at the samplinginstant (Fig. 9.2)

The ideal sampling function δT (t) represents a train of unit impulses, and is defined as

δT (t) =

∞Xn=−∞

δ(t− nT ) (9.1)

where δ(t) is the unit impulse function occurring at t = 0, and δ(t−nT) is a delayed impulsefunction occurring at t = nT .

Thereforex∗(t) = x(t).δT (t) (9.2)

The value of x(t) is needed only at t = nT and furthermore for a physical system x(t) = 0for t < 0, therefore

x∗(t) =

∞Xn=0

x(nT )δ(t − nT) (9.3)

Thus we see that x∗(t) is a weighted sum of shifted unit impulses.

Taking the Laplace transform of x∗(t) directly from Eq. (9.3)

X ∗(s) = L[x∗(t)] =∞Xn=0

x(nT )L [δ(t − nT )] (9.4)

Since the Laplace transform of the unit impulse δ(t− nT ) is e−nTs , Eq. (9.4) becomes

X∗(s) = Σ∞n=0x(nT)e−nTs (9.5)

171

Page 172: Linear Transforms
Page 173: Linear Transforms

We can also expand Eq. (9.1) as Fourier series, that is

δT (t) =

∞Xn=−∞

Cnejnωst

where

Cn =1

T

Z T

0

δT (t)e−jnωstdt

and ωs is the sampling frequency equal to 2π/T rad/sec. Since the area of an impulse isunity, then Z T

0

δt(t)e−jnωstdt = 1

and therefore Cn =1

T, hence

δT (t) =1

T

∞Xn=−∞

ejnωst

We have seen in Fig. 9.2 that for the impulse modulator, x∗(t) = δT (t)x(t), therefore

x∗(t) =1

T

∞Xn=−∞

x(t)ejnωst (9.6)

Taking Laplace transform and using the associated shifting theorem, we obtain

X∗(s) = L[x∗(t)] = 1

T

∞Xn=−∞

X(s− jnωs)

therefore

X ∗(jω) =1

T

∞Xn=−∞

X[j(ω − nωs)] (9.7)

Thus we see from Eq. (9.4) that as a result of impulse sampling the frequency spectrumof x(t) is repeated infinitum at intervals of jw. Let us now consider the frequency spectraof X∗(t). Referring to Fig. 9.3, if ωs/2 is greater than the highest frequency component ofx(t) (Fig. 9.3a), then the original signal can theoretically be recovered from the spectra ofx ∗ (t) (Fig. 9.3b). In contrast if ωs/2 is not greater than the highest frequency componentin the continuous signal (Fig. 9.3c), then the folding of frequency response function occurs,and consequently the original signal cannot be reclaimed from the sampled data signal. Theerrors caused by the folding of the frequency spectra are generally referred to as aliasingerrors, which may be avoided by increasing the sampling frequency.

It has been established that the sampled data signal has infinite number of complementaryfrequency spectra, which means that there must be an infinite number of associated polezero patterns in its s-plane representation. Consequently, the analysis of any sampled datasignal or system is extremely difficult when working in the s-plane. However, fortunately, itis possible to use Z-transfrom instead, which gives a good mathematical description.

9.2 THE Z-TRANSFORM

The z-transform is simply a rule that converts a sequence of numbers into a function of thecomplex variable z, and it has properties that enable linear difference equations to be solvedusing straight forward algebraic manipulations.

Suppose that we letz = eST = e(σ+jω)T ,

then |z|= eσT and 6 z = ωT , so that any point sx in the s-plane transforms to a correspondingpoint zx in the z-plane as shown in Fig. 9.4.

173

Page 174: Linear Transforms
Page 175: Linear Transforms

Referring to Table 1.1, it is seen that the imaginary axis in the s-plane transform to thecircumference of the unit circle in z-plane. When

Pis negative |z| < 1 and when

Pis

positive |z| > 1. Hence a strip ωs wide in the left hand half of the s-plane transforms to thearea inside the unit circle in the z-plane (Fig. 9.5).

Table 9.1

σ = 0, ωs = 2π/Tjω z = 1 6 ωT0 1 6 0ωs8

1 6 45oωs4

1 6 90o3ωs8

1 6 135oωs2

1 6 180o5ωs8

1 6 225o3ωs4

1 6 270o7ωs8

1 6 315o

ωs 1 6 360o

The most important effect of z-transformation is that since the poles and zeros of x∗(t) arespaced at intervals of ωs = 2π/T rad/sec in the jω direction, all sets of poles and zeros inthe s-plane transform to a single set poles and zeros in the z-plane.

Let us consider Eq. (9.5)

X∗(s) =

∞Xn=0

x(nT )e−nTs

Since z = esT , the above equation can be written as

X(z) =

∞Xn=0

x(nT)z−n (9.8)

In general,

any continuous function, which possesses Laplace transform, also has a z-transform for thesampled function.

Example 9.1 Let x(t) = e−at, find X (z) for sampling period T .

Solution:x(nT ) = e−anT (9.9)

From Eq. (9.5)

X∗(s) =

∞Xn=0

e−ante−nTs (9.10)

=1

1− e−(s + a)T, |e−(s + a)T | < 1

substituting z = esT ,

[X∗(s)]z = esT =1

1− e−aTz−1(9.11)

X (z) =z

z − e−aT, |z| > e−aT (9.12)

175

Page 176: Linear Transforms
Page 177: Linear Transforms

Example 9.2

Suppose that input signal of a digital filter is x(t) = sin ωt what is the z-transform of x∗(t)?

Solution:

x∗(nT ) = sin ωnT = (ejnωT − e−jnωT )/j2, therefore from Eq. (9.5) and Eq. (9.8).

X(z) =

∞Xn=0

∙ejnωT − ejnωT

j2

¸Z−n (9.13)

= (1/2j)

"∞Xn=0

(ejnωT )z−n−∞Xn=0

(e−jnωT )z−n

#(9.14)

now∞Xn=0

(ejnωT )z−n =z

z − ejωT, f or |z| > 1 (9.15)

similarly∞Xn=0

(e−jnωT )z−n =z

z − e−jωT, for |z| > 1 (9.16)

therefore

X(z) =z

j2

∙1

z − ejωT1

z − e−jωT

¸, |z| > 1

=z

j2

∙ejωT − e−jωT

z2 − (ejωT + e−jωT )z +1

¸X (z) =

z sinωT

z2 − 2z cos ωT +1 (9.17)

Example 9.3

Suppose that the transfer function of a system is

X(s) =1

(s+ a)(s + b)(9.18)

Find the corresponding z-transform.

Solution:

X(s) =1

(s+ a)(s + b)

Using partial fraction method.

X(s) =1

b− a.1

s+ a+

1

a− b.1

s+ b

X(s) =1

a− b

∙− 1

s+ a+

1

s + b

¸(9.19)

177

Page 178: Linear Transforms

Now from Table 9.2

X(z) =z

a− b

∙− 1

z− e−aT+

1

z − e−bT

¸=

z(e−bT − e−aT )/a − b

(z − e−at(z − e−bT )(9.20)

Table 9.2

Table of Z - Transforms

Laplace Transform Time Function Z-Transform

1 unit impulse δ(t) 11

sunit stepu(t)

z

z − 11

1− e−T sδT (t) =

P∞n=0 δ(t− nT )

z

z − 11

s2t

Tz

(z − 1)21

s3t2

2

T 2z(z +1)

2(z − 1)31

sn+1tn

n!lims→0

(−n)nn!

∂n

∂an(

z

z − e−aT)

1

s+ ae−at

z

z − e−aT

1

(s + a)2te−at

Tze−aT

(z − e−aT )2

a

s(s+ a)1 − e−aT

(1− e−aT )z

(z − 1)(z − e−aT )

ω

s2 + ω2sin ωt

z sinωT

z2 − 2z cos ωT + 1ω

(s + a)2 + ω2e−aT sinωt

ze−aT sin ωT

ze−2aT − 2zω−aT cos ωT +1s

s2 + ω2cos ωt

z(z− cos ωT )

z2 − 2z cos ωt+ 1s + a

(s + a)2 + ω2e−at cos ωt

z2 − ze−aT cos ωT

z2 − 2ze−aT cos ωt+ e−2at

178

Page 179: Linear Transforms

9.3 THE INVERSE Z-TRANSFORM

Just as in the Laplace transform method, it is often desirable to obtain the time domainresponse from the z-transform. This can be accomplished by one of the following methods:

1. The z-transform is manipulated into partial fraction expression and the z-transform table isused to find the corresponding time function.

2. The z-transform signal X(z) is expanded into power series in powers of z−1 . The coefficientof z−n corresponds to the value of time function x(t) at the nth sampling instant.

3. The time function x(t) may be obtained from X (z) by the inversion integral. The value ofx(t) at the sampling instant t = nT can be obtained by the following formula:

x(nT ) =1

2πj

X (z)zn−1dz (9.21)

where Γ is a circle of radius z = ecT centered at the origin in the z-plane, and c is of such avalue that all the poles of X(z) are enclosed by the circle, i.e., in the region of convergenceof X(z). It may be emphasized that only the value of x(t) at the sampling instants can beobtained from X(z), since X(z) does not contain any information on x(t) between samplinginstants.

Example 9.4 Given the z-transform

X(z) =(1 − e−aT )z

(z − 1)(z − e−at)(9.22)

find the inverse z-transform x∗(t).

1. Partial Fraction Expansion Method

Equation (9.22) may be written as

X(z) =z

z − 1 −z

z − e−aT(9.23)

From the z-transform table (Table 9.2), the corresponding time function at the samplinginstant is

x(nT ) = 1− e−anT (9.24)

hence

x∗(t) =

∞Xn=0

x(nT )δ(t − nT )

=

∞Xn=0

(1− e−anT )δ(t− nT ) (9.25)

2. Power Series Expansion

Expanding X(z) into a power series in z−1 by long division.

X(z) = (1− e−aT )z−1 + (1− e−2aT )z−2 + (1− e−3aT )z−3 + . . . + (1− e−naT )z−n + . . . (9.26)

179

Page 180: Linear Transforms

Correspondingly

X ∗(t) = 0xδ(t) + (1 − e−aT )δ(t− T ) + (1− e−2aT )δ(t− 2T ) + . . . + (1− e−ant)δ(t −nT ) + . . .

=

∞Xn=0

(1− e−anT )δ(t− nT) (9.27)

3. Real Inversion Integral Method

From Eq. (9.21) we have

x(nT) =1

2πj

X(z)zn−1dz =X

Residue of X(z) zn−1

at poles of X (z)

(1− e−at)zn

z − e−at

¯z=1

+(1− e−at)zn

z − 1

¯z=e−at

= 1− e−anT (9.28)

9.4 SOME IMPORTANT THEOREMS OF Z- TRANSFORMS.

1. Linearity of the z-Transform

For all constants C1 and C2 , the following property holds:

Z(C1f1 + C2f2) =

∞Xn=0

[C1f1(nT ) +C2f2(nT )] z−n

= C1

∞Xn=0

f1(nT )z−n +C2

∞Xn=0

f2(nT )z−n

= C1Z(f1) + C2Z(f2) (9.29)

The region of convergence is at least the intersection of regions of convergence of z[f1] andz[f2].

Thus Z is a linear operator on the space of all z-transformable functions f (nT ) for n =0, 1, 2, . . .

2. Shifting Theorem (Real Translation)

If Z[f] = F (z) thenZ [f(t ± nT )] = z±n[F (z)] (9.30)

where n is an integer

180

Page 181: Linear Transforms

Proof: By definition

z[f(t± nT )] =

∞Xn=0

f (kT ± nT )z−k

=

∞Xk=0

f (kT ± nT )z−(k±n).z±n

= z±n∞Xk=0

f(kT ± nT )z−(k±n)

= z±nF (z) (9.31)

This Theorem is very useful in the solution of difference equations. Following a similarprocedure, we can easily obtain the z-transform of the forward difference as well as thebackward differences.

3. Complex Translation

Z£e±aTf(t)

¤= [F (s± a)] = F

£ze±aT

¤(9.32)

Proof: By definition

Z£e±aT f(t)

¤=

∞Xn=0

f (nT )e±anT z−n (9.33)

If we let z1 = ze±aT , Eq. (9.32) becomes

Z£e±aT

¤=

∞Xn=0

e(nT )z−n1 = F (z1) (9.34)

hence,Z£e±aTf (t)

¤= F (ze±aT ) (9.35)

Example 9.5 Apply the complex translation theorem to find the z-transform of te−at

Solution:

If we let f (t) = t, then

F (z) = Z [t] = Tz

(z − 1)2 (9.36)

From Theorem 3

Z [te−at] = F (ze−at) =T (z−at)

(ze−aT − 12)

=Tze−aT

(z − e−aT )2(9.37)

181

Page 182: Linear Transforms

4. Initial Value Theorem

If the function f(t) has the z-transform F (z), and limit of F (z) exists, then

limt→0

f ∗(t) = limz→∞

F (z) (9.38)

5. Final Value Theorem

If the function f(t) has the z-transform F (z), and (1− z−1)F (z) has no poles on or outsidethe unit circle centered at the origin in the z-plane, then

limt→∞

f∗(t) = limz→1(1− z−1)F (z) (9.39)

Example 9.6 Given

F(z) =0.792z2

(z − 1)(z2 − 0.416z + 0.208) (9.40)

determine the initial and final value F (z).

Initial value of F (z): From theorem 4

limt→0

f∗(t) = limz→∞

F (z)

= limz→∞

0.792z2

(z − 1)(z2 − 0.0416z+ 0.208 (9.41)

= 0

Therefore, the initial value of f∗(t) is zero.

Final value of F(z): From Theorem 5

limt→∞

f ∗(t) = limz→1(1− z−1)F (z)

= limz→1(z − 1z

)0.792z

(z2 − 0.0416z +0.208) (9.42)

= limz→1

0.792z

z2 − 0.416 + 0.208 = 1

Therefore, the final value of f∗(t) is unity.

6. Real Convolution Theorem

If f1(t) and f2(t) have the z-transform F1(z) and F2(z) then,

F1(z)F2(z) = Z"∞Xk=0

f1(kT )f2(n− k)T

#(9.43)

Proof: By definition

F1(z)F2(z) =∞Xk=0

f1(kT)z−kF2(z) (9.44)

But we know thatz−kF2(z) = Z [f2(t− kT)] (9.45)

182

Page 183: Linear Transforms

Hence

F1(z)F2(z) =∞Xk=0

f1(kT )Z[f2(t − kT)] (9.46)

=

∞Xk=0

f1(kT )

∞Xn=0

f2 [(n− k)T ] z−n

=

∞Xn=0

"∞Xk=0

f1(kT )f2((n− k)T )

#z−n (9.47)

7. Complex Differentiation (Multiplication by t)

If F (z) is the z-transform of f , then

Z[tf ] = −Tz d

dzF (z) (9.48)

Proof: By definition

Z [tf] =∞Xn=0

(nT )f (nT )z−n

= − Tz

∞Xn=0

f (nT )(−nz−n−1) (9.49)

The term in the bracket is a derivative with respect to z

Z [tf ] = − Tz

∞Xn=0

f (nT )d

dzz−n

= − Tzd

dz

∞Xn=0

f (nT )z−n

= − Tzd

dzF (z) (9.50)

8. Differentiation with respect to second independent variable

Z [ ∂∂a

f(t, a)] =∂

∂aF (z, a) (9.51)

9. Second Independent variable limit value

Z [ lima→a0

f(t, a)] = lima→a0

F (z, a) (9.52)

183

Page 184: Linear Transforms

10. Integration with respect to second independent variable

Z∙Z a

a0

f(t, a)da

¸=

Z a

a0

F (z, a)da (9.53)

if the integral is finite.

9.5 THE PULSE TRANSFER FUNCTION

The transfer function of the open-loop system in Fig. 9.6a is given as

G(s) =C(s)

X(s)(9.54)

For a system with sampled-data, Fig. 9.6b illustrates a network G which is connected to asampler S with sampling period T.

Assume that S1 is an ideal sampler so that x∗(t) =

Pnx(nt)δ(t− nt)

If a fictitious sampler S2 with the same sampling period T as that of S1 is placed at theoutput, the output of the switch S2 to a unit-impulse input is

c∗(t) = g∗(t) =

∞Xn=0

c(nT )δ(t− nT ) (9.55)

where c(nT ) = g(nT ) is defined as the ”weighting sequence” of G.

The signals x(t), x∗(t), c(t), c∗(t) are illustrated in Fig. 9.7.

G∗(s) =

∞Xn=0

g(nT )e−nT s (9.56)

which is the pulse transfer function of system G.

Once the weighing sequence of a network G is defined, the output c(t) and c∗(t) of the systemis obtained by means of the principle of superposition. Suppose that an arbitrary functionx(t) is applied to the system of Fig. 9.6b at t = 0, the sampled input to G is the sequencex(nT ). At the time t = nT , the output sample c(nT ) is the sum of the effects of all samplesx(nT ), x(n− 1)T, x(n− 2)T, . . . , x(0), ; that is

c(nT) =X

effects of all samples x(nT ), x(n− 1)T . . . , x(0) (9.57)

or

c(nT ) = x(0)g(nT )+x(T )g[(n− 1)T ]+x(2T )g[(n−2)T ]+ . . .+x[(n−1)T ]g(T )+x(nT )g(0)(9.58)

Multiplying both sides of the last equation by e−nTs and taking the summation for n = 0 ton =∞, we have

∞Xn=0

c(nT )e−nT s =

∞Xn=0

x(0)g(nT )e−nT s +

∞Xn=0

x(T )g(n − 1)Te−nT s

+ . . . +

∞Xn=0

x [(n− 1)T ] g(T )e−nTs +∞Xn=0

x(nT )g(0)e−nTs

+

∞Xn=0

x(nT )g(0)e−nT s (9.59)

184

Page 185: Linear Transforms
Page 186: Linear Transforms

or∞Xn=0

c(nT )e−nTs =£x(0) + x(T )e−Ts + x(2T )e−2T s + . . .

¤∞Xn=0

g(nT )e−nTs (9.60)

from which∞Xn=0

c(nT )e−nTs =

∞Xn=0

x(nT)e−nTs∞Xn=0

g(nT )e−nT s (9.61)

or simplyC∗(s) = X∗(s)G∗(s) (9.62)

where G∗(s) is defined as the pulsed transfer function of G and is given by Eq. (9.56).

Taking the z-transform of both sides of Eq. (9.62) yields

C(z) =X (z)G(z) (9.63)

9.6 Z-TRANSFORM OF SYSTEMS

1. Z-Transform of Cascaded Elements with Sampling Switches between them

Fig. 9.8 a illustrates a sampled data system with cascaded elements G1 and G2. The twoelements are separated by a second sampling switch S which is synchronized to S1. Thez-transform relation between the output and the input signals is derived as follows.

The output signal of G1 isD(s) = G1(s)X(s) (9.64)

and the system output isC(s) = G2(s)D

∗(s) (9.65)

Taking the pulsed transform of Eq. (9.64) yields

D∗(s) = G∗(s)X∗(s) (9.66)

and substituting D∗(s) in Eq. (9.65), we have

C(s) = G2(s)G∗1(s)X

∗(s) (9.67)

Taking the pulsed transform of the last equation, we have,

C∗(s) = G∗2(s)G∗1(s)X

∗(s) (9.68)

The z-transform of the above equation is

C(z) = G1(z)G2(z)X(z) (9.69)

2. Z-Transform of cascaded elements with No sampling switch between them

Fig. 9.8 b illustrates a sampled data system with two cascaded elements with no samplerbetween them. The z-transform relation of output and input is derived as follows: Thetransform of the continuous output is

C(s) = G1(s)G2(s)X∗(s) (9.70)

186

Page 187: Linear Transforms
Page 188: Linear Transforms

The pulsed transform of the output is

C∗(s) = G1G∗2(s)X

∗(s) (9.71)

where

G1G∗2(s) = [G1(s)G2(s)]

∗=1

T

∞Xn=−∞

G1(s+ jnωs)G2(s + jnωs) (9.72)

In general,G1G

∗2(s) 6= G∗1(s)G

∗2(s) (9.73)

The z-transform of the Eq. (9.71) is

C(z) = G1G2(z)X(z) (9.74)

Example 9.7

For the sampled data system in Fig. 9.8 a and b, if G1(s) = 1/s, G(s) = a/(s+ a), and x(t)is a unit step function. Find C(z) in both the cases.

Solution: The output of the system in case ’a’ is

C(z) = G1(z)G2(z)X(z)

=z

z − 1 ×az

z − e−aT× z

z − 1

=az3

(z− 1)2(z − e−aT )(9.75)

The output in case 0b0 is

C(z) = G1G2(z)X(z)

=

∙a

s(s + a)

¸X(z)

=z(1− e−aT )

(z − 1)(z − e−aT )× z

z − 1

=z2(1− e−aT )

(z − 1)2(z − e−aT )(9.76)

3. General Closed Loop Systems

The transfer function of a closed loop sampled data system can also be obtained by theprocedure in the last sections. For the system shown in Fig. 9.9 the output transform is

C(s) = G(s)E∗(s) (9.77)

The Laplace Transform of continuous error function is

E(s) =X(s)− C(s)H(s) (9.78)

orE(s) =X(s)−H(s)G(s)E∗(s) (9.79)

188

Page 189: Linear Transforms

Taking the pulsed transform of the last equation, we have

E∗(s) = X∗(s)−HG∗(s)E∗(s) (9.80)

from which

E(s) =X∗(s)

1 +HG∗(s)(9.81)

The output transform C(s) is obtained by substitutingE∗(s) fromEq. (9.81) into Eq. (9.77).

C(s) =G(s)

1 +HG∗(s)X∗(s) (9.82)

The pulsed-transform of c∗(t) is

C∗(s) = G∗(s)E∗(s) =G∗(s)

1 +HG∗(s)(9.83)

Hence the z-transform of c(t) is

C(z) =G(z)

1 +HG(z)X(z) (9.84)

9.7 LIMITATIONS OF THE Z-TRANSFORM METHOD

We have seen that z-transform is a convenient tool for the treatment of discrete systems.However, it has certain limitations and in certain cases care must be taken in its applications.

1. The derivation of z-transform is based on the assumption that the sampled signal is approxi-mated by a train of impulses whose areas are equal to the input time function of the samplerat the sampling instants. This assumption is considered to be valid only if the samplingduration is small, compared to the significant time constant of the system.

2. The z-transform C(z) specifies only the values of the time function c(t) at the samplinginstants. Therefore, for any C(z), the inverse transform c(nT) describes c(t) only at thesampling instants t = nT .

3. In analysing sampled data by z-transform method, it is necessary that the transfer functionG(s) must have at least two more poles than zeros [or g(t) must not have a jump at t = 0];otherwise the system response obtained by the z-transform method is unrealistic or evenincorrect.

9.8 STABILITY ANALYSIS

A sampled-data system is considered to be stable if the sampled output is bounded whenbounded input is applied. However, there may be hidden oscillations between samplinginstants, which may be studied by special methods.

The closed loop transfer function of the sampled-data system in Fig. 9.9 is given as

C ∗(s)

X ∗(s)=

G∗(s)

1 +HG∗(s)(9.85)

where 1 + HG∗(s) = 0 is the characteristic equation of the system. The stability of thesampled data system is entirely determined by the location of the roots of the characteristic

189

Page 190: Linear Transforms

equation. Specifically, none of the roots of the characteristic equation must be found in theright-half of the s-plane, since such a root will yield exponentially growing time functions. Interms of the z-transform, the characteristic equation of the system is written as 1+HG(z) =0. Since the right half of the s-plane is mapped into the exterior of the unit circle in thez-plane, as shown in Fig. 9.10, the stability requirement states that all the roots of thecharacteristic equation must lie inside the unit circle. We will not discuss all the stabilitytechniques in detail, but outline briefly only two methods namely Routh Hurwitz Criterionand Root Locus Method.

1. The Routh-Hurwitz Criterion Applied to Sampled Data System.

The stability of the sampled data system concerns the determination of the location of theroots of the characteristic equation with respect to the unit circle in the z-plane. A convenientmethod is to use bilinear transformation.

r =z+ 1

z− 1

or

z =r + 1

r − 1 (9.86)

Where r is a complex variable; i.e. r = σr + jwr. This transformation maps the interior ofthe unit circle in the z-plane into the left half of the r-plane; therefore, the Routh test maybe performed on the polynomial in the variable r. The following example illustrates how themodified Routh test is performed for a sampled data feedback system.

Example 9.8 Let the open loop transfer function of a unity feedback system with samplederror signal be of the form

G(s) =22.57

s2(s +1)(9.87)

Solution: If the sampling period is 1 sec, the z-transform of G(s) is

G(z) =22.57z(0.368z + 0.264)

(z − 1)2(z− 0.368) (9.88)

The characteristic equation of the system may be written as

z3 +5.94z2 + 7.7z − 0.368 = 0 (9.89)

Substitution of Eq. (9.86) in the last equation yields∙r + 1

r − 1

¸3+5.94

∙r +1

r − 1

¸2+ 7.7

∙r + 1

r − 1

¸− 0.368 = 0 (9.90)

Simplifying Eq. (9.90), we get

14.27r3 +2.3r2 − 11.74r +3.13 = 0 (9.91)

The Routh tabulation of the last equation isr3 14.27 -11.74r2 2.3 3.13r1 −27−44.6

2.3= −31.1 0

r0 3.13

190

Page 191: Linear Transforms
Page 192: Linear Transforms

Since there are two changes of sign in the first column of tabulation, the characteristicequation has two roots in the right half of the r-plane, which corresponds to two rootsoutside the unit circle in the z-plane, and shows that the system is unstable.

2. The Root Locus Technique

The root locus technique used for analysis and design of continuous data system can alsoeasily be adapted to the study of sampled data systems. Since the characteristic equationof a simple sampled data system may be represented by the form

1+HG(Z) = 0 (9.92)

where HG(z) is a rational function in z, the root locus method may be applied directly tothe last equation without modification. The significant difference between the present caseand the continuous case is that the root loci in Eq. (9.92) are constructed in the z-plane, andthat in investigating the stability of sampled data system from the root locus plot, the unitcircle rather than the imaginary axis in the z-plane should be observed. It is clear that, inthe construction of the root loci discussed in Chapter 5 is still valid. The following exampleshows that construction of root loci for a sampled data system.

Example 9.9 Consider a unity feedback control system with sampled error signal, the open-loop transfer function of the system is given as

G(z) =kz(1 − e−T )

(z − 1)(z − e−T )(9.93)

Draw the root loci of the system for T = 1 sec and T = 5 sec.

Solution: The characteristic equation of the system is 1 + G(z) = 0, whose root loci areto be determined when k is varied from 0 to ∞ . If the sampling period T is 1 sec, G(z)becomes

G(z) =0.632kz

(z− 1)(z − 0.368) (9.94)

which has poles at z = 1 , z = 0.368 and a zero at the origin. The pole-zero configurationof G(z) is shown in Fig. 9.11a.The root loci must start at the poles (k = 0) and end at thezeros (k =∞) of G(z). The complete root loci for T = 1 sec intersects with the unit circleoccurs at z = −1 and the corresponding value of k at that point is 4.33.

If the sampling period is changed to T = 5 sec. G(z) becomes

G(z) =0.933kz

(z − 1)(z − 0.0067) (9.95)

The root loci for T = 5 sec are constructed in Fig. 9.11b. The marginal value of k for T = 5sec is found to be 2.02 as compared to the marginal k of 4.33 for T = 1 sec.

192

Page 193: Linear Transforms

PROBLEMS

9.1 The following signals are sampled by an ideal sampler with sampling period T . Determinethe sampler output x∗(t) and evaluate the pulsed transform X∗(s) by the Laplace Transformmethod.

(a) x(t) = te−αt

(b) x(t) = e−at sinωt (a=constant)

9.2 Derive the z-transform for the following functions

(a) 1s3(s + 2)

(b) 1s(s+ s)2

(c) {2.5,−1.2,−0.08, 8.9, 0.4}

(d) (1/4)4 for n > 0 for n > 0

9.3 Evaluate the inverse z-transform of

G(z) =z(z2 + 2z + 1)

(z2 − z + 1)(z2 + z +1)

by the following methods,

(a) the real inversion formula.

(b) the partial fraction expansion.

(c) power series expansion.

9.4 Obtain the inverse z-transform of

G(z) =0.5z

(z − 1)(z− 0.5)

9.5 A digital filter has a pulse transfer function

G(z) =z2 − 0.05z − 0.05z2 + 0.1z − 0.2

Determine:

(a) the location in the z-plane, of the filter’s poles and zeros.

(b) whether or not the filter is stable.

(c) a general expression for the filter’s impulse response.

(d) the filter’s linear difference equation.

(e) initial and final values of the output of the filter for a unit step input.

193

Page 194: Linear Transforms

9.6 The characteristic equation of certain sampled data systems are as follows. Determine thestability of these systems.

(a) z3 + 5z2 + 3z + z2 = 0

(b) 3z5 + 4z4 + z3 + 2z2 + 5z + 1 = 0

(c) z3 − 1.5z2 − 2z+ 3 = 0

9.7 The sampled data system shown below has a transfer function G(s) = Ks(1 + 0.2s)

Sketch the root locus diagram for the system for T = 1sec and 5 sec. Determine the marginalvalue of k for stability in each case.

9.8 Obtain the initial and final value of the following functions.

(a) G(z) = 2(1− z−1)(1− 0.2z−1)

(b) G(z) = 1(1− z−1)(1− 0.5z−1)

9.9 For the open-loop sampled data system given below

G(s) = 100s(s2 +100)

T = 0.1 sec, x(t) = unit step.

Use the z-transform method to evaluate the output response.

194

Page 195: Linear Transforms

CHAPTER X

APPLICATIONS OF Z-TRANSFORM

The method of z-transform is an efficient tool for dealing with the linear difference equations.In the following sections, we will demonstrate its usefulness in the analysis and design ofnetworks, sampled data control systems and digital filters. It may be mentioned that theapplication field of z-transform is not limited to the above areas and with the introductionof digital computers in control and instrumentation, its scope has become almost unlimited.

10.1 Z-TRANSFORM METHOD FOR SOLUTION OF LINEAR

DIFFERENCE EQUATIONS

Several methods, such as the classical method, the matrix method, the recurrence methodand the transform method exist for the solution of difference equations. In this section, weshall apply z-transform method (generating function method) to the solution of certain typeof linear difference equations. The formulation of the difference equation can be expressed inseveral forms such as backward or forward method or the translational form. These equationsare usually encountered in physical, economic and physiological systems. In the following,we formulate the equation of the current in any loop of a ladder network and find its solutionby z-transform method.

Consider the ladder network in Fig. 10.1. Assume that resistances except RL are of thesame value R. Suppose that it is required to find the current in the nth loop. In theclassical approach, we could set up the (k + 1) loop equations and solve for in which wouldbe a cumbersome process. However, by z-transform method, we could formulate one loopequation and two terminal equations. The equation for the (n+ 1)th loop is

−Rin + 3Rin+1 − Rin+2 = 0 (10.1)

Instead of writing down the other K equations, we make the following observations.

1. Eq. (10.1) is true for any n except 000 and 0k 0, since the network is the repetitive structureand all loops except two ends are alike.

2. Eq. (10.1), together with end condition is sufficient to describe the network.

Applying z-transform to Eq. (10.1)

−I(z) + 3zI(z) − 3zi0 − z2I(z) + z2i0 + zi1 = 0

or(1− 3z + z2)I(z) = z(zi0 − 3i0 + i1)

or

I(z) =z(zi0 − 3i0 + i1)

z2 − 3z + 1 (10.2)

From 000 Loop2Ri0 − Ri1 = V (10.3)

or

i1 = 2i0 −V

R(10.4)

Substituting the value of i1 in Eq. (10.2), we get

195

Page 196: Linear Transforms

I (z) =z[zi0 − 3i0 +2i0 − V

R]

z2 − 3z +1

=zhz − (1 + V

Ri0 )

ii0

z2 − 3z + 1 (10.5)

From the tables of inverse z-transform, we readily obtain in as follows.

in = Z−1[I (z)] = i0

"cos hω0n+

1

2− V

Ri0√52

sin hω0n

#(10.6)

Where

cos hω0 =3

2

and

sin hω0 =

√5

2

t = nT = n for T = 1sec. (10.8)

The value of i0 can be found by substitution Eq. (10.6) into the equation of the end loopand solving for i0 .

10.2 SAMPLED DATA CONTROL SYSTEM DESIGN IN THE Z-PLANE

Design and synthesis of sampled data control systems is a subject of control theory, andmethods such as Bode plots, Nyquist plots, Magnitude and Phase plots and Root Locusplots are generally employed and synthesis of control systems is carried out in the z-plane.As the detailed discussion of control theory is beyond the scope of this book and the factthat we intend to demonstrate the application of z-transform in this field, we shall limitthis section to the design and synthesis by root locus method only. This method is chosenbecause of the fact that it was thoroughly discussed in Chapter 5 for continuous systems andthe reader will not find any difficulty in following its application for sampled data systems.However, it may be pointed out that z-transform method is equally useful and applicable inthe design and synthesis by Bode Plots, Nyquist plots, Magnitude and Phase plots either bydirect applications or after bilinear transformations.

10.2.1 Design in the z-Plane by using the Root Locus Method.

The root locus plots which are plotted in the z-plane have quite similar properties to thoseof the root locus for continuous systems in s-plane. Once the root loci of the characteristicequation are plotted in the z-plane, much knowledge concerning the transient response ofthe system can be obtained by observing the location of the roots for the particular loopgain k. In terms of root locus design, the desirable characteristic equation roots are found byreshaping the root loci of the original system through adjustment of the loop gain and the useof the compensation networks. The most elementary problem in the root locus design is thedetermination of the loop gain to yield suitable relative stability. The loop gain of the systemcan be adjusted to give appropriate dynamic performance as measured by the position ofthe complex poles with respect to the constant damping ratio curves inside the unit circle.However, no simple rules are available for the determination of appropriate compensationnetworks from the root locus diagram. Therefore, the design in the z-plane with root locususually involves a certain amount of trial and error.

196

Page 197: Linear Transforms
Page 198: Linear Transforms

In the design of continuous data systems, usually, the design may fall into one of the followingcategories:

(1) phase-lead compensation (2) phase-lag compensation.

10.2.2 Phase-Lead Compensation

A simple phase-lead model on the ω-domain is described by the transfer function

Dω =1+ aτω

1 + τω(a > 1) (10.9)

ω =z − 1z +1

(10.10)

where τ is a constant greater than or equal to zero. This transfer function produces positivephase shift that may be added to the system phase shift in the vicinity of the gain croos-overfrequency (ωω) to increase the phase margin.

The pole-zero configuration of Eq. (10.9) is shown in Fig. 10.2.a. Note that the poles andzero of D(ω) always lie on the negative real axis in the ω-plane with the zero to the right ofthe pole. Substitution of ω = z − 1/z + 1 into Eq. (10.9) yields

D(z) =

∙aτ + 1

τ +1

¸z + 1−aτ

1+aτ

z + 1−τ1+τ

(10.11)

Since τ and a are both positive numbers and since a > 1, the poles and zero of D(z) alwayslie on the real axis on or inside the unit circle in the z-plane; the zero is always to the right ofthe pole. A typical set of pole zero configuration of D(z) is shown in Fig. 10.2b. Illustrativeexample is represented in the following.

Example 10.1

A sampled data feedback control system with digital compensation is shown in Fig. 10.3.The controlled process of the system is described by the transfer function

G1(s) =k

s(s +1)(10.12)

The sampling period is one sec. The open-loop transfer function of the system withoutcompensation is

Gh0G1(z) =0.386k(z + 0.717)

(z− 1)(z − 0.386) (10.13)

The root locus diagram of the uncompensated system is plotted in Fig. 10.4.

Note that the complex conjugate part of the root loci is a circle with centre at z = 0.717and a radius of 1.37. The closed-loop system becomes unstable for all values of K greaterthan 2.43. Let us assume that k is set at this marginal value so that the two characteristicequation roots are on the unit circle as shown in Fig. 10.4.

Suppose that the transfer function D(z) of the digital controller is of the form

D(z) =

∙aτ + 1

τ +1

¸z + 1−aτ

1+aτ

z + 1−τ1+τ

(10.14)

where for phase-lead compensation, a > 1 and ∞ > τ > 0.

198

Page 199: Linear Transforms
Page 200: Linear Transforms

The constant factor (aτ + 1)/(τ + 1) in D(z) is necessary, since the insertion of the digitalcontroller should not effect the velocity error constant kv while improving the stability of thesystem. In other words, D(z) must satisfy the condition

limz→1

D(z) = 1 (10.15)

The design problem now essentially involves the determination of appropriate value of aand τ , so that the system is stabilized. However, at this point, how the values of a and τshould be chosen is not clear. Although, we know from the properties of the root loci thatan added open loop zero has the effect of pulling root loci toward it, whereas an additionalopen loop pole has the tendency to push the loci away. However, no simple ways exist fortelling which combinations are the most effective for stabilizing the system because of theunlimited number of possible combinations of pole and zero of D(z). Several sets of valuesof a and τ are used in the following to illustrate the effects of phase-lead compensation. Asa first trial, let a = 6.06 and τ = 0.165. The transfer function of the digital controller reads.

D(z) = 1.72z

z + 0.717(10.16)

and the open loop transfer function of the compensated system is

D(z)Gh0G1(z) =0.64kz

(z − 1)(z− 0.368) (10.17)

The root loci of the compensated system are shown in Fig. 10.5 as loci (2). It may be seenthat for k = 2.43, one of the roots of the characteristic equation is on the negative real axisoutside the unit circle, and the system is unstable. This shows that for the values chosen fora and τ , the compensated system is worse than the original system.

From other sets of values for a and τ are tried, and the corresponding root loci of thecompensated systems are plotted in Fig. 10.5 (only the positive complex conjugate partsof the root loci are shown). The characteristic equation roots of the compensated systemwhen k = 2.43 are indicated on the loci. The pole and zero locations of D(z) of the variouscompensations are tabulated in Table 10.1. From the root locus diagram, we see that amongthe five compensation only when a = 2, τ = 0.4 and a = 3, τ = 0.1 result in stable systems.But then the damping ratios are less than 10 percent, which means that overshoots wouldexceed 70 percent, which is not acceptable.

The general ineffectiveness of the phase-lead compensation is anticipated in this problem,since the original system is on the verge of instability and the situation is one for which phase-lead compensation is not recommended. In the next section, we shall see that a phase-lagcompensation is more satisfactory for improving the stability of this system.

Table 10.1

Pole and Zero of D(z) =£aτ+1

τ+1

¤ z+ 1−aτ1+aτ

z+1−τ1+τ

τ a zero of D(z) Pole of D(z)

0.1 3 0.538 -0.818

0.165 6.06 0 -0.717

0.4 2 -0.111 -0.375

1.0 2 0.333 0

3.0 3 0.80 0.50

200

Page 201: Linear Transforms
Page 202: Linear Transforms

10.2.3 Phase - Lag Compensation

A simple phase-lag model is given by the transfer function

D(ω) =1 + aτω

1 + τω(10.18)

where 0 < a < 1 and 0 < τ <∞

The Pole-Zero configuration of the phase-lag D(ω) and D(z) are depicted in Fig. 10.6. Notethat since a is less than unity, the pole is always to the right of the zero.

Example 10.2

Consider the same system given in Example 10.1, with k = 2.43. The system is now to bestabilized by means of a simple lag compensator of Eq. (10.18).

First, we investigate the effects of the phase-lag compensation on the root loci when smallvalues of τ are chosen. With a = 0.5, the root loci of the system with phase-lag compensationare plotted for τ = 0.1, 0.4 and 1.0, as shown in Fig. 10.7. (only the positive complexconjugate loci parts are shown). For small values of τ, the phase-lag compensation has madethe system unstable, which indicates that the value of τ should be large.

Let us assume that the design specification requires the damping ratio of the complex closedloop poles to be approximately 60 percent. Referring to locus (1) in Fig. 10.8, which isthe root locus of the original system, we note that the complex closed loop poles have adamping ratio of 60 percent when the loop gain is equal to 0.5. In essence, the phase-lagcompensation can be regarded as a means of increasing the velocity error constant kv by aratio of 4.86(2.43/0.5) while keeping the complex loop poles relatively small. Since τ is tobe very large, and since a is less than unity, the poles and zero of D(z) will appear as anintegrating dipole near the point z = +1. Therefore, the complex conjugate parts of theoriginal root loci are not affected significantly by the addition of the integrating dipole, sincefrom points on these loci the dipole and the pole of Gh0G1(z) at z = 1 appear as a samplepole. In order to increase the velocity error constant by a factor of 4.86 (from k = 0.5 tok = 2.43), the constant ratio a of the integrating dipole should be chosen to be at least1/4.86, preferably 1/5 to allow the dipole to contribute a slight phase lag near the new gaincross-over. Thus we, let a = 0.2 and τ is chosen to be 100. Substituting these values of aand τ into Eq. (10.11) yields the transfer function of the phase-lag controller

D(z) = 0.191z − 0.9050.980

(10.19)

The open-loop transfer function of the compensated system is

D(z)Gn0G1(z) =0.07k(z − 0.905)(z +0.717)(z − 1)(z − 0.368)(z − 0.98) (10.20)

In Fig. 10.8, the root loci of the phase-lag compensated system are shown as loci (2). Notethat the complex roots for K = 2.43 on the compensated loci lie very close to the roots forK = 0.5 on the uncompensated loci.

The root loci of the compensated system when a = 0.2 and τ = 50 are also plotted in Fig.10.8 (loci 3). for K = 2.43, the complex characteristic equation roots lie very close to thoseloci (2). This shows that precise location of the dipole is not critical, as long as it is close tothe z = +1 point.

202

Page 203: Linear Transforms
Page 204: Linear Transforms

10.3 Z-TRANSFORM METHOD FOR THE DESIGN OF DIGITAL FILTERS

There are two types of digital filters namely; recursive and non-recursive filters. In thissection, we will present the design of recursive filters, which are more economical in executiontime and storage requirements as compared to non-recursive filters. Recursive digital filtersare more commonly referred to as infinite impulse response filters. The term recursiveintrinsically means that the output of the digital filter, y(n)T , is computed using the presentinputs x(T ), and previous inputs and outputs, namely x(n− 1)T , x(n− 2)T, . . . , y(n −1)T, y(n − 2)T, . . . respectively.

The design of the recursive filter centres around finding the filter coefficients - the a0s and b0sof G(z), thereby yeilding a pulse transfer function which is a rational function in z. There aretwo main methods for the design of digital filters. The first method is an indirect approach,which requires that a suitable prototype continuous filter transfer function G(s), is designedand subsequently this is transformed via an appropriate s-plane to z-plane mapping to givea corresponding digital filter pulse transfer function, G(z). The mapping used in this sectionwill be the bilinear z-transform though other transformation methods are also available. Thesecond method is a direct approach which is concerned with the z-plane representation ofthe digital filter, and the derivation of G(z) is achieved by working directly in the z-plane.The direct approach is used in the design of frequency sampling filters and filters based onsquared magnitude functions.

10.3.1 Indirect Approach Using Prototype Continuous Filter

The continuous filters e.g. Butterworth and Chebyshev were discussed in Chapter 8. Thegeneral equations for the Butterworth and Chebyshev filters are given below.

|G(jω)|2 = 1

1 + ( ωωc)2n

=1

1 + (−1)ns2n |s=jωωc(Butterworth) (10.21)

and

|G(jω)|2 = 1

1 + 2[Cn(ω)]2(Chebychev) (10.22)

where |G(jω)|2 is the squared magnitude of the filter’s transfer function.ωc is the cutoff frequencyn is the order of the filterω is the frequency

is a real number and << 1Cn(ω) is Chebechev polynomial

The design of digital filter is carried out by using bilinear transformation to the continuousfilter obtained for given specification. In the following, we present one example of a lowpass filter design. It may be mentioned that once a low pass continuous filter is designed, itcan readily be converted into bandpass or highpass continuous filter as already discussed inChapter 8.

Example 10.3

Derive the digital equivalent of a Butterworth low pass filter by bilinear transformation forthe following specifications.

(1) Digital cutoff frequency, fcd = 100 Hz

204

Page 205: Linear Transforms

(2) Sampling period T = 1 ms

(3) Amplitude attenuation of 20 dB at 400 Hz. Obtain the amplitude response of G(z).

Solution:

The order of the filter is determined by the ratio of cutoff frequency and 20dB attenuationfrequency as follows.

20dB = 10 Log10

∙1 + (

ω

ωc

2n

)

¸(10.23)

2 = Log10

∙1 + (

ω

ωc)2n¸

99 = (4)2n

1.9956 = 2n log104n = 1.65 (10.24)

As n has to be an integer, therefore n = 2 shall satisfy the filter specifications. The prototypecontinuous second order filter is given by

G(s) =1

s2 +√2s+ 1

(10.25)

The analog cutoff frequency is obtained by the following equation

ωca =2

Ttan

∙ωcdT

2

¸(10.26)

=2

1X10−3tan

∙200πx

1X10−3

2

¸= 650 rad/sec (10.27)

The transformation from normalised low pass to low pass filter is achieved by submittings/ωca for s in G(s) as follows

G(s)pωt =1£

s

650

¤2+√2s

650+ 1

(10.28)

(the pre-warped transformed transfer function)

G(s)pωt =422500

s2 + 919.24s+ 422500(10.29)

For the bilinear z-transform

s =2

T

∙(z − 1)(z +1)

¸= 2000

∙(z − 1)(z + 1)

¸(10.30)

And it follows that

s2 = 4x106∙(z − 1)2(z +1)2

¸(10.31)

205

Page 206: Linear Transforms

Substituting Eqs. (10.30) and (10.31) in Eq. (10.29), we obtain the digital filter pulsetransfer function as follows.

G(z) =422500

4x106 (z−1)2

(z+1)2+ 1838480(z−1)

(z+1)+422500

=422500(z + 2.z2 + 1)

6260980z2 − 715500z + 2584020

or

G(z) =z2 + 2z + 1

14.82z2 − 16.935z + 6.16 (10.32)

The frequency response of G(z) can be obtained by substituting z = ejωT in Eq. (10.32)which results

G(ejωT ) =ej2ωT + ejωT + 1

14.82ej2ωT .2 − 16.395ejωT + 6.116 (10.33)

=cos 2ωT + jsni2ωT + 2 cos ωT + j25inωT + 1

14.82 cos 2ωT + j14.82sn2ωT − 16.935 cos ωT − j16.935sin ωT + 6.116(10.34)

orG(ejωT ) = A

A =(cos 2ωT + 2 cos ωT +1) + j(sn2ωT +2 sin ωT )

(14.82 cos 2ωt− 16.935 cos ωt+ 6.116) + j(14.82sin 2ωT − 16.935 sinωt) (10.35)

The amplitude vs. frequency for Eq. (10.35) can be obtained for T = 1 msec. and is shownin Fig. 10.9

10.3.2 Direct Approach Using Squared Magnitude Functions

A direct approach to the design of digital filters is to derive G(z) working in the z-plane.When designing digital filters using this direct approach, we seek functions that produce halfof the poles within the unit circle, the other half being outside. These functions are knownas ”mirror image polynomials” (MIPs).

Consider the magnitude squared function defined as

|G(ejωT )|2 = 1

1 + [Fn(ωT )]2(10.36)

We need suitable trigonometric functions for [Fn(ωT )]2, such that on substitution z = ejωT

an MIP in the z-plane results. One such function is cos(ωT/2), that is

cos2ωT

2=1

2(1 + cos ωT )

=1

2

∙1 +

1

2(ejωT + e−jωT )

¸=1

2

∙1 +

1

2(z + z−1)

¸(10.37)

therefore

cos2ωT

2=(z + 1)2

4z(10.38)

206

Page 207: Linear Transforms

similarly

sin2 ωT

2=(z − 1)2−4z (10.39)

now consider

|G(ejωT )|2 = 1

1 +hsin2 (ωT2 )

sin2( ωcT2 )

i2 (10.40)

where ωc is the desired angular cutoff frequency. Substituting z = ejωT yields

|G(z)|2 = qn

qn + pn(10.41)

where q = sin2(ωcT/2) and p = (z − 1)2/− 4z. The roots in the p-plane occur on a circle ofradius q. Thus pk = qejφk, k = 0, 1, 2 . . . , (n− 1) and

φk =(2k + 1)π

nfor n even (10.42)

φk =2kπ

nfor n odd (10.43)

Having solved for p, we can find the corresponding factors in the z-plane by solving p =(z − 1)2/ − 4z, that is

−4pz = (z − 1)2 = z2 − 2z + 1 (10.44)

thereforez2 − 2z(1 − 2p) + 1 = 0

orz = (1− 2p)± (4p(p− 1)) (10.45)

Hence it is seen that for every root in the p-plane, there will be two corresponding roots inthe z-plane, as given in Eq. (10.45).

Example 10.4

Consider the specifications of the low pass filter in Example 10.3 and design the digital filterby direct method.

Solution:

q = sin2∙ωcT

2

¸=1

2(1− cos ωcT ) =

1

2(1− cos

2πx100

1000) = 0.0955

p = sin2∙ωT

2

¸=1

2(1− cos ωT ) =

1

2(1− cos

2πx400

1000) = 0.9045, ω = 400x2π

|G(z)|2 = qn

qn + pn=

1

1 +hp

q

in = 1

1 +£0.9045

0.0955

¤n = 1

1 + (9.47)n, at ω = 400x2π (10.46)

207

Page 208: Linear Transforms

20 = 10 Log10[1 + (9.47)n] (10.47)

100 = 1 + (9.47)n]

89 = (9.47)n

n = 2 (10.48)

Hence, we will choose a second order digital filter. Therefore k = 0 and k = 1, and φ0 =π

2

and φ1 = 3π2

Therefore

P0 = 0.0955 6π

2andP1 = 0.0955 6

2(10.49)

Now applying Eq. (10.45).

z0 = 0.0955 6π

2±n4(.0955 6

π

2)[(0.0955 6

π

2− 1]1/2

o1/2(10.50)

= (1 − j0.191)± (0.417 + j0.159)(10.50)

Thereforez0 = 0.583 + j0.268 (inside the unit-circle)

andz0 = 1.417 − j0.649 (outside the uni-circle)

Similarly

zi =

∙1 − 0.0955 6 3π

2

¸½4(0.0955 6

2)[0.0955 6

2− 1]

¾1/2

(10.51)

= (1 + j0.191) ± (0.417 + j0.459) (10.52)

Therefore

z1 = 0.583− j0.268 (inside the unit circle)

z1 = 1.417 + j0.649 (outside the unit circle)

For stability we use the poles which are inside the unit circle, therefore we obtain

G(z) =1

[z − (0.583 − j0.268)][z − (0.583 + j0.268)](10.53)

Now taking |G(jω)| = 1 at ω = 0, then z = ejωT = 1.

Therefore

|G(1)| = |1 [1− (0.583− j0.268)][1− (0.583 + j0.268)]| (10.54)

Hence

G(z) =0.246

[(z − (0.583− j0.268)][(z − (0.583 + j0.268)](10.55)

=0.246

z2 − 1.166z + 0.4117 (10.56)

208

Page 209: Linear Transforms

The frequency response of this filter can be obtained by substituting z = ejωT .

Thus

G(ejωT ) =0.246

ej2ωT − 1.166ejωT0.4117 (10.57)

=0.246

cos 2ωT + j sin 2ωT − 1.166 cos ωT − j1.166sin ωT + 0.4117

=0.246

(cos 2ωT − 1.166 cos ωT + 0.477)j(sin 2ωt− 1.66sin ωt) (10.58)

The frequency for Eq. (10.58) can now be plotted for T = 1 msec and is shown in Fig. 10.10.

Table 10.1 may be used to frequency transform from low pass filters to high pass, band passand stop filters. Also note that it is possible to transform from low pass to low pass, thatis a shift of cutoff frequency. Furthermore, note that in using Table 10.2 β(rad/sec) is thedesired cutoff frequency, ω1 and ω2 are the lower and upper cutoff frequencies respectivelyand T is the sampling period.

Table 10.2

Frequency transformation used with Direct Design Methods.

Filter Substitute for z Design Formulae

Lowpass 1 − azz − a a = sin(β − ωc)T/2

sin(β + ωc)T/2

Highpass −h1 + azz+ a

ia =

cos(β − ωc)T/2cos(β + ωc)T/2

1− 2abz(b+1)

+z2(b− 1)(b+ 1)

a =cos(ω2 + ω1)T/2cos(ω2 − ω1)T/2

Bandpasshb− 1b+ 1

i− 2abz(b+ 1)

+ z2 b =cos(ω2 − ω1)T/2

1

tan βT/2z2(1− b) a = Same as above

Bandstop

1− 2az(b+ 1)h

1− bb+ 1

i− 2az(b+ 1)

+z2b =

tan(ω2 − ω1)T/ω1

tanβT/2

209

Page 210: Linear Transforms
Page 211: Linear Transforms

PROBLEMS

10.1. An error-sampled control system has the block diagram shown in Fig. 10p-1. The transferfunction of the controlled process is

G1(s) =K

s(s+ 1)(s + 2)

The sampling period is 0.5 sec.

(a) Sketch the root loci in the z-plane as a function of k.

(b) What is the marginal value of k for stability.

(c) When k is set at the marginal value for stability, design a digital compensator, so that thedamping ratio of the closed loop control poles is equal to 0.707.

10.2. A sampled data feedback control system with digital compensation is as shown in Fig. 10p-1.The transfer function of the controlled process is

G1(s) =k(s+ 1)

s(s+ 2)(s + 3)

The sampling period is 1 sec.

(a) Sketch the root loci in the z-plane as a function of k

(b) What is the marginal value of k for stability.

(c) When k is set at the marginal value for stability, design a digital compensator so that thedamping ratio of the closed loop poles is equal to 0.65.

10.3. (a) Find the magnitude squared function of the

G(z) =1 + z−1

1 + 0.5z−1 + 0.5z−2

(b) Construct a pole-zero diagram of G(z).

(c) Construct the pole-zero diagram of magnitude squared function of G(z).

10.4. Consider the transfer function of an analog filter.

G(s) =1

s2 +√2s+ 1

Let the sampling period be 0.5 sec. Find the corresponding digital transfer function bybilinear transformation method. Also, find the pole-zero diagram of the resultant transferfunction and sketch the magnitude characteristics.

10.5. Suppose that a low-pass Butterworth filter is desired to satisfy the following requirements.

(i) The 3dB cutoff point is at θc = 0.1 π rad.

(ii) The 10dB attenuation period.

211

Page 212: Linear Transforms

(iii) The sampling period T is 10π sec. (note that ωc = θc/T and ω = θ/T ).

Find

(a) The order of the Butterworth low pass filter.

(b) The digital equivalent filter by bilinear transformation.

(c) The magnitude vs. frequency plot.

10.6 A lowpass digital filter is required to have 3dB attenuation at 2kHz and at least 20dBattenuation at 5kHz. Using the direct approach of squard magnitude functions, derive G(z)to satisfy the above specifications. Take sampling frequency as 20kHz, Sketch the magnitudevs. frequency plot.

212

Page 213: Linear Transforms
Page 214: Linear Transforms