Contents

PART I  Signals and Systems
1  Fourier Series, Fourier Transforms, and the DFT  W. Kenneth Jenkins
2  Ordinary Linear Differential and Difference Equations  B.P. Lathi
3  Finite Wordlength Effects  Bruce W. Bomar

PART II  Signal Representation and Quantization
4  On Multidimensional Sampling  Ton Kalker
5  Analog-to-Digital Conversion Architectures  Stephen Kosonocky and Peter Xiao
6  Quantization of Discrete Time Signals  Ravi P. Ramachandran

PART III  Fast Algorithms and Structures
7  Fast Fourier Transforms: A Tutorial Review and a State of the Art  P. Duhamel and M. Vetterli
8  Fast Convolution and Filtering  Ivan W. Selesnick and C. Sidney Burrus
9  Complexity Theory of Transforms in Signal Processing  Ephraim Feig
10  Fast Matrix Computations  Andrew E. Yagle

PART IV  Digital Filtering
11  Digital Filtering  Lina J. Karam, James H. McClellan, Ivan W. Selesnick, and C. Sidney Burrus

PART V  Statistical Signal Processing
12  Overview of Statistical Signal Processing  Charles W. Therrien
13  Signal Detection and Classification  Alfred Hero
14  Spectrum Estimation and Modeling  Petar M. Djuric and Steven M. Kay
15  Estimation Theory and Algorithms: From Gauss to Wiener to Kalman  Jerry M. Mendel
16  Validation, Testing, and Noise Modeling  Jitendra K. Tugnait
17  Cyclostationary Signal Analysis  Georgios B. Giannakis

PART VI  Adaptive Filtering
18  Introduction to Adaptive Filters  Scott C. Douglas
19  Convergence Issues in the LMS Adaptive Filter  Scott C. Douglas and Markus Rupp
20  Robustness Issues in Adaptive Filtering  Ali H. Sayed and Markus Rupp
21  Recursive Least-Squares Adaptive Filters  Ali H. Sayed and Thomas Kailath
22  Transform Domain Adaptive Filtering  W. Kenneth Jenkins and Daniel F. Marshall
23  Adaptive IIR Filters  Geoffrey A. Williamson
24  Adaptive Filters for Blind Equalization  Zhi Ding

© 1999 by CRC Press LLC
PART VII  Inverse Problems and Signal Reconstruction
25  Signal Recovery from Partial Information  Christine Podilchuk
26  Algorithms for Computed Tomography  Gabor T. Herman
27  Robust Speech Processing as an Inverse Problem  Richard J. Mammone and Xiaoyu Zhang
28  Inverse Problems, Statistical Mechanics and Simulated Annealing  K. Venkatesh Prasad
29  Image Recovery Using the EM Algorithm  Jun Zhang and Aggelos K. Katsaggelos
30  Inverse Problems in Array Processing  Kevin R. Farrell
31  Channel Equalization as a Regularized Inverse Problem  John F. Doherty
32  Inverse Problems in Microphone Arrays  A.C. Surendran
33  Synthetic Aperture Radar Algorithms  Clay Stewart and Vic Larson
34  Iterative Image Restoration Algorithms  Aggelos K. Katsaggelos

PART VIII  Time Frequency and Multirate Signal Processing
35  Wavelets and Filter Banks  Cormac Herley
36  Filter Bank Design  Joseph Arrowood, Tami Randolph, and Mark J.T. Smith
37  Time-Varying Analysis-Synthesis Filter Banks  Iraj Sodagar
38  Lapped Transforms  Ricardo L. de Queiroz

PART IX  Digital Audio Communications
39  Auditory Psychophysics for Coding Applications  Joseph L. Hall
40  MPEG Digital Audio Coding Standards  Peter Noll
41  Digital Audio Coding: Dolby AC-3  Grant A. Davidson
42  The Perceptual Audio Coder (PAC)  Deepen Sinha, James D. Johnston, Sean Dorward, and Schuyler R. Quackenbush
43  Sony Systems  Kenzo Akagiri, M. Katakura, H. Yamauchi, E. Saito, M. Kohut, Masayuki Nishiguchi, and K. Tsutsui

PART X  Speech Processing
44  Speech Production Models and Their Digital Implementations  M. Mohan Sondhi and Juergen Schroeter
45  Speech Coding  Richard V. Cox
46  Text-to-Speech Synthesis  Richard Sproat and Joseph Olive
47  Speech Recognition by Machine  Lawrence R. Rabiner and B. H. Juang
48  Speaker Verification  Sadaoki Furui and Aaron E. Rosenberg
49  DSP Implementations of Speech Processing  Kurt Baudendistel
50  Software Tools for Speech Research and Development  John Shore

PART XI  Image and Video Processing
51  Image Processing Fundamentals  Ian T. Young, Jan J. Gerbrands, and Lucas J. van Vliet
52  Still Image Compression  Tor A. Ramstad
53  Image and Video Restoration  A. Murat Tekalp
54  Video Scanning Format Conversion and Motion Estimation  Gerard de Haan
55  Video Sequence Compression  Osama Al-Shaykh, Ralph Neff, David Taubman, and Avideh Zakhor
56  Digital Television  Kou-Hu Tzou
57  Stereoscopic Image Processing  Reginald L. Lagendijk, Ruggero E.H. Franich, and Emile A. Hendriks
58  A Survey of Image Processing Software and Image Databases  Stanley J. Reeves
59  VLSI Architectures for Image Communications  P. Pirsch and W. Gehrke

PART XII  Sensor Array Processing
60  Complex Random Variables and Stochastic Processes  Daniel R. Fuhrmann
61  Beamforming Techniques for Spatial Filtering  Barry Van Veen and Kevin M. Buckley
62  Subspace-Based Direction Finding Methods  Egemen Gonen and Jerry M. Mendel
63  ESPRIT and Closed-Form 2-D Angle Estimation with Planar Arrays  Martin Haardt, Michael D. Zoltowski, Cherian P. Mathews, and Javier Ramos
64  A Unified Instrumental Variable Approach to Direction Finding in Colored Noise Fields  P. Stoica, M. Viberg, M. Wong, and Q. Wu
65  Electromagnetic Vector-Sensor Array Processing  Arye Nehorai and Eytan Paldi
66  Subspace Tracking  R.D. DeGroat, E.M. Dowling, and D.A. Linebarger
67  Detection: Determining the Number of Sources  Douglas B. Williams
68  Array Processing for Mobile Communications  A. Paulraj and C. B. Papadias
69  Beamforming with Correlated Arrivals in Mobile Communications  Victor A.N. Barroso and Jose M.F. Moura
70  Space-Time Adaptive Processing for Airborne Surveillance Radar  Hong Wang

PART XIII  Nonlinear and Fractal Signal Processing
71  Chaotic Signals and Signal Processing  Alan V. Oppenheim and Kevin M. Cuomo
72  Nonlinear Maps  Steven H. Isabelle and Gregory W. Wornell
73  Fractal Signals  Gregory W. Wornell
74  Morphological Signal and Image Processing  Petros Maragos
75  Signal Processing and Communication with Solitons  Andrew C. Singer
76  Higher-Order Spectral Analysis  Athina P. Petropulu

PART XIV  DSP Software and Hardware
77  Introduction to the TMS320 Family of Digital Signal Processors  Panos Papamichalis
78  Rapid Design and Prototyping of DSP Systems  T. Egolf, M. Pettigrew, J. Debardelaben, R. Hezar, S. Famorzadeh, A. Kavipurapu, M. Khan, Lan-Rong Dung, K. Balemarthy, N. Desai, Yong-kyu Jung, and V. Madisetti
To our families
Preface

Digital Signal Processing (DSP) is concerned with the theoretical and practical aspects of representing information bearing signals in digital form and with using computers or special purpose digital hardware either to extract that information or to transform the signals in useful ways. Areas where digital signal processing has made a significant impact include telecommunications, man-machine communications, computer engineering, multimedia applications, medical technology, radar and sonar, seismic data analysis, and remote sensing, to name just a few.

During the first fifteen years of its existence, the field of DSP saw advancements in the basic theory of discrete-time signals and processing tools. This work included such topics as fast algorithms, A/D and D/A conversion, and digital filter design. The past fifteen years have seen an ever quickening growth of DSP in application areas such as speech and acoustics, video, radar, and telecommunications. Much of this interest in using DSP has been spurred on by developments in computer hardware and microprocessors.

Digital Signal Processing Handbook CRCnetBASE is an attempt to capture the entire range of DSP: from theory to applications, from algorithms to hardware. Given the widespread use of DSP, a need developed for an authoritative reference, written by some of the top experts in the world. This need was to provide information on both theoretical and practical issues suitable for a broad audience ranging from professionals in electrical engineering, computer science, and related engineering fields, to managers involved in design and marketing, and to graduate students and scholars in the field. Given the large number of excellent introductory texts in DSP, it was also important to focus on topics useful to the engineer or scholar without overemphasizing those aspects that are already widely accessible. In short, we wished to create a resource that was relevant to the needs of the engineering community and that will keep them up-to-date in the DSP field.

A task of this magnitude was
only possible through the cooperation of many of the foremost DSP
researchers and practitioners. This collaboration, over the past
three years, has resulted in a CD-ROM containing a comprehensive
range of DSP topics presented with a clarity of vision and a depth
of coverage that is expected to inform, educate, and fascinate the
reader. Indeed, many of the articles, written by leaders in their
fields, embody unique visions and perceptions that enable a quick,
yet thorough, exposure to knowledge garnered over years of
development. As with other CRC Press handbooks, we have attempted
to provide a balance between essential information, background
material, technical details, and introduction to relevant standards
and software. The Handbook pays equal attention to theory,
practice, and application areas. Digital Signal Processing Handbook
CRCnetBASE can be used in a number of ways. Most users will look up
a topic of interest by using the powerful search engine and then
viewing the applicable chapters. As such, each chapter has been
written to stand alone and give an overview of its subject matter
while providing key references for those interested in learning
more. Digital Signal Processing Handbook CRCnetBASE can also be
used as a reference book for graduate classes, or as supporting
material for continuing education courses in the DSP area.
Industrial organizations may wish to provide the CD-ROM with their
products to enhance their value by providing a standard and
up-to-date reference source.
We have been very impressed with the quality of this work, which is due entirely to the contributions of all the authors, and we would like to thank them all. The Advisory Board was instrumental in helping to choose subjects and leaders for all the sections. Being experts in their fields, the section leaders provided the vision and fleshed out the contents for their sections.
Finally, the authors produced the necessary content for this work. To them fell the challenging task of writing for such a broad audience, and they excelled at their jobs. In addition to these technical contributors, we wish to thank a number of outstanding individuals whose administrative skills made this project possible. Without the outstanding organizational skills of Elaine M. Gibson, this handbook may never have been finished. Not only did Elaine manage the paperwork, but she had the unenviable task of reminding authors about deadlines and pushing them to finish. We also thank a number of individuals associated with the CRC Press Handbook Series over a period of time, especially Joel Claypool, Dick Dorf, Kristen Maus, Jerry Papke, Ron Powers, Suzanne Lassandro, and Carol Whitehead.

We welcome you to this handbook, and hope you find it worth your interest.

Vijay K. Madisetti and Douglas B. Williams
Center for Signal and Image Processing
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, Georgia
Editors

Vijay K. Madisetti is an Associate Professor in the School of Electrical and Computer Engineering at Georgia Institute of Technology in Atlanta. He teaches undergraduate and graduate courses in signal processing and computer engineering, and is affiliated with the Center for Signal and Image Processing (CSIP) and the Microelectronics Research Center (MiRC) on campus. He received
his B. Tech (honors) from the Indian Institute of Technology (IIT),
Kharagpur, in 1984, and his Ph.D. from the University of California
at Berkeley, in 1989, in electrical engineering and computer
sciences. Dr. Madisetti is active professionally in the area of
signal processing, having served as an Associate Editor of the IEEE
Transactions on Circuits and Systems II, the International Journal
in Computer Simulation, and the Journal of VLSI Signal Processing.
He has authored, co-authored, or edited six books in the areas of
signal processing and computer engineering, including VLSI Digital
Signal Processors (IEEE Press, 1995), Quick-Turnaround ASIC Design
in VHDL (Kluwer, 1996), and a CD-ROM tutorial on VHDL (IEEE Standards Press, 1997). He serves as the IEEE Press Signal Processing Society liaison, and is counselor to Georgia Tech's IEEE Student Chapter, which is one of the largest in the world with over 600 members in 1996. Currently, he is serving as the Technical Director of DARPA's RASSP Education and Facilitation program, a multi-university/industry effort to develop a new digital systems design education curriculum. Dr. Madisetti is a frequent consultant to industry and the U.S. government, and also serves as the President and CEO of VP Technologies, Inc., Marietta, GA, a corporation that specializes in rapid prototyping, virtual prototyping, and design of embedded digital systems. Dr. Madisetti's home page URL is http://www.ee.gatech.edu/users/215/index.html, and he can be reached at [email protected].
Douglas B. Williams received the B.S.E.E. degree (summa cum laude), the M.S. degree, and the Ph.D. degree, in electrical and computer engineering from Rice University, Houston, Texas in 1984, 1987, and 1989, respectively. In 1989, he joined the faculty of the School of Electrical and Computer Engineering at the Georgia Institute of Technology, Atlanta, Georgia, where he is currently an Associate Professor. There he is also affiliated with the Center for Signal and Image Processing (CSIP) and teaches courses in signal processing and telecommunications. Dr. Williams
has served as an Associate Editor of the IEEE Transactions on
Signal Processing and was on the conference committee for the 1996
International Conference on Acoustics, Speech, and Signal
Processing that was held in Atlanta. He is currently the faculty
counselor for Georgia Tech's student chapter of the IEEE Signal Processing Society. He is a member of the Tau Beta Pi, Eta Kappa Nu, and Phi Beta Kappa honor societies. Dr. Williams's current
research interests are in statistical signal processing with
emphasis on radar signal processing, communications systems, and
chaotic time-series analysis. More information on his activities
may be found on his home page at
http://dogbert.ee.gatech.edu/users/276. He can also be reached at
[email protected].
I  Signals and Systems

Vijay K. Madisetti, Georgia Institute of Technology
Douglas B. Williams, Georgia Institute of Technology

1 Fourier Series, Fourier Transforms, and the DFT  W. Kenneth Jenkins
  Introduction • Fourier Series Representation of Continuous Time Periodic Signals • The Classical Fourier Transform for Continuous Time Signals • The Discrete Time Fourier Transform • The Discrete Fourier Transform • Family Tree of Fourier Transforms • Selected Applications of Fourier Methods • Summary

2 Ordinary Linear Differential and Difference Equations  B.P. Lathi
  Differential Equations • Difference Equations

3 Finite Wordlength Effects  Bruce W. Bomar
  Introduction • Number Representation • Fixed-Point Quantization Errors • Floating-Point Quantization Errors • Roundoff Noise • Limit Cycles • Overflow Oscillations • Coefficient Quantization Error • Realization Considerations

THE STUDY OF SIGNALS AND SYSTEMS has formed a cornerstone for the development of digital signal processing and is crucial for all of the topics discussed in this Handbook. While the reader is assumed to be familiar with the basics of signals and systems, a small portion is reviewed in this chapter with an emphasis on the transition from continuous time to discrete time. The reader wishing more background may find it in any of the many fine textbooks in this area, for example [1]-[6]. In the chapter Fourier Series, Fourier
Transforms, and the DFT by W. Kenneth Jenkins, many important
Fourier transform concepts in continuous and discrete time are
presented. The discrete Fourier transform (DFT), which forms the
backbone of modern digital signal processing as its most common
signal analysis tool, is also described, together with an
introduction to the fast Fourier transform algorithms. In Ordinary
Linear Differential and Difference Equations, the author, B.P.
Lathi, presents a detailed tutorial of differential and difference
equations and their solutions. Because these equations are the most
common structures for both implementing and modelling systems, this
background is necessary for the understanding of many of the later
topics in this Handbook. Of particular interest are a number of
solved examples that illustrate the solutions to these
formulations.
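To make the difference-equation structures mentioned above concrete, the following sketch (a hypothetical first-order system, not an example from Lathi's chapter) iterates y[n] = a·y[n−1] + x[n] and checks the result against its known closed-form unit-step response.

```python
def lti_first_order(a, x, y_init=0.0):
    """Iterate the first-order difference equation y[n] = a*y[n-1] + x[n]."""
    y, prev = [], y_init
    for x_n in x:
        prev = a * prev + x_n
        y.append(prev)
    return y

# unit-step response of a hypothetical system with a = 0.5
a, N = 0.5, 20
y = lti_first_order(a, [1.0] * N)

# closed-form solution of the same equation: y[n] = (1 - a**(n+1)) / (1 - a)
closed = [(1 - a ** (n + 1)) / (1 - a) for n in range(N)]
assert all(abs(u - v) < 1e-12 for u, v in zip(y, closed))
```

The recursion and the closed-form solution agree to machine precision, which is exactly the kind of solved example the chapter walks through analytically.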
While most software based on workstations and PCs is executed in single or double precision arithmetic, practical realizations for some high throughput DSP applications must be implemented in fixed-point arithmetic. These low cost implementations are still of interest to a wide community in the consumer electronics arena. The chapter Finite Wordlength Effects by Bruce W. Bomar describes basic number representations, fixed- and floating-point errors, roundoff noise, and practical considerations for realizations of digital signal processing applications, with a special emphasis on filtering.

References

[1] Jackson, L.B., Signals, Systems, and Transforms, Addison-Wesley, Reading, MA, 1991.
[2] Kamen, E.W. and Heck, B.S., Fundamentals of Signals and Systems Using MATLAB, Prentice-Hall, Upper Saddle River, NJ, 1997.
[3] Oppenheim, A.V. and Willsky, A.S., with Nawab, S.H., Signals and Systems, 2nd Ed., Prentice-Hall, Upper Saddle River, NJ, 1997.
[4] Strum, R.D. and Kirk, D.E., Contemporary Linear Systems Using MATLAB, PWS Publishing, Boston, MA, 1994.
[5] Proakis, J.G. and Manolakis, D.G., Introduction to Digital Signal Processing, Macmillan, New York; Collier Macmillan, London, 1988.
[6] Oppenheim, A.V. and Schafer, R.W., Discrete Time Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1989.
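The fixed-point quantization errors summarized above can be illustrated with a toy uniform rounding quantizer (a sketch under assumed parameters; the Finite Wordlength Effects chapter treats actual two's-complement formats in detail): rounding to a B-bit grid on [−1, 1) produces an error no larger than half a quantization step.

```python
def quantize_fixed(x, bits, full_scale=1.0):
    """Round x to a bits-wide fixed-point grid on [-full_scale, full_scale)."""
    step = 2.0 * full_scale / 2 ** bits      # quantization step size
    q = round(x / step) * step
    # saturate at the ends of the number format
    return max(-full_scale, min(full_scale - step, q))

bits = 8
step = 2.0 / 2 ** bits
samples = [i / 997.0 for i in range(-900, 900)]
errors = [abs(quantize_fixed(v, bits) - v) for v in samples]

# rounding error never exceeds step/2 (plus a little float round-off)
assert max(errors) <= step / 2 + 1e-15
```

In a filter realization this per-sample error acts as an injected roundoff-noise source, which is the viewpoint the chapter develops.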
1
Fourier Series, Fourier Transforms, and the DFT

W. Kenneth Jenkins
University of Illinois, Urbana-Champaign

1.1 Introduction
1.2 Fourier Series Representation of Continuous Time Periodic Signals
    Exponential Fourier Series • The Trigonometric Fourier Series • Convergence of the Fourier Series
1.3 The Classical Fourier Transform for Continuous Time Signals
    Properties of the Continuous Time Fourier Transform • Fourier Spectrum of the Continuous Time Sampling Model • Fourier Transform of Periodic Continuous Time Signals • The Generalized Complex Fourier Transform
1.4 The Discrete Time Fourier Transform
    Properties of the Discrete Time Fourier Transform • Relationship between the Continuous and Discrete Time Spectra
1.5 The Discrete Fourier Transform
    Properties of the Discrete Fourier Series • Fourier Block Processing in Real-Time Filtering Applications • Fast Fourier Transform Algorithms
1.6 Family Tree of Fourier Transforms
1.7 Selected Applications of Fourier Methods
    Fast Fourier Transform in Spectral Analysis • Finite Impulse Response Digital Filter Design • Fourier Analysis of Ideal and Practical Digital-to-Analog Conversion
1.8 Summary
References

1.1 Introduction

Fourier methods are commonly used for signal analysis and system design in modern telecommunications, radar, and image processing systems. Classical Fourier methods such as the Fourier series and the Fourier integral are used for continuous time (CT) signals and systems, i.e., systems in which a characteristic signal, s(t), is defined at all values of t on the continuum −∞ < t < ∞. A more recently developed set of Fourier methods, including the discrete time Fourier transform (DTFT) and the discrete Fourier transform (DFT), are extensions of basic Fourier concepts that apply to discrete time (DT) signals. A characteristic DT signal, s[n], is defined only for values of n where n is an integer in the range −∞ < n < ∞. The following discussion presents basic concepts and outlines important properties for both the CT and DT classes of Fourier methods, with a particular emphasis on the relationships between these two classes. The class of DT Fourier methods is particularly useful
as a basis for digital signal processing (DSP) because it extends the theory of classical Fourier analysis to DT signals and leads to many effective algorithms that can be directly implemented on general computers or special purpose DSP devices.

The relationship between the CT and the DT domains is characterized by the operations of sampling and reconstruction. If sa(t) denotes a signal s(t) that has been uniformly sampled every T seconds, then the mathematical representation of sa(t) is given by

    s_a(t) = \sum_{n=-\infty}^{\infty} s(t)\,\delta(t - nT)    (1.1)

where δ(t) is a CT impulse function defined to be zero for all t ≠ 0, undefined at t = 0, and having unit area when integrated from t = −∞ to t = +∞. Because the only places at which the product s(t)δ(t − nT) is not identically equal to zero are at the sampling instances, s(t) in (1.1) can be replaced with s(nT) without changing the overall meaning of the expression. Hence, an alternate expression for sa(t) that is often useful in Fourier analysis is given by

    s_a(t) = \sum_{n=-\infty}^{\infty} s(nT)\,\delta(t - nT)    (1.2)

The CT sampling model sa(t) consists of a sequence of CT impulse functions uniformly spaced at intervals of T seconds and weighted by the values of the signal s(t) at the sampling instants, as depicted in Fig. 1.1. Note that sa(t) is not defined at the sampling instants because the CT impulse function itself is not defined at t = 0. However, the values of s(t) at the sampling instants are imbedded as area under the curve of sa(t), and as such represent a useful mathematical model of the sampling process. In the DT domain the sampling model is simply the sequence defined by
the DT domain the sampling model is simply the sequence dened by
taking the values of s(t) at the sampling instants, i.e., s[n] =
s(t)|t=nT (1.3) In contrast to sa(t), which is not dened at the
sampling instants, s[n] is well dened at the sampling instants, as
illustrated in Fig. 1.2. Thus, it is now clear that sa(t) and s[n]
are different but equivalent models of the sampling process in the
CT and DT domains, respectively. They are both useful for signal
analysis in their corresponding domains. Their equivalence is
established by the fact that they have equal spectra in the Fourier
domain, and that the underlying CT signal from which sa(t) and s[n]
are derived can be recovered from either sampling representation,
provided a sufciently large sampling rate is used in the sampling
operation (see below). 1.2 Fourier Series Representation of
Continuous Time Periodic Signals It is convenient to begin this
discussion with the classical Fourier series representation of a periodic time domain signal, and then derive the Fourier integral from this representation by finding the limit of the Fourier coefficient representation as the period goes to infinity. The conditions under which a periodic signal s(t) can be expanded in a Fourier series are known as the Dirichlet conditions. They require that in each period s(t) have a finite number of discontinuities, a finite number of maxima and minima, and that s(t) satisfy the following absolute convergence criterion [1]:

    \int_{-T/2}^{T/2} |s(t)| \, dt < \infty    (1.4)

It is assumed in the following discussion that these basic conditions are satisfied by all functions that will be represented by a Fourier series.
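Returning briefly to the sampling relation (1.3), a short numerical sketch (with an assumed rate and test frequencies, not taken from the text) shows why a "sufficiently large sampling rate" matters: two CT sinusoids whose frequencies differ by exactly the sampling rate produce identical DT sequences, so only the slower one can be recovered from the samples (aliasing).

```python
import math

T = 0.01                    # assumed sampling interval: fs = 1/T = 100 Hz
fs = 1.0 / T
f0 = 7.0                    # a 7 Hz cosine
f1 = f0 + fs                # a 107 Hz cosine, offset by exactly fs

# DT sampling model (1.3): s[n] = s(t) evaluated at t = nT
s0 = [math.cos(2 * math.pi * f0 * n * T) for n in range(50)]
s1 = [math.cos(2 * math.pi * f1 * n * T) for n in range(50)]

# after sampling, the two CT signals are indistinguishable
assert max(abs(a - b) for a, b in zip(s0, s1)) < 1e-9
```

The identity cos(2π(f₀ + fs)nT) = cos(2πf₀nT + 2πn) = cos(2πf₀nT) is what the code verifies numerically.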
FIGURE 1.1: CT model of a sampled CT signal.

FIGURE 1.2: DT model of a sampled CT signal.

1.2.1 Exponential Fourier Series

If a CT signal s(t) is periodic with a period T, then the classical complex Fourier series representation of s(t) is given by

    s(t) = \sum_{n=-\infty}^{\infty} a_n e^{jn\omega_0 t}    (1.5a)

where ω₀ = 2π/T, and where the aₙ are the complex Fourier coefficients given by

    a_n = (1/T) \int_{-T/2}^{T/2} s(t) e^{-jn\omega_0 t} \, dt    (1.5b)

It is well known that for every value of t where s(t) is continuous, the right-hand side of (1.5a) converges to s(t). At values of t where s(t) has a finite jump discontinuity, the right-hand side of (1.5a) converges to the average of s(t−) and s(t+), where s(t−) ≜ lim_{ε→0} s(t − ε) and s(t+) ≜ lim_{ε→0} s(t + ε).

For example, the Fourier series expansion of the sawtooth waveform illustrated in Fig. 1.3 is characterized by T = 2π, ω₀ = 1, a₀ = 0, and aₙ = −a₋ₙ = A cos(nπ)/(jnπ) for n = 1, 2, . . . . The coefficients of the exponential Fourier series represented by (1.5b) can be interpreted as the spectral representation of s(t), because the nth coefficient aₙ represents the contribution of the (nω₀)-th frequency to the total signal s(t). Because the aₙ are complex valued, the Fourier domain representation has both a magnitude and a phase spectrum. For example, the magnitude of the aₙ is plotted in Fig. 1.4 for the sawtooth waveform of Fig. 1.3. The fact that the aₙ constitute a discrete set is consistent with the fact that a periodic signal has a line spectrum, i.e., the spectrum contains only integer multiples of the fundamental frequency ω₀. Therefore, the equation pair given by (1.5a) and (1.5b) can be interpreted as a transform pair that is similar to the CT Fourier transform for periodic signals. This leads to the observation that the classical Fourier series can be interpreted as a special transform that provides a one-to-one invertible mapping between the discrete-spectral domain and the CT domain. The next section shows how the periodicity constraint can be removed to produce the more general classical CT Fourier transform, which applies equally well to periodic and aperiodic time domain waveforms.

FIGURE 1.3: Periodic CT signal used in Fourier series example.

FIGURE 1.4: Magnitude of the Fourier coefficients for example of Figure 1.3.

1.2.2 The Trigonometric Fourier Series

Although Fourier series expansions exist for complex periodic signals, and Fourier theory can be generalized to the case of complex signals, the theory and results are more easily expressed for real-valued signals. The following discussion assumes that the signal s(t) is real-valued for the sake of simplifying the discussion. However, all results are valid for complex signals, although the details of the theory will become somewhat more complicated.

For real-valued signals s(t), it is possible to manipulate the complex exponential form of the Fourier series into a trigonometric form that contains sin(ω₀t) and cos(ω₀t) terms with corresponding real-valued coefficients [1]. The trigonometric form of the
Fourier series for a real-valued signal s(t) is given by

    s(t) = \sum_{n=0}^{\infty} b_n \cos(n\omega_0 t) + \sum_{n=1}^{\infty} c_n \sin(n\omega_0 t)    (1.6a)

where ω₀ = 2π/T. The bₙ and cₙ are real-valued Fourier coefficients determined by

    b_0 = (1/T) \int_{-T/2}^{T/2} s(t) \, dt
    b_n = (2/T) \int_{-T/2}^{T/2} s(t) \cos(n\omega_0 t) \, dt,  n = 1, 2, . . .    (1.6b)
    c_n = (2/T) \int_{-T/2}^{T/2} s(t) \sin(n\omega_0 t) \, dt,  n = 1, 2, . . .

FIGURE 1.5: Periodic CT signal used in Fourier series example 2.

FIGURE 1.6: Fourier coefficients for example of Figure 1.5.

An arbitrary real-valued signal s(t) can be expressed as a sum of even and odd components, s(t) = s_even(t) + s_odd(t), where s_even(t) = s_even(−t) and s_odd(t) = −s_odd(−t), and where s_even(t) = [s(t) + s(−t)]/2 and s_odd(t) = [s(t) − s(−t)]/2. For the trigonometric Fourier series, it can be shown that s_even(t) is represented by the (even) cosine terms in the infinite series, s_odd(t) is represented by the (odd) sine terms, and b₀ is the DC level of the signal. Therefore, if it can be determined by inspection that a signal has a DC level, or if it is even or odd, then the correct form of the trigonometric series can be chosen to simplify the analysis. For example, it is easily seen that the signal shown in Fig. 1.5 is an even signal with a zero DC level. Therefore it can be accurately represented by the cosine series with bₙ = 2A sin(nπ/2)/(nπ/2), n = 1, 2, . . . , as illustrated in Fig. 1.6. In contrast, note that the sawtooth waveform used in the previous example is an odd signal with zero DC level; thus, it can be completely specified by the sine terms of the trigonometric series. This result can be demonstrated by pairing each positive frequency component from the exponential series with its conjugate partner, i.e., cₙ sin(nω₀t) = aₙe^{jnω₀t} + a₋ₙe^{−jnω₀t}, whereby it is found that cₙ = 2A cos(nπ)/(nπ) for this example. In general it is found that aₙ = (bₙ − jcₙ)/2 for n = 1, 2, . . . , a₀ = b₀, and a₋ₙ = aₙ*. The trigonometric Fourier series is common in the signal processing literature because it replaces complex coefficients with real ones and often results in a simpler and more intuitive interpretation of the results.

1.2.3 Convergence
of the Fourier Series

The Fourier series representation of a periodic signal is an approximation that exhibits mean squared convergence to the true signal. If s(t) is a periodic signal of period T, and s̃(t) denotes the Fourier series approximation of s(t), then s(t) and s̃(t) are equal in the mean square sense if

    MSE = \int_{-T/2}^{T/2} |s(t) - \tilde{s}(t)|^2 \, dt = 0    (1.7)

Even with (1.7) satisfied, mean square error (MSE) convergence does not mean that s(t) = s̃(t) at every value of t. In particular, it is known that at values of t where s(t) is discontinuous, the Fourier series converges to the average of the limiting values to the left and right of the discontinuity. For example, if t₀ is a point of discontinuity, then s̃(t₀) = [s(t₀−) + s(t₀+)]/2, where s(t₀−) and s(t₀+) were defined previously. (Note that at points of continuity, this condition is also satisfied by the definition of continuity.) Because the Dirichlet conditions require that s(t) have at most a finite number of points of discontinuity in one period, the set Sₜ, defined as all values of t within one period where s(t) ≠ s̃(t), contains a finite number of points, and Sₜ is a set of measure zero in the formal mathematical sense. Therefore, s(t) and its Fourier series expansion s̃(t) are equal almost everywhere, and s(t) can be considered identical to s̃(t) for the analysis of most practical engineering problems.

Convergence almost everywhere is satisfied only in the limit as an infinite number of terms are included in the Fourier series expansion. If the infinite series expansion of the Fourier series is truncated to a finite number of terms, as it must be in practical applications, then the approximation will exhibit an oscillatory behavior around the discontinuity, known as the Gibbs phenomenon [1]. Let s_N(t) denote a truncated Fourier series approximation of s(t), where only the terms in (1.5a) from n = −N to n = N are included if the complex Fourier series representation is used, or where only the terms in (1.6a) from n = 0 to n = N are included if the trigonometric form of the Fourier series is used. It is well known that in the vicinity of a discontinuity at t₀ the Gibbs phenomenon causes s_N(t) to be a poor approximation to s(t). The peak magnitude of the Gibbs oscillation is 13% of the size of the jump discontinuity |s(t₀−) − s(t₀+)| regardless of the number of terms used in the approximation. As N increases, the region that contains the oscillation becomes more concentrated in the neighborhood of the discontinuity, until, in the limit as N approaches infinity, the Gibbs oscillation is squeezed into a single point of mismatch at t₀. If s̃(t) is replaced by s_N(t) in (1.7), it is important to understand the behavior of the error MSE_N as a function of N, where

    MSE_N = \int_{-T/2}^{T/2} |s(t) - s_N(t)|^2 \, dt    (1.8)
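Both effects, the persistent Gibbs overshoot and the decay of MSE_N, can be seen numerically. The sketch below assumes Fig. 1.5's waveform is the ±A square wave whose cosine coefficients bₙ = 2A sin(nπ/2)/(nπ/2) were given earlier (the exact waveform is an assumption on my part): the truncated series keeps its overshoot near the discontinuities as N grows, while MSE_N keeps shrinking.

```python
import numpy as np

A, T = 1.0, 2 * np.pi
w0 = 2 * np.pi / T                      # fundamental frequency (= 1 here)
t = np.linspace(-T / 2, T / 2, 4001)
# assumed waveform of Fig. 1.5: +A for |t| < T/4, -A elsewhere (zero DC level)
s = np.where(np.abs(t) < T / 4, A, -A)

def s_N(N):
    """Truncated cosine series with b_n = 2A sin(n pi/2)/(n pi/2)."""
    n = np.arange(1, N + 1)
    b = 2 * A * np.sin(n * np.pi / 2) / (n * np.pi / 2)
    return np.cos(np.outer(t, n)) @ b

# Gibbs phenomenon: the peak overshoot does not die out as N grows ...
peaks = [s_N(N).max() for N in (20, 80, 320)]
assert all(p > 1.15 * A for p in peaks)

# ... but the mean squared error of (1.8) still decreases with N
mse = [np.mean((s - s_N(N)) ** 2) * T for N in (20, 80, 320)]
assert mse[0] > mse[1] > mse[2]
```

Plotting s_N(t) for the three values of N would show the oscillation narrowing around t = ±T/4 while its height stays essentially fixed.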
An important property of the Fourier series is that the exponential basis functions e^{jnω₀t} (or sin(nω₀t) and cos(nω₀t) for the trigonometric form) for n = 0, ±1, ±2, . . . (or n = 0, 1, 2, . . . for the trigonometric form) constitute an orthonormal set, i.e., t_{nk} = 1 for n = k, and t_{nk} = 0 for n ≠ k, where

    t_{nk} = (1/T) \int_{-T/2}^{T/2} \left(e^{jn\omega_0 t}\right)\left(e^{-jk\omega_0 t}\right) dt    (1.9)

As terms are added to the Fourier series expansion, the orthogonality of the basis functions guarantees that the error decreases in the mean square sense, i.e., that MSE_N monotonically decreases as N is increased. Therefore, a practitioner can proceed with the confidence that when applying Fourier series analysis more terms are always better than fewer in terms of the accuracy of the signal representations.

1.3 The Classical Fourier
Transform for Continuous Time Signals The periodicity constraint
imposed on the Fourier series representation can be removed by taking the limits of (1.5a) and (1.5b) as the period T is increased to infinity. Some mathematical preliminaries are required so that the results will be well defined after the limit is taken. It is convenient to remove the (1/T) factor in front of the integral by multiplying (1.5b) through by T, and then replacing Ta_n by a_n in both (1.5a) and (1.5b). Because ω0 = 2π/T, as T increases to infinity ω0 becomes infinitesimally small, a condition that is denoted by replacing ω0 with Δω. The factor (1/T) in (1.5a) becomes (Δω/2π). With these algebraic manipulations and changes in notation, (1.5a) and (1.5b) take on the following form prior to taking the limit:

s(t) = (1/2π) Σ_{n=−∞}^{∞} a_n e^{jnΔωt} Δω  (1.10a)
a_n = ∫_{−T/2}^{T/2} s(t) e^{−jnΔωt} dt  (1.10b)

The final step in obtaining the CT Fourier transform is to take the limit of both (1.10a) and (1.10b) as T → ∞. In the limit the infinite summation in (1.10a) becomes an integral, Δω becomes dω, nΔω becomes ω, and a_n becomes the CT Fourier transform of s(t), denoted by S(jω). The result is summarized by the following transform pair, which is known throughout most of the engineering literature as the classical CT Fourier transform (CTFT):

s(t) = (1/2π) ∫_{−∞}^{∞} S(jω) e^{jωt} dω  (1.11a)
S(jω) = ∫_{−∞}^{∞} s(t) e^{−jωt} dt  (1.11b)

Often (1.11a) is called the Fourier integral and (1.11b) is simply called the Fourier transform. The relationship S(jω) = F{s(t)} denotes the Fourier transformation of s(t), where F{·} is a symbolic notation for the Fourier transform operator, and where ω becomes the continuous frequency variable after the periodicity constraint is removed. A transform pair s(t) ↔ S(jω) represents a one-to-one invertible mapping as long as s(t) satisfies conditions which guarantee that the Fourier integral converges. From (1.11a) it is easily seen that F{δ(t − t0)} = e^{−jωt0}, and from (1.11b) that F⁻¹{2πδ(ω − ω0)} = e^{jω0t}, so that δ(t − t0) ↔ e^{−jωt0} and e^{jω0t} ↔ 2πδ(ω − ω0) are valid Fourier transform
pairs. Using these relationships it is easy to establish the Fourier transforms of cos(ω0t) and sin(ω0t), as well as many other useful waveforms that are encountered in common signal analysis problems. A number of such transforms are shown in Table 1.1. The CTFT is useful in the analysis and design of CT systems, i.e., systems that process CT signals. Fourier analysis is particularly applicable to the design of CT filters which are characterized by Fourier magnitude and phase spectra, i.e., by |H(jω)| and arg H(jω), where H(jω) is commonly called the frequency response of the filter. For example, an ideal transmission channel is one which passes a signal without distorting it. The signal may be scaled by a real constant A and delayed by a fixed time increment t0, implying that the impulse response of an ideal channel is Aδ(t − t0), and its corresponding frequency response is Ae^{−jωt0}. Hence, the frequency response of an ideal channel is specified by a constant amplitude for all frequencies, and a phase characteristic which is a linear function of frequency, given by −ωt0.

1.3.1 Properties of the Continuous Time Fourier Transform

The CTFT has many properties that make it
useful for the analysis and design of linear CT systems. Some of the more useful properties are stated below. A more complete list of the CTFT properties is given in Table 1.2. Proofs of these properties can be found in [2] and [3]. In the following discussion F{·} denotes the Fourier transform operation, F⁻¹{·} denotes the inverse Fourier transform operation, and * denotes the convolution operation defined as

f1(t) * f2(t) = ∫_{−∞}^{∞} f1(t − τ) f2(τ) dτ

1. Linearity (superposition): F{af1(t) + bf2(t)} = aF{f1(t)} + bF{f2(t)} (a and b, complex constants)
2. Time shifting: F{f(t − t0)} = e^{−jωt0} F{f(t)}
3. Frequency shifting: e^{jω0t} f(t) = F⁻¹{F(j(ω − ω0))}
4. Time domain convolution: F{f1(t) * f2(t)} = F{f1(t)} F{f2(t)}
5. Frequency domain convolution: F{f1(t) f2(t)} = (1/2π) F{f1(t)} * F{f2(t)}
6. Time differentiation: jωF(jω) = F{d(f(t))/dt}
7. Time integration: F{∫_{−∞}^{t} f(τ) dτ} = (1/jω)F(jω) + πF(0)δ(ω)

The above properties are particularly useful in CT system analysis and design, especially when the system characteristics are easily specified in the frequency domain, as in linear filtering. Note that properties 1, 6, and 7 are useful for solving differential or integral equations. Property 4 provides the basis for many signal processing algorithms because many systems can be specified directly by their impulse or frequency response. Property 3 is particularly useful in analyzing communication systems in which different modulation formats are commonly used to shift spectral energy to frequency bands that are appropriate for the application.

1.3.2 Fourier Spectrum of the Continuous Time Sampling Model

Because the CT sampling model s_a(t), given in (1.1),
is in its own right a CT signal, it is appropriate to apply the CTFT to obtain an expression for the spectrum of the sampled signal:

F{s_a(t)} = F{ Σ_{n=−∞}^{∞} s(t) δ(t − nT) } = Σ_{n=−∞}^{∞} s(nT) e^{−jωTn}  (1.12)

Because the expression on the right-hand side of (1.12) is a function of e^{jωT}, it is customary to denote the transform as F(e^{jωT}) = F{s_a(t)}. Later in the chapter this result is compared to the result of operating on the DT sampling model, namely s[n], with the DT Fourier transform, to illustrate that the two sampling models have the same spectrum.

TABLE 1.1 Some Basic CTFT Pairs

Signal | Fourier Transform | Fourier Series Coefficients (if periodic)
Σ_{k=−∞}^{∞} a_k e^{jkω0t} | 2π Σ_{k=−∞}^{∞} a_k δ(ω − kω0) | a_k
e^{jω0t} | 2πδ(ω − ω0) | a_1 = 1; a_k = 0, otherwise
cos ω0t | π[δ(ω − ω0) + δ(ω + ω0)] | a_1 = a_{−1} = 1/2; a_k = 0, otherwise
sin ω0t | (π/j)[δ(ω − ω0) − δ(ω + ω0)] | a_1 = −a_{−1} = 1/(2j); a_k = 0, otherwise
x(t) = 1 | 2πδ(ω) | a_0 = 1; a_k = 0, k ≠ 0 (this is the Fourier series representation for any choice of T0 > 0)
Periodic square wave: x(t) = 1 for |t| < T1, 0 for T1 < |t| ≤ T0/2, and x(t + T0) = x(t) | Σ_{k=−∞}^{∞} [(2 sin kω0T1)/k] δ(ω − kω0) | a_k = (sin kω0T1)/(kπ)
Σ_{n=−∞}^{∞} δ(t − nT) | (2π/T) Σ_{k=−∞}^{∞} δ(ω − 2πk/T) | a_k = 1/T for all k
x(t) = 1 for |t| < T1, 0 for |t| > T1 | 2(sin ωT1)/ω | —
(sin Wt)/(πt) | X(ω) = 1 for |ω| < W, 0 for |ω| > W | —
δ(t) | 1 | —
u(t) | 1/(jω) + πδ(ω) | —
δ(t − t0) | e^{−jωt0} | —
e^{−at} u(t), Re{a} > 0 | 1/(a + jω) | —
t e^{−at} u(t), Re{a} > 0 | 1/(a + jω)² | —
[t^{n−1}/(n − 1)!] e^{−at} u(t), Re{a} > 0 | 1/(a + jω)^n | —

TABLE 1.2 Properties of the CTFT
(In the table, F{f(t)} = F(jω).)

Name | Property
Definition | F(jω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt;  f(t) = (1/2π) ∫_{−∞}^{∞} F(jω) e^{jωt} dω
Superposition | F{af1(t) + bf2(t)} = aF1(jω) + bF2(jω)
Simplification if f(t) is even | F(jω) = 2 ∫_0^{∞} f(t) cos ωt dt
Simplification if f(t) is odd | F(jω) = −2j ∫_0^{∞} f(t) sin ωt dt
Negative t | F{f(−t)} = F(−jω)
Scaling (time) | F{f(at)} = (1/|a|) F(jω/a)
Scaling (magnitude) | F{af(t)} = aF(jω)
Differentiation | F{dⁿf(t)/dtⁿ} = (jω)ⁿ F(jω)
Integration | F{∫_{−∞}^{t} f(x) dx} = (1/jω)F(jω) + πF(0)δ(ω)
Time shifting | F{f(t − a)} = F(jω) e^{−jωa}
Modulation | F{f(t) e^{jω0t}} = F(j(ω − ω0)); F{f(t) cos ω0t} = ½[F(j(ω − ω0)) + F(j(ω + ω0))]; F{f(t) sin ω0t} = (1/2j)[F(j(ω − ω0)) − F(j(ω + ω0))]
Time convolution | F⁻¹{F1(jω) F2(jω)} = ∫_{−∞}^{∞} f1(τ) f2(t − τ) dτ
Frequency convolution | F{f1(t) f2(t)} = (1/2π) ∫_{−∞}^{∞} F1(jλ) F2(j(ω − λ)) dλ

1.3.3 Fourier Transform of Periodic Continuous Time Signals

We saw earlier that a periodic CT signal
can be expressed in terms of its Fourier series. The CTFT can then be applied to the Fourier series representation of s(t) to produce a mathematical expression for the line spectrum characteristic of periodic signals:

F{s(t)} = F{ Σ_{n=−∞}^{∞} a_n e^{jnω0t} } = 2π Σ_{n=−∞}^{∞} a_n δ(ω − nω0)  (1.13)

The spectrum is shown pictorially in Fig. 1.7. Note the similarity between the spectral representation of Fig. 1.7 and the plot of the Fourier coefficients in Fig. 1.4, which was heuristically interpreted as a line spectrum. Figures 1.4 and 1.7 are different but equivalent representations of the Fourier spectrum. Note that Fig. 1.4 is a DT representation of the spectrum, while Fig. 1.7 is a CT model of the same spectrum.

FIGURE 1.7: Spectrum of the Fourier series representation of s(t).

1.3.4 The Generalized Complex Fourier Transform

The CTFT characterized by (1.11a) and (1.11b) can be generalized by considering the variable jω to be the special case of u = σ + jω with σ = 0, writing (1.11a) in terms of u, and interpreting u as a complex frequency variable. The resulting complex Fourier transform pair is given by (1.14a) and (1.14b):

s(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} S(u) e^{ut} du  (1.14a)
S(u) = ∫_{−∞}^{∞} s(t) e^{−ut} dt  (1.14b)

The set of all values of u for which the integral of (1.14b) converges is called the region of convergence (ROC). Because the transform S(u) is defined only for values of u within the ROC, the path of integration in (1.14a) must be defined by σ so that the entire path lies within the ROC. In some literature this transform pair is called the bilateral Laplace transform because it is the same result obtained by including both the negative and positive portions of the time axis in the classical Laplace transform integral. [Note that in (1.14a) the complex frequency variable was denoted by u rather than by the more common s, in order to avoid confusion with earlier uses of s(·) as signal notation.] The complex Fourier transform (bilateral Laplace transform) is not often used in solving practical problems, but its significance lies in the fact that it is the most general form that represents the point at which Fourier and Laplace transform concepts become the same. Identifying this connection reinforces the notion that Fourier and Laplace transform concepts are similar because they are derived by placing different constraints on the same general form.

1.4 The Discrete Time Fourier Transform

The discrete time Fourier transform (DTFT) can be obtained by using the DT sampling model and considering the relationship obtained in (1.12) to be the definition of the DTFT. Letting T = 1, so that the sampling period is removed from the equations and the frequency variable is replaced with
a normalized frequency ω′ = ωT, the DTFT pair is defined in (1.15a) and (1.15b). Note that, in order to simplify notation, it is not customary to distinguish between ω and ω′, but rather to rely on the context of the discussion to determine whether ω refers to the normalized (T = 1) or the unnormalized (T ≠ 1) frequency variable.

S(e^{jω′}) = Σ_{n=−∞}^{∞} s[n] e^{−jω′n}  (1.15a)
s[n] = (1/2π) ∫_{−π}^{π} S(e^{jω′}) e^{jω′n} dω′  (1.15b)

The spectrum S(e^{jω′}) is periodic in ω′ with period 2π. The fundamental period in the range −π ≤ ω′ < π, sometimes referred to as the baseband, is the useful frequency range of the DT system because frequency components in this range can be represented unambiguously in sampled form (without aliasing error). In much of the signal processing literature the explicit primed notation is omitted from the frequency variable. However, the explicit primed notation will be used throughout this section because the potential exists for confusion when so many related Fourier concepts are discussed within the same framework. By comparing (1.12) and (1.15a), and noting that ω′ = ωT, it is established that

F{s_a(t)} = DTFT{s[n]}  (1.16)

where s[n] = s(t)|_{t=nT}. This demonstrates that the spectrum of s_a(t), as calculated by the CT Fourier transform, is identical to the spectrum of s[n] as calculated by the DTFT. Therefore, although s_a(t) and s[n] are quite different sampling models, they are equivalent in the sense that they have the same Fourier domain representation. A list of common DTFT pairs is presented in Table 1.3. Just as the CT Fourier transform is useful in CT signal system analysis and design, the DTFT is equally useful in the same capacity for DT systems. It is indeed fortuitous that Fourier transform theory can be extended in this way to apply to DT systems. In the same way that the CT Fourier transform was found to be a special case of the complex Fourier transform (or bilateral Laplace transform), the DTFT is a special case of the bilateral z-transform with z = e^{jωT}. The more general bilateral z-transform is given by

S(z) = Σ_{n=−∞}^{∞} s[n] z^{−n}  (1.17a)
s[n] = (1/2πj) ∮_C S(z) z^{n−1} dz  (1.17b)

where C is a counterclockwise contour of integration which is a closed path completely contained within the region of convergence of S(z). Recall that the DTFT was obtained by taking the CT Fourier transform of the CT sampling model represented by s_a(t). Similarly, the bilateral z-transform results by taking the bilateral Laplace transform of s_a(t). If the lower limit on the summation of (1.17a) is taken to be n = 0, then (1.17a) and (1.17b) become the one-sided z-transform, which is the DT equivalent of the one-sided Laplace transform for CT signals. The hierarchical relationship among these various concepts for DT systems is discussed later in this chapter, where it will be shown that the family structure of the DT family tree is identical to that of the CT family. For every CT transform in the CT world there is an analogous DT transform in the DT world, and vice versa.
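As a quick numerical illustration of the relationship between the bilateral z-transform and the DTFT (an added sketch, not from the original text; the decay constant a = 0.8 and the truncation length are arbitrary choices), S(z) for s[n] = aⁿu[n] can be summed directly on the unit circle z = e^{jω} and compared against the closed-form DTFT 1/(1 − a e^{−jω}):

```python
import numpy as np

# s[n] = a^n u[n], |a| < 1, has S(z) = Σ s[n] z^{-n}; restricted to the unit
# circle z = e^{jω} this is the DTFT S(e^{jω}) = 1/(1 - a e^{-jω}).
a = 0.8
n = np.arange(0, 200)            # truncated sum; a^200 is negligibly small
s = a ** n

omega = np.linspace(-np.pi, np.pi, 101)
z = np.exp(1j * omega)                                     # points on the unit circle
S_sum = np.array([np.sum(s * zz ** (-n)) for zz in z])     # direct evaluation of S(z)
S_closed = 1.0 / (1.0 - a * np.exp(-1j * omega))           # closed form (Table 1.3, pair 4)

max_err = np.max(np.abs(S_sum - S_closed))
print(max_err)                   # agreement to within the truncation error
```

The agreement holds only because the unit circle lies inside the ROC |z| > |a|; for |a| ≥ 1 the DTFT sum would not converge, which is exactly the ROC restriction described above.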
TABLE 1.3 Some Basic DTFT Pairs

Sequence | Fourier Transform
1. δ[n] | 1
2. δ[n − n0] | e^{−jωn0}
3. 1 (−∞ < n < ∞) | Σ_{k=−∞}^{∞} 2πδ(ω + 2πk)
4. aⁿu[n] (|a| < 1) | 1/(1 − a e^{−jω})
5. u[n] | 1/(1 − e^{−jω}) + Σ_{k=−∞}^{∞} πδ(ω + 2πk)
6. (n + 1)aⁿu[n] (|a| < 1) | 1/(1 − a e^{−jω})²
7. (rⁿ sin[ωp(n + 1)]/sin ωp) u[n] (|r| < 1) | 1/(1 − 2r cos ωp e^{−jω} + r² e^{−j2ω})
8. (sin ωc n)/(πn) | X(e^{jω}) = 1 for |ω| < ωc, 0 for ωc < |ω| ≤ π
9. x[n] = 1 for 0 ≤ n ≤ M, 0 otherwise | (sin[ω(M + 1)/2]/sin(ω/2)) e^{−jωM/2}
10. e^{jω0n} | Σ_{k=−∞}^{∞} 2πδ(ω − ω0 + 2πk)
11. cos(ω0n + φ) | Σ_{k=−∞}^{∞} [π e^{jφ} δ(ω − ω0 + 2πk) + π e^{−jφ} δ(ω + ω0 + 2πk)]

1.4.1 Properties of the Discrete Time Fourier Transform

Because the
DTFT is a close relative of the classical CT Fourier transform, it should come as no surprise that many properties of the DTFT are similar to those presented for the CT Fourier transform in the previous section. In fact, for many of the properties presented earlier an analogous property exists for the DTFT. The following list parallels the list that was presented in the previous section for the CT Fourier transform, to the extent that the same property exists. A more complete list of DTFT pairs is given in Table 1.4. (Note that the primed notation on ω is dropped in the following to simplify the notation, and to be consistent with standard usage.)

1. Linearity (superposition): DTFT{af1[n] + bf2[n]} = aDTFT{f1[n]} + bDTFT{f2[n]} (a and b, complex constants)
2. Index shifting: DTFT{f[n − n0]} = e^{−jωn0} DTFT{f[n]}
3. Frequency shifting: e^{jω0n} f[n] = DTFT⁻¹{F(e^{j(ω−ω0)})}
4. Time domain convolution: DTFT{f1[n] * f2[n]} = DTFT{f1[n]} DTFT{f2[n]}
5. Frequency domain convolution: DTFT{f1[n] f2[n]} = (1/2π) DTFT{f1[n]} * DTFT{f2[n]}
6. Frequency differentiation: nf[n] = DTFT⁻¹{j dF(e^{jω})/dω}

Note that the time-differentiation and time-integration properties of the CTFT do not have analogous counterparts in the DTFT because time domain differentiation and integration are not defined for DT signals.

TABLE 1.4 Properties of the DTFT
(In the table, x[n] ↔ X(e^{jω}) and y[n] ↔ Y(e^{jω}).)

Sequence | Fourier Transform
1. ax[n] + by[n] | aX(e^{jω}) + bY(e^{jω})
2. x[n − nd] (nd an integer) | e^{−jωnd} X(e^{jω})
3. e^{jω0n} x[n] | X(e^{j(ω−ω0)})
4. x[−n] | X(e^{−jω}); X*(e^{jω}) if x[n] is real
5. nx[n] | j dX(e^{jω})/dω
6. x[n] * y[n] | X(e^{jω}) Y(e^{jω})
7. x[n] y[n] | (1/2π) ∫_{−π}^{π} X(e^{jθ}) Y(e^{j(ω−θ)}) dθ
Parseval's theorem:
8. Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω
9. Σ_{n=−∞}^{∞} x[n] y*[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) Y*(e^{jω}) dω

When
working with DT systems, practitioners must often manipulate difference equations in the frequency domain. For this purpose property 1 and property 2 are very important. As with the CTFT, property 4 is very important for DT systems because it allows engineers to work with the frequency response of the system, in order to achieve proper shaping of the input spectrum or to achieve frequency selective filtering for noise reduction or signal detection. Also, property 3 is useful for the analysis of modulation and filtering operations common in both analog and digital communication systems. The DTFT is defined so that the time domain is discrete and the frequency domain is continuous. This is in contrast to the CTFT, which is defined to have continuous time and continuous frequency domains. The mathematical dual of the DTFT also exists, which is a transform pair that has a continuous time domain and a discrete frequency domain. In fact, the dual concept is really the same as the Fourier series for periodic CT signals presented earlier in the chapter, as represented by (1.5a) and (1.5b). However, the classical Fourier series arises from the assumption that the CT signal is inherently periodic, as opposed to the time domain becoming periodic by virtue of sampling the spectrum of a continuous frequency (aperiodic time) function [8]. The dual of the DTFT, the discrete frequency Fourier transform (DFFT), has been formulated and its properties tabulated as an interesting and useful transform in its own right [5]. Although the DFFT is similar in concept to the classical CT Fourier series, the formal properties of the DFFT [5] serve to clarify the effects of frequency domain sampling and time domain aliasing. These effects are obscured in the classical treatment of the CT Fourier series because the emphasis is on the inherent line spectrum that results from time domain periodicity. The DFFT is useful for the analysis and design of digital filters that are produced by frequency sampling techniques.

1.4.2 Relationship between the Continuous and Discrete Time Spectra

Because DT signals often originate by sampling CT signals, it is important to develop the relationship between the original spectrum of the CT signal and the spectrum of the DT signal that results. First,
the CTFT is applied to the CT sampling model, and the properties listed above are used to produce the following result:

F{s_a(t)} = F{ s(t) Σ_{n=−∞}^{∞} δ(t − nT) } = (1/2π) S(jω) * F{ Σ_{n=−∞}^{∞} δ(t − nT) }  (1.18)

In this section it is important to distinguish between ω and ω′, so the explicit primed notation is used in the following discussion where needed for clarification. Because the sampling function (summation of shifted impulses) on the right-hand side of the above equation is periodic with period T, it can be replaced with a CT Fourier series expansion as follows:

S(e^{jωT}) = F{s_a(t)} = (1/2π) S(jω) * F{ Σ_{n=−∞}^{∞} (1/T) e^{j(2π/T)nt} }

Applying the frequency domain convolution property of the CTFT yields

S(e^{jωT}) = (1/2π) Σ_{n=−∞}^{∞} S(jω) * (2π/T) δ(ω − (2π/T)n)

The result is

S(e^{jωT}) = (1/T) Σ_{n=−∞}^{∞} S(j[ω − (2π/T)n]) = (1/T) Σ_{n=−∞}^{∞} S(j[ω − nω_s])  (1.19a)

where ω_s = 2π/T is the sampling frequency expressed in radians per second. An alternate form for the expression of (1.19a) is

S(e^{jω′}) = (1/T) Σ_{n=−∞}^{∞} S(j[(ω′ − n2π)/T])  (1.19b)

where ω′ = ωT is the normalized DT frequency axis expressed in radians. Note that S(e^{jωT}) = S(e^{jω′}) consists of an infinite number of replicas of the CT spectrum S(jω), positioned at intervals of 2π/T on the ω axis (or at intervals of 2π on the ω′ axis), as illustrated in Fig. 1.8. Note that if S(jω) is band limited with a bandwidth ω_c, and if T is chosen sufficiently small so that ω_s > 2ω_c, then the DT spectrum is a copy of S(jω) (scaled by 1/T) in the baseband. The limiting case of ω_s = 2ω_c is called the Nyquist sampling frequency. Whenever a CT signal is sampled at or above the Nyquist rate, no aliasing distortion occurs (i.e., the baseband spectrum does not overlap with the higher-order replicas) and the CT signal can be exactly recovered from its samples by extracting the baseband spectrum of S(e^{jω′}) with an ideal low-pass filter that recovers the original CT spectrum by removing all spectral replicas outside the baseband and scaling the baseband by a factor of T.

FIGURE 1.8: Illustration of the relationship between the CT and DT spectra.

1.5 The Discrete Fourier Transform

To obtain the discrete Fourier transform (DFT) the continuous frequency domain of the DTFT is sampled at N points uniformly spaced around the unit circle in the z-plane, i.e., at the points ω_k = 2πk/N, k = 0, 1, …, N − 1. The result is
the DFT pair defined by (1.20a) and (1.20b). The signal s[n] is either a finite length sequence of length N, or it is a periodic sequence with period N.

S[k] = Σ_{n=0}^{N−1} s[n] e^{−j2πkn/N},  k = 0, 1, …, N − 1  (1.20a)
s[n] = (1/N) Σ_{k=0}^{N−1} S[k] e^{j2πkn/N},  n = 0, 1, …, N − 1  (1.20b)

Regardless of whether s[n] is a finite length or periodic sequence, the DFT treats the N samples of s[n] as though they are one period of a periodic sequence. This is an important feature of the DFT, and one that must be handled properly in signal processing to prevent the introduction of artifacts.
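The pair (1.20a)–(1.20b) and the implicit periodicity can be checked with a short numerical sketch (an added illustration; the length N = 16 and the random test signal are arbitrary choices): a direct evaluation of (1.20a) matches a library FFT, (1.20b) inverts it, and shifting the frequency index by N changes nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
s = rng.standard_normal(N)

# Direct evaluation of (1.20a): S[k] = Σ_{n=0}^{N-1} s[n] e^{-j2πkn/N}
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N DFT matrix
S = W @ s

# (1.20b) inverts the transform; numpy's FFT computes the same sums
s_back = (W.conj() @ S) / N
assert np.allclose(S, np.fft.fft(s))
assert np.allclose(s_back, s)

# The DFT treats s[n] as one period of a periodic sequence, so the
# frequency index behaves modulo N: S[k + N] = S[k].
k = 3
assert np.isclose(np.sum(s * np.exp(-2j * np.pi * (k + N) * n / N)), S[k])
```

The matrix form also makes the O(N²) cost of direct evaluation explicit, which is the cost the FFT algorithms of Chapter 7 reduce to O(N log N).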
Important properties of the DFT are summarized in Table 1.5. The notation ((k))_N denotes k modulo N, and R_N[n] is a rectangular window such that R_N[n] = 1 for n = 0, …, N − 1, and R_N[n] = 0 for n < 0 and n ≥ N. The transform relationship given by (1.20a) and (1.20b) is also valid when s[n] and S[k] are periodic sequences, each of period N. In this case n and k are permitted to range over the complete set of real integers, and S[k] is referred to as the discrete Fourier series (DFS). The DFS is developed by some authors as a distinct transform pair in its own right [6]. Whether the DFT and the DFS are considered identical or distinct is not very important in this discussion. The important point to be emphasized here is that the DFT treats s[n] as though it were a single period of a periodic sequence, and all signal processing done with the DFT will inherit the consequences of this assumed periodicity.

1.5.1 Properties of the Discrete Fourier Series

Most of the properties listed in Table 1.5 for the DFT are similar to those of the z-transform and the DTFT, although some important differences exist. For example, property 5 (the time-shifting property) holds for circular shifts of the finite length sequence s[n], which is consistent with the notion that the DFT treats s[n] as one period of a periodic sequence. Also, the multiplication of two DFTs results in the circular convolution of the corresponding DT sequences, as specified by property 7. This latter property is quite different from the linear convolution property of the DTFT. Circular convolution is the result of the assumed periodicity discussed in the previous paragraph. Circular convolution is simply a linear convolution of the periodic extensions of the finite sequences being convolved, in which each of the finite sequences of length N defines the structure of one period of the periodic extensions. For example, suppose one wishes to implement a digital filter with finite impulse response (FIR)
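The circular-versus-linear distinction just described can be illustrated with a small numerical sketch (an added example; the two length-4 sequences are arbitrary choices). Multiplying two N-point DFTs and inverting yields the circular result, which equals the linear convolution with its tail wrapped back onto the start; zero padding both sequences to at least the linear output length recovers ordinary convolution:

```python
import numpy as np

# Multiplying two N-point DFTs corresponds to N-point *circular* convolution,
# not the linear convolution implied by the DTFT product.
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 1.0, 0.0])
N = len(x)

circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # length-4 circular result
lin = np.convolve(x, h)                                     # length 4 + 4 - 1 = 7

# Circular convolution = linear convolution with the tail wrapped (time aliasing):
wrapped = lin[:N].copy()
wrapped[: len(lin) - N] += lin[N:]
print(circ, wrapped)   # identical

# Zero padding both DFTs to length >= 7 removes the wraparound:
M = 8
lin_via_dft = np.real(np.fft.ifft(np.fft.fft(x, M) * np.fft.fft(h, M)))[: len(lin)]
```

This wraparound is exactly the "consequence of assumed periodicity" the text warns about, and the zero-padding fix is the basis of FFT-based fast convolution.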
FIGURE 1.10: Relationships among CT Fourier concepts.

of the observation interval. Sampling causes a certain degree of aliasing, although this effect can be minimized by sampling at a high enough rate. Therefore, lengthening the observation interval increases the fundamental resolution limit, while taking more samples within the observation interval minimizes aliasing distortion and provides a better definition (more sample points) on the underlying spectrum. Padding the data with zeroes and computing a longer FFT does give more frequency domain points (improved spectral resolution), but it does not improve the fundamental limit, nor does it alter the effects of aliasing error. The resolution limits are established by the observation interval and the sampling rate. No amount of zero padding can improve these basic limits. However, zero padding is a useful tool for providing more spectral definition, i.e., it allows a better view of the (distorted) spectrum that results once the observation and sampling effects have occurred.
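These resolution limits can be demonstrated numerically (an added sketch; the tone spacing, record lengths, and the simple peak-counting rule are arbitrary choices). Two tones only 0.005 cycles/sample apart stay merged in a 64-sample record no matter how much the FFT is zero padded, because 1/64 ≈ 0.016 exceeds their spacing; a 512-sample record (limit 1/512 ≈ 0.002) separates them even without padding:

```python
import numpy as np

def count_peaks(x, nfft, band=(0.18, 0.23)):
    """Count spectral peaks in `band` (cycles/sample) above half the band maximum."""
    X = np.abs(np.fft.rfft(x, nfft))
    f = np.fft.rfftfreq(nfft)
    Xb = X[(f > band[0]) & (f < band[1])]
    thr = 0.5 * Xb.max()
    return sum(
        1 for i in range(1, len(Xb) - 1)
        if Xb[i] > Xb[i - 1] and Xb[i] > Xb[i + 1] and Xb[i] > thr
    )

def two_tones(N):
    n = np.arange(N)
    return np.sin(2 * np.pi * 0.200 * n) + np.sin(2 * np.pi * 0.205 * n)

short_padded = count_peaks(two_tones(64), 4096)  # heavy zero padding, same limit
long_record = count_peaks(two_tones(512), 512)   # longer observation, better limit
print(short_padded, long_record)
```

The padded short record yields a single merged lobe (one peak), while the longer record shows two distinct peaks: the observation interval, not the FFT length, sets the resolution.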
Leakage and the Picket Fence Effect

An FFT with block length N can accurately resolve only frequencies ω_k = (2π/N)k, k = 0, …, N − 1, that are integer multiples of the fundamental ω_1 = 2π/N. An analog waveform that is sampled and subjected to spectral analysis may have frequency components between the harmonics. For example, a component at frequency ω_{k+1/2} = (2π/N)(k + 1/2) will appear scattered throughout the spectrum.

TABLE 1.7 Common Window Functions

Name | Function, 0 ≤ n ≤ N − 1 | Peak Side-Lobe Amplitude (dB) | Mainlobe Width | Minimum Stopband Attenuation (dB)
Rectangular | w(n) = 1 | −13 | 4π/N | −21
Bartlett | w(n) = 2n/N for 0 ≤ n ≤ (N − 1)/2; 2 − 2n/N for (N − 1)/2 ≤ n ≤ N − 1 | −25 | 8π/N | −25
Hanning | w(n) = (1/2)[1 − cos(2πn/N)] | −31 | 8π/N | −44
Hamming | w(n) = 0.54 − 0.46 cos(2πn/N) | −43 | 8π/N | −53
Blackman | w(n) = 0.42 − 0.5 cos(2πn/N) + 0.08 cos(4πn/N) | −57 | 12π/N | −74

The effect is illustrated in Fig. 1.12 for a sinusoid that is observed through a rectangular window and then sampled at N points. The picket fence effect means that not all frequencies can be seen by the FFT. Harmonic components are seen accurately, but other components "slip through the picket fence" while their energy is leaked into the harmonics. These effects produce artifacts in the spectral domain that must be carefully monitored to assure that an accurate spectrum is obtained from FFT processing.

1.7.2 Finite Impulse Response Digital Filter Design

A common method for designing FIR digital filters is by use of windowing and FFT analysis.
In general, window designs can be carried out with the aid of a hand calculator and a table of well-known window functions. Let h[n] be the impulse response that corresponds to some desired frequency response, H(e^{jω}). If H(e^{jω}) has sharp discontinuities, such as the low-pass example shown in Fig. 1.13, then h[n] will represent an infinite impulse response (IIR) function. The objective is to time-limit h[n] in such a way as to not distort H(e^{jω}) any more than necessary. If h[n] is simply truncated, a ripple (Gibbs phenomenon) occurs around the discontinuities in the spectrum, resulting in a distorted filter (Fig. 1.13). Suppose that w[n] is a window function that time-limits h[n] to create an FIR approximation, h′[n]; i.e., h′[n] = w[n]h[n]. Then if W(e^{jω}) is the DTFT of w[n], h′[n] will have a Fourier transform given by

H′(e^{jω}) = W(e^{jω}) * H(e^{jω})

where * denotes convolution. Thus, the ripples in H′(e^{jω}) result from the sidelobes of W(e^{jω}). Ideally, W(e^{jω}) should be similar to an impulse so that H′(e^{jω}) is approximately equal to H(e^{jω}).

Special Case. Let h[n] = cos nω0, for all n. Then h′[n] = w[n] cos nω0, and

H′(e^{jω}) = (1/2)W(e^{j(ω+ω0)}) + (1/2)W(e^{j(ω−ω0)})  (1.28)

as illustrated in Fig. 1.14. For this simple class, the center frequency of the bandpass is controlled by ω0, and both the shape of the bandpass and the sidelobe structure are strictly determined by the choice of the window. While this simple class of FIRs does not allow for very flexible designs, it is a simple technique for determining quite useful low-pass, bandpass, and high-pass FIRs.

General Case. Specify an ideal frequency response, H(e^{jω}), and choose samples at selected values of ω. Use a long inverse FFT of length N′ to find h′[n], an approximation to h[n], where if N is the desired length of the final filter, then N′ ≫ N. Then use a carefully selected window to truncate h′[n] to obtain ĥ[n] by letting ĥ[n] = w[n]h′[n]. Finally, use an FFT of length N′ to find Ĥ(e^{jω}). If Ĥ(e^{jω}) is a satisfactory approximation to H(e^{jω}), the design is finished. If not, choose a new H(e^{jω}) or a new w[n] and repeat. Throughout the design procedure it is important to choose N′ = kN, with k an integer that is typically in the range of 4 to 10. Because this design technique is a trial and error procedure, the quality of the result depends to some degree on the skill and experience of the designer. Table 1.7 lists several well-known window functions that are often useful for this type of FIR filter design procedure.

FIGURE 1.11: Relationships among DT concepts.

1.7.3 Fourier Analysis of Ideal and Practical Digital-to-Analog Conversion

From the relationship
and Practical Digital-to-Analog Conversion From the relationship
characterized by (1.19b) and illustrated in Fig. 1.8, CT signal
s(t) can be recovered from its samples by passing sa(t) through an
ideal lowpass lter that extracts only the baseband spectrum. The
ideal lowpass lter, shown in Fig. 1.15, is a zero-phase CT lter
whose magnitude response is a constant of value T in the range <
, and zero elsewhere. The impulse response of this reconstruction
lter is given by h(t) = T sinc((/T )t), where sincx = (sin x)/x.
Thereconstructioncanbeexpressedas s(t) = h(t)sa(t), which,
aftersomemathematical manipulation, yields the following classical
reconstruction formula s(t) = n= s(nT )sinc((/T )(t nT )) (1.29)
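A truncated version of (1.29) can be tested numerically (an added sketch; the 100-Hz sinusoid, the 1-kHz sampling rate, and the truncation window are arbitrary choices, and because only finitely many terms are kept the reconstruction is approximate rather than exact):

```python
import numpy as np

# Reconstruct s(t) = sin(2π f t), f = 100 Hz, from samples taken at
# fs = 1/T = 1 kHz, well above the Nyquist rate of 200 Hz.
T = 1e-3                                   # sampling period, seconds
f = 100.0
n = np.arange(-500, 501)                   # finite number of terms (truncated sum)
samples = np.sin(2 * np.pi * f * n * T)

def sinc_pi(x):
    """sinc x = (sin x)/x, as defined in the text (numpy's sinc is sin(πx)/(πx))."""
    return np.sinc(x / np.pi)

# Evaluate well inside the sampled interval, away from the truncation edges.
t = np.linspace(-0.1, 0.1, 401)
s_hat = np.array(
    [np.sum(samples * sinc_pi((np.pi / T) * (tt - n * T))) for tt in t]
)
err = np.max(np.abs(s_hat - np.sin(2 * np.pi * f * t)))
print(err)   # small, and it shrinks further as more terms are included
```

The residual error comes entirely from truncating the infinite sum, which is exactly the point made in the following paragraph.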
FIGURE 1.12: Illustration of leakage and the picket-fence effects.

FIGURE 1.13: Gibbs effect in a low-pass filter caused by truncating the impulse response.

Note that the signal s(t) is exactly recovered from its samples only if an infinite number of terms is included in the summation of
(1.29). However, a good approximation of s(t) can be obtained with only a finite number of terms if the lowpass reconstruction filter h(t) is modified to have a finite interval of support, i.e., if h(t) is nonzero only over a finite time interval. The reconstruction formula of (1.29) is an important result in that it represents the inverse of the sampling operation. By this means Fourier transform theory establishes that as long as CT signals are sampled at a sufficiently high rate, the information content contained in s(t) can be represented and processed in either a CT or DT format. Fourier sampling and reconstruction theory provides the theoretical mechanism for translation between one format and the other without loss of information. A CT signal s(t) can be perfectly recovered from its samples using (1.29) as long as the original sampling rate was high enough to satisfy the Nyquist sampling criterion, i.e., ω_s > 2ω_B, where ω_B is the bandwidth of s(t). If the sampling rate does not satisfy the Nyquist criterion, the adjacent periods of the analog spectrum will overlap, causing a distorted spectrum. This effect, called aliasing distortion, is rather serious because it cannot be corrected easily once it has occurred. In general, an analog signal should always be prefiltered with a CT low-pass filter prior to sampling so that aliasing distortion does not occur. Figure 1.16 shows the frequency response of a fifth-order elliptic analog low-pass filter that meets industry standards for prefiltering speech signals. These signals are subsequently sampled at an 8-kHz sampling rate and transmitted digitally across telephone channels. The band-pass ripple is less than 0.01 dB from DC up to the frequency 3.4 kHz (too small to be seen in Fig. 1.16), and the stopband
rejection reaches at least 32.0 dB at 4.6 kHz and remains below this level throughout the stopband.

FIGURE 1.14: Design of a simple bandpass FIR filter by windowing.

FIGURE 1.15: Illustration of ideal reconstruction.

Most practical systems use digital-to-analog converters for reconstruction, which results in a staircase approximation to the true analog signal, i.e.,

s̃(t) = Σ_{n=−∞}^{∞} s(nT){u(t − nT) − u(t − (n + 1)T)}  (1.30)

where s̃(t) denotes the reconstructed approximation to s(t), and u(t) denotes a CT unit step function. The approximation s̃(t) is equivalent to a result obtained by using an approximate reconstruction filter of the form

H_a(jω) = T e^{−jωT/2} sinc(ωT/2)  (1.31)

The approximation s̃(t) is said to contain "sin x/x" distortion, which occurs because H_a(jω) is not an ideal low-pass filter. H_a(jω) distorts the signal by causing a droop near the passband edge, as well as by passing high-frequency distortion terms which leak through the sidelobes of H_a(jω). Therefore, a practical digital-to-analog converter is normally followed by an analog postfilter

H_p(jω) = H_a⁻¹(jω) for 0 ≤ |ω| < π/T, and 0 otherwise  (1.32)

which compensates for the distortion and produces the correct s(t), i.e., the correctly reconstructed CT output. Unfortunately, the postfilter H_p(jω) cannot be implemented perfectly, and, therefore, the actual reconstructed signal always contains some distortion in practice that arises from errors in approximating the ideal postfilter. Figure 1.17 shows a digital processor, complete with analog-to-digital and digital-to-analog converters, and the accompanying analog pre- and postfilters necessary for proper operation.

FIGURE 1.16: A fifth-order elliptic analog anti-aliasing filter used in the telecommunications industry with an 8-kHz sampling rate.

FIGURE 1.17: Analog pre- and postfilters required at the analog to digital and digital to analog interfaces.

1.8 Summary

This chapter presented many different Fourier transform concepts for both continuous time (CT) and discrete time (DT) signals and systems. Emphasis was placed on illustrating how these various
forms of the Fourier transform relate to one another, and how they
are all derived from more general complex transforms, the complex
Fourier (or bilateral Laplace) transform for CT, and the bilateral
z-transform for DT. It was shown that many of these transforms have
similar properties which are inherited from their parent forms, and
that a parallel hierarchy exists among Fourier transform concepts
in the CT and the DT worlds. Both CT and DT sampling models were
introduced as a means of representing sampled signals in these two
different worlds, and it was shown that the models are equivalent
by virtue of having the same Fourier spectra when transformed into
the Fourier domain with the appropriate Fourier transform. It was
shown how Fourier analysis properly characterizes the relationship
between the spectra of a CT signal and its DT counterpart obtained
by sampling. The classical reconstruction formula was obtained as
an outgrowth of this analysis. Finally, the discrete Fourier
transform (DFT), the backbone for much of modern digital signal
processing, was obtained from more classical forms of the Fourier
transform by simultaneously discretizing the time and frequency
domains. The DFT, together with the remarkable computational efficiency provided by the fast Fourier transform (FFT) algorithm, has contributed to the resounding success that engineers and scientists have experienced in applying digital signal processing to many practical scientific problems.
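The sin x/x droop described by Eq. 1.31 is easy to quantify numerically. The short sketch below (Python is my choice of tool; the handbook itself contains no code) evaluates the normalized zero-order-hold magnitude at the folding frequency Ω = π/T, where the droop is worst:

```python
import math

def zoh_gain(omega, T):
    """Normalized zero-order-hold magnitude |H_a(jOmega)| / T.

    From Eq. (1.31): |H_a(jOmega)| / T = |sin(Omega*T/2) / (Omega*T/2)|,
    the sin x / x factor responsible for the passband droop.
    """
    x = omega * T / 2.0
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

T = 1.0 / 8000.0                    # 8-kHz sampling rate, as in Fig. 1.16
edge = math.pi / T                  # folding frequency, Omega = pi/T
droop_db = 20.0 * math.log10(zoh_gain(edge, T))
print(round(droop_db, 2))           # about -3.92 dB of droop at the band edge
```

A loss of roughly 3.9 dB at the band edge is why the compensating postfilter of Eq. 1.32 (or an equivalent digital pre-compensation) is needed in practice.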
References

[1] Van Valkenburg, M.E., Network Analysis, 3rd ed., Englewood Cliffs, NJ: Prentice-Hall, 1974.
[2] Oppenheim, A.V., Willsky, A.S., and Young, I.T., Signals and Systems, Englewood Cliffs, NJ: Prentice-Hall, 1983.
[3] Bracewell, R.N., The Fourier Transform, 2nd ed., New York: McGraw-Hill, 1986.
[4] Oppenheim, A.V. and Schafer, R.W., Discrete-Time Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1989.
[5] Jenkins, W.K. and Desai, M.D., The discrete-frequency Fourier transform, IEEE Trans. Circuits Syst., vol. CAS-33, no. 7, pp. 732-734, July 1986.
[6] Oppenheim, A.V. and Schafer, R.W., Digital Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1975.
[7] Blahut, R.E., Fast Algorithms for Digital Signal Processing, Reading, MA: Addison-Wesley, 1985.
[8] Deller, J.R., Jr., Tom, Dick, and Mary discover the DFT, IEEE Signal Processing Mag., vol. 11, no. 2, pp. 36-50, Apr. 1994.
[9] Burrus, C.S. and Parks, T.W., DFT/FFT and Convolution Algorithms, New York: John Wiley and Sons, 1985.
[10] Brigham, E.O., The Fast Fourier Transform, Englewood Cliffs, NJ: Prentice-Hall, 1974.
2 Ordinary Linear Differential and Difference Equations

B.P. Lathi
California State University, Sacramento

2.1 Differential Equations
    Classical Solution • Method of Convolution
2.2 Difference Equations
    Initial Conditions and Iterative Solution • Classical Solution • Method of Convolution
References

2.1 Differential Equations

A function containing variables and their derivatives is called a differential expression, and an equation involving differential expressions is called a differential equation. A differential equation is an ordinary differential equation if it contains only one independent variable; it is a partial differential equation if it contains more than one independent variable. We shall deal here only with ordinary differential equations. In mathematical texts, the independent variable is generally x, which can be anything such as time, distance, velocity, pressure, and so on. In most of the applications in control systems, the independent variable is time. For this reason we shall use here the independent variable t for time, although it can stand for any other variable as well. The following equation

    \left( \frac{d^2 y}{dt^2} \right)^{4} + 3\frac{dy}{dt} + 5y^2(t) = \sin t

is an ordinary differential equation of second order because the highest derivative is of the second order. An nth-order differential equation is linear if it is of the form

    a_n(t)\frac{d^n y}{dt^n} + a_{n-1}(t)\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_1(t)\frac{dy}{dt} + a_0(t)\,y(t) = r(t)        (2.1)

where the coefficients a_i(t) are not functions of y(t). If these coefficients (a_i) are constants, the equation is linear with constant coefficients. Many engineering (as well as nonengineering) systems can be modeled by these equations. Systems modeled by these equations are known as linear time-invariant (LTI) systems. In this chapter we shall deal exclusively with linear differential equations with constant coefficients. Certain other forms of differential equations are dealt with elsewhere in this volume.
Role of Auxiliary Conditions in Solution of Differential Equations

We now show that a differential equation does not, in general, have a unique solution unless some additional constraints (or conditions) on the solution are known. This fact should not come as a surprise. A function y(t) has a unique derivative dy/dt, but for a given derivative dy/dt there are infinitely many possible functions y(t). If we are given dy/dt, it is impossible to determine y(t) uniquely unless an additional piece of information about y(t) is given. For example, the solution of the differential equation

    \frac{dy}{dt} = 2        (2.2)

obtained by integrating both sides of the equation is

    y(t) = 2t + c        (2.3)

for any value of c. Equation 2.2 specifies a function whose slope is 2 for all t. Any straight line with a slope of 2 satisfies this equation. Clearly the solution is not unique, but if we place an additional constraint on the solution y(t), then we specify a unique solution. For example, suppose we require that y(0) = 5; then out of all the possible solutions available, only one function has a slope of 2 and an intercept with the vertical axis at 5. By setting t = 0 in Equation 2.3 and substituting y(0) = 5 in the same equation, we obtain y(0) = 5 = c and

    y(t) = 2t + 5

which is the unique solution satisfying both Equation 2.2 and the constraint y(0) = 5. In conclusion, differentiation is an irreversible operation during which certain information is lost. To reverse this operation, one piece of information about y(t) must be provided to restore the original y(t). Using a similar argument, we can show that, given d²y/dt², we can determine y(t) uniquely only if two additional pieces of information (constraints) about y(t) are given. In general, to determine y(t) uniquely from its nth derivative, we need n additional pieces of information (constraints) about y(t). These constraints are also called auxiliary conditions. When these conditions are given at t = 0, they are called initial conditions.
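The uniqueness argument above can be reproduced symbolically. The following minimal sketch (sympy is my choice of tool; the original chapter works analytically) solves Eq. 2.2 under the auxiliary condition y(0) = 5:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Eq. 2.2: dy/dt = 2, solved with the auxiliary condition y(0) = 5,
# which picks one line out of the family y(t) = 2t + c.
sol = sp.dsolve(sp.Eq(y(t).diff(t), 2), y(t), ics={y(0): 5})
print(sol)   # Eq(y(t), 2*t + 5)
```

Without the `ics` argument, `dsolve` returns the whole one-parameter family of Eq. 2.3, mirroring the point that one constraint is needed per order of differentiation.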
We discuss here two systematic procedures for solving linear differential equations of the form in Eq. 2.1. The first method is the classical method, which is relatively simple, but restricted to a certain class of inputs. The second method (the convolution method) is general and is applicable to all types of inputs. A third method (Laplace transform) is discussed elsewhere in this volume. Both methods discussed here are classified as time-domain methods because with these methods we are able to solve the above equation directly, using t as the independent variable. The method of Laplace transform (also known as the frequency-domain method), on the other hand, requires transformation of the variable t into a frequency variable s.

In engineering applications, the form of linear differential equation that occurs most commonly is given by

    \frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_1\frac{dy}{dt} + a_0\,y(t) = b_m\frac{d^m f}{dt^m} + b_{m-1}\frac{d^{m-1} f}{dt^{m-1}} + \cdots + b_1\frac{df}{dt} + b_0\,f(t)        (2.4a)

where all the coefficients a_i and b_i are constants. Using operational notation D to represent d/dt, this equation can be expressed as

    (D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0)\,y(t) = (b_m D^m + b_{m-1}D^{m-1} + \cdots + b_1 D + b_0)\,f(t)        (2.4b)
or

    Q(D)\,y(t) = P(D)\,f(t)        (2.4c)

where the polynomials Q(D) and P(D), respectively, are

    Q(D) = D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0
    P(D) = b_m D^m + b_{m-1}D^{m-1} + \cdots + b_1 D + b_0

Observe that this equation is of the form of Eq. 2.1, where r(t) is in the form of a linear combination of f(t) and its derivatives. In this equation, y(t) represents an output variable, and f(t) represents an input variable of an LTI system. Theoretically, the powers m and n in the above equations can take on any value. Practical noise considerations, however, require [1] m ≤ n.

2.1.1 Classical Solution

When f(t) ≡ 0, Eq. 2.4a is known as the homogeneous (or complementary) equation. We shall first solve the homogeneous equation. Let the solution of the homogeneous equation be yc(t), that is,

    Q(D)\,y_c(t) = 0

or

    (D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0)\,y_c(t) = 0

We first show that if yp(t) is the solution of Eq. 2.4a, then yc(t) + yp(t) is also its solution. This follows from the fact that

    Q(D)\,y_c(t) = 0

If yp(t) is the solution of Eq. 2.4a, then

    Q(D)\,y_p(t) = P(D)\,f(t)

Addition of these two equations yields

    Q(D)\,[y_c(t) + y_p(t)] = P(D)\,f(t)

Thus, yc(t) + yp(t) satisfies Eq. 2.4a and therefore is the general solution of Eq. 2.4a. We call yc(t) the complementary solution and yp(t) the particular solution. In system analysis parlance, these components are called the natural response and the forced response, respectively.

Complementary Solution (The Natural Response)

The complementary solution yc(t) is the solution of

    Q(D)\,y_c(t) = 0        (2.5a)

or

    (D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0)\,y_c(t) = 0        (2.5b)

A solution to this equation can be found in a systematic and formal way. However, we will take a shortcut by using heuristic reasoning. Equation 2.5b shows that a linear combination of yc(t) and its n successive derivatives is zero, not at some values of
t, but for all t. This is possible if and only if yc(t) and all its n successive derivatives are of the same form. Otherwise their sum can never add to zero for all values of t. We know that only an exponential function e^{λt} has this property. So let us assume that

    y_c(t) = c\,e^{\lambda t}

is a solution to Eq. 2.5b. Now

    D\,y_c(t) = \frac{dy_c}{dt} = c\lambda\,e^{\lambda t}
    D^2 y_c(t) = \frac{d^2 y_c}{dt^2} = c\lambda^2\,e^{\lambda t}
        \vdots
    D^n y_c(t) = \frac{d^n y_c}{dt^n} = c\lambda^n\,e^{\lambda t}

Substituting these results in Eq. 2.5b, we obtain

    c\,(\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0)\,e^{\lambda t} = 0

For a nontrivial solution of this equation,

    \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 = 0        (2.6a)

This result means that c e^{λt} is indeed a solution of Eq. 2.5a, provided that λ satisfies Eq. 2.6a. Note that the polynomial in Eq. 2.6a is identical to the polynomial Q(D) in Eq. 2.5b, with λ replacing D. Therefore, Eq. 2.6a can be expressed as

    Q(\lambda) = 0        (2.6b)

When Q(λ) is expressed in factorized form, Eq. 2.6b can be represented as

    Q(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) = 0        (2.6c)

Clearly λ has n solutions: λ1, λ2, …, λn. Consequently, Eq. 2.5a has n possible solutions: c1 e^{λ1 t}, c2 e^{λ2 t}, …, cn e^{λn t}, with c1, c2, …, cn as arbitrary constants. We can readily show that a general solution is given by the sum of these n solutions,¹ so that

    y_c(t) = c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t} + \cdots + c_n e^{\lambda_n t}        (2.7)

¹To prove this fact, assume that y1(t), y2(t), …, yn(t) are all solutions of Eq. 2.5a. Then

    Q(D)\,y_1(t) = 0
    Q(D)\,y_2(t) = 0
        \vdots
    Q(D)\,y_n(t) = 0

Multiplying these equations by c1, c2, …, cn, respectively, and adding them together yields

    Q(D)\,[c_1 y_1(t) + c_2 y_2(t) + \cdots + c_n y_n(t)] = 0

This result shows that c1 y1(t) + c2 y2(t) + ⋯ + cn yn(t) is also a solution of the homogeneous Eq. 2.5a.
where c1, c2, …, cn are arbitrary constants determined by n constraints (the auxiliary conditions) on the solution. The polynomial Q(λ) is known as the characteristic polynomial. The equation

    Q(\lambda) = 0        (2.8)

is called the characteristic or auxiliary equation. From Eq. 2.6c, it is clear that λ1, λ2, …, λn are the roots of the characteristic equation; consequently, they are called the characteristic roots. The terms characteristic values, eigenvalues, and natural frequencies are also used for characteristic roots.² The exponentials e^{λi t} (i = 1, 2, …, n) in the complementary solution are the characteristic modes (also known as modes or natural modes). There is a characteristic mode for each characteristic root, and the complementary solution is a linear combination of the characteristic modes.

Repeated Roots

The solution of Eq. 2.5a as given in Eq. 2.7 assumes that the n characteristic roots λ1, λ2, …, λn are distinct. If there are repeated roots (the same root occurring more than once), the form of the solution is modified slightly. By direct substitution we can show that the solution of the equation

    (D - \lambda)^2\,y_c(t) = 0

is given by

    y_c(t) = (c_1 + c_2 t)\,e^{\lambda t}

In this case the root λ repeats twice. Observe that the characteristic modes in this case are e^{λt} and t e^{λt}. Continuing this pattern, we can show that for the differential equation

    (D - \lambda)^r\,y_c(t) = 0        (2.9)

the characteristic modes are e^{λt}, t e^{λt}, t² e^{λt}, …, t^{r−1} e^{λt}, and the solution is

    y_c(t) = (c_1 + c_2 t + \cdots + c_r t^{r-1})\,e^{\lambda t}        (2.10)

Consequently, for a characteristic polynomial

    Q(\lambda) = (\lambda - \lambda_1)^r (\lambda - \lambda_{r+1}) \cdots (\lambda - \lambda_n)

the characteristic modes are e^{λ1 t}, t e^{λ1 t}, …, t^{r−1} e^{λ1 t}, e^{λ_{r+1} t}, …, e^{λn t}, and the complementary solution is

    y_c(t) = (c_1 + c_2 t + \cdots + c_r t^{r-1})\,e^{\lambda_1 t} + c_{r+1} e^{\lambda_{r+1} t} + \cdots + c_n e^{\lambda_n t}

Particular Solution (The Forced Response): Method of Undetermined Coefficients

The particular solution yp(t) is the solution of

    Q(D)\,y_p(t) = P(D)\,f(t)        (2.11)

It is a relatively simple task to determine yp(t) when the input f(t) is such that it yields only a finite number of independent derivatives. Inputs having the form e^{ζt} or t^r fall into this category. For example, e^{ζt} has only one independent derivative; the repeated differentiation of e^{ζt} yields the same form, that is, e^{ζt}. Similarly, the repeated differentiation of t^r yields only r independent derivatives.

²The term eigenvalue is German for characteristic value.
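The claim that such inputs generate only finitely many independent forms can be checked mechanically. A small sketch (sympy is my choice of tool, not part of the original chapter) differentiates a general second-degree polynomial input until the result vanishes:

```python
import sympy as sp

t, a, b, c = sp.symbols('t a b c')
f = a*t**2 + b*t + c        # a polynomial input of the form t^r (here r = 2)

# Repeated differentiation of a polynomial terminates, so f has only
# finitely many independent derivatives: 2*a*t + b, then 2*a, then 0.
derivs = []
d = f
while d != 0:
    d = sp.diff(d, t)
    derivs.append(d)
print(len(derivs))   # 3 steps: two nonzero derivatives, then zero
```

An exponential input e^{ζt}, by contrast, never terminates under differentiation but reproduces its own form, which is why it too yields a finite set of independent forms.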
The particular solution to such an input can be expressed as a linear combination of the input and its independent derivatives. Consider, for example, the input f(t) = at² + bt + c. The successive derivatives of this input are 2at + b and 2a. In this case, the input has only two independent derivatives. Therefore the particular solution can be assumed to be a linear combination of f(t) and its two derivatives. The suitable form for yp(t) in this case is therefore

    y_p(t) = \beta_2 t^2 + \beta_1 t + \beta_0

The undetermined coefficients β0, β1, and β2 are determined by substituting this expression for yp(t) in Eq. 2.11 and then equating coefficients of similar terms on both sides of the resulting expression. Although this method can be used only for inputs with a finite number of derivatives, this class of inputs includes a wide variety of the most commonly encountered signals in practice. Table 2.1 shows a variety of such inputs and the form of the particular solution corresponding to each input. We shall demonstrate this procedure with an example.

TABLE 2.1
     Input f(t)                                            Forced Response
1.   e^{ζt}   (ζ ≠ λi; i = 1, 2, …, n)                     β e^{ζt}
2.   e^{ζt}   (ζ = λi)                                     β t e^{ζt}
3.   k (a constant)                                        β (a constant)
4.   cos(ωt + θ)                                           β cos(ωt + φ)
5.   (t^r + α_{r−1} t^{r−1} + ⋯ + α_1 t + α_0) e^{ζt}      (β_r t^r + β_{r−1} t^{r−1} + ⋯ + β_1 t + β_0) e^{ζt}

Note: By definition, yp(t) cannot have any characteristic mode terms. If any term p(t) shown in the right-hand column for the particular solution is also a characteristic mode, the correct form of the forced response must be modified to t^i p(t), where i is the smallest possible integer that can be used and still prevent t^i p(t) from having a characteristic mode term. For example, when the input is e^{ζt}, the forced response (right-hand column) has the form β e^{ζt}. But if e^{ζt} happens to be a characteristic mode, the correct form of the particular solution is β t e^{ζt} (see Pair 2). If t e^{ζt} also happens to be a characteristic mode, the correct form of the particular solution is β t² e^{ζt}, and so on.

EXAMPLE 2.1:

Solve the differential equation

    (D^2 + 3D + 2)\,y(t) = D\,f(t)        (2.12)

if the input f(t) = t² + 5t + 3 and the initial conditions are y(0⁺) = 2 and ẏ(0⁺) = 3.

The characteristic polynomial is

    \lambda^2 + 3\lambda + 2 = (\lambda + 1)(\lambda + 2)

Therefore the characteristic modes are e^{−t} and e^{−2t}. The complementary solution is a linear combination of these modes, so that

    y_c(t) = c_1 e^{-t} + c_2 e^{-2t}, \qquad t \ge 0
Here the arbitrary constants c1 and c2 must be determined from the given initial conditions. The particular solution to the input t² + 5t + 3 is found from Table 2.1 (Pair 5 with ζ = 0) to be

    y_p(t) = \beta_2 t^2 + \beta_1 t + \beta_0

Moreover, yp(t) satisfies Eq. 2.11, that is,

    (D^2 + 3D + 2)\,y_p(t) = D\,f(t)        (2.13)

Now

    D\,y_p(t) = \frac{d}{dt}\left( \beta_2 t^2 + \beta_1 t + \beta_0 \right) = 2\beta_2 t + \beta_1
    D^2 y_p(t) = \frac{d^2}{dt^2}\left( \beta_2 t^2 + \beta_1 t + \beta_0 \right) = 2\beta_2

and

    D\,f(t) = \frac{d}{dt}\left( t^2 + 5t + 3 \right) = 2t + 5

Substituting these results in Eq. 2.13 yields

    2\beta_2 + 3(2\beta_2 t + \beta_1) + 2(\beta_2 t^2 + \beta_1 t + \beta_0) = 2t + 5

or

    2\beta_2 t^2 + (2\beta_1 + 6\beta_2)t + (2\beta_0 + 3\beta_1 + 2\beta_2) = 2t + 5

Equating coefficients of similar powers on both sides of this expression yields

    2\beta_2 = 0
    2\beta_1 + 6\beta_2 = 2
    2\beta_0 + 3\beta_1 + 2\beta_2 = 5

Solving these three equations for their unknowns, we obtain β0 = 1, β1 = 1, and β2 = 0. Therefore,

    y_p(t) = t + 1, \qquad t > 0

The total solution y(t) is the sum of the complementary and particular solutions. Therefore,

    y(t) = y_c(t) + y_p(t) = c_1 e^{-t} + c_2 e^{-2t} + t + 1, \qquad t > 0

so that

    \dot{y}(t) = -c_1 e^{-t} - 2c_2 e^{-2t} + 1

Setting t = 0 and substituting the given initial conditions y(0) = 2 and ẏ(0) = 3 in these equations, we have

    2 = c_1 + c_2 + 1
    3 = -c_1 - 2c_2 + 1

The solution to these two simultaneous equations is c1 = 4 and c2 = −3. Therefore,

    y(t) = 4e^{-t} - 3e^{-2t} + t + 1, \qquad t \ge 0