Time Delays in Control Systems
course notes (winter 2019/2020)
Leonid Mirkin
Faculty of Mechanical Engineering
Technion—IIT
draft, April 14, 2020
Contents
Preface vii
Nomenclature ix
1 Systems with Time Delays 1
1.1 Delay elements and their dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Delay in discrete time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Delay in continuous time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Interactions of delays with other dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Input and output delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Internal delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 General interconnections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Finite-dimensional approximations of the delay element . . . . . . . . . . . . . . . . . . . 12
1.3.1 Padé approximant of $e^{-\tau s}$ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 Stability Analysis 19
2.1 Modal methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.1 Characteristic function of delay-differential equations . . . . . . . . . . . . . . . . 19
2.1.2 Asymptotic root properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.3 Stability and roots of characteristic function . . . . . . . . . . . . . . . . . . . . . 24
2.1.4 Nyquist stability criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.5 Delay sweeping (direct method of Walton–Marshall) . . . . . . . . . . . . . . . . 27
2.1.6 Bilinear (Rekasius) transformation . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 Lyapunov’s direct method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.1 Ordinary differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.2 Delay-differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3 Stabilization of Time-Delay Systems 39
3.1 Stabilization of FOPTD systems by fixed-structure controllers . . . . . . . . . . . . . . . 39
3.1.1 Stabilizing PI controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.2 Stabilizing PD controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Problem-oriented controller architectures: historical developments . . . . . . . . . . . . . 43
3.2.1 Dead-time compensation: Smith predictor and its modifications . . . . . . . . . . 43
3.2.2 Finite spectrum assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2.3 Kwon–Pearson–Artstein reduction . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.4 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3 Problem-oriented controller architectures: control-theoretic insight . . . . . . . . . . . . . 49
3.3.1 Gaining insight via discrete-time systems: state feedback . . . . . . . . . . . . . . 49
3.3.2 Gaining insight via discrete-time systems: output feedback . . . . . . . . . . . . . 52
3.3.3 Intermezzo: Fiagbedzi–Pearson reduction for systems with internal delays . . . . 53
3.4 Loop shifting and all stabilizing controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.4.1 Internal stability and loop shifting . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.4.2 Preliminary: truncation and completion operators . . . . . . . . . . . . . . . . . . 60
3.4.3 Loop shifting for dead-time systems . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.4.4 Potential extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.5 Delay as a constraint: extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4 Performance of Time-Delay Systems 67
4.1 Standard H2 and H∞ problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.1.1 State-space formulae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.1.2 Design case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.2 H2 design for dead-time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.2.1 Extraction of optimal dead-time controllers . . . . . . . . . . . . . . . . . . . . . 74
4.2.2 Loop shifting solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.2.3 Design case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2.4 Extensions to systems with multiple loop delays . . . . . . . . . . . . . . . . . . . 79
4.3 H∞ design for dead-time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3.1 Extraction of γ-suboptimal dead-time controllers . . . . . . . . . . . . . . . . 82
4.3.2 Loop shifting approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.4 Tuning industrial controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5 Implementation of DTC-based Controllers 89
5.1 General observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2 Implementation via reset mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.3 Rational approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.3.1 Naïve Padé . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.3.2 Padé with interpolation constraints . . . . . . . . . . . . . . . . . . . . . . . . 93
5.3.3 Direct Padé . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.3.4 Approach of Partington–Mäkilä . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.4 Lumped-delay approximations (LDA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.4.1 Naïve use of Newton–Cotes formulae . . . . . . . . . . . . . . . . . . . . . . 96
5.4.2 Proper use of Newton–Cotes formulae . . . . . . . . . . . . . . . . . . . . . . . . 97
5.4.3 Beyond Newton–Cotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.5 Coda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6 Robustness to Delay Uncertainty 103
6.1 Delay margin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1.1 Bounds on the achievable delay margin . . . . . . . . . . . . . . . . . . . . . . . . 104
6.1.2 Delay margins of DTC-based loops: case study and general considerations . . . . 106
6.2 Embedding uncertain delays into less structured uncertainty classes . . . . . . . . . . . . . 109
6.2.1 Underlying idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.2.2 Preliminary: robust stability with respect to norm-bounded uncertainty . . . . . . 110
6.2.3 Covering models for the uncertain delay element . . . . . . . . . . . . . . . . . . 113
6.2.4 Case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.2.5 Time-varying delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.2.6 Beyond simple coverings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.3 Analysis based on Lyapunov–Krasovskii methods . . . . . . . . . . . . . . . . . . . . . . 123
7 Exploiting Delays 127
7.1 Dead-beat open-loop control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.1.1 Posicast control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.1.2 Generating continuous-time FIR responses by a chain of delays . . . . . . . . . . 129
7.1.3 Input shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
7.1.4 Time-optimal control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.1.5 Generating continuous-time FIR responses by general FIR systems . . . . . . . . 134
7.2 Preview control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.3 Stabilizing delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.4 Delays in the regulator problem: repetitive control . . . . . . . . . . . . . . . . . . . . . . 142
A Background on Linear Algebra 147
A.1 Schur complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
A.2 Sign-definite matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
A.3 Linear matrix equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
B Background on Linear Systems 151
B.1 Signals and systems in time domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
B.1.1 Continuous-time signals and systems . . . . . . . . . . . . . . . . . . . . . . . . . 151
B.1.2 Discrete-time signals and systems . . . . . . . . . . . . . . . . . . . . . . . . . . 153
B.2 Signals and systems in transformed domains . . . . . . . . . . . . . . . . . . . . . . . . . 154
B.3 State-space techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Bibliography 155
Index 161
Preface
T IME DELAYS are ubiquitous in control applications. They represent mass and heat transport phenom-
ena, computation and communication time lags, many effects of unmodeled high-frequency dynam-
ics, et cetera. Dynamics of continuous-time systems involving delays are intrinsically infinite dimensional,
which complicates their analysis and associated control design methods. In many situations, delays have
negative effects on the stability of control systems and impose severe limitations on their attainable perfor-
mance. These factors suggest that understanding time-delay systems and corresponding control analysis
and design methods is of vital importance.
These notes are intended to be an introduction to the realm of time-delay control systems. Their main
emphasis is laid on the linear time-invariant (LTI) setting and input and / or output delays. The reason
is twofold. First, this class, dubbed dead-time systems, is of great importance in applications, where a
major harm is caused by loop delays. Second, these systems constitute the best understood class of time-
delay systems, with plenty of rigorous, yet still transparent and intuitive, analysis and design methods
available. Therefore, dead-time systems are a convenient class of time-delay systems, on which concepts
can be explained without the need to dig into overly convoluted technicalities. Still, many ideas behind
the studied systems are generic and extendible to more general settings.
Two aspects of the control of time-delay systems are highlighted throughout the text. The first one is
prominence given to dead-time compensation (DTC) methods as the control architecture in the context of
time-delay systems. I am convinced—and hope that the text conveys this opinion—that DTC is intrinsic
to delayed dynamics and is a natural extension of classical concepts of state feedback and state observa-
tion. As such, quite a lot of space is devoted to motivating the DTC structure, its use in various control
and estimation problems, as well as to related implementation issues. The second peculiarity is that the
presentation is not dominated by stability analyses. Stability requirements are a compulsory part of the
requirements imposed on control systems, of course. But stabilization is hardly ever the ultimate goal of
control design. Control is about imposing desired behaviors on controlled systems, reducing the sensitivity to
disturbances on them, and so on. These aspects are extensively discussed in the notes.
This is an engineering text, so first and foremost it aims at developing an engineering insight into the
impact of delays on control systems and at exploiting the structure of the delay element in various analysis
and design situations. As a result—whether this is a welcome outcome or not depends on viewpoint—the
notes are less concerned with such apparently fascinating issues as associated initial value problems, the
smoothness of solutions, clustering closed-loop poles for systems with multiple incommensurate delays,
and so on. Also, the math is not always self-contained; some technical results are presented without proofs.
Nonetheless, reasonable levels of rigor and self-containment are aimed at (although with only partial
success).
Haifa (32.7746,35.0230) LEONID MIRKIN
October, 2019
Nomenclature
$\mathbb N$ — set of positive integers (natural numbers)
$\mathbb Z$ — set of integers
$\mathbb Z_+$ — set of nonnegative integers
$\mathbb Z_-$ — set of nonpositive integers, $\mathbb Z_- = \mathbb Z \setminus \mathbb N$
$\mathbb Z_{i_1..i_2}$ — integer interval from $i_1$ to and including $i_2$, i.e. $\mathbb Z_{i_1..i_2} := \{i \in \mathbb Z \mid i_1 \le i \le i_2\}$
$\mathbb R$ — set of real numbers, $\mathbb R = (-\infty, \infty)$
$\mathbb R_+$ — set of nonnegative real numbers, $\mathbb R_+ = [0, \infty)$
$\mathbb R_-$ — set of nonpositive real numbers, $\mathbb R_- = (-\infty, 0]$
$j\mathbb R$ — set of pure imaginary numbers
$\mathbb C$ — set of complex numbers
$\operatorname{Re} z$ — the real part of $z \in \mathbb C$
$\operatorname{Im} z$ — the imaginary part of $z \in \mathbb C$
$\mathbb C_\alpha$ — open right half-plane to the right of $\alpha \in \mathbb R$, i.e. $\mathbb C_\alpha := \{z \in \mathbb C \mid \operatorname{Re} z > \alpha\}$
$\bar{\mathbb C}_\alpha$ — closed right half-plane to the right of $\alpha \in \mathbb R$, i.e. $\bar{\mathbb C}_\alpha := \{z \in \mathbb C \mid \operatorname{Re} z \ge \alpha\}$
$\mathbb T$ — unit circle, $\mathbb T := \{z \in \mathbb C \mid |z| = 1\}$
$\mathbb D$ — interior of $\mathbb T$ (open unit disk), $\mathbb D := \{z \in \mathbb C \mid |z| < 1\}$
$\bar{\mathbb D}$ — closed unit disk, $\bar{\mathbb D} := \{z \in \mathbb C \mid |z| \le 1\} = \mathbb D \cup \mathbb T$
$\mathbb F$ — generic field, frequently used as an alias of either $\mathbb R$ or $\mathbb C$
$C^{p\times m}(\mathbb I)$ — class of continuous functions $\mathbb I \to \mathbb F^{p\times m}$ for $\mathbb I \subset \mathbb R$ (denoted $C^p(\mathbb I)$ if $m = 1$ and $C(\mathbb I)$ if the dimensions are irrelevant or clear from the context)
$L_2^{p\times m}(\mathbb I)$ — Lebesgue space of square-integrable functions $\mathbb I \to \mathbb F^{p\times m}$ (or $L_2(\mathbb I)$)
$L_{2+}^{p\times m}(\mathbb R)$ — space of square-integrable functions $\mathbb R \to \mathbb F^{p\times m}$ vanishing in $\mathbb R \setminus \mathbb R_+$ (or $L_{2+}(\mathbb R)$)
$L_{2-}^{p\times m}(\mathbb R)$ — space of square-integrable functions $\mathbb R \to \mathbb F^{p\times m}$ vanishing in $\mathbb R \setminus \mathbb R_-$ (or $L_{2-}(\mathbb R)$)
$\ell_2^{p\times m}(\mathbb I)$ — space of square-summable functions $\mathbb I \to \mathbb F^{p\times m}$ for $\mathbb I \subset \mathbb Z$ (or $\ell_2(\mathbb I)$)
$\ell_{2+}^{p\times m}(\mathbb Z)$ — space of square-summable functions $\mathbb Z \to \mathbb F^{p\times m}$ vanishing in $\mathbb Z \setminus \mathbb Z_+$ (or $\ell_{2+}(\mathbb Z)$)
$\ell_{2-}^{p\times m}(\mathbb Z)$ — space of square-summable functions $\mathbb Z \to \mathbb F^{p\times m}$ vanishing in $\mathbb Z_+$ (or $\ell_{2-}(\mathbb Z)$)
$L_1^{p\times m}(\mathbb I)$ — space of absolutely integrable functions $\mathbb I \to \mathbb F^{p\times m}$ (or $L_1(\mathbb I)$)
$L_\infty^{p\times m}(\mathbb I)$ — space of essentially bounded functions $\mathbb I \to \mathbb F^{p\times m}$ (or $L_\infty(\mathbb I)$)
$H_\infty^{p\times m}(\mathbb A)$ — Hardy space of holomorphic and bounded functions $\mathbb A \to \mathbb F^{p\times m}$ for some $\mathbb A \subset \mathbb C$ (or $H_\infty$)
$\breve x_t(s)$ — finite-window history of $x$ at time $t$, $\breve x_t(s) := x(t + s)$ for all $s \in [-\tau, 0]$ and some $\tau > 0$
$\mathbb 1_{\mathbb I}(t)$ — (continuous-time) indicator of a set $\mathbb I \subset \mathbb R$, $\mathbb 1_{\mathbb I}(t) = 1$ if $t \in \mathbb I$ and $0$ otherwise
$\mathbb 1(t)$ — unit step, $\mathbb 1(t) := \mathbb 1_{\mathbb R_+}(t)$
$\delta(t)$ — Dirac delta function
$\mathbb 1_{\mathbb I}[i]$ — (discrete-time) indicator of a set $\mathbb I \subset \mathbb Z$, $\mathbb 1_{\mathbb I}[i] = 1$ if $i \in \mathbb I$ and $0$ otherwise
$\mathbb 1[i]$ — unit step, $\mathbb 1[i] := \mathbb 1_{\mathbb Z_+}[i]$
$\delta[i]$ — unit pulse at $i = 0$, $\delta[i] = 1$ if $i = 0$ and $0$ otherwise
$e_i$ — the $i$th standard basis vector in $\mathbb F^n$; $e_1 := [\,1\;0\;0\;\cdots\;0\,]'$, $e_2 := [\,0\;1\;0\;\cdots\;0\,]'$, et cetera
$I_n$ — $n \times n$ identity matrix (just $I$ if the dimension is irrelevant)
$M'$ — transpose of a matrix $M \in \mathbb R^{n\times m}$ / complex-conjugate transpose of a matrix $M \in \mathbb C^{n\times m}$
$\lambda_i(M)$ — $i$th eigenvalue of a matrix $M \in \mathbb F^{n\times n}$
$\operatorname{spec}(M)$ — spectrum of a matrix $M \in \mathbb F^{n\times n}$, i.e. the set of all its eigenvalues
$\rho(M)$ — spectral radius of a matrix $M \in \mathbb F^{n\times n}$, $\rho(M) = \max\{|\lambda_1(M)|, \ldots, |\lambda_n(M)|\}$
$\bar\sigma(M)$ — the maximum singular value of a matrix $M \in \mathbb F^{p\times m}$
$\underline\sigma(M)$ — the minimum singular value of a matrix $M \in \mathbb F^{p\times m}$
$\operatorname{tr}(M)$ — trace of a matrix $M \in \mathbb F^{n\times n}$, $\operatorname{tr}(M) = \sum_{i=1}^n m_{ii} = \sum_{i=1}^n \lambda_i(M)$
$\|M\|$ — spectral norm of $M \in \mathbb F^{n\times m}$, $\|M\|^2 := \rho(M'M) = \rho(MM')$
$\|M\|_F$ — Frobenius norm of $M \in \mathbb F^{n\times m}$, $\|M\|_F^2 := \operatorname{tr}(M'M) = \operatorname{tr}(MM') = \sum_{i=1}^n \sum_{j=1}^m |m_{ij}|^2$
$\operatorname{diag}\{M_i\}$ — block-diagonal matrix with $M_1, \ldots, M_k$ on its diagonal
$\operatorname{sign} a$ — sign of $a \in \mathbb R$, i.e. $\operatorname{sign} a = 1$ if $a > 0$, $\operatorname{sign} a = -1$ if $a < 0$, and $\operatorname{sign} a = 0$ if $a = 0$
$\deg P(s)$ — degree of a polynomial $P(s)$
lcf — left coprime factorization over $H_\infty$, like $G = \tilde M^{-1} \tilde N$
rcf — right coprime factorization over $H_\infty$, like $G = N M^{-1}$
$F_l(G, K)$ — lower linear fractional transformation, $F_l(G, K) = G_{11} + G_{12} K (I - G_{22} K)^{-1} G_{21}$
$F_u(G, K)$ — upper linear fractional transformation, $F_u(G, K) = G_{22} + G_{21} K (I - G_{11} K)^{-1} G_{12}$
$G \star \tilde G$ — Redheffer star product, $G \star \tilde G = \begin{bmatrix} F_l(G, \tilde G_{11}) & G_{12} (I - \tilde G_{11} G_{22})^{-1} \tilde G_{12} \\ \tilde G_{21} (I - G_{22} \tilde G_{11})^{-1} G_{21} & F_u(\tilde G, G_{22}) \end{bmatrix}$
Chapter 1
Systems with Time Delays
LATENCY is an intrinsic part of mass and information transfer. After all, no mass / information can
travel faster than the speed of light. Information processing takes time as well. Hence, every control
system should take potential latencies into account. This chapter introduces the delay element, which is
the basic module describing latencies, and discusses its fundamental properties, in both continuous and
discrete times, and effects on finite-dimensional dynamics.
1.1 Delay elements and their dynamics
1.1.1 Delay in discrete time
Although we are mostly concerned with continuous-time systems, throughout this text we occasionally
use their discrete-time counterparts to gain insight into underlying ideas. This is because the dynamics
of the delay element are more conventional in discrete time, which facilitates grasping ideas without the
need to dig into advanced mathematical notions.
With this logic in mind, we start by defining the discrete-time delay element as a system $\bar D_\tau : u \mapsto y$, acting as

$y[t] = u[t - \tau]$,  (1.1)

i.e. reproducing the input with a lag of $\tau$ steps, for some $\tau \in \mathbb N$, called the delay, and all $u : \mathbb Z_+ \to \mathbb R^m$. This is an ordinary linear shift-invariant (LSI) causal system, whose impulse response is $d_\tau[t] = \delta[t - \tau] I_m$ and whose $(\tau m)$-order transfer function, the $z$-transform of $d_\tau$, is

$\bar D_\tau(z) = \dfrac{1}{z^\tau} I_m = \left[\begin{array}{cccc|c} 0 & I_m & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I_m & 0 \\ 0 & 0 & \cdots & 0 & I_m \\ \hline I_m & 0 & \cdots & 0 & 0 \end{array}\right].$  (1.2)
The chosen state-space realization above is in the canonical companion form and is one of many possibilities, of course. Another route to this realization is to construct the state vector of $\bar D_\tau$ first. This can be done via the interpretation of the state vector as a memory accumulator. It is readily seen that, given an arbitrary time instance $t \ge 0$, the knowledge of $u[t + s]$ for all $s \in \mathbb Z_{-\tau..-1}$ is what we need to determine the present and future values of $y$ given the inputs from $t$ on. The $(\tau m)$-dimensional vector

$x[t] := \begin{bmatrix} u[t - \tau] \\ \vdots \\ u[t - 1] \end{bmatrix}$  (1.3)
is thus a logical candidate for the state vector of $\bar D_\tau$. Under this choice, the state propagation equations become

$\bar D_\tau : \begin{cases} x[t+1] = \begin{bmatrix} 0 & I_m & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I_m \\ 0 & 0 & \cdots & 0 \end{bmatrix} x[t] + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ I_m \end{bmatrix} u[t] \\ y[t] = \begin{bmatrix} I_m & 0 & \cdots & 0 \end{bmatrix} x[t] \end{cases}$  (1.4)
and agrees with (1.2). As a matter of fact, the observability matrix of this realization equals $I_{\tau m}$ and its controllability matrix is the block-exchange matrix with $m$-dimensional blocks. Hence, realization (1.4) is minimal. If the delay element is not assumed to be in its zero equilibrium at $t = 0$, a nonzero initial condition $x[0]$, which is the history of its inputs in $\mathbb Z_{-\tau..-1}$, can be introduced.

The delay element $\bar D_\tau$ is $\ell_2(\mathbb Z_+)$-stable. This follows from the fact that all poles of its transfer function in (1.2) are at the origin, i.e. in the open unit disk $\mathbb D$. Another way to see it is via the readily verifiable relation $\|\bar D_\tau u\|_2 = \|u\|_2$, which holds for all $u \in \ell_2(\mathbb Z_+)$.
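The recursion (1.4) can be mimicked in a few lines of code. The sketch below is not from the notes; it assumes scalar signals ($m = 1$) and lets a plain list play the role of the state vector (1.3), i.e. of the memory accumulator storing the last $\tau$ inputs.

```python
# A minimal sketch of the discrete-time delay element via (1.3)-(1.4),
# for scalar signals (m = 1). Not from the notes; for illustration only.

def delay_step(x, u):
    """One step of (1.4): read y from the top of the buffer, then shift
    the buffer up and append the new input sample."""
    y = x[0]              # y[t] = [I 0 ... 0] x[t] = u[t - tau]
    x_next = x[1:] + [u]  # x[t+1]: drop the oldest sample, append u[t]
    return x_next, y

tau = 3
x = [0.0] * tau           # zero initial conditions: empty input history
u_seq = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y_seq = []
for u in u_seq:
    x, y = delay_step(x, u)
    y_seq.append(y)

print(y_seq)  # → [0.0, 0.0, 0.0, 1.0, 2.0, 3.0]: the input lagged by 3 steps
```

The buffer is exactly the history of inputs over $\mathbb Z_{-\tau..-1}$, in line with the memory-accumulator interpretation above.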
Remark 1.1 (varying delays). A natural generalization of $\bar D_\tau$ is the varying delay element $\bar D_{\tau[t]}$, which acts as $y[t] = u[t - \tau[t]]$ for a function $\tau[t] \ge 0$. This system is substantially knottier than the constant delay element; even its dimension varies from step to step. Another, somewhat surprising, fact is that $\bar D_{\tau[t]}$ might be $\ell_2(\mathbb Z_+)$-unstable for some $\tau[t]$. For example, if $\tau[t] = t$, then we have $y[t] = u[0]$ for all $t \in \mathbb Z_+$. So the choice $u[t] = \delta[t]$, which is an $\ell_2(\mathbb Z_+)$ signal, results in $y[t] = \mathbb 1[t]$, which is not. Yet it can be shown that $\bar D_{\tau[t]}$ is $\ell_2(\mathbb Z_+)$-stable if $\tau[t]$ is uniformly bounded, say by $\bar\tau \in \mathbb N$, with its induced norm then upper bounded by $\sqrt{1 + \bar\tau} > 1$.
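The unstable example from the remark is easy to verify numerically. The following sketch (not from the notes; finite horizon, scalar signals) applies the varying delay $\tau[t] = t$ to the unit pulse.

```python
# Numerical check of Remark 1.1: with tau[t] = t, the unit pulse (an l2
# signal) is mapped to the unit step (not an l2 signal), so the varying
# delay element cannot be l2-stable. Illustration only, not from the notes.

def varying_delay(u, tau):
    """y[t] = u[t - tau(t)], with u treated as zero before t = 0."""
    y = []
    for t in range(len(u)):
        k = t - tau(t)
        y.append(u[k] if 0 <= k < len(u) else 0.0)
    return y

u = [1.0] + [0.0] * 9               # u[t] = delta[t] on a finite horizon
y = varying_delay(u, lambda t: t)   # the delay tau[t] = t
print(y)  # → [1.0, 1.0, ..., 1.0]: y[t] = u[0] for all t, the unit step
```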
1.1.2 Delay in continuous time

The continuous-time delay element $\bar D_\tau : u \mapsto y$ is similar to its discrete-time counterpart from (1.1). It is defined as

$y(t) = u(t - \tau)$  (1.5)

for some constant delay $\tau > 0$ and all $u : \mathbb R_+ \to \mathbb R^m$. The similarity is not complete though. System (1.5) is more complex than that in (1.1), chiefly because the former is infinite dimensional.

Remember that the dimension of a dynamic system is the size of its smallest possible state vector, its smallest history accumulator. For the system in (1.5) the history required to continue from a given time point $t_c \ge 0$ is clearly the whole trajectory of $u$ in the time interval $[t_c - \tau, t_c]$. Indeed, this is exactly what we need to calculate $y(t)$ for $t \in [t_c, t_c + \tau]$. In other words, given an arbitrary time instance $t \ge 0$, we have to know the function $\breve u_t : [-\tau, 0] \to \mathbb R^m$ such that

$\breve u_t(s) = u(t + s), \quad \forall s \in [-\tau, 0],$  (1.6)
to determine the present and future values of $y$ given the inputs from $t$ on. This is a perfect analogy with the discrete delay. Yet there is a qualitative difference between the discrete- and continuous-time cases. The set of all discrete $m$-dimensional functions over the finite interval $\mathbb Z_{-\tau..-1}$ is a finite-dimensional linear space; after all, it is equivalent to the set of all $\tau m$-dimensional vectors, cf. (1.3). Unlike this, the set of all continuous-time functions over the finite interval $[-\tau, 0]$ is an infinite-dimensional linear space, because the number of linearly independent functions is unbounded there¹. Hence, the continuous-time $\bar D_\tau$ in (1.5) is an infinite-dimensional system. Its state is $\breve u_t$ defined by (1.6). But writing down corresponding state equations would require advanced technical tools, which go beyond the scope of these notes.

¹For example, the functions $f_i(t) := e^{j 2\pi i t/\tau}$ are linearly independent for all $i \in \mathbb Z_+$.
Fig. 1.1: Frequency response plots of the continuous-time delay element: (a) Bode plot, (b) Nyquist plot (the locus $e^{-j\tau\omega}$ traverses the unit circle), (c) Nichols chart
If not stated otherwise, we assume that the initial conditions for (1.5) are zero, i.e. $\breve u_0 = 0$. In this case $\bar D_\tau$ actually equals the shift operator $S_\tau$ defined by (B.2). The delay element is LTI and stable. Indeed,

$[\bar D_\tau(\alpha u + \beta v)](t) = (\alpha u + \beta v)(t - \tau) = \alpha u(t - \tau) + \beta v(t - \tau) = \alpha (\bar D_\tau u)(t) + \beta (\bar D_\tau v)(t)$

for all constants $\alpha$ and $\beta$ and inputs $u$ and $v$, which implies linearity. Because $\bar D_\tau S_\sigma = S_{\tau + \sigma} = S_\sigma \bar D_\tau$ for every shift $S_\sigma$, we have time invariance. And $L_2(\mathbb R_+)$-stability follows, similarly to the discrete-time case, from the fact that $\|\bar D_\tau u\|_2 = \|u\|_2$ for all $u \in L_2(\mathbb R_+)$. The delay element is obviously causal and its impulse response is

$d_\tau(t) = \delta(t - \tau) I_m,$

where $m$ is the dimension of $u(t)$ and $y(t)$. Thus, $\bar D_\tau$ can be analyzed in transformed domains, like Fourier and Laplace.
The transfer function of $\bar D_\tau$ is

$\bar D_\tau(s) := \mathcal L\{d_\tau\} = e^{-\tau s} I_m.$  (1.7)

It is an irrational function of $s$, which is yet another indication that the continuous-time delay element is an infinite-dimensional system. The function $e^{-\tau s}$ is an entire function of $s$ (i.e. holomorphic in the whole $\mathbb C$). It is also bounded in every right half-plane $\mathbb C_\alpha := \{s \in \mathbb C \mid \operatorname{Re} s > \alpha\}$, as $|e^{-\tau s}| < e^{-\tau\alpha}$ for all $s \in \mathbb C_\alpha$ and $\tau > 0$. Hence, the transfer function $\bar D_\tau \in H_\infty$, which is the set of all functions holomorphic and bounded in $\mathbb C_0$. This is yet another proof that the continuous-time delay element is $L_2$-stable and causal.

The frequency response of the delay element is

$\bar D_\tau(j\omega) := \mathcal F\{d_\tau\} = e^{-j\tau\omega} I_m.$  (1.8)

In the scalar case, $m = 1$, its magnitude and phase are quite simple:

$|\bar D_\tau(j\omega)| = 1 \quad\text{and}\quad \arg \bar D_\tau(j\omega) = -\tau\omega.$  (1.9)

Thus, the frequency-response magnitude of the delay element is unity at all frequencies and its phase is a linearly decreasing function of $\omega$, i.e. the delay element adds a phase lag growing linearly with the frequency. The Bode, Nyquist, and Nichols plots of $\bar D_\tau$ are presented in Fig. 1.1. Expressions (1.9) facilitate the analysis of time-delay systems in the frequency domain, making it in some cases rather intuitive and substantially simpler than the analysis in the time domain.
Remark 1.2 (varying delays). As in the discrete-time case, we can generalize $\bar D_\tau$ to the varying delay element $\bar D_{\tau(t)}$, which acts as $y(t) = u(t - \tau(t))$ for some function $\tau(t) \ge 0$. Curiously, this system might be $L_2(\mathbb R_+)$-unstable even if $|\tau(t)| \le \bar\tau$ for all $t \in \mathbb R_+$ and an arbitrary upper bound $\bar\tau > 0$ (consider finding an example of such a delay function as a homework assignment). A yet more general situation arises if the delay depends not only on time but also on the input, so that $y(t) = u(t - \tau(t, u(t)))$. Such a delay element, $\bar D_{\tau(t),u}$, is nonlinear and its properties are yet more involved.
Fig. 1.2: Examples of systems with sensing and actuating delays: (a) sensing in a rolling mill, (b) actuating via a conveyor belt
1.2 Interactions of delays with other dynamics
One seldom faces processes containing the delay element alone. In most situations delays interact with
other dynamic processes. In this section some of such interactions are studied. For the sake of simplicity,
we mostly consider systems with a single and constant delay, which simplifies their analysis. Remarks on
multiple-delay systems and time-varying delays will be provided mostly to highlight potential differences.
1.2.1 Input and output delays
Arguably, the principal source of delays in feedback control applications is delays arising in the "control pass," i.e. in transferring information between the sensor and the actuator ends of the controller. These are

sensing delays, like that in measuring the thickness of a metal strip in rolling mills, see Fig. 1.2(a), where X-ray gauge measurements are taken at a distance, say $d$, from the roll gap, so that the controller has access to measurements only $\tau = d/v$ time units after the rolls, where $v$ is the velocity of the exit strip;

actuation delays, like that in the conveyor belt in Fig. 1.2(b), where a material through which some process is affected can reach the process only $\tau = l/v$ time units after being injected into the system, where $l$ is the length of the conveyor pass and $v$ is its velocity;
communication delays, which are more and more common in light of the trend to distribute information
acquisition and processing, with the use of communication networks to exchange local information
between various components of control systems;
computational delays, which are inevitable if controllers are implemented on digital computers;
et cetera. From the plant modeling viewpoint, such delays can be viewed as input and / or output delays, that is, delays connected in series with a controlled plant. A good collection of examples of systems with
input / output delays arising in various, mostly process control, applications can be found in [51].
Let a plant $P$ be LTI and the input and output delays be uniform, say all input channels are delayed by the same $\tau_u \ge 0$ and all output channels by the same $\tau_y \ge 0$. This yields $\bar D_{\tau_y} P \bar D_{\tau_u}$ as the new plant. By the very time invariance, $\bar D_{\tau_y} P \bar D_{\tau_u} = P \bar D_{\tau_y + \tau_u}$, meaning that without loss of generality we may regard such systems as input-delay systems $P \bar D_\tau$ with the transfer function

$P_\tau(s) = P(s) e^{-\tau s},$  (1.10)

where $\tau = \tau_y + \tau_u$. Systems of form (1.10) are known as dead-time systems.

Although $P_\tau$ is infinite dimensional, its properties are relatively intuitive. The impulse response of $P_\tau$ is $p_\tau(t) = p(t - \tau)$, by definition, with support in $[\tau, \infty)$. The addition of the delay element does not alter (in)stability properties of the delay-free $P$. As $e^{-\tau s}$ is an $H_\infty$ function, it does not add any instability. In fact, it does not introduce additional poles, because it is entire. Moreover, $e^{-\tau s} \ne 0$ for all $s \in \mathbb C$, so it cannot cancel poles of $P(s)$. To gain insight into the structure of the state space of $P_\tau$, consider
Fig. 1.3: Frequency response plots of $P(s) = \sqrt 2/(2\pi s + 1)$ (dashed) and $P_\tau(s) = P(s) e^{-\tau s}$ (solid): (a) Bode plot, (b) Nyquist plot, (c) Nichols chart
its discrete-time counterpart, whose transfer function is $P(z) z^{-\tau}$. This is a finite-dimensional system, provided of course $P$ is finite dimensional itself, and its state-space realization can be derived from those of $P(z) = D + C(zI - A)^{-1}B$ and the discrete delay element in (1.2) using (B.22) on p. 155:

$P(z) z^{-\tau} = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] \left[\begin{array}{cccc|c} 0 & I_m & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & I_m & 0 \\ 0 & 0 & \cdots & 0 & I_m \\ \hline I_m & 0 & \cdots & 0 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} A & B & 0 & \cdots & 0 & 0 \\ 0 & 0 & I_m & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & I_m & 0 \\ 0 & 0 & 0 & \cdots & 0 & I_m \\ \hline C & D & 0 & \cdots & 0 & 0 \end{array}\right].$  (1.11)
The state vector of this system is the union of the states of its components and thus includes both the state of $P$ and the history of the input signal over the last $\tau$ steps, cf. (1.3). Following this logic, the state of the continuous-time input-delay system $P_\tau$, with transfer function (1.10), at a time instance $t$ is $(x_P(t), \breve u_t) \in \mathbb R^n \times \{[-\tau, 0] \to \mathbb R^m\}$, i.e. it comprises both the state $x_P$ of the delay-free $P$ and the history of the input signal over the time interval $[t - \tau, t]$. The space of all such states is infinite dimensional.
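The augmented realization (1.11) can be exercised numerically. The sketch below is not from the notes; it uses an arbitrarily chosen scalar first-order $P$ and propagates the plant state together with the input buffer, exactly as the composite state vector above prescribes.

```python
# Sketch of the series realization (1.11) for a scalar plant (m = 1):
# the state of P(z)z^{-tau} is (x_P, input buffer). Illustration only;
# the plant parameters below are arbitrary, not from the notes.

def simulate(A, B, C, D, u_seq, tau):
    """Simulate x[t+1] = A x + B u[t-tau], y[t] = C x + D u[t-tau] by
    augmenting the plant state with the input history, cf. (1.3)."""
    x = 0.0                # state of the delay-free P (scalar here)
    buf = [0.0] * tau      # delay states: u[t-tau], ..., u[t-1]
    y_seq = []
    for u in u_seq:
        u_del = buf[0] if tau > 0 else u   # the delayed input u[t-tau]
        y_seq.append(C * x + D * u_del)
        x = A * x + B * u_del
        buf = buf[1:] + [u]
    return y_seq

# First-order P with A=0.5, B=1, C=1, D=0 and an input delay of 2 steps;
# the impulse response of P is (0, 1, 0.5, ...), so shifted by 2:
y = simulate(0.5, 1.0, 1.0, 0.0, [1.0, 0.0, 0.0, 0.0, 0.0], tau=2)
print(y)  # → [0.0, 0.0, 0.0, 1.0, 0.5]
```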
In the SISO case the frequency response of $P_\tau$ can be easily derived from that of $P$. It follows from (1.9) that

$|P_\tau(j\omega)| = |P(j\omega)| \quad\text{and}\quad \arg P_\tau(j\omega) = \arg P(j\omega) - \tau\omega.$  (1.12)

In other words, the addition of the delay element does not alter the magnitude and adds an extra phase lag proportional to the frequency. These properties facilitate the construction of frequency-response plots of input-delay systems from those of their delay-free versions, see Fig. 1.3. Specifically, the Bode magnitude plot remains unchanged and the phase plot shifts downward by $\tau\omega$, as shown in Fig. 1.3(a). Each point of the Nyquist plot of $P(j\omega)$ rotates clockwise, with the rotation angle increasing with the frequency. This normally results in spiral curves, like that in Fig. 1.3(b). Each point of the Nichols plot shifts leftward, see Fig. 1.3(c), by a distance increasing with the frequency.
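Relation (1.12) is likewise easy to verify numerically; the sketch below (with an arbitrarily chosen first-order plant, not one from the notes) confirms that cascading a delay leaves the magnitude untouched and only adds phase lag.

```python
# Numerical illustration of (1.12): |P_tau(jw)| = |P(jw)| while the phase
# drops by tau*omega. The plant below is illustrative, not from the notes.
import cmath

def P(s):
    return 1.0 / (s + 1.0)   # an arbitrary delay-free first-order lag

tau = 1.5
for omega in [0.2, 1.0, 5.0]:
    s = 1j * omega
    P_tau = P(s) * cmath.exp(-tau * s)           # P(jw) * e^{-j*tau*w}
    assert abs(abs(P_tau) - abs(P(s))) < 1e-12   # magnitudes coincide
    # extra phase lag (equals -tau*omega modulo 2*pi of principal values):
    extra = cmath.phase(P_tau) - cmath.phase(P(s))
    print(omega, abs(P_tau), extra)
```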
Remark 1.3 (multiple delays). It may happen that different input and / or output channels of $P$ have different delays. For example, if a thickness profile of the strip in the rolling mill system in Fig. 1.2(a) is controlled, then several points at different distances from the edge of the strip should be measured. Such sensors are normally located at different distances from the roll gap as well, causing different measurement delays. General multiple input and output delays can be described in the block-diagonal forms $\operatorname{diag}\{\bar D_{\tau_{u,i}}\}$ and $\operatorname{diag}\{\bar D_{\tau_{y,i}}\}$, respectively, for some $\tau_{u,i} \ge 0$ and $\tau_{y,i} \ge 0$ and all relevant channel indices $i$. Such delay elements are still stable and do not alter stability properties and pole locations of the plant. At the same time, diagonal delays no longer commute with the plant, unless it is block-diagonal itself. This nontrivially complicates the analysis of such systems.
Remark 1.4 (distributed delays). A yet more general description of the interconnection of delays and other dynamics is the distributed delay. An example of such a delay element is the system $u \mapsto y$ acting as

$y(t) = \int_0^\tau \alpha(s) u(t - s)\, \mathrm ds = \int_{t-\tau}^t \alpha(t - s) u(s)\, \mathrm ds$  (1.13)

for some (generalized) function $\alpha : [0, \tau] \to \mathbb R^{q \times m}$. This definition appears natural, as the integral can be seen as a weighted sum of delays in the range $[0, \tau]$, and standard lumped delays can be produced by Dirac delta components of $\alpha$. However, this definition is also somewhat confusing: by this logic any causal convolution as in (B.3) is a distributed-delay system. Think of the case where $\tau \to \infty$ and $\alpha(t) = e^{-at} \mathbb 1(t)$ for some $a > 0$. This yields an ordinary first-order lag, whose treatment as a distributed-delay system would only complicate matters. Therefore, the term "distributed delay" is barely used throughout this text. When a system of form (1.13) with a finite $\tau$ arises, it is referred to as an FIR (finite impulse response) system. This is because (1.13) is a convolution representation of an LTI system whose impulse response has support over the finite time interval $[0, \tau]$.
Remark 1.5 (varying delays). It may be safe to claim that constant lags are not widespread in applications. For example, if the strip velocity in the rolling mill system in Fig. 1.2(a), or the belt velocity in the conveyor actuator in Fig. 1.2(b), varies, then the corresponding measurement or actuation delay varies with time. In many cases such variations are small, so a constant-delay assumption is adequate. Still, there are applications, like networked control, where delay variations cannot be neglected. Properties of systems with time-varying delays might be less intuitive than those with constant delays. As an illustration, note that

  ∫_{R₊} p(t − s) u(s − τ(s)) ds ≠ ∫_{R₊} p(t − τ(t) − s) u(s) ds

in general. Hence, the effect of an input delay τ(t) is not equivalent to that of the same output delay, and vice versa. In fact, it might even happen that there is no equivalent output delay for a given non-constant input delay. Such issues render the analysis of systems with varying delays substantially more involved than that of systems with constant delays. ○
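A minimal discrete-time experiment illustrates the non-commutation (the first-order lag, the delay profile, and all numbers below are hypothetical choices, not from the text): a time-varying delay applied before a filter and after it produces different outputs, while a constant delay commutes exactly.

```python
import numpy as np

# Hypothetical setup: a first-order lag and a sinusoidally varying
# integer delay d[k]; with a CONSTANT delay the two orderings coincide.
rng = np.random.default_rng(0)
N = 400
u = rng.standard_normal(N)
d_var = (5 + 3 * np.sin(0.05 * np.arange(N))).round().astype(int)
d_const = np.full(N, 5)

def delay(x, d):
    """y[k] = x[k - d[k]], with zero prehistory."""
    return np.array([x[k - d[k]] if k >= d[k] else 0.0 for k in range(len(x))])

def lag(x, a=0.9):
    """First-order lag y[k+1] = a*y[k] + (1-a)*x[k], y[0] = 0."""
    y = np.zeros(len(x))
    for k in range(len(x) - 1):
        y[k + 1] = a * y[k] + (1 - a) * x[k]
    return y

gap_varying = np.max(np.abs(lag(delay(u, d_var)) - delay(lag(u), d_var)))
gap_constant = np.max(np.abs(lag(delay(u, d_const)) - delay(lag(u, ), d_const)))
# gap_constant vanishes; gap_varying does not
```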
Delays as a compact modeling tool
In some situations input delays are used as a convenient modeling tool to represent high-order dynamics in a concise manner, with fewer parameters. Typical examples, omnipresent in process control, are first-order-plus-time-delay (FOPTD) and second-order-plus-time-delay (SOPTD) models, which comprise first- or second-order dynamics connected in series with a delay element. Such models are sufficiently rich to reflect complex dynamical phenomena with monotonic responses, while having only three or four parameters (the static gain, time constants, and the delay) to identify.
To provide a flavor of this approach, consider a plant P_n with the transfer function of the form

  P_n(s) = 1 / (s + 1)^n

for a large enough n ∈ N. This kind of model can describe n identical tanks, modeled as "flow ↦ level" systems, connected in series; a queue of n vehicles, modeled as integrators ("velocity ↦ position") whose control signals are proportional to the position mismatch between the current and the next vehicle; et cetera. The frequency response of such a system has monotonically decreasing gain and phase, with a large phase lag at high frequencies. It can then be beneficial to approximate these dynamics by lower-order ones connected in series with a delay element to account for the high-frequency phase lag. For example,
for n = 5 and n = 8 the SOPTD approximations

  P_{5,τ}(s) = e^{−1.7236s} / (1.6875s + 1)²   and   P_{8,τ}(s) = e^{−3.8451s} / (2.1626s + 1)²

fit the step responses of the plant reasonably well, see Fig. 1.4.

Fig. 1.4: Step responses of P_n and its SOPTD approximation P_{n,τ}: (a) n = 5; (b) n = 8
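The fit is easy to reproduce numerically (a sketch assuming SciPy; the delay is applied by time-shifting the step response of the second-order part):

```python
import numpy as np
from scipy import signal

# Compare the step response of P5(s) = 1/(s+1)^5 with its SOPTD
# approximation from the text, e^{-1.7236 s} / (1.6875 s + 1)^2.
t = np.linspace(0, 20, 4001)
_, y5 = signal.step(signal.lti([1.0], np.poly([-1.0] * 5)), T=t)

tau, T2 = 1.7236, 1.6875
den2 = np.array([T2**2, 2 * T2, 1.0])          # (T2*s + 1)^2
_, y2 = signal.step(signal.lti([1.0], den2), T=t)
y_soptd = np.interp(t - tau, t, y2, left=0.0)  # apply the delay by shifting

fit_error = np.max(np.abs(y5 - y_soptd))       # modest: the fit is decent
```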
1.2.2 Internal delays
In some systems delays arise not as a result of control path latencies, but rather in connection with internal interactions. To illustrate this phenomenon, consider a process described by the one-dimensional wave equation

  ∂²w(x,t)/∂t² = c² ∂²w(x,t)/∂x²   for 0 < x < L and t ≥ 0,   (1.14)

where w is a physical variable of interest evolving both in time t and in space x, c > 0 is the speed of wave propagation in the medium, and L > 0 is the medium length. Equations of this kind can describe a number of processes propagating in one-dimensional media, like acoustic waves in a duct, vibrations of a string, torsion of a rod, electrical transmission lines, et cetera. They vary in the nature of their excitation and interaction with the surrounding environment, which may be lumped (i.e. via boundary conditions) or distributed. Throughout this section we assume the former kind, which is simpler, of a type motivated by acoustic waves in a cylindrical duct, see [8] and the references therein for details. In this case ∂w/∂t and ∂w/∂x can be thought of as the velocity and (scaled) pressure of the air in the duct.
Assume that the system is excited only at x = 0 via the boundary condition

  ∂w(0,t)/∂t = u(t)   (1.15)

for some exogenous signal u(t). Assume also that the other end, that at x = L, is passively connected to the environment and that the interaction of waves in the medium with its surroundings at x = L is characterized as

  −cZ_m ∂w(x,t)/∂x |_{x=L} = Z_L ∂w(L,t)/∂t   (1.16)

for some end impedance operator Z_L, where Z_m > 0 is the impedance of free propagation in the medium. The zero boundary impedance case Z_L = 0 corresponds to a reflected and inverted wave. The infinite impedance Z_L = ∞ implies that the end is sealed and waves reflect without inversion. If Z_L = Z_m, then we effectively have a semi-infinite duct, with waves totally transmitted. The end impedance need not be constant. In more realistic models Z_L is a dynamic system, frequently LTI, whose transfer function is positive real, i.e. such that Re Z_L(s) ≥ 0 for all s ∈ C̄_0. It is not uncommon to have Z_L(0) = 0 and Z_L(∞) = Z_m.
The relation between u(t) and w(x,t) can be derived by taking the Laplace transform of (1.14) with respect to t, which results in the ordinary differential equation c² ∂²_x W(x,s) = s² W(x,s) with the boundary conditions sW(0,s) = U(s) and sZ_L(s)W(L,s) + cZ_m ∂_x W(L,s) = 0, where ∂_x f := ∂f/∂x and ∂²_x f := ∂²f/∂x². The solution to this ODE is

  [W(x,s); ∂_x W(x,s)] = exp([0 1; s²/c² 0] x) [W(0,s); ∂_x W(0,s)]
                       = [cosh(sx/c)/s  c sinh(sx/c)/s; sinh(sx/c)/c  cosh(sx/c)] [U(s); ∂_x W(0,s)],
Fig. 1.5: Bode plots of G_0(s) from (1.19) for Z_m = 1 and τ = 5: (a) Z_L(s) = s/(s + 1), i.e. R(s) = −1/(2s + 1); (b) Z_L(s) = 0, i.e. R(s) = −1
where ∂_x f(0,t) stands for ∂_x f(x,t)|_{x=0}. The second boundary condition then reads

  [sZ_L(s)  cZ_m] [cosh(sL/c)/s  c sinh(sL/c)/s; sinh(sL/c)/c  cosh(sL/c)] [U(s); ∂_x W(0,s)] = 0,

which is solved by c ∂_x W(0,s) = V_0(s)U(s), where

  V_0(s) := −(1 + R(s)e^{−2(L/c)s}) / (1 − R(s)e^{−2(L/c)s})   for   R(s) := (Z_L(s) − Z_m) / (Z_L(s) + Z_m).   (1.17)

The parameter R(s) is known as the reflectance of the system at its end. If Z_L(s) is positive real, then we have that |R(s)| ≤ 1 for all s ∈ C̄_0, with equality iff Re Z_L(s) = 0 (think of R as a bilinear, Tustin-like, transform of Z_L/Z_m). This is an important property of the considered system. Thus, we have that
  [sW(x,s); c ∂_x W(x,s)] = [cosh(sx/c)  sinh(sx/c); sinh(sx/c)  cosh(sx/c)] [1; V_0(s)] U(s).   (1.18)

If we are interested in the effect of u(t) on p_x(t) = −cZ_m ∂w(x,t)/∂x (think of it as the pressure) at the very point x = 0 of its application, then the transfer function of the system G_0 : u ↦ p_0 is

  G_0(s) = −V_0(s) Z_m = (1 + R(s)e^{−τs}) / (1 − R(s)e^{−τs}) Z_m,   (1.19)
where τ := 2L/c > 0 is the time it takes a wave to travel to the end of the medium and back. This transfer function includes a delay element as its internal part, which renders G_0(s) nontrivially more tangled than its delay-free part R. It might not be easy even to derive a closed-form impulse response of this system. This is possible in some special cases; for example, for R(s) = −1 the impulse response is the impulse train g_0(t) = δ(t) − 2δ(t − τ) + 2δ(t − 2τ) − 2δ(t − 3τ) + ···, cf. (B.4). Also, the Bode plots of the frequency response G_0(jω), shown in Fig. 1.5 for two different simple R(s), exhibit a rich behavior, with numerous resonances, even though only a few parameters are required to model this system. Moreover, G_0 need not be stable for every stable R. For example, the system with R(s) = −1, whose frequency response is depicted in Fig. 1.5(b), is unstable (stability issues in delay systems are discussed in Chapter 2).
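One internal realization behind (1.19) is z = Z_m u + R·z(t − τ), y = Z_m u + 2R·z(t − τ) (cf. (1.23) below). For R = −1 and Z_m = 1, a discrete-time sketch of this recursion reproduces the impulse train above:

```python
import numpy as np

# Propagate z(t) = R z(t - tau) + Zm u(t), y(t) = 2R z(t - tau) + Zm u(t)
# on a grid with step h; a unit sample at t = 0 stands in for an impulse.
R, Zm, tau, h = -1.0, 1.0, 5.0, 0.01
nd = int(round(tau / h))          # delay in samples
N = 4 * nd + 1
u = np.zeros(N); u[0] = 1.0
z = np.zeros(N); y = np.zeros(N)
for k in range(N):
    w = z[k - nd] if k >= nd else 0.0   # w(t) = z(t - tau)
    z[k] = R * w + Zm * u[k]
    y[k] = 2 * R * w + Zm * u[k]
# y has spikes 1, -2, +2, -2 at t = 0, tau, 2 tau, 3 tau
```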
Another complication is the location of the poles of G_0(s). They are among the roots of the function

  χ_τ(s) = M_R(s) − N_R(s) e^{−τs},   (1.20)

where N_R(s) and M_R(s) are the numerator and denominator polynomials of R(s), assuming the latter is rational. This χ_τ(s) is not a polynomial in s, as it contains the transcendental term e^{−τs}. Such functions
Fig. 1.6: Block diagrams of G_x, whose transfer function is given by (1.21)
are known as quasi-polynomials. To provide a flavor of the difficulties associated with analyzing their roots, consider two simple cases with the reflectances from Fig. 1.5. For R(s) = −1 we have χ_τ(s) = 1 + e^{−τs}, which has every s_i = j(2i + 1)π/τ, i ∈ Z, as a root. In other words, this χ_τ(s) has infinitely many roots. This is typical of quasi-polynomials. Not quite typical is the possibility of obtaining these roots analytically. In fact, this possibility is essentially limited to the case when both N_R(s) and M_R(s) are constants. For example, if R(s) = −1/(2s + 1), as in Fig. 1.5(a), then (1.20) reads χ_τ(s) = 2s + 1 + e^{−τs} and its roots cannot be expressed analytically. Nevertheless, some of its modal properties can be analyzed. For example, because the equality |2s + 1| = |e^{−τs}| must hold true at every root of this χ_τ(s), we can conclude that there are no roots in the closed right-half plane C̄_0. Indeed, |2s + 1| > 1 at all points there except the origin, whereas |e^{−τs}| ≤ 1 for s ∈ C̄_0. A simple check then yields that χ_τ(0) = 2 ≠ 0 in this case. More details about the properties of quasi-polynomials are presented in Section 2.1.
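Both observations are easy to confirm numerically (NumPy only; τ = 5 as in Fig. 1.5):

```python
import numpy as np

tau = 5.0

# 1) chi(s) = 1 + e^{-tau s} vanishes at s_i = j(2i+1)pi/tau; check the
#    first five of the infinitely many roots.
s_roots = 1j * (2 * np.arange(5) + 1) * np.pi / tau
residuals = np.abs(1 + np.exp(-tau * s_roots))

# 2) chi(s) = 2s + 1 + e^{-tau s}: on the imaginary axis |2s + 1| >= 1,
#    with equality only at s = 0, where chi(0) = 2 != 0, so chi has no
#    roots on the boundary of the right-half plane.
w = np.linspace(-100, 100, 20001)
lhs = np.abs(2j * w + 1)          # >= 1 everywhere, = 1 only at w = 0
chi0 = 2 * 0 + 1 + np.exp(0.0)    # chi(0) = 2
```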
Yet more complex dynamics connect the exogenous input u with the pressure p_x at internal points x ∈ (0, L). The transfer function of the system G_x : u ↦ p_x, which is derived from (1.18), is

  G_x(s) = (e^{−τ₁s} + R(s)e^{−τ₂s}) / (1 − R(s)e^{−(τ₁+τ₂)s}) Z_m,   (1.21)

where τ₁ := x/c and τ₂ := (2L − x)/c are the times it takes the original and the reflected wave, respectively, to reach the given x. Two alternative block-diagram representations of this system are presented in Fig. 1.6. Note that if the reflectance is zero, R(s) = 0, implying that the end impedance matches that of the medium, then this system reduces to the scaled delay element D̄_{τ₁} Z_m.
1.2.3 General interconnections
In light of the proliferation of interconnection configurations of the delay element(s) with finite-dimensional dynamics, it may be convenient to have a unified representation of systems involving delays. A possible choice for such a representation is presented in Fig. 1.7, where D̄_τ is the m_τ × m_τ delay element and

  G = [G_zw  G_zu; G_yw  G_yu]

is a finite-dimensional, i.e. delay-free, system. This is an upper linear-fractional transformation, denoted as F_u(G, D̄_τ). It defines the system (plant)

  P_τ = F_u(G, D̄_τ) = G_yu + G_yw D̄_τ (I − G_zw D̄_τ)^{−1} G_zu.
All single-delay systems studied so far can be viewed as particular cases of this setup for an appropriate choice of G. The dead-time system as in (1.10) corresponds to

  G = G_inpd := [0  I; P  0]   (1.22)
Fig. 1.7: General single-delay interconnection for P_τ : u ↦ y
and the wave equation with the transfer function (1.19) to

  G = G_wave,0 := [R  Z_m; 2R  Z_m].   (1.23)
Note that having a nilpotent "G_zw" part implies that the dependence of P_τ on the delay is polynomial; it is affine if G_zw = 0 (or even linear, if G_yu = 0 as well). The choice of G producing a given P_τ is actually non-unique. Because D̄_τ M = M D̄_τ for every time-invariant M, systems can be moved from the "z" to the "w" channel without affecting the operator P_τ. In other words, if there is a multiplier M such that G_zw = G̃_zw M and G_yw = G̃_yw M, then

  G = [G̃_zw  G_zu; G̃_yw  G_yu] [M  0; 0  I]   can be traded for   [M  0; 0  I] [G̃_zw  G_zu; G̃_yw  G_yu] =: G̃

and F_u(G, D̄_τ) = F_u(G̃, D̄_τ) regardless of M. This transformation can always be carried out for a square and nonsingular M. In some situations it is also possible to use non-invertible and even non-square multipliers, which might help in reducing problem dimensions, as seen in the example below.
Example 1.1. Let

  G(s) = [0 1 1 1 0; 0 0 1 1 1; 1 0 0 0 0; 0 1 0 0 0; 1 0 0 0 0]
       = [0 1 1 0; 0 0 1 1; 1 0 0 0; 0 1 0 0; 1 0 0 0] · diag(I₂, M, 1),

which interacts with a 2 × 2 delay element D̄_τ. Taking M = [1 1], the system can be equivalently generated by

  G̃(s) = diag(I₂, M, 1) · [0 1 1 0; 0 0 1 1; 1 0 0 0; 0 1 0 0; 1 0 0 0]
        = [0 1 1 0; 0 0 1 1; 1 1 0 0; 1 0 0 0]

interacting with a scalar delay, which is a more economical description. ○
Remark 1.6 (multiple delays). Systems with multiple delays can also be presented in the form of Fig. 1.7. It is only required to replace the single-delay operator D̄_τ with its block-diagonal counterpart diag{D̄_{τᵢ}}. For instance, the wave equation with the transfer function (1.21) can be expressed in this form by removing all three delay blocks in the block diagram of Fig. 1.6(b) and connecting the inputs (w₁, w₂, w₃, u) with the outputs (z₁, z₂, z₃, y) for y = p_x, yielding

  G = G_wave,x := [0 1 0 Z_m; R 0 0 0; 0 1 0 Z_m; 1 0 R 0]   (1.24)

and taking diag{D̄_{τ₁}, D̄_{τ₂}} as its delay element, where D̄_{τ₁} is scalar and D̄_{τ₂} is 2 × 2. ○
The separation of delay-free and delayed parts is conceptually appealing. It facilitates manipulating time-delay systems. For example, closing a feedback loop between y and u of the form

  u = K(y + v)

for some delay-free "controller" K and a "reference signal" v results in the same configuration, just now with a different, yet still delay-free, "G" part and v in place of u. For example, for a dead-time system with G as in (1.22) we end up with

  G = G_PK := [KP  K; P  0],   (1.25)

which can be verified by simple signal tracing (z = K(y + v) = KPw + Kv). This G_PK has a nonzero "G_zw" part, meaning that it is no longer a system with input delay. The separation in Fig. 1.7 is also convenient in numerical simulations (this is how delay systems are implemented in MATLAB, as a matter of fact), since standard tools can be used for the finite-dimensional part and all infinite-dimensionality is concentrated in the relatively simple pure delay element. For example, simulating F_u(G_wave,0, D̄_τ) may be easier than simulating the corresponding wave PDE.
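The simulation recipe just described can be sketched in a few lines (a hypothetical example, not MATLAB's actual implementation): the finite-dimensional part is stepped by any ODE scheme, while the delay element is a FIFO buffer. Here a dead-time loop with plant P(s) = 1/(s + 1), static controller K = 0.5, and τ = 1 (all illustrative choices):

```python
import numpy as np

# Closed loop u = K(y + v) around the dead-time system y = P Dbar_tau u.
# Euler steps the state of P; a circular buffer realizes the delay.
tau, h, Tend, K = 1.0, 1e-3, 30.0, 0.5
nd = int(round(tau / h))
buf = np.zeros(nd)             # holds z over the last tau seconds
x, v = 0.0, 1.0                # state of P and a step reference
for k in range(int(Tend / h)):
    w = buf[k % nd]            # w(t) = z(t - tau)
    y = x                      # plant output
    buf[k % nd] = K * (y + v)  # z = u = K(y + v) enters the delay
    x += h * (-x + w)          # Euler step of x' = -x + w
y_final = y                    # settles at the solution of y = K(y + v)
```

With these numbers the loop gain stays below one, so the response settles at y = 1.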
The separation of the delay element from the delay-free dynamics also facilitates the use of convenient state-space machinery in analyzing time-delay systems. Bring in a minimal state-space realization of the transfer function

  G(s) = [G_zw(s)  G_zu(s); G_yw(s)  G_yu(s)] = [A  B_w  B_u; C_z  D_zw  D_zu; C_y  D_yw  D_yu].   (1.26)

This transfer function defines the following time-domain relations:

  G : { ẋ(t) = A x(t) + B_w w(t) + B_u u(t)
        z(t) = C_z x(t) + D_zw w(t) + D_zu u(t)
        y(t) = C_y x(t) + D_yw w(t) + D_yu u(t)
These equations should be complemented by the delayed relation w(t) = z(t − τ), resulting in

  P_τ : { [ẋ(t); z(t)] = [A  B_w; C_z  D_zw] [x(t); z(t − τ)] + [B_u; D_zu] u(t)
          y(t) = [C_y  D_yw] [x(t); z(t − τ)] + D_yu u(t)   (1.27)
describing the mapping P_τ : u ↦ y. Equations of this kind are known as delay-differential equations (DDEs), aka differential-difference equations, which are a subclass of functional-differential equations. The "x" part of the propagation is governed by a differential equation and the "z" part by a delay (difference) equation. Equation (1.27) is a general single-delay DDE. Its state at every time instant t ≥ 0 comprises the state x(t) of G and the whole time history of z over the interval [t − τ, t] (cf. the discussion in §1.1.2), i.e. it is (x(t), z_t) ∈ Rⁿ × {[−τ, 0] → R^{m_τ}}. The transfer function of P_τ is readily derived using standard properties of the Laplace transform and assuming zero history for t ≤ 0:
  P_τ(s) = D_yu + [C_y  D_yw e^{−τs}] [sI − A  −B_w e^{−τs}; −C_z  I − D_zw e^{−τs}]^{−1} [B_u; D_zu].   (1.28)

This is normally an irrational function of s. Namely, each of its elements is a quotient of quasi-polynomials of a more general form than (1.20) (see §2.1.1 for details).
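Formula (1.28) is straightforward to evaluate numerically; a sketch (a direct transcription, with a sanity check against a case whose transfer function is known):

```python
import numpy as np

def P_tau_freq(A, Bw, Bu, Cz, Dzw, Dzu, Cy, Dyw, Dyu, tau, w):
    """Evaluate (1.28) at s = jw for given state-space data."""
    s = 1j * w
    E = np.exp(-tau * s)
    n, mt = A.shape[0], Dzw.shape[0]
    M = np.block([[s * np.eye(n) - A, -Bw * E],
                  [-Cz, np.eye(mt) - Dzw * E]])
    L = np.hstack([Cy, Dyw * E])
    Rv = np.vstack([Bu, Dzu])
    return (Dyu + L @ np.linalg.solve(M, Rv)).item()

# Sanity check: P(s) = 1/(s+1) with input delay tau has
# P_tau(s) = e^{-tau s}/(s + 1).
A = np.array([[-1.0]]); Bw = np.array([[1.0]]); Bu = np.array([[0.0]])
Cz = np.array([[0.0]]); Dzw = np.array([[0.0]]); Dzu = np.array([[1.0]])
Cy = np.array([[1.0]]); Dyw = np.array([[0.0]]); Dyu = np.array([[0.0]])
tau, w = 2.0, 1.3
val = P_tau_freq(A, Bw, Bu, Cz, Dzw, Dzu, Cy, Dyw, Dyu, tau, w)
exact = np.exp(-tau * 1j * w) / (1j * w + 1)
```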
Form (1.27) of delay-differential equations is not quite orthodox. Two of its special cases, which are normally studied in the literature, are presented below.
1. If D_zw = 0, then the second row of (1.27) reads z(t) = C_z x(t) + D_zu u(t). Substituting this expression into the first row, we end up with the following dynamics:

  ẋ(t) = A x(t) + A_τ x(t − τ) + B_u u(t) + B_τ u(t − τ)   (1.29)

for A_τ := B_w C_z and B_τ := B_w D_zu. This is the conventional form of so-called (single-delay) retarded DDEs, in which the derivative term does not include delays.

2. If D_zw ≠ 0, but there exists a square matrix E_τ such that B_w D_zw = −E_τ B_w (take the lowest-rank E_τ satisfying it), then pre-multiplying the second row of (1.27) by B_w and using the first row we have:

  B_w z(t) = B_w C_z x(t) − E_τ B_w z(t − τ) + B_w D_zu u(t)
           = B_w C_z x(t) − E_τ (ẋ(t) − A x(t) − B_u u(t)) + B_w D_zu u(t).

Hence, we end up with the dynamical equation

  ẋ(t) + E_τ ẋ(t − τ) = A x(t) + A_τ x(t − τ) + B_u u(t) + B_τ u(t − τ)   (1.30)

for A_τ := B_w C_z + E_τ A and B_τ := B_w D_zu + E_τ B_u. This is the conventional form of so-called neutral DDEs (single-delay, again), in which the derivative term is delayed as well.
Throughout this text the general DDE (1.27) corresponding to the setup in Fig. 1.7 is preferred, partially
by pure aesthetic considerations.
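A minimal Euler scheme illustrates the retarded form (1.29) (a scalar illustration with hypothetical numbers):

```python
import numpy as np

# Euler integration of the scalar retarded DDE x'(t) = -x(t - 1)
# (A = 0, A_tau = -1, tau = 1 in (1.29), u = 0), with constant initial
# history x(t) = 1 for t <= 0. This delay is below the stability limit
# pi/2 for this equation, so the solution oscillates and decays.
tau, h, Tend = 1.0, 1e-3, 30.0
nd = int(round(tau / h))
N = int(Tend / h)
x = np.ones(N + nd)                      # first nd entries encode the history
for k in range(nd, N + nd - 1):
    x[k + 1] = x[k] + h * (-x[k - nd])   # x' = -x(t - tau)
x_final = abs(x[-1])                     # close to zero after 30 s
```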
1.3 Finite-dimensional approximations of the delay element
The continuous-time delay element (1.5) is infinite dimensional and so are its interconnections. To avoid the associated complications, one may consider approximating the delay element by a finite-dimensional system, so that standard methods can be used. Such approximations are addressed in this section.

Remark 1.7 (to approximate, or not to approximate). Before discussing approximation techniques, a brief disclaimer is in order. One of the central themes of these notes is the exploitation of the structure of the delay element, especially in various control design problems. Approximating delays by finite-dimensional systems might damage this structure, rendering the problem at hand less transparent. Approximations are thus suggested to be used merely as a convenience, e.g. as a tool for performing quick initial screening or simulations. As such, their simplicity and transparency are preferable to accuracy, to some extent. ○
First, consider the prospects of approximating the pure delay element D̄_τ by a stable finite-dimensional LTI system, say R_τ. We say that R_τ approximates D̄_τ if the approximation error Δ_R := ‖D̄_τ − R_τ‖ is "small," where ‖·‖ stands for a system norm of choice. Consider the H∞ norm as the measure of approximation accuracy, in which case

  Δ_R = ‖D̄_τ − R_τ‖_∞ = sup_{ω∈R} |D̄_τ(jω) − R_τ(jω)| = sup_{ω∈R} |e^{−jτω} − R_τ(jω)|.

Because R_τ(s) is rational, the argument of its frequency response is bounded: it approaches some finite value as ω grows. At the same time, it follows from (1.9) that the phase lag of D̄_τ(jω) is unbounded. Hence, there is an infinite increasing sequence of frequencies, say {ω_i}_{i∈N}, such that

  arg R_τ(jω_i) − arg D̄_τ(jω_i) = π + 2πk_i  ⟺  e^{j(arg R_τ(jω_i) − arg D̄_τ(jω_i))} = −1

for some k_i ∈ Z. At those frequencies we have that

  |D̄_τ(jω_i) − R_τ(jω_i)| = |D̄_τ(jω_i)| + |R_τ(jω_i)| = 1 + |R_τ(jω_i)|.

Therefore,

  Δ_R ≥ 1 + sup_{i∈N} |R_τ(jω_i)| ≥ 1 = ‖D̄_τ − 0‖_∞.

The lower bound Δ_R = 1 above is attained only if |R_τ(jω_i)| = 0 for all i ∈ N. Because R_τ(s) is rational, the only possible choice here is R_τ = 0. In other words, the best finite-dimensional approximation of D̄_τ is the zero system. This "optimal" approximation is useless and effectively implies that any attempt to approximate the pure delay element in the H∞ metric² is futile.
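The conclusion can be illustrated numerically. Over a wide frequency grid, the error of the all-pass [4,4] Padé approximant of e^{−s} (constructed in Section 1.3.1 below) approaches 2 at the phase-crossing frequencies, while the zero system achieves exactly 1:

```python
import numpy as np

w = np.linspace(0.0, 500.0, 200001)
s = 1j * w
q4 = [1/1680, 1/84, 3/28, 1/2, 1.0]       # Q4(s), descending powers
R4 = np.polyval(q4, -s) / np.polyval(q4, s)
err_pade = np.max(np.abs(np.exp(-s) - R4))   # close to 2
err_zero = np.max(np.abs(np.exp(-s) - 0.0))  # exactly 1
```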
However, we might never need to approximate the delay element over the whole frequency range. In most engineering-motivated control problems only a finite frequency band is of interest, simply because realistic processes have finite bandwidths. As the phase lag of D̄_τ over any finite frequency range is finite, the approximation problem in this setting does make sense. A possible approach to that end is to approximate F D̄_τ for a stable low-pass filter F, whose bandwidth defines the frequency range of interest.
There are various approaches to approximating F D̄_τ by finite-dimensional systems, see [56, Sec. 6.3] and the references therein for an overview. Similarly to model order reduction methods for finite-dimensional systems, delay approximation methods can be roughly divided into those oriented on singular value decompositions and those based on power series expansions of the delay element in the Laplace domain. The first group normally has built-in stability and performance guarantees and is thus more accurate. An example is the balanced truncation procedure, popular for finite-dimensional systems and based on calculating Hankel singular values of the original system and the corresponding Schmidt pairs (singular vectors). This approach does extend to systems of the form F D̄_τ, with calculable Hankel singular values and Schmidt pairs. Yet the involved calculations entail transcendental equations and are not quite easy to use, even in the simplest case of a first-order F(s). For that reason, results based on singular values are far less popular than those based on power series expansions. The idea there is to truncate a power expansion of e^{−τs}, or a function involving it, up to some term and under certain additional (interpolation) constraints. Such methods are frequently rather easy to implement. On the downside, accuracy and even stability are often not their natural by-product. For instance, it might appear natural to approximate the transfer function of D̄_τ via the relation

  e^{−τs} = e^{−τs/2} / e^{τs/2} ≈ Q_n(−τs/2) / Q_n(τs/2),

where the polynomial Q_n(s) := 1 + s + s²/2! + ··· + sⁿ/n! is the nth-order partial sum of the Maclaurin series of e^s. However, this Q_n(s) is Hurwitz only for n ≤ 4, which renders this approach quite limited.
1.3.1 Padé approximant of e^{−τs}

Perhaps the best known truncation-based approach is to approximate the delay element by its Padé approximant of given degrees. Consider first its general logic. Let φ(s) be a complex function, analytic in some neighborhood of the origin. A rational function

  R_{m,n}(s) = P_m(s)/Q_n(s) = (p_m s^m + ··· + p_1 s + p_0) / (q_n s^n + ··· + q_1 s + 1)

is said to be the [m, n]-Padé approximant of φ(s) if the first n + m + 1 terms of the Maclaurin series of φ(s) and R_{m,n}(s) coincide. In other words, the [m, n]-Padé approximant matches φ(s) and its n + m derivatives at s = 0, i.e. φ^{(i)}(0) = R_{m,n}^{(i)}(0) for all i ∈ Z_{0..n+m}. An alternative condition for the Padé approximant, which is useful for computing the required coefficients q_i and p_i, is that φ(s) − R_{m,n}(s) = O(s^{m+n+1}) or, equivalently,

  φ(s) Q_n(s) − P_m(s) = O(s^{m+n+1})   as s → 0.   (1.31)
² The same conclusion is immediate for approximating D̄_τ in the A-norm, which is the L₁(R₊)-induced system norm.
This Bezout-like identity is solvable iff φ(s) and s^{m+n+1} are coprime, i.e. have no common roots, which is obviously always true here (for φ = e^s, φ(0) = 1 ≠ 0). Hence, the sought polynomials always exist and are unique. The m + n + 1 free coefficients of P_m(s) and Q_n(s) can be selected from (1.31) by the use of the Maclaurin series expansion of φ(s) and zeroing the coefficients of all powers of s from 0 to m + n. The latter procedure can, in turn, be expressed as the following set of n + m + 1 linear equations (here we assume that n ≥ m, i.e. that R_{m,n}(s) is proper):
  [ φ_0       0         ...  0         0          ...  0     −1   0  ...   0 ]
  [ φ_1       φ_0       ...  0         0          ...  0      0  −1  ...   0 ]
  [  :         :         .   :         :               :      :   :   .    : ]   [ 1; q_1; ...; q_m;
  [ φ_m       φ_{m−1}   ...  φ_0       0          ...  0      0   0  ...  −1 ] ·   q_{m+1}; ...; q_n;   = 0,   (1.32)
  [ φ_{m+1}   φ_m       ...  φ_1       φ_0        ...  0      0   0  ...   0 ]     p_0; p_1; ...; p_m ]
  [  :         :             :         :           .   :      :   :        : ]
  [ φ_n       φ_{n−1}   ...  φ_{n−m}   φ_{n−m−1}  ...  φ_0    0   0  ...   0 ]
  [  :         :             :         :               :      :   :        : ]
  [ φ_{n+m}   φ_{n+m−1} ...  φ_n       φ_{n−1}    ...  φ_m    0   0  ...   0 ]

where the left factor is a permuted Sylvester matrix and φ_i = φ^{(i)}(0)/i! for all i ∈ Z₊ are the coefficients of the Maclaurin expansion of φ(s). Thus, all we need is to solve n linear equations (the last n rows above) in q_i and then calculate the coefficients of P_m(s) from the first m + 1 rows of (1.32). As a matter of fact, those n equations for the coefficients of Q_n(s) are of the form Mq = b for an n × n Toeplitz matrix M. This structure can be exploited to solve the equation more efficiently.
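The procedure above is a few lines of code; a sketch (NumPy's generic solver instead of a dedicated Toeplitz one) computing the [m, n]-Padé approximant of e^s:

```python
from math import factorial
import numpy as np

def pade_exp(n, m=None):
    """[m, n] Pade approximant of e^s: returns (p, q) coefficient
    arrays in ascending powers, with q[0] = 1."""
    if m is None:
        m = n
    phi = lambda i: 1.0 / factorial(i) if i >= 0 else 0.0
    # Last n rows of (1.32): sum_j phi_{k-j} q_j = 0 for k = m+1..m+n,
    # an n-by-n Toeplitz system in (q_1, ..., q_n).
    M = np.array([[phi(m + 1 + i - j) for j in range(1, n + 1)]
                  for i in range(n)])
    b = -np.array([phi(m + 1 + i) for i in range(n)])
    q = np.concatenate(([1.0], np.linalg.solve(M, b)))
    # First m+1 rows of (1.32) give the numerator coefficients.
    p = np.array([sum(phi(k - j) * q[j] for j in range(n + 1))
                  for k in range(m + 1)])
    return p, q

p, q = pade_exp(2)
# [2,2] Pade of e^s is (1 + s/2 + s^2/12) / (1 - s/2 + s^2/12)
```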
Our interest is in the [n, n]-Padé approximant of e^{−τs}. So consider first

  φ(s) = e^s = Σ_{i=0}^∞ s^i/i! =: Σ_{i=0}^∞ φ_i s^i.

The [n, n]-Padé approximant of the transfer function e^{−τs} of the delay element D̄_τ is then obtained by the substitution s → −τs. Also, although any m ≤ n can be considered, the choice m = n appears natural and is by far the most common. Still, all arguments below extend to m < n seamlessly. For m = n equality (1.32) reads
  [ φ_0       0         ...  0     −1   0  ...   0 ]
  [ φ_1       φ_0       ...  0      0  −1  ...   0 ]
  [  :         :         .   :      :   :   .    : ]   [ 1; q_1; ...; q_n;
  [ φ_n       φ_{n−1}   ...  φ_0    0   0  ...  −1 ] ·   p_0; p_1; ...; p_n ] = 0.   (1.33)
  [ φ_{n+1}   φ_n       ...  φ_1    0   0  ...   0 ]
  [  :         :         .   :      :   :        : ]
  [ φ_{2n}    φ_{2n−1}  ...  φ_n    0   0  ...   0 ]
Define

  ψ(t) = [ψ_1(t); ...; ψ_n(t)] := [γ_{n+1}(t)  γ_n(t)  ...  γ_1(t); ... ; γ_{2n}(t)  γ_{2n−1}(t)  ...  γ_n(t)] [1; q_1; ...; q_n],   where γ_i(t) := t^i/i!
(so that φ_i = γ_i(1)). Clearly, the last n rows in the left-hand side of (1.33) are exactly ψ(1). Hence, the choice of Q_n(s) in the [n, n]-Padé approximant of e^s is equivalent to the choice of n scalars q_i such that ψ(1) = 0. Two more facts, which follow directly from the definition of γ_i(t), are important. First, ψ(0) = 0 for all q_i. Second, γ_i(t) = γ̇_{i+1}(t) for all i ∈ N, so that ψ_i(t) = ψ_n^{(n−i)}(t). Thus, we should look for a polynomial function ψ_n(t) of order 2n such that ψ_n^{(i)}(t) = 0 at t = 0 and t = 1 for all i ∈ Z_{0..n−1}. All such polynomials can be described as α tⁿ(t − 1)ⁿ for some constant α. Hence, we have the equality

  ψ_n(t) = t^{2n}/(2n)! + Σ_{i=1}^{n} q_i t^{2n−i}/(2n−i)! = α tⁿ(t − 1)ⁿ,   ∀t ∈ [0, 1].   (1.34)

It follows then by the binomial theorem that α = 1/(2n)! and

  q_i = (−1)^i C(n, i) (2n − i)!/(2n)! = (−1)^i (2n − i)! n! / ((2n)! (n − i)! i!)

for all i ∈ Z_{1..n}. Note that an alternative expression for q_i can be obtained by differentiating (1.34) 2n − i times at t = 0, to have

  q_i = ψ_n^{(2n−i)}(0).

It is only left to obtain the coefficients p_i from the first n + 1 rows of (1.33). The first row yields p_0 = 1. By repeating the arguments about the last n rows and taking into account that ψ_n^{(i)}(1) = (−1)^i ψ_n^{(i)}(0), we have for all i ∈ Z_{1..n}:

  p_i = ψ_n^{(2n−i)}(1) = (−1)^{2n−i} ψ_n^{(2n−i)}(0) = (−1)^i q_i.

This implies that P_n(s) = Q_n(−s) and the resulting R_{n,n}(s) is all-pass, in the sense that |R_{n,n}(jω)| = 1 for all ω ∈ R.
Summarizing, the [n, n]-Padé approximant of the delay element D̄_τ has the transfer function

  e^{−τs} ≈ R_{n,n}(−τs) = Q_n(−τs)/Q_n(τs),   where   Q_n(s) = Σ_{i=0}^{n} C(n, i) (2n − i)!/(2n)! s^i   (1.35)

(mind substituting s → −τs to derive the approximant of e^{−τs} from that of e^s). The five lowest-order polynomials Q_n(s) are

  Q_1(s) = 1 + (1/2)s,
  Q_2(s) = 1 + (1/2)s + (1/12)s²,
  Q_3(s) = 1 + (1/2)s + (1/10)s² + (1/120)s³,
  Q_4(s) = 1 + (1/2)s + (3/28)s² + (1/84)s³ + (1/1680)s⁴,
  Q_5(s) = 1 + (1/2)s + (1/9)s² + (1/72)s³ + (1/1008)s⁴ + (1/30240)s⁵.
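A quick check of (1.35) (a sketch assuming Python's math.comb), confirming the listed low-order coefficients and the all-pass property:

```python
from math import comb, factorial
import numpy as np

def Qn(n):
    """Coefficients of Qn(s) from (1.35), ascending powers of s."""
    return [comb(n, i) * factorial(2 * n - i) / factorial(2 * n)
            for i in range(n + 1)]

q3 = Qn(3)                 # expect [1, 1/2, 1/10, 1/120]
# All-pass property: |Qn(-jw)/Qn(jw)| = 1 for all w.
w = np.linspace(0.1, 50, 500)
s = 1j * w
q5 = Qn(5)[::-1]           # descending powers for polyval
mag = np.abs(np.polyval(q5, -s) / np.polyval(q5, s))
```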
Two properties of the Padé approximation method are given below without proofs. The first one concerns the stability of R_{n,n}, which should be an essential property of any approximation of D̄_τ:

Proposition 1.1. The polynomials Q_n(s) defined in (1.35) are Hurwitz for all n ∈ N.
Fig. 1.8: G_0(s) from (1.19) for Z_m = 1 and τ = 5 with [4,4]- and [12,12]-Padé approximants of the delay: (a) R(s) = −1/(2s + 1); (b) R(s) = −1
The second result provides a simple, yet tight at very low frequencies, upper bound on the approximation error in the frequency domain. To formulate it, we need the normalized Butterworth polynomials, which are kth-order Hurwitz polynomials B_k(s) satisfying |B_k(jω)|² = 1 + ω^{2k}. For k = 2n + 1,

  B_{2n+1}(s) = (s + 1) ∏_{i=1}^{n} ( s² + 2 cos((n + 1 − i)π/(2n + 1)) s + 1 )

and its roots are equidistant on the left unit semicircle.

Proposition 1.2. The Padé approximant (1.35) satisfies |e^{−jω} − R_{n,n}(jω)| ≤ 2.02 |H_{2n+1}(jω)| for all ω ∈ R, where

  H_{2n+1}(s) = (s/ω_n)^{2n+1} / B_{2n+1}(s/ω_n)

is the high-pass Butterworth filter, whose cutoff frequency

  ω_n := ( 2 (2n)! (2n + 1)! / (n!)² )^{1/(2n+1)}

is such that {ω_n} = {2.8845, 4.2823, 5.7251, 7.1814, 8.6435, ...} and lim_{n→∞} ω_n/n = 4/e ≈ 1.4715.
Although the bound of Proposition 1.2 is conservative, it does show that the Padé approximant is accurate at low frequencies and that the approximation accuracy increases with n. These conclusions are intuitive. Indeed, the equality e^s − R_{n,n}(s) = O(s^{2n+1}), which defines the method, effectively says that R_{n,n}(s) interpolates e^s and its 2n derivatives at the origin, i.e. at ω = 0. An immediate consequence of Proposition 1.2 is that ‖(D̄_1 − R_{n,n})F‖_∞ ≤ 2.02 ‖H_{2n+1}F‖_∞ for all F ∈ H∞. If F is a low-pass filter, then the lower its bandwidth, the smaller ‖H_{2n+1}F‖_∞. This is also intuitive.
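A numerical spot-check of Proposition 1.2 for n = 2 (a sketch; the Butterworth factors follow the formula above, and very low frequencies are excluded only to stay above floating-point noise):

```python
import numpy as np

w = np.linspace(0.5, 50.0, 49501)
s = 1j * w
q2 = [1 / 12, 1 / 2, 1.0]                   # Q2(s), descending powers
err = np.abs(np.exp(-s) - np.polyval(q2, -s) / np.polyval(q2, s))

wn = (2 * 24 * 120 / 4) ** (1 / 5)          # (2 (2n)! (2n+1)! / (n!)^2)^(1/(2n+1)), n = 2
sn = s / wn
B5 = (sn + 1) * (sn**2 + 2 * np.cos(2 * np.pi / 5) * sn + 1) \
              * (sn**2 + 2 * np.cos(np.pi / 5) * sn + 1)
bound = 2.02 * np.abs(sn**5 / B5)
margin = np.min(bound - err)                # nonnegative iff the bound holds
```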
To illustrate the traits of Padé approximants, consider the system G_0 driven by the wave equation (1.14), whose transfer function is given by (1.19) on p. 8 and whose frequency responses are presented in Fig. 1.5. If the reflectance R has a finite bandwidth, i.e. if its transfer function R(s) is strictly proper, then the delay has a limited effect on the high-frequency dynamics of G_0. We can then expect that the use of a Padé approximant of a sufficiently high degree will result in a good approximation of G_0. This is indeed the case, as can be seen from the plots in Fig. 1.8(a) for the strictly proper R(s) = −1/(2s + 1), representing the original system (thin pale line) and its [4,4]- and [12,12]-Padé approximants (thick lines). It is also clear that the higher-order Padé approximant results in a higher approximation accuracy. The situation is different if the bandwidth of R(s) is infinite. As no finite-dimensional approximation of the delay element succeeds over all frequencies, the high-frequency dynamics of G_0 can no longer be captured by its approximation. This is seen from the plots in Fig. 1.8(b), where the approximation error for the bi-proper R(s) = −1 is unbounded, no matter what degree of the Padé approximant is chosen.
Chapter 2
Stability Analysis
STABILITY is a vital property of every control system. So the first analytic chapter of these notes addresses the stability analysis of the single-delay system P_τ : u ↦ y in Fig. 2.1 for a finite-dimensional G, given in terms of its state-space realization (1.26), and the m_τ × m_τ delay element D̄_τ defined by (1.5). We assume throughout the chapter that there is no redundancy in the system, in the sense that (1.26) is minimal and the i/o dimensions of D̄_τ are irreducible (cf. Example 1.1 on p. 10). Both i/o stability (in the L₂ sense) and Lyapunov stability are studied below. In both cases the analysis reduces to that of autonomous versions of the system, much in parallel to what happens in the finite-dimensional case. The details are more delicate and involved though.
2.1 Modal methods
We start with studying the i/o stability of the system in Fig. 2.1. We say that this system is stable if the transfer function P_τ(s) of P_τ : u ↦ y, given by (1.28), belongs to H∞, which is the space of functions holomorphic and bounded in C_0, see (B.17). It is well known that in the finite-dimensional case P ∈ H∞ iff P(s) is proper and its poles are in the open left-half plane C \ C̄_0. An appropriately defined properness is also required in delay systems, and it always holds for the system in Fig. 2.1 with a proper G(s). In this section we thus concentrate on "poles" of P_τ(s) or, more accurately, on its characteristic function.
2.1.1 Characteristic function of delay-differential equations
The notion of the characteristic function plays an important role in the analysis of dynamic systems. It reflects the so-called free motion of the state of a system, i.e. the possible behavior of the state in the absence of exogenous inputs. In the finite-dimensional case the situation is straightforward. If the free motion is described by ẋ(t) = Ax(t) or, equivalently, (sI − A)X(s) = 0, then nontrivial solutions can only exist at the roots s = λ_i of det(sI − A) = 0. These solutions are then always of the form Σ_i e^{λ_i t} x_i, where the x_i solve (λ_i I − A)x_i = 0, i.e. are either zero vectors or eigenvectors.
[Block diagram: the feedback interconnection of $G = \bigl[\begin{smallmatrix} G_{zw} & G_{zu} \\ G_{yw} & G_{yu} \end{smallmatrix}\bigr]$ with the delay element $\bar D_\tau$ closing the loop from $z$ to $w$; input $u$, output $y$.]

Fig. 2.1: General single-delay interconnection
Conceptually, the characteristic function for delay systems, like that in Fig. 2.1, is still defined via the existence of a nontrivial solution to the free motion of its state. But now the state, that of DDE (1.27), is $(x(t), z_t) \in \mathbb{R}^n \times \{[-\tau, 0] \to \mathbb{R}^{m_\tau}\}$, i.e. it is infinite dimensional. The derivation of the exact form of the corresponding characteristic equation requires the use of more advanced techniques, which goes beyond the scope of these notes (see e.g. [9, Thm. 2.4.6]). So we again make use of the discrete case to gain insight into the issue, circumventing technical complications.
Consider the discrete-time version of the interconnection in Fig. 2.1 with the state equation of $G$ of the form $x[t+1] = A x[t] + B_w w[t] + B_u u[t]$ and with $z[t] = C_z x[t] + D_{zw} w[t] + D_{zu} u[t]$. Taking into account the relation $w[t] = z[t - \tau]$, the free motion of this system satisfies
\[
\begin{bmatrix} x[t+1] \\ z[t] \end{bmatrix}
= \begin{bmatrix} A & B_w \\ C_z & D_{zw} \end{bmatrix}
\begin{bmatrix} x[t] \\ z[t-\tau] \end{bmatrix}. \tag{2.1}
\]
This is not yet a standard (first-order) state equation. But following the logic of the developments in §1.1.1, we can easily derive it as
\[
x_\tau[t+1] =
\begin{bmatrix}
A & B_w & 0 & \cdots & 0 \\
0 & 0 & I_{m_\tau} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & I_{m_\tau} \\
C_z & D_{zw} & 0 & \cdots & 0
\end{bmatrix} x_\tau[t],
\quad\text{where}\quad
x_\tau[t] :=
\begin{bmatrix} x[t] \\ z[t-\tau] \\ \vdots \\ z[t-2] \\ z[t-1] \end{bmatrix}. \tag{2.2}
\]
The corresponding characteristic polynomial is obviously
\[
\chi_\tau(z) = \det
\begin{bmatrix}
zI_n - A & -B_w & 0 & 0 & \cdots & 0 & 0 \\
0 & zI_{m_\tau} & -I_{m_\tau} & 0 & \cdots & 0 & 0 \\
0 & 0 & zI_{m_\tau} & -I_{m_\tau} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & zI_{m_\tau} & -I_{m_\tau} \\
-C_z & -D_{zw} & 0 & 0 & \cdots & 0 & zI_{m_\tau}
\end{bmatrix}.
\]
This expression can be simplified via applying (A.5b) to the partition above with the $(\tau-1) m_\tau \times (\tau-1) m_\tau$ lower-right sub-block and using the equality
\[
\begin{bmatrix}
zI & -I & \cdots & 0 & 0 \\
0 & zI & \cdots & 0 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & zI & -I \\
0 & 0 & \cdots & 0 & zI
\end{bmatrix}^{-1}
=
\begin{bmatrix}
z^{-1}I & z^{-2}I & \cdots & z^{1-\tau}I & z^{-\tau}I \\
0 & z^{-1}I & \cdots & z^{2-\tau}I & z^{1-\tau}I \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & z^{-1}I & z^{-2}I \\
0 & 0 & \cdots & 0 & z^{-1}I
\end{bmatrix}
\]
(assuming $\tau$ blocks). Standard manipulations with determinants then yield the characteristic polynomial
\[
\chi_\tau(z) = \det \begin{bmatrix} zI - A & -B_w \\ -C_z & z^\tau I - D_{zw} \end{bmatrix}
= z^{\tau m_\tau} \det \begin{bmatrix} zI - A & -B_w z^{-\tau} \\ -C_z & I - D_{zw} z^{-\tau} \end{bmatrix}.
\]
The second factor in the last equality above can be viewed as resulting from the $z$-transform of equation (2.1), and the factor $z^{\tau m_\tau}$ is merely the denominator of the transfer function of $\bar D_\tau$.
Returning to the continuous-time system, the free motion counterpart of (2.1) for it is
\[
\begin{bmatrix} \dot x(t) \\ z(t) \end{bmatrix}
= \begin{bmatrix} A & B_w \\ C_z & D_{zw} \end{bmatrix}
\begin{bmatrix} x(t) \\ z(t-\tau) \end{bmatrix},
\]
which is readily obtained from DDE (1.27). It is then not surprising that the characteristic function of that DDE is
\[
\chi_\tau(s) := \det \begin{bmatrix} sI - A & -B_w e^{-\tau s} \\ -C_z & I - D_{zw} e^{-\tau s} \end{bmatrix} \tag{2.3}
\]
(the “missing” term $e^{\tau m_\tau s}$ has no zeros and can therefore be omitted). The matrix on the right-hand side of (2.3) can be seen as a part of $P_\tau(s)$ in (1.28). It can be shown that this is a function of the form
\[
\chi_\tau(s) = \sum_{i=0}^{k} Q_i(s)\, e^{-i\tau s} \tag{2.4}
\]
for some $k \in \mathbb{N}$ and finite polynomials $Q_i(s)$ such that $\deg Q_0(s) \ge \deg Q_i(s)$ for all $i \in \mathbb{Z}_{1..k}$. As we already know from the discussion on p. 8, functions of this kind are known as quasi-polynomials. Because $Q_0(s) = \det(sI - A)$, its roots are the eigenvalues of $A$. If the condition $\deg Q_0(s) > \deg Q_i(s)$ holds for all $i \in \mathbb{Z}_{1..k}$, then the quasi-polynomial (2.4) is termed retarded. Otherwise, i.e. if there is at least one $j \in \mathbb{Z}_{1..k}$ such that $\deg Q_0(s) = \deg Q_j(s)$, it is called neutral. The same terminology is used with respect to DDE (1.27) itself. There are classes of delay systems whose characteristic quasi-polynomials have $\deg Q_0(s) < \deg Q_j(s)$ for at least one $j$. They are known as advanced and are not studied in this text. They never result from the system in Fig. 2.1 with a proper $G(s)$ and should not be expected in realistic applications. If $k = 1$, then $\chi_\tau(s)$ in (2.4) is said to be a quasi-polynomial with a single delay, like that in (1.20). If $k > 1$, then (2.4) is a quasi-polynomial with multiple commensurate delays, as all its delays are integer multiples of one $\tau > 0$. It should be emphasized that a single-delay system, like that in Fig. 2.1, can potentially have a multiple-delay characteristic function, see Example 2.3 below. Yet such a function is always of the commensurate type. In multiple-delay systems, like that in Remark 1.6, characteristic quasi-polynomials can be of the form $Q_0(s) + \sum_i Q_i(s) e^{-\tau_i s}$ with at least one irrational ratio $\tau_{i_1}/\tau_{i_2}$. Such multiple-delay quasi-polynomials are said to have incommensurate delays, and their properties are normally far messier than those of their commensurate counterparts.
Example 2.1. Consider an input-delay system as in (1.10) for the plant $P(s) = D + C(sI - A)^{-1}B$. From (1.22), the state-space realization is
\[
G_{\mathrm{inpd}}(s) = \left[\begin{array}{c|cc} A & B & 0 \\ \hline 0 & 0 & I \\ C & D & 0 \end{array}\right],
\]
for which
\[
\chi_\tau(s) = \det \begin{bmatrix} sI - A & -B e^{-\tau s} \\ 0 & I \end{bmatrix} = \det(sI - A).
\]
This is a standard polynomial, agreeing with the discussion in §1.2.1. O
Example 2.2. If the reflectance in (1.19) is $R(s) = D + C(sI - A)^{-1}B$, then the state-space realization of $G_{\mathrm{wave},0}$ in (1.23) is
\[
G_{\mathrm{wave},0}(s) = \left[\begin{array}{c|cc} A & B & 0 \\ \hline C & D & Z_m \\ 2C & 2D & 1 \end{array}\right].
\]
Its characteristic function is
\[
\chi_\tau(s) = \det \begin{bmatrix} sI - A & -B e^{-\tau s} \\ -C & 1 - D e^{-\tau s} \end{bmatrix}.
\]
The static $R(s) = -1$ as in Fig. 1.5(b) results in the already familiar, from the discussion on p. 9, $\chi_\tau(s) = 1 + e^{-\tau s}$, which is a single-delay neutral quasi-polynomial. If $R(s) = -1/(2s+1)$, as in Fig. 1.5(a), then
\[
\chi_\tau(s) = \det \begin{bmatrix} s + 1/2 & -e^{-\tau s} \\ 1/2 & 1 \end{bmatrix}
= \left(s + \tfrac12\right) + \tfrac12\, e^{-\tau s},
\]
which is a single-delay retarded quasi-polynomial. O
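As a small numerical aside (not part of the original notes), the retarded quasi-polynomial obtained for $R(s) = -1/(2s+1)$ can be probed on the imaginary axis: since $|Q_0(\mathrm{j}\omega)|^2 - |Q_1(\mathrm{j}\omega)|^2 = \omega^2 > 0$ for all $\omega \ne 0$, the delayed term never dominates there, and the delay-sweeping machinery of §2.1.5 then shows that no root can ever cross $\mathrm{j}\mathbb{R}$, i.e. stability here is delay independent. A sketch:

```python
import numpy as np

# For chi(s) = (s + 1/2) + (1/2) e^{-tau s}, compare the magnitudes of the
# non-delayed part Q0(jw) = jw + 1/2 and the delayed part Q1(jw) = 1/2 on a
# frequency grid: the gap equals w^2, strictly positive away from w = 0.
w = np.linspace(1e-3, 100.0, 100_000)
gap = np.abs(1j * w + 0.5) ** 2 - 0.25   # |Q0(jw)|^2 - |Q1(jw)|^2
print(gap.min() > 0.0)                    # True: no crossing frequencies
```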
Example 2.3. Let
\[
G(s) = \left[\begin{array}{cc|ccc}
0 & 1 & 0 & 1 & 0 \\
-q_{00} & -q_{01} & -q_{10} & 1 & 1 \\ \hline
1 & 0 & 0 & 0 & 0 \\
0 & \alpha & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0
\end{array}\right]
\]
for some $\alpha \in \mathbb{R}$. The corresponding characteristic function is
\[
\chi_\tau(s) = s^2 + q_{01} s + q_{00} + (-\alpha s + q_{10} + \alpha q_{00}) e^{-\tau s} + \alpha q_{10}\, e^{-2\tau s}.
\]
If $\alpha \ne 0$, this is a multiple-delay commensurate quasi-polynomial. If $\alpha = 0$, then $\chi_\tau(s)$ has only one delay. In fact, in the latter case
\[
G(s) = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}
\left[\begin{array}{cc|ccc}
0 & 1 & 0 & 1 & 0 \\
-q_{00} & -q_{01} & -q_{10} & 1 & 1 \\ \hline
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0
\end{array}\right]
\]
and $M = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ can be moved through the $2 \times 2$ delay element, whence $F_u(G, \bar D_\tau) = F_u(\tilde G, \bar D_\tau)$ for
\[
\tilde G(s) =
\left[\begin{array}{cc|ccc}
0 & 1 & 0 & 1 & 0 \\
-q_{00} & -q_{01} & -q_{10} & 1 & 1 \\ \hline
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0
\end{array}\right]
\begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}
=
\left[\begin{array}{cc|cc}
0 & 1 & 0 & 0 \\
-q_{00} & -q_{01} & -q_{10} & 1 \\ \hline
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0
\end{array}\right]
\]
and the $1 \times 1$ delay element $\bar D_\tau$, which is a simpler setup (cf. Example 1.1). O
Example 2.4. If $D_{zw} = 0$, then the corresponding characteristic function is always retarded, cf. (1.29). Yet $D_{zw} \ne 0$ does not necessarily lead to a neutral $\chi_\tau(s)$. To see this, consider
\[
G(s) = \left[\begin{array}{c|ccc}
0 & -1 & 1 & 1 \\ \hline
1 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0
\end{array}\right],
\]
whose $D_{zw} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \ne 0$ (although it is nilpotent), but the characteristic function $\chi_\tau(s) = s + e^{-2\tau s}$ is still retarded. O
It happens that the relations between the roots of (2.3), i.e. the solutions to $\chi_\tau(s) = 0$, and the stability of the system in Fig. 2.1 are more delicate than those in the finite-dimensional case. Apart from difficulties with finding them, it might happen that $P_\tau \notin H^\infty$ even if all roots of (2.3) do lie in the open left-half plane. Still, there is a strong link between the roots of (2.3) and stability in all practically important situations. To understand it, some mathematical preliminaries on asymptotic properties of the roots of (2.3) are required.
2.1.2 Asymptotic root properties
The quasi-polynomial $\chi_\tau(s)$ in (2.4) is clearly an entire function of $s$, i.e. it is holomorphic in the whole $\mathbb{C}$. Hence, the set of roots of $\chi_\tau(s)$ is countable and has no accumulation points [63, Thm. 10.18]. This, in turn, implies that the number of roots of (2.4) in every bounded region of $\mathbb{C}$ is finite. Thus, there are sequences of roots with unbounded magnitudes.
The asymptotic behavior of those sequences is understood rather well. For commensurate quasi-polynomials of the form (2.4), which are either neutral or retarded, the roots for large $|s|$ are always organized in a finite number of chains of two possible kinds (as follows from Rouché-type arguments).
[Fig. 2.2(a): neutral chains along the vertical lines $\operatorname{Re} s = \ln|\alpha_i|/\tau$ for $\alpha_i \in \{1/4, 1, 2\}$. Fig. 2.2(b): asymptotic retarded chain for $\alpha_1 = 13$.]

Fig. 2.2: Asymptotic chains of roots of $\chi_\tau(s)$
1. Neutral chains are asymptotic, for sufficiently large $|s|$, to the solutions of an equation $e^{\tau s} = \alpha_i$ for some $\alpha_i \in \mathbb{C} \setminus \{0\}$. Those solutions form the sequence $\{s_{ik}\}_{k\in\mathbb{Z}}$ with
\[
s_{ik} = \frac{\ln|\alpha_i|}{\tau} + \mathrm{j}\,\frac{2\pi k + \arg \alpha_i}{\tau}.
\]
All these $s_{ik}$ are on a vertical line, either in the left-half plane (if $|\alpha_i| < 1$), or in the right-half plane (if $|\alpha_i| > 1$), or on the imaginary axis (if $|\alpha_i| = 1$), see Fig. 2.2(a).
2. Retarded chains are asymptotic, for sufficiently large $|s|$, to the solutions of an equation $s e^{\tau s} = \alpha_i$ for some $\alpha_i \in \mathbb{C} \setminus \{0\}$. Those solutions asymptotically approach [56, Lem. 6.1.2] the sequence $\{s_{ik}\}_{k\in\mathbb{N}}$ with
\[
s_{ik} = -\frac{\ln(2\pi k) - \ln|\tau \alpha_i|}{\tau} + \mathrm{j}\,\frac{2\pi k + \arg \alpha_i - \pi/2}{\tau}
\]
for sufficiently large $k$, as well as their complex conjugates. The real parts of these $s_{ik}$ approach $-\infty$ irrespective of $\alpha_i$, see Fig. 2.2(b).
If (2.4) is retarded, i.e. if $\deg Q_0(s) > \deg Q_i(s)$ for all $i \in \mathbb{Z}_{1..k}$, then there are only retarded chain(s). Otherwise, there is at least one neutral chain and, possibly, also retarded chain(s). More details can be found in [4, Ch. 12] or [56, Sec. 6.1], but digging into this issue is beyond the scope of these notes.
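The retarded-chain asymptote can be probed numerically. The sketch below (parameter values assumed here: $\tau = 1$ and $\alpha = 13$, the value used in Fig. 2.2(b)) plugs the asymptotic points into $s e^{\tau s}$ and checks that the relative residual with respect to $\alpha$ shrinks as $k$ grows:

```python
import numpy as np

# Check that the retarded-chain formula approximates solutions of
# s * exp(tau*s) = alpha ever better for growing k (tau = 1, alpha = 13).
tau, alpha = 1.0, 13.0

def asymptote(k):
    sigma = -(np.log(2 * np.pi * k) - np.log(abs(tau * alpha))) / tau
    omega = (2 * np.pi * k + np.angle(alpha) - np.pi / 2) / tau
    return sigma + 1j * omega

def residual(k):
    s = asymptote(k)
    return abs(s * np.exp(tau * s) - alpha) / abs(alpha)

print(residual(100), residual(1000))   # relative residuals shrink with k
```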
Remark 2.1 (incommensurate delays). The asymptotic behavior of the roots of quasi-polynomials with incommensurate delays, i.e. of the form $P(s) + \sum_i Q_i(s) e^{-\tau_i s}$ with at least one irrational ratio $\tau_{i_1}/\tau_{i_2}$, is slightly different. While retarded chains, if they exist, are the same as in the commensurate case, neutral modes do not approach infinity in a regular way. Nonetheless, they are asymptotically confined to a vertical strip $|\operatorname{Re} s| \le \beta$ for some finite $\beta > 0$. O
Once the qualitative picture of the root tendencies of $\chi_\tau(s)$ for large $|s|$ is clear, neutral chains can be quantified. To this end, note that $sI - A$ is nonsingular whenever $|s|$ is large enough (whenever $|s| > \rho(A)$, as a matter of fact). Hence, (2.3) reads, using (A.5a),
\[
\chi_\tau(s) = \det(sI - A)\, \det\bigl(I - D_{zw} e^{-\tau s} - C_z (sI - A)^{-1} B_w e^{-\tau s}\bigr)
\]
and only the roots of the second factor are of interest. Along every neutral chain we have $|s| \to \infty$, whereas $|e^{-\tau s}|$ remains bounded. Hence, the term $C_z (sI - A)^{-1} B_w e^{-\tau s}$ vanishes and we end up with
\[
\det(I - D_{zw} e^{-\tau s}) = 0 \iff \det(e^{\tau s} I - D_{zw}) = 0.
\]
Thus, all neutral chains are along the vertical lines $\operatorname{Re} s = \ln|\lambda_i|/\tau$ for the distinct nonzero eigenvalues $\lambda_i$ of $D_{zw}$. There are no neutral chains iff $D_{zw}$ is nilpotent, which agrees with Example 2.4.
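This conclusion is easy to check numerically. The sketch below (matrices chosen purely for illustration) recovers the chain abscissas $\ln|\lambda_i|/\tau$ from the eigenvalues of a $D_{zw}$ with eigenvalues $\{1/4, 1, 2\}$, matching the $\alpha_i$ of Fig. 2.2(a), and confirms that the nilpotent $D_{zw}$ of Example 2.4 produces no neutral chains:

```python
import numpy as np

# Neutral-chain abscissas Re s = ln|l_i|/tau from the nonzero eigenvalues
# l_i of Dzw; a nilpotent Dzw has no nonzero eigenvalues, hence no chains.
tau = 1.0
Dzw = np.diag([0.25, 1.0, 2.0])                # eigenvalues 1/4, 1, 2
lam = np.linalg.eigvals(Dzw)
abscissas = np.log(np.abs(lam[np.abs(lam) > 1e-12])) / tau
print(np.sort(abscissas))                      # one chain in the LHP, one on
                                               # the axis, one in the RHP

Dzw_nilp = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent (Example 2.4)
lam_n = np.linalg.eigvals(Dzw_nilp)
print(np.all(np.abs(lam_n) < 1e-8))            # True: no neutral chains
```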
2.1.3 Stability and roots of characteristic function
An important outcome of the asymptotic analysis above for retarded quasi-polynomials of the form (2.4) is that only a finite number of their roots can be located in any right-half plane $\mathbb{C}_\beta$. This suggests that the effect of those roots on the properties of the transfer function $P_\tau(s)$ in the right-half plane $\mathbb{C}_0$ is essentially the same as in the case of rational transfer functions.
The neutral case under $\rho(D_{zw}) < 1$ is similar from the stability point of view. Namely, in this situation neutral chains converge only along vertical lines lying in the open left-half plane $\mathbb{C} \setminus \bar{\mathbb{C}}_0$ and there can be at most a finite number of roots in $\mathbb{C}_{\ln(\rho(D_{zw}))}$, i.e. neutral asymptotic chains are separated from the imaginary axis (because $\ln(\rho(D_{zw})) < 0$). This leads to the following important result:
Proposition 2.1. If $\rho(D_{zw}) < 1$, then $P_\tau(s)$ given by (1.28) belongs to $H^\infty$ iff the characteristic function (2.3) has no roots in $\bar{\mathbb{C}}_0$.
Proof (outline). Sufficiency follows from the observation, already made at the end of §2.1.2, that
\[
\Phi(s) := \begin{bmatrix} sI - A & -B_w e^{-\tau s} \\ -C_z & I - D_{zw} e^{-\tau s} \end{bmatrix}^{-1}
\]
is bounded for $s \in \bar{\mathbb{C}}_0$ with $|s|$ sufficiently large whenever $\rho(D_{zw}) < 1$, see the proofs in [5, Sec. 3]. Hence, $\Phi \in H^\infty$ and so is $P_\tau$. Necessity assumes implicitly that there is no redundancy in $\bar D_\tau$ in Fig. 2.1.
The case of $\rho(D_{zw}) > 1$ is technically simple and the conclusion is also quite in line with our finite-dimensional intuition. Namely, there is clearly an infinite number of roots of $\chi_\tau(s)$ in $\bar{\mathbb{C}}_0$. Hence, $P_\tau(s)$ is not holomorphic in $\mathbb{C}_0$ and $P_\tau \notin H^\infty$, so the system is unstable. We thus have the following result:
Proposition 2.2. If $\rho(D_{zw}) > 1$, then $P_\tau \notin H^\infty$.
The most delicate situation takes place if $\rho(D_{zw}) = 1$. This is the case where our finite-dimensional insight might betray us, as demonstrated by the example below.
Example 2.5. Consider the system in Fig. 2.1 for $\tau = 1$ and
\[
G(s) = \begin{bmatrix} -s/(s+1) & -s/(s+1) \\ 1/(s+1) & 1/(s+1) \end{bmatrix}
= \left[\begin{array}{c|cc} -1 & 1 & 1 \\ \hline 1 & -1 & -1 \\ 1 & 0 & 0 \end{array}\right]
\;\Longrightarrow\;
P_\tau(s) = \frac{1}{s + 1 + s e^{-s}}
\]
(this $P_\tau(s)$ is proper, i.e. bounded on $\mathbb{C}_\alpha$ for a sufficiently large $\alpha$). Its characteristic function
\[
\chi_\tau(s) = \det \begin{bmatrix} s + 1 & -e^{-s} \\ -1 & 1 + e^{-s} \end{bmatrix} = s + 1 + s e^{-s}
\]
is a single-delay neutral quasi-polynomial. Clearly, $\chi_\tau(s) = 0$ iff $s$ satisfies $-e^{-s} = 1 + 1/s$. Assuming that $s = \sigma + \mathrm{j}\omega$ is a root of this $\chi_\tau(s)$, the magnitude equality $|e^{-s}| = |1 + 1/s|$ can be expressed as
\[
e^{-\sigma}
= \left| 1 + \frac{1}{\sigma + \mathrm{j}\omega} \right|
= \left| \left(1 + \frac{\sigma}{\sigma^2 + \omega^2}\right) - \mathrm{j}\,\frac{\omega}{\sigma^2 + \omega^2} \right|
\ge \left| 1 + \frac{\sigma}{\sigma^2 + \omega^2} \right|.
\]
First, let $\sigma = 0$. The first equality above then reads $1 = |1 + 1/(\mathrm{j}\omega)| = \sqrt{1 + 1/\omega^2}$, which holds for no $\omega \in \mathbb{R}$. Now, let $\sigma > 0$. In this case the inequality above leads to $e^{-\sigma} > 1$, which is again impossible. Thus, $P_\tau(s)$ has no poles in the closed RHP. Nevertheless, this $P_\tau \notin H^\infty$. To see this, let $\{s_i\} \subset \mathbb{C} \setminus \bar{\mathbb{C}}_0$ be a sequence of roots of $\chi_\tau(s)$, so that
\[
s_i + 1 + s_i e^{-s_i} = 0, \quad\text{with}\quad \lim_{i\to\infty} |s_i| = \infty,
\]
which always exists, as we already know. Define then another sequence, $\{\hat s_i\} \subset \mathbb{C}_0$, where $\hat s_i = -s_i$ and thus $\lim_{i\to\infty} |\hat s_i| = \infty$ as well. The values of $P_\tau(s)$ at each $\hat s_i$ are
\[
P_\tau(\hat s_i) = \frac{1}{1 - s_i - s_i e^{s_i}}
= \frac{1}{1 - s_i + s_i^2/(1 + s_i)}
= 1 + s_i.
\]
Thus, we have a sequence in $\mathbb{C}_0$ along which $\lim_{i\to\infty} |P_\tau(\hat s_i)| = \infty$, so $P_\tau(s)$ is not bounded in $\mathbb{C}_0$. O
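The mechanism of this example can be reproduced numerically. The sketch below (the Newton seeds are an assumption based on the neutral-chain asymptotics, which place roots near $\mathrm{j}(2k+1)\pi$ just left of the imaginary axis) locates roots $s_i$ of $s + 1 + s e^{-s} = 0$ and evaluates $P_\tau$ at the mirrored points $-s_i \in \mathbb{C}_0$, where its magnitude grows like $|1 + s_i|$:

```python
import numpy as np

# Find roots of chi(s) = s + 1 + s*exp(-s) by Newton's method and confirm
# that |P(-s_i)| grows without bound although every root has Re s_i < 0.
def chi(s):
    return s + 1 + s * np.exp(-s)

def chi_prime(s):
    return 1 + (1 - s) * np.exp(-s)

def root_near(k, iters=60):
    s = 1j * (2 * k + 1) * np.pi        # seed on the asymptotic neutral chain
    for _ in range(iters):
        s -= chi(s) / chi_prime(s)
    return s

for k in (5, 20, 80):
    s = root_near(k)
    P_at_mirror = 1 / (1 - s - s * np.exp(s))   # P(-s), cf. the derivation
    print(k, s.real < 0, abs(chi(s)) < 1e-9, abs(P_at_mirror))
```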
Thus, if $\rho(D_{zw}) = 1$, it might happen that a system having no poles in $\bar{\mathbb{C}}_0$ is nevertheless unstable, which has no counterpart for finite-dimensional systems. To muddle things even more, the transfer function
\[
\frac{1}{(s+1)(s + 1 + s e^{-s})},
\]
which has the same poles as the transfer function in Example 2.5, plus an additional pole at $s = -1$, does belong to $H^\infty$, see [57], and is thus stable. Moreover, there are systems with $\rho(D_{zw}) = 1$ that have infinitely many roots in $\mathbb{C}_0$.
It is possible to characterize all these situations, but that is intricate and might not be quite important from the engineering point of view. This is because of a simple observation: even if a system with $\rho(D_{zw}) = 1$ is stable, this property is extremely fragile and thus impractical. Indeed, a very small perturbation to $D_{zw}$ can render $\rho(D_{zw}) > 1$ and the system unstable, with an infinite number of poles in $\mathbb{C}_0$. For that reason, we regard delay systems with $\rho(D_{zw}) = 1$ as practically unstable, which simplifies the analysis and does not entail any loss of engineering generality.
Remark 2.2 (regaining the link). The discussion above effectively says that the connection between stability and roots of the characteristic function is conventional for the class of systems of interest, those with $\rho(D_{zw}) < 1$. Still, we no longer have this connection for a general $D_{zw}$. But it can be regained with a slight modification of the stability region in the complex plane. Namely, the system is stable whenever there is $\epsilon > 0$ such that all roots of $\chi_\tau(s)$ lie in $\mathbb{C} \setminus \mathbb{C}_{-\epsilon}$. The price to pay for this modification is the loss of a clear intuition for transfer functions along the new stability boundary, $-\epsilon + \mathrm{j}\mathbb{R}$. Because the problematic case of $\rho(D_{zw}) = 1$ is ruled out anyway, we proceed with the standard definition. O
2.1.4 Nyquist stability criterion
The Nyquist criterion is an elegant way to count the number of unstable closed-loop poles using knowledge of the number of unstable open-loop poles and the behavior of the polar plot of the loop frequency response. Mathematically, the criterion is an application of Cauchy's argument principle to the open-loop transfer function with respect to the Nyquist contour. The latter comprises a semicircular arc connecting $+\mathrm{j}r$ and $-\mathrm{j}r$ in the clockwise direction, i.e. $r\cos\theta + \mathrm{j}r\sin\theta$ for $\theta \in [-\pi/2, \pi/2]$, with $r \to \infty$, closed along the imaginary axis (encircling purely imaginary poles of the loop transfer function from the right, if present). A key factor enabling stability analysis with this approach is that all unstable poles of both the open- and closed-loop systems lie inside the Nyquist contour. The criterion also complements well the loop-shaping design method, because the loop transfer function on the Nyquist contour is merely its frequency response, with the values on the semicircular arc collapsing to a point in the finite-dimensional case.
Projecting these ideas onto the system in Fig. 2.1 in the case when $G_{zw}$ is SISO, we may consider analyzing its unstable poles on the basis of the polar plot of the “loop gain”
\[
L_\tau(s) := G_{zw}(s)\, e^{-\tau s}. \tag{2.5}
\]
A clear advantage here is that $G_{zw}$ is finite dimensional, i.e. standard, and the effects of the delay on the frequency response of $L_\tau$ are simple and intuitive, as discussed in §1.2.1 on p. 5. Certain care should be
[Nichols chart: loop phase (deg) on the horizontal axis, loop gain (dB) on the vertical axis.]

Fig. 2.3: Nichols plots of the loop frequency response of (2.6)
taken to analyze the poles of $P_\tau(s)$ properly. This is because the Nyquist analysis of $L_\tau$ is only concerned with the poles of $(I - L_\tau)^{-1} = (I - G_{zw}\bar D_\tau)^{-1}$, whereas $P_\tau = G_{yu} + G_{yw}\bar D_\tau (I - G_{zw}\bar D_\tau)^{-1} G_{zu}$ might contain additional modes, those of $G_{yu}$, $G_{yw}$, and $G_{zu}$, or might have some modes canceled. Still, accounting for those potential additions / cancellations is not a challenge because $G$ is finite dimensional. The simplest way to do that is to use a potentially nonminimal form of $G_{zw}(s)$, with all poles of $G(s)$ included in it. In fact, if such canceled poles are unstable, the system is unstable for all $\tau$. If these poles are stable, they have no effect on the stability of $P_\tau$.
So, let $G_{zw}$ be SISO and assume that $|G_{zw}(\infty)| < 1$ (otherwise the system is unstable anyway). Under this assumption, we know that there may be only a finite number of “closed-loop” poles in $\bar{\mathbb{C}}_0$. Hence, the Nyquist contour still encircles all possible unstable poles and can be used as is. The only subtle point here is the values of $G_{zw}(s)$ on the semicircular arc of the contour, whose radius approaches infinity. If $G_{zw}(s)$ is strictly proper, then $G_{zw}(s)$ is zero on all its parts, exactly as in the finite-dimensional case. But if $G_{zw}(\infty) = \gamma \ne 0$ for some $|\gamma| < 1$, then
\[
L_\tau(r\cos\theta + \mathrm{j}r\sin\theta) \xrightarrow{\; r\to\infty \;} \gamma\, e^{-\tau r\cos\theta} e^{-\mathrm{j}\tau r\sin\theta}
\]
is no longer a point. Rather, it is a curve connecting $\gamma e^{-\mathrm{j}\tau r}$ and $\gamma e^{\mathrm{j}\tau r}$ through the origin and remaining within the closed disk $|\gamma|\bar{\mathbb{D}} \subset \mathbb{D}$. Although this behavior deviates from that in the finite-dimensional case, it does not affect the stability analysis, as the connecting curve remains within the open unit disk $\mathbb{D}$. Thus, the Nyquist stability criterion extends verbatim to delay systems in Fig. 2.1 with $|G_{zw}(\infty)| < 1$.
Example 2.6. To illustrate the use of the Nyquist criterion in the context of the stability analysis of the delay system in Fig. 2.1, and the insights one can gain from it, consider the system with
\[
G(s) = \left[\begin{array}{cc|cc}
0 & 1 & 0 & 0 \\
-q_{00} & -q_{01} & -q_{10} & 1 \\ \hline
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0
\end{array}\right]
= \frac{1}{s^2 + q_{01} s + q_{00}} \begin{bmatrix} -q_{10} & 1 \\ -q_{10} & 1 \end{bmatrix}, \tag{2.6}
\]
which is the system studied in Example 2.3 on p. 22 (under $\alpha = 0$). It is readily seen that the poles of $P_\tau(s)$ coincide with those of $1/(1 - L_\tau)$ in this case. Let $q_{00} = 1$, $q_{01} = 0.1$, and $q_{10} = 0.4$. For this choice of the parameters the system $G_{zw}$ is itself stable, so the system is stable iff the Nyquist plot of $L_\tau$ does not encircle the critical point. It might be easier to analyze the behavior of $L_\tau(\mathrm{j}\omega)$ on the Nichols chart, because it is cleaner than the Nyquist plot for systems with large phase lags and several crossover frequencies. The Nichols plots of the studied system for several values of $\tau$ are presented in Fig. 2.3. We can readily conclude that $P_\tau$ is stable for $\tau = 0$ and $\tau = 4$ (solid lines) and unstable for $\tau = 2$, $\tau = 8$, and $\tau = 11$ (dashed lines). As a matter of fact, if $\tau = 2$ or $\tau = 8$, there are 2 unstable poles of $P_\tau(s)$, while if $\tau = 11$, there are 4 of them.
Because the impact of $\tau$ on $L_\tau(\mathrm{j}\omega)$ is uncomplicated—only the phase is affected by $\tau$, linearly—Nyquist arguments offer valuable insight into the effect of $\bar D_\tau$ on the stability of the system. Indeed, with $G$ from (2.6), $L_\tau(\mathrm{j}\omega)$ has two crossover frequencies, i.e. frequencies at which $|L_\tau(\mathrm{j}\omega)| = |G_{zw}(\mathrm{j}\omega)| = 0\,\mathrm{dB}$. They are $\omega_{c,1} \approx 0.7795$ and $\omega_{c,2} \approx 1.1757$ (rad/s), denoted by round and square markers, respectively, in Fig. 2.3. Poles of $P_\tau(s)$ cross the imaginary axis when the Nichols plot of $L_\tau(s)$ crosses any of the critical points $(-180 - 360k, 0)$, $k \in \mathbb{Z}$. Because the delay does not affect the magnitude of $L_\tau(\mathrm{j}\omega)$, a critical point can only be crossed at either of the crossover frequencies. Being stable at $\tau = 0$, the system clearly remains stable for sufficiently small $\tau > 0$. But when $\tau$ becomes sufficiently large, the plot crosses the critical point $(-180, 0)$ at $\omega = \omega_{c,2}$ and the system loses stability, see the dashed line for $\tau = 2$ in Fig. 2.3. Then, as $\tau$ grows further, the second point crosses that critical point, this time at $\omega = \omega_{c,1}$. It happens that at this very $\tau$ the point $L_\tau(\mathrm{j}\omega_{c,2})$ is still to the right of the next critical point, $(-540, 0)$. Hence, as soon as $L_\tau(\mathrm{j}\omega_{c,1})$ passes $(-180, 0)$, the system becomes stable again, cf. the solid line for $\tau = 4$. As $\tau$ keeps growing, $L_\tau(\mathrm{j}\omega_{c,2})$ eventually crosses $(-540, 0)$, at which point stability is lost again, see the dashed line for $\tau = 8$. And the system never becomes stable again after that. To understand the reason, note that the angular distance between $L_\tau(\mathrm{j}\omega_{c,1})$ and $L_\tau(\mathrm{j}\omega_{c,2})$ grows with $\tau$, just because the phase lag due to the delay element is proportional to the frequency and $\omega_{c,2} > \omega_{c,1}$. Hence, there is a delay $\tau = \tau^\star$ such that
\[
\arg L_{\tau^\star}(\mathrm{j}\omega_{c,2}) = \arg L_{\tau^\star}(\mathrm{j}\omega_{c,1}) - 2\pi
\iff
\tau^\star = \frac{2\pi + \arg G_{zw}(\mathrm{j}\omega_{c,2}) - \arg G_{zw}(\mathrm{j}\omega_{c,1})}{\omega_{c,2} - \omega_{c,1}}
\]
(in our case $\tau^\star \approx 9.1775$) and the angular distance between $L_\tau(\mathrm{j}\omega_{c,1})$ and $L_\tau(\mathrm{j}\omega_{c,2})$ exceeds $360^\circ$ for all $\tau > \tau^\star$. Thus, there is at least one critical point between them for all such delays (and for larger delays even more, as for the pale dashed curve in Fig. 2.3 corresponding to $\tau = 11$). O
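The numbers quoted in this example can be reproduced with a few lines of code. The sketch below (values as assumed in the example) finds the crossover frequencies from $|G_{zw}(\mathrm{j}\omega)| = 1$ and evaluates the formula for $\tau^\star$:

```python
import numpy as np

# Gzw(s) = -q10 / (s^2 + q01*s + q00) with q00 = 1, q01 = 0.1, q10 = 0.4.
# Crossover frequencies solve |Gzw(jw)| = 1, i.e. the even quartic
# w^4 - (2*q00 - q01^2)*w^2 + q00^2 - q10^2 = 0.
q00, q01, q10 = 1.0, 0.1, 0.4

def Gzw(s):
    return -q10 / (s**2 + q01 * s + q00)

roots = np.roots([1.0, 0.0, -(2 * q00 - q01**2), 0.0, q00**2 - q10**2])
wc1, wc2 = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
tau_star = (2 * np.pi + np.angle(Gzw(1j * wc2))
            - np.angle(Gzw(1j * wc1))) / (wc2 - wc1)
print(wc1, wc2, tau_star)   # ~0.7795, ~1.1757, ~9.1775
```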
The arguments above apply to the general situation. Although the details might be knotty and stability conditions are not readily formulated for every possible frequency-response curve, some qualitative conclusions can be drawn. In particular, it is not hard to convince oneself that the following properties hold:
- if there are no nonzero crossover frequencies, then the stability / instability property of the zero-delay system holds for all $\tau > 0$ (delay-independent condition);
- stability / instability of the zero-delay system is preserved for “sufficiently small” $\tau > 0$;
- unless there are no nonzero crossover frequencies, there is a finite $\tau^\star$ such that the system is unstable for all $\tau > \tau^\star$.
This said, the Nyquist criterion might still not be the most convenient tool for computing the intervals of $\tau$ for which the system is stable. Direct analyses of characteristic quasi-polynomials, studied in the next two subsections, might be advantageous in this respect.
2.1.5 Delay sweeping (direct method of Walton–Marshall)
A fundamental property of quasi-polynomials $\chi_\tau(s)$ as in (2.3) under $\rho(D_{zw}) < 1$ is that the rightmost frontier of their roots, i.e. $\sup_{\chi_\tau(s)=0} \operatorname{Re} s$, is a continuous function of $\tau$ for all $\tau \ge 0$, see [10, Thms. 2.2 and 2.3]. Consequently, as $\tau$ changes in $\mathbb{R}_+$, roots might migrate from the left-half plane $\mathbb{C} \setminus \bar{\mathbb{C}}_0$ to the right-half plane $\mathbb{C}_0$, or vice versa, only through the imaginary axis. Thus, we can start by counting the unstable roots of the zero-delay version $\chi_0(s)$, which is a standard polynomial analysis problem, and then count crossings of the imaginary axis by the roots of $\chi_\tau(s)$ as $\tau$ sweeps the whole positive semi-axis. Each time roots cross from left to right, which is referred to as a switch, we add to the count of unstable poles. If roots cross from right to left, a reversal, we subtract from that count. Walton and Marshall showed in [75] that this count can be carried out efficiently, using only finite polynomial analysis, and that most of its steps depend only on properties of the $Q_i(s)$ and are independent of $\tau$.
Single-delay quasi-polynomials
Let us start with the single-delay version of (2.4), which is the quasi-polynomial
\[
\chi_\tau(s) = Q_0(s) + Q_1(s) e^{-\tau s}, \tag{2.7}
\]
where the real polynomials $Q_0(s)$ and $Q_1(s)$ satisfy the following assumptions:
A1: $\deg Q_0(s) \ge \deg Q_1(s)$;
A2: $|\lim_{s\to\infty} Q_1(s)/Q_0(s)| < 1$;
A3: $Q_0(s)$ and $Q_1(s)$ have no common roots in $\bar{\mathbb{C}}_0$;
A4: $Q_0(0) + Q_1(0) \ne 0$.
Assumption A1 just says that $\chi_\tau(s)$ is either neutral or retarded, while A2 guarantees that there are no neutral root chains in $\bar{\mathbb{C}}_0$. Common unstable roots of $Q_0(s)$ and $Q_1(s)$ would be roots of $\chi_\tau(s)$ regardless of $\tau$. If A4 does not hold, then $\chi_\tau(s)$ has an (unstable) root at the origin for all $\tau$. Thus, A1–A4 impose no loss of generality: if they do not hold, the stability analysis is futile.
By the logic of the method, we now analyze purely imaginary roots of $\chi_\tau(s)$ for $\tau > 0$. In other words, we seek $\omega \in \mathbb{R}$ that solve
\[
Q_0(\mathrm{j}\omega) + Q_1(\mathrm{j}\omega) e^{-\mathrm{j}\tau\omega} = 0
\iff
e^{\mathrm{j}\tau\omega} = -\frac{Q_1(\mathrm{j}\omega)}{Q_0(\mathrm{j}\omega)}.
\]
The latter equation, in turn, is equivalent to the following two equalities:
\[
|Q_0(\mathrm{j}\omega)| = |Q_1(\mathrm{j}\omega)| \tag{2.8a}
\]
($Q_0(\mathrm{j}\omega)$ and $Q_1(\mathrm{j}\omega)$ cannot vanish simultaneously by A3), known as the magnitude relation, and
\[
\tau\omega = \arg Q_1(\mathrm{j}\omega) - \arg Q_0(\mathrm{j}\omega) + (2k-1)\pi \quad\text{for some } k \in \mathbb{Z}, \tag{2.8b}
\]
dubbed the phase relation. Three observations on potential solutions of (2.8) in $\omega$ and $\tau$ are in order. First, if $\omega = 0$ solves the magnitude relation (2.8a), then the phase relation (2.8b) is not valid because of A4. This agrees with the intuition that roots cannot cross the imaginary axis at the origin: $\tau$ has no effect on $\chi_\tau(s)$ there. Second, if $\omega = \omega_i \ne 0$ solves (2.8a), then so does $\omega = -\omega_i$, and every $\tau$ solving (2.8b) for $\omega_i$ does so for $-\omega_i$ too. Therefore, roots always migrate in pairs. Third, if $\omega = \omega_i > 0$ solves (2.8a), then (2.8b) is solved by all positive
\[
\tau_{ik} = \tau_{i0} + \frac{2\pi}{\omega_i} k, \quad k \in \mathbb{Z}_+, \tag{2.9}
\]
where $\tau_{i0}$ is the smallest nonnegative solution of the phase relation for $\omega_i$. Thus, to find purely imaginary roots of $\chi_\tau(s)$ we only need to find all positive solutions of the magnitude relation (2.8a), which are known as crossing frequencies. It is readily seen that these frequencies are the positive roots of
\[
F(\omega) = Q_0(\mathrm{j}\omega) Q_0(-\mathrm{j}\omega) - Q_1(\mathrm{j}\omega) Q_1(-\mathrm{j}\omega), \tag{2.10}
\]
which is an even polynomial whose leading coefficient is positive, by A1 and A2, and whose effective degree, i.e. the degree with respect to $\omega^2$, equals that of $Q_0(s)$. If $F(\omega)$ has no positive real roots, no imaginary-axis crossings can take place and the stability properties are delay independent. If (2.10) is solvable, then there is an infinite number of crossings as $\tau$ increases. But if roots of (2.7) do cross $\mathrm{j}\mathbb{R}$ as $\tau$ grows, it happens only at a finite number of points, independent of $\tau$. This is an important property, which facilitates the characterization of stability regions in terms of $\tau$.
Having found the crossing frequencies, the next step is to understand the directions of those crossings. The following result clarifies this issue:
Lemma 2.3. Let $\omega_i > 0$ be a solution of $F(\omega_i) = 0$. Each purely imaginary point $\mathrm{j}\omega_i$ is
1. a switch iff $F(\omega)$ changes its sign from minus to plus at $\omega = \omega_i$ as $\omega$ increases,
2. a reversal iff $F(\omega)$ changes its sign from plus to minus at $\omega = \omega_i$ as $\omega$ increases,
3. a tangential point, at which no roots of $\chi_\tau(s)$ cross the imaginary axis, otherwise.
In particular, the point is a switch (reversal) if $\mathrm{d}F(\omega)/\mathrm{d}\omega|_{\omega=\omega_i} > 0$ ($\mathrm{d}F(\omega)/\mathrm{d}\omega|_{\omega=\omega_i} < 0$).
Proof. Crossing directions are determined by the function
\[
\varsigma(\omega) = \operatorname{sign} \operatorname{Re} \left.\frac{\mathrm{d}s}{\mathrm{d}\tau}\right|_{s=\mathrm{j}\omega}.
\]
Namely, we have a switch at $\omega_i$ if $\varsigma(\omega_i) > 0$, a reversal if $\varsigma(\omega_i) < 0$, and an uncertain situation if $\varsigma(\omega_i) = 0$, where higher derivatives are required. Thus, our first goal is to calculate $\varsigma(\omega)$ at the imaginary roots of $\chi_\tau(s)$. To this end, differentiate the characteristic function with respect to $\tau$ along any of its roots:
\[
0 = \frac{\mathrm{d}\chi_\tau(s)}{\mathrm{d}\tau}
= \frac{\mathrm{d}Q_0(s)}{\mathrm{d}s}\frac{\mathrm{d}s}{\mathrm{d}\tau}
+ \frac{\mathrm{d}Q_1(s)}{\mathrm{d}s}\frac{\mathrm{d}s}{\mathrm{d}\tau}\, e^{-\tau s}
- Q_1(s) e^{-\tau s} \left( \tau \frac{\mathrm{d}s}{\mathrm{d}\tau} + s \right).
\]
Hence,
\[
\frac{\mathrm{d}s}{\mathrm{d}\tau}
= \frac{s Q_1(s) e^{-\tau s}}{Q_0'(s) + Q_1'(s) e^{-\tau s} - \tau Q_1(s) e^{-\tau s}}
= -s \left( \frac{Q_0'(s)}{Q_0(s)} - \frac{Q_1'(s)}{Q_1(s)} + \tau \right)^{-1},
\]
where the equality $Q_1(s) e^{-\tau s} = -Q_0(s)$ was used and $(\cdot)'$ stands for differentiation with respect to $s$. Because $\operatorname{sign}\operatorname{Re} z^{-1} = \operatorname{sign}\operatorname{Re} z$,
\[
\varsigma(\omega)
= \operatorname{sign} \operatorname{Re}\left( -\frac{1}{\mathrm{j}\omega} \left( \frac{Q_0'(\mathrm{j}\omega)}{Q_0(\mathrm{j}\omega)} - \frac{Q_1'(\mathrm{j}\omega)}{Q_1(\mathrm{j}\omega)} + \tau \right) \right)
= \operatorname{sign} \operatorname{Re}\left( \frac{\mathrm{j}}{\omega} \left( \frac{Q_0'(\mathrm{j}\omega)}{Q_0(\mathrm{j}\omega)} - \frac{Q_1'(\mathrm{j}\omega)}{Q_1(\mathrm{j}\omega)} \right) \right)
\quad \left(\text{because } \operatorname{Re}\frac{\tau}{\mathrm{j}\omega} = 0\right)
\]
\[
= \operatorname{sign} \operatorname{Re}\left( \mathrm{j} \left( \frac{Q_0'(\mathrm{j}\omega)}{Q_0(\mathrm{j}\omega)} - \frac{Q_1'(\mathrm{j}\omega)}{Q_1(\mathrm{j}\omega)} \right) \right)
\quad (\text{because } \omega > 0).
\]
Multiplying the expression under “sign” by $Q_0(\mathrm{j}\omega) Q_0(-\mathrm{j}\omega) = Q_1(\mathrm{j}\omega) Q_1(-\mathrm{j}\omega) > 0$, we get
\[
\varsigma(\omega)
= \operatorname{sign} \operatorname{Re}\left( \mathrm{j}\left( \frac{\mathrm{d}Q_0(\mathrm{j}\omega)}{\mathrm{d}(\mathrm{j}\omega)} Q_0(-\mathrm{j}\omega) - \frac{\mathrm{d}Q_1(\mathrm{j}\omega)}{\mathrm{d}(\mathrm{j}\omega)} Q_1(-\mathrm{j}\omega) \right) \right)
= \operatorname{sign} \operatorname{Re}\left( \frac{\mathrm{d}Q_0(\mathrm{j}\omega)}{\mathrm{d}\omega} Q_0(-\mathrm{j}\omega) - \frac{\mathrm{d}Q_1(\mathrm{j}\omega)}{\mathrm{d}\omega} Q_1(-\mathrm{j}\omega) \right).
\]
Because $\operatorname{sign}\operatorname{Re} z = \operatorname{sign}(z + \bar z)$,
\[
\varsigma(\omega)
= \operatorname{sign}\left( \frac{\mathrm{d}Q_0(\mathrm{j}\omega)}{\mathrm{d}\omega} Q_0(-\mathrm{j}\omega) - \frac{\mathrm{d}Q_1(\mathrm{j}\omega)}{\mathrm{d}\omega} Q_1(-\mathrm{j}\omega)
+ \frac{\mathrm{d}Q_0(-\mathrm{j}\omega)}{\mathrm{d}\omega} Q_0(\mathrm{j}\omega) - \frac{\mathrm{d}Q_1(-\mathrm{j}\omega)}{\mathrm{d}\omega} Q_1(\mathrm{j}\omega) \right)
= \operatorname{sign} \frac{\mathrm{d}F(\omega)}{\mathrm{d}\omega}.
\]
This proves the last statement.
To complete the proof we need to address the singular case of $\varsigma(\omega_i) = 0$. Its handling is quite technical and can be found in [75, §6.2].
[Sketch: $F(\omega)$ versus $\omega$, with sign changes from minus to plus at the switches $\omega_1$ and $\omega_4$, from plus to minus at the reversals $\omega_2$ and $\omega_5$, and a tangential point at $\omega_3$.]

Fig. 2.4: Qualitative sketch of $F(\omega)$
Remarkably, not only the crossing frequencies but also the crossing directions are independent of $\tau$. Moreover, because $F(\omega)$ is even and its leading coefficient is positive (by A2), $\lim_{\omega\to\infty} F(\omega) = +\infty$. This means that the last crossing of the zero level, at the highest $\omega$, is always a point where the sign changes from minus to plus. Hence, the highest non-tangential crossing frequency is always a switch. Also, switch and reversal frequencies always alternate, see Fig. 2.4 for a qualitative picture.
Having understood the crossing directions at each crossing frequency, it is time to return to the delay chains in (2.9), which solve the phase relation (2.8b). Each crossing frequency determines such a chain and each chain is stuck with the same crossing direction. All we need now is to order all crossing delays as an ascending sequence $\{\tau_j\}_{j\in\mathbb{Z}_+}$ with $\tau_0 = 0$, still marked by their crossing directions, and start counting the number of $\mathbb{C}_0$ roots of $\chi_\tau(s)$, say $\nu \in \mathbb{Z}_+$, as $\tau$ grows from $\tau = 0$ up. At each $\tau_j$ the counter $\nu$ changes, so we end up with a sequence $\{\nu_j\}_{j\in\mathbb{Z}_+}$. Its starting point is the zero-delay $\nu_0$ and the $\nu_j$ at each $j \in \mathbb{N}$ are updated as
\[
\nu_j = \begin{cases}
\nu_{j-1} + 2 & \text{if } \tau_j \text{ corresponds to a switch} \\
\nu_{j-1} - 2 & \text{if } \tau_j \text{ corresponds to a reversal} \\
\nu_{j-1} & \text{if } \tau_j \text{ corresponds to a tangential point}
\end{cases} \tag{2.11}
\]
(the counter changes by 2 because roots migrate at both $\mathrm{j}\omega$ and $-\mathrm{j}\omega$). The system is then stable in each interval $\tau \in (\tau_j, \tau_{j+1})$ with $\nu_j = 0$. The number of such intervals might be arbitrary and the first stable interval is not necessarily $[0, \tau_1)$, meaning that the delay might have a stabilizing effect. Anyhow, unless all crossing frequencies are tangential points, there is always a finite $j^\star \in \mathbb{Z}_+$ such that $\nu_j > 0$ for all $j \ge j^\star$, i.e. the counting may stop at a finite $j$. This is because the highest non-tangential crossing frequency $\omega_l$ is always a switch, so the distance between two successive switch delays in that chain, $\tau_{l,k+1} - \tau_{l,k} = 2\pi/\omega_l$, is always smaller than that between any possible reversal delays. Hence, switches accumulate faster and the sequence $\{\nu_j\}$ is unbounded.
Example 2.7. To illustrate this procedure, consider again the system from (2.6) for $q_{00} = 1$, $q_{01} = 0.1$, and some $q_{10} > 0$. The zero-delay characteristic polynomial is $\chi_0(s) = s^2 + 0.1 s + 1 + q_{10}$ in this case and it is Hurwitz, i.e. $\nu_0 = 0$. The polynomial
\[
F(\omega) = (-\omega^2 + \mathrm{j}0.1\omega + 1)(-\omega^2 - \mathrm{j}0.1\omega + 1) - q_{10}^2 = \omega^4 - 2 \cdot 0.995\,\omega^2 + 1 - q_{10}^2
\]
and $F'(\omega) = 4\omega(\omega^2 - 0.995)$ vanishes for $\omega \ne 0$ only if $\omega^2 = 0.995$. Four scenarios are possible, depending on $q_{10}$.
1. If $0 < q_{10} < \sqrt{1 - 0.995^2} \approx 0.0999$, there are no real roots of $F(\omega)$. Hence, taking into account that $\nu_0 = 0$, the system is stable for all $\tau \ge 0$, i.e. it is delay-independent stable.
2. If q10 Dp1� 0:9952, there is a double positive real roots of �.!/,
!21;2 D 0:995:
Because �.!/ D .!2 �0:995/2 � 0 for this q10, �.!/ does not change its sign at crossing frequencies
and we have a tangential point. The delays at which the roots of �� .s/ are on the imaginary axis can
2.1. MODAL METHODS 31
be determined from the phase relation (2.8b). To this end, note that
argQ1.j!/ � argQ0.j!/ D arg q10 � arg.1 � !2 C j0:1!/ D(
arctan 0:1!!2�1
if 0 � ! � 1
arctan 0:1!!2�1
� � if ! � 1
Because !1;2 < 1, (2.8b) reads
�!1;2 D arctan0:1!1;2
!21;2 � 1 C .2k � 1/� � �1:5207C .2k � 1/�
and crossing delays are positive iff k 2 N. Thus, the system is stable iff
� ¤ 1:62495C 6:31476k; k 2 ZC
3. If $\sqrt{1-0.995^2}<q_{10}<1$, there are two positive real roots of $\Delta(\omega)$,
$$\omega_1^2 = 0.995+\sqrt{0.995^2-1+q_{10}^2} \quad\text{and}\quad \omega_2^2 = 0.995-\sqrt{0.995^2-1+q_{10}^2},$$
the first of which is a switch and the second is a reversal. To illustrate properties of the system, select $q_{10}=0.4$, like in the Nyquist analysis of Example 2.6. With this choice $\omega_1\approx 1.1757$ and $\omega_2\approx 0.7795$ and the phase relations read
$$\tau_{1k}\approx 0.2537+5.3441k = \{0.2537,\ 5.5978,\ 10.9419,\ 16.286,\ 21.6301,\ 26.9742,\ \dots\}$$
for switches and
$$\tau_{2k}\approx 3.7785+8.0602k = \{3.7785,\ 11.8387,\ 19.8989,\ 27.9591,\ 36.0193,\ \dots\}$$
for reversals. The sequence of all crossing delays is then (gray elements mark reversals)
$$\{\tau_j\} = \{0,\ 0.2537,\ 3.7785,\ 5.5978,\ 10.9419,\ 11.8387,\ 16.286,\ 19.8989,\ 21.6301,\ 26.9742,\ 27.9591,\ \dots\}.$$
Hence, the counter of unstable poles reads $\{\nu_j\} = \{0, 2, 0, 2, 4, 2, 4, 2, 4, 6, 4, \dots\}$ in this case. At $j=3$ and $j=4$ we have two successive switches (and, expectably, never two successive reversals). Thus, we may stop counting after $j=4$ and conclude that the system is stable iff
$$\tau \in [0,\ 0.2537) \cup (3.7785,\ 5.5978),$$
which agrees with what we have via the Nichols chart in Fig. 2.3.
4. If $q_{10}\ge 1$, there is one positive real root of $\Delta(\omega)$,
$$\omega_1^2 = 0.995+\sqrt{0.995^2-1+q_{10}^2} \ge 1.99,$$
and it is a switch, of course. This implies that the system is stable only until the first crossing delay, $\tau_{10}$. The phase relation (2.8b) now reads (mind that $\omega_1>1$)
$$\tau\omega_1 = \arctan\frac{0.1\omega_1}{\omega_1^2-1} - \pi + (2k-1)\pi = \arctan\frac{0.1\omega_1}{\omega_1^2-1} + 2(k-1)\pi,$$
so its minimum positive solution corresponds to $k=1$. Hence, the system is stable iff
$$0\le\tau < \frac{1}{\omega_1}\arctan\frac{0.1\omega_1}{\omega_1^2-1}, \quad\text{where } \omega_1^2 = 0.995+\sqrt{q_{10}^2-0.009975},$$
which is a decreasing function of $q_{10}\ge 1$, as a matter of fact.

Ugh, that was long, but nevertheless quite straightforward . . . O
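The numbers of the third scenario are easy to reproduce numerically. A minimal check (NumPy; the phase relation is spelled out for this particular $\Delta(\omega)$ and $Q_i$, and is not a general-purpose routine):

```python
import numpy as np

# scenario 3 of Example 2.7: q00 = 1, q01 = 0.1, q10 = 0.4
q10 = 0.4
# positive roots of Delta(omega) = omega^4 - 2*0.995*omega^2 + 1 - q10^2,
# found as roots of a quadratic in omega^2
om_sq = np.roots([1, -2 * 0.995, 1 - q10**2]).real
w_switch, w_reversal = np.sqrt(om_sq.max()), np.sqrt(om_sq.min())

def crossing_delays(w, kmax=5):
    """Positive solutions of tau*w = -arg(1 - w^2 + 0.1j*w) + (2k - 1)*pi."""
    phi = -np.angle(1 - w**2 + 0.1j * w)
    return [t for t in ((phi + (2 * k - 1) * np.pi) / w for k in range(kmax))
            if t > 0]
```

This yields $\omega_1\approx 1.1757$ with first switch delay $\approx 0.2537$ and $\omega_2\approx 0.7795$ with first reversal delay $\approx 3.7785$, matching the values above.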
Remark 2.3 (necessary condition). An interesting, and quite useful, observation from the developments above is that roots can cross the imaginary axis only in pairs. Therefore, if $\chi_0(s)$ has an odd number of unstable poles, the system is delay-independent unstable. Using Vieta's formulae, this observation leads to a simple necessary condition for the existence of $\tau$ for which all roots of $\chi_\tau(s)$ are in $\mathbb{C}\setminus\bar{\mathbb{C}}_0$. Namely, if $\chi_0(s) = s^n + \chi_{0,n-1}s^{n-1} + \dots + \chi_{01}s + \chi_{00}$, then there is no $\tau\ge 0$ for which all roots of $\chi_\tau(s)$ are stable whenever $\chi_{00}\le 0$. The case of $\chi_{00}<0$ is exactly the condition for $\chi_0(s)$ to have an odd number of unstable roots. The case of $\chi_{00}=0$ corresponds to the situation when $\chi_\tau(0)=0$ for all $\tau$. O
Remark 2.4 (Nyquist connections). Comparing the results of the third scenario of Example 2.7 with those in §2.1.4, we can see that the crossing frequencies of the former match the crossover frequencies of the latter. This is not a coincidence and is a general property, simply because $Q_1(s)/Q_0(s)$ is exactly the "open-loop" transfer function $G_{zw}(s)$. In the same vein, the quantities $\tau_{i0}$ in (2.9) can be interpreted as delay margins of the loop $L_\tau$ from (2.5). Still, there are qualitative differences. For instance, while the Nyquist arguments are based on the number of unstable roots of $Q_0(s)$, the delay sweeping method uses unstable roots of $Q_0(s)+Q_1(s)$ as its starting point. It might be that this very difference renders the direct method easier to "algorithmize" than the Nyquist criterion. At the same time, the Nyquist criterion arguably offers more insight into properties of the system and is more visual. O
Multiple-delay quasi-polynomials
Arguments above extend to general commensurate quasi-polynomials of the form (2.4), although technicalities become rather cumbersome. To provide a flavor of the approach and the associated complications, consider the two-delay version of the problem for
$$\chi_\tau(s) = Q_0(s) + Q_1(s)e^{-\tau s} + Q_2(s)e^{-2\tau s},$$
such that counterparts of A1–4 hold (i.e. no neutral chains in $\bar{\mathbb{C}}_0$, no common unstable roots of all $Q_i(s)$, and $\sum_i Q_i(0)\ne 0$). The underlying idea is still to count imaginary crossings of roots of this $\chi_\tau(s)$ as $\tau$ increases from $\tau=0$. Because $\chi_\tau(s)$ is real, its pure imaginary roots coincide with those of
$$e^{-2\tau s}\chi_\tau(-s) = Q_2(-s) + Q_1(-s)e^{-\tau s} + Q_0(-s)e^{-2\tau s}.$$
But then, every imaginary root of $\chi_\tau(s)$ is also a root of yet another quasi-polynomial
$$\hat\chi_\tau(s) := Q_0(-s)\chi_\tau(s) - Q_2(s)e^{-2\tau s}\chi_\tau(-s) = Q_0(-s)Q_0(s) - Q_2(-s)Q_2(s) + \big(Q_0(-s)Q_1(s) - Q_1(-s)Q_2(s)\big)e^{-\tau s},$$
which is a single-delay quasi-polynomial of the form (2.7) and whose imaginary roots we can analyze. The catch is that $\hat\chi_\tau(s)$ might have additional imaginary roots, which are an artifact of the procedure. To understand these additional roots, consider the magnitude equality part of the equation $\hat\chi_\tau(j\omega)=0$, which is
$$\big(|Q_0(j\omega)| - |Q_2(j\omega)|\big)\,|\chi_\tau(j\omega)| = 0.$$
This equality holds if either $|\chi_\tau(j\omega)|=0$, which is what we are interested in, or $|Q_0(j\omega)|=|Q_2(j\omega)|$, which is an artifact in potentia (it might not be, there is also a phase relation to satisfy). Thus, all positive frequencies at which the magnitudes of $Q_0$ and $Q_2$ coincide should be checked separately. There might be only a finite number of such frequencies (otherwise, we necessarily have $Q_0(s)=Q_2(s)$ and a neutral chain on the imaginary axis, which contradicts our assumptions). Assuming that $|Q_0(j\omega_i)|\ne|Q_2(j\omega_i)|$ at all imaginary roots $j\omega_i$ of $\chi_\tau(s)$, it can be shown [75, Sec. 6.3] that the crossing directions of $\hat\chi_\tau(s)$ and $\chi_\tau(s)$ coincide iff $|Q_0(j\omega_i)| > |Q_2(j\omega_i)|$ and are opposite iff $|Q_0(j\omega_i)| < |Q_2(j\omega_i)|$ at each crossing frequency $\omega_i$.
Example 2.8. Let
$$\chi_\tau(s) = s + e^{-\tau s} + e^{-2\tau s},$$
for which $\chi_0(s)=s+2$ is Hurwitz. The only positive frequency at which $|Q_0(j\omega)|=|Q_2(j\omega)|$ is then $\omega=1$. Now,
$$\hat\chi_\tau(s) = -s^2 - 1 - (s+1)e^{-\tau s}$$
and the corresponding $\hat\Delta(\omega) = (1-\omega^2)^2 - (1+\omega^2) = \omega^2(\omega^2-3)$ has one admissible solution, $\omega_1=\sqrt3$. As this $\omega_1\ne 1$, there are no artifact crossing frequencies. All crossings of $\hat\chi_\tau(s)$ at this frequency are switches and, because $|Q_0(j\omega_1)|=\sqrt3 > 1 = |Q_2(j\omega_1)|$, so are the crossings of $\chi_\tau(s)$. To find the smallest positive $\tau$ at which this switch takes place, consider the phase relation for $\hat\chi_\tau(s)$, which is
$$\tau\omega_1 = \arg(-1-j\omega_1) - \arg(\omega_1^2-1) + (2k-1)\pi = \arctan\omega_1 + 2\pi k = \frac{\pi}{3} + 2\pi k.$$
Its smallest positive solution is $\tau = \pi/(3\sqrt3)\approx 0.6046$. Hence,
$$0\le\tau < \frac{\pi}{3\sqrt3}$$
is the range of delays for which this $\chi_\tau(s)$ has no unstable roots. O
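The conclusion is easy to verify directly: at the claimed first crossing delay the quasi-polynomial indeed vanishes at $s=j\sqrt3$. A sketch using only the standard library:

```python
import cmath
import math

def chi(tau, s):
    # chi_tau(s) = s + exp(-tau*s) + exp(-2*tau*s) of Example 2.8
    return s + cmath.exp(-tau * s) + cmath.exp(-2 * tau * s)

w1 = math.sqrt(3)
tau_c = math.pi / (3 * w1)               # first switch delay, ~0.6046
assert abs(chi(tau_c, 1j * w1)) < 1e-12  # j*sqrt(3) is a root at tau = tau_c
assert abs(chi(0.0, -2.0)) < 1e-12       # chi_0(s) = s + 2 vanishes at s = -2
```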
The procedure outlined above applies to general quasi-polynomials of the form (2.3) recursively, each step eliminating one delay. It should be clear that even in the general commensurate case the crossing frequencies and root directions are independent of the delay, which is a useful insight. Computational details are rather dull though. Also, the degrees of the involved polynomials grow rapidly in those iterations, so the approach might not be quite practical for quasi-polynomials of the form (2.4) with a large $k$.

Remark 2.5 (incommensurate delays). The situation is way more complicated for quasi-polynomials with incommensurate delays. Root crossings there no longer happen at a finite number of frequencies, so the analysis is a nightmare even for quasi-polynomials with 2 delays and constant $Q_1(s)$ and $Q_2(s)$. O
2.1.6 Bilinear (Rekasius) transformation

An alternative take on the idea of checking crossings of the imaginary axis by roots of $\chi_\tau(s)$ as $\tau$ sweeps the whole positive semi-axis was proposed by Rekasius in [61]. A key observation is that at each frequency $\omega\in\mathbb{R}$, the locus of $e^{-j\tau\omega}$ in the complex plane as $\tau$ sweeps $\mathbb{R}_+$ (which is the unit circle $\mathbb{T}$, as a matter of fact) coincides with that of $\bar R_\alpha(j\omega)$, where
$$\bar R_\alpha(s) := \frac{-s+\alpha}{s+\alpha}, \tag{2.12}$$
as $\alpha$ sweeps the whole real axis $\mathbb{R}$. Moreover, for every $\omega_i\in\mathbb{R}\setminus\{0\}$ and $\tau_i\in\mathbb{R}_+$ there is $\alpha_i\in\mathbb{R}$, viz.
$$\alpha_i = \omega_i\cot\frac{\tau_i\omega_i}{2}, \tag{2.13}$$
such that the equality $e^{-j\tau_i\omega_i} = \bar R_{\alpha_i}(j\omega_i)$ holds.

These observations suggest that frequency-response properties of the delay systems in Fig. 2.1 can be studied in terms of those of the finite-dimensional system in Fig. 2.5, which is obtained by the substitution $\bar D_\tau \to \bar R_\alpha$. A state-space realization of this finite-dimensional system can be derived similarly to that of the delay system (1.27). The only difference is the replacement of the relation $w(t) = z(t-\tau)$ with
$$\begin{cases} \dot x_R(t) = -\alpha x_R(t) + z(t) \\ w(t) = 2\alpha x_R(t) - z(t) \end{cases}$$
Fig. 2.5: General interconnection with Rekasius substitution
which represents the Laplace-domain relation $W(s) = \frac{-s+\alpha}{s+\alpha}Z(s)$. Eliminating $z$ and $w$ from the model, we then end up with the transfer function of $P_\alpha: u\mapsto y$ in terms of its state-space realization
$$P_\alpha(s) = \left[\begin{array}{cc|c} A - B_wR_{zw}C_z & 2\alpha B_wR_{zw} & B_u - B_wR_{zw}D_{zu} \\ R_{zw}C_z & \alpha(D_{zw}-I)R_{zw} & R_{zw}D_{zu} \\ \hline C_y - D_{yw}R_{zw}C_z & 2\alpha D_{yw}R_{zw} & D_{yu} - D_{yw}R_{zw}D_{zu} \end{array}\right], \tag{2.14}$$
where $R_{zw} := (I+D_{zw})^{-1}$ is well defined if we assume that $\rho(D_{zw})<1$. The corresponding characteristic function is
$$\chi_\alpha(s) := \det\begin{bmatrix} sI - A + B_wR_{zw}C_z & 2\alpha B_wR_{zw} \\ R_{zw}C_z & sI + \alpha(I-D_{zw})R_{zw} \end{bmatrix} = \bar\nu_k\sum_{i=0}^{k}(s+\alpha)^{k-i}(\alpha-s)^i Q_i(s), \tag{2.15}$$
where the last expression follows from (2.4) for $\bar\nu_k\ne 0$ such that $\chi_\alpha(s)$ is monic. Nonzero pure imaginary roots of $\chi_\tau(s)$ in (2.4) then coincide with those of $\chi_\alpha(s)$ in (2.15) under the correspondence (2.13). Moreover, crossing directions of such roots of $\chi_\alpha(s)$ are closely related to those of $\chi_\tau(s)$, as established by the result below.
Lemma 2.4. Let $s_\tau$ and $s_\alpha$ be roots of $\chi_\tau(s)$ and $\chi_\alpha(s)$, respectively. If $j\omega_i\ne 0$ is a root of both these functions, then
$$\operatorname{sign}\operatorname{Re}\left.\frac{ds_\tau}{d\tau}\right|_{s=j\omega_i} = -\operatorname{sign}\operatorname{Re}\left.\frac{ds_\alpha}{d\alpha}\right|_{s=j\omega_i}.$$
Proof. Consider the equation
$$\det\begin{bmatrix} sI-A & -B_w\lambda \\ -C_z & I - D_{zw}\lambda \end{bmatrix} = 0$$
and let $\lambda(s)$ denote its solution. Clearly,
$$e^{-\tau s_\tau} = \lambda(s_\tau) \quad\text{and}\quad \frac{-s_\alpha+\alpha}{s_\alpha+\alpha} = \lambda(s_\alpha).$$
Thus,
$$\frac{d}{d\tau}e^{-\tau s_\tau} = -e^{-\tau s_\tau}\Big(s_\tau + \tau\frac{ds_\tau}{d\tau}\Big) = \lambda'(s_\tau)\frac{ds_\tau}{d\tau}$$
and
$$\frac{d}{d\alpha}\,\frac{-s_\alpha+\alpha}{s_\alpha+\alpha} = \frac{2}{(s_\alpha+\alpha)^2}\Big(s_\alpha - \alpha\frac{ds_\alpha}{d\alpha}\Big) = \lambda'(s_\alpha)\frac{ds_\alpha}{d\alpha},$$
where $\lambda'(s) := d\lambda(s)/ds$. Hence,
$$-e^{-\tau s_\tau}\bigg(s_\tau\Big(\frac{ds_\tau}{d\tau}\Big)^{-1} + \tau\bigg) = \frac{2}{(s_\alpha+\alpha)^2}\bigg(s_\alpha\Big(\frac{ds_\alpha}{d\alpha}\Big)^{-1} - \alpha\bigg)$$
whenever $s_\tau = s_\alpha$. In particular, at $s_\tau = s_\alpha = j\omega_i\ne 0$ we have that $e^{-j\tau\omega_i} = (-j\omega_i+\alpha)/(j\omega_i+\alpha)$ and
$$\tau + j\omega_i\Big(\frac{ds_\tau}{d\tau}\Big)^{-1} = \frac{2}{\omega_i^2+\alpha^2}\bigg(\alpha - j\omega_i\Big(\frac{ds_\alpha}{d\alpha}\Big)^{-1}\bigg)$$
or, equivalently,
$$\Big(\frac{ds_\tau}{d\tau}\Big)^{-1} = -\frac{2}{\omega_i^2+\alpha^2}\Big(\frac{ds_\alpha}{d\alpha}\Big)^{-1} + j\frac{1}{\omega_i}\Big(\tau - \frac{2\alpha}{\omega_i^2+\alpha^2}\Big).$$
The result then follows by the fact that $\operatorname{sign}\operatorname{Re}z = \operatorname{sign}\operatorname{Re}z^{-1}$.

Fig. 2.6: Equivalent form of the system in Fig. 2.5 with isolated $\alpha$, here $G_{\mathrm{int}}(s) = (1/s)I$
Thus, we can analyze imaginary crossings of $\chi_\tau(s)$ via those of $\chi_\alpha(s)$. The good news is that this $\chi_\alpha(s)$ is a standard polynomial, rather than a quasi-polynomial from (2.4). The bad news is that the parameters of $\chi_\alpha(s)$ are functions of $\alpha$, so the analysis has to be carried out in parametric form. This can be done via the Routh–Hurwitz stability criterion, although handling singular cases there might be a mess. Arguably, the delay sweeping method is more streamlined and easier to use, especially in the analysis of various low-order applications, see Section 3.1. Still, the bilinear transformation may also be helpful in analyzing properties of delay systems; examples can be found in §6.1.1.
Example 2.9. Return to the system with $G$ as in (2.6). Assume that $q_{00}=1$, $q_{01}=0.1$, and $q_{10}=0.4$, like in the third item of Example 2.7. The counterpart of $\chi_\tau(s) = s^2+0.1s+1+0.4e^{-\tau s}$ in this case is
$$\chi_\alpha(s) = s^3 + (\alpha+0.1)s^2 + 0.1(\alpha+6)s + 1.4\alpha.$$
We can construct its Routh array, of course, but for third-order polynomials the condition of having a pair of nonzero pure imaginary roots is
$$1.4\alpha = 0.1(\alpha+6)(\alpha+0.1) \implies \alpha_{1,2} = \frac{7.9\pm\sqrt{60.01}}{2}.$$
The corresponding imaginary roots (with positive frequencies) are then
$$s_{1,2} = j\sqrt{0.1(\alpha+6)}\,\Big|_{\alpha=\alpha_{1,2}},$$
which yields crossing frequencies $\omega_1\approx 1.1757$ and $\omega_2\approx 0.7795$, exactly as in Example 2.7. Crossing directions can be found by constructing the corresponding root locus plot. But we already know, from the analysis in §2.1.5, that the highest crossing frequency is a switch and the next one is a reversal. Inverting (2.13), we then get
$$\tau_{ik} = \frac{2}{\omega_i}\Big(\arctan\frac{\omega_i}{\alpha_i} + \pi k\Big), \quad \forall k\in\mathbb{Z},$$
from which the results derived in Example 2.7 are recovered. O
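The computations of this example take only a few lines of NumPy (the ordering of the roots and the square-root branch are handled explicitly):

```python
import numpy as np

# chi_alpha(s) = s^3 + (a + 0.1)s^2 + 0.1(a + 6)s + 1.4a has a pure
# imaginary pair iff 1.4a = 0.1(a + 6)(a + 0.1), i.e. 0.1a^2 - 0.79a + 0.06 = 0
alphas = np.sort(np.roots([0.1, -0.79, 0.06]).real)[::-1]  # descending
omegas = np.sqrt(0.1 * (alphas + 6))     # s = +/- j*sqrt(0.1*(a + 6))

# inverting (2.13): first positive crossing delays tau_i0 = (2/w_i)*arctan(w_i/a_i)
taus = 2 / omegas * np.arctan(omegas / alphas)
```

This reproduces $\omega\approx\{1.1757,\,0.7795\}$ and the first crossing delays $\approx\{0.2537,\,3.7785\}$, matching Example 2.7.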
Remark 2.6 (pulling $\alpha$ out). It is readily seen that $\bar R_\alpha(s) = F_u\Big(\begin{bmatrix} -1/s & 1/s \\ 2 & -1\end{bmatrix}, \alpha\Big)$. This form facilitates extracting $\alpha$ from the LFT in Fig. 2.5 in the equivalent form presented in Fig. 2.6. The latter system, in turn, is equivalent to $F_u(\tilde G, \alpha I)$, where
$$\tilde G(s) = \left[\begin{array}{cccc} 0 & (I+D_{zw})^{-1}C_z & (D_{zw}-I)(I+D_{zw})^{-1} & (I+D_{zw})^{-1} \\ 0 & A - B_w(I+D_{zw})^{-1}C_z & 2B_w(I+D_{zw})^{-1} & B_u - B_w(I+D_{zw})^{-1}D_{zu} \\ I & 0 & 0 & 0 \\ 0 & C_y - D_{yw}(I+D_{zw})^{-1}C_z & 2D_{yw}(I+D_{zw})^{-1} & D_{yu} - D_{yw}(I+D_{zw})^{-1}D_{zu} \end{array}\right]$$
is the Redheffer star product $\tilde G = \begin{bmatrix} -G_{\mathrm{int}} & G_{\mathrm{int}} \\ 2I & -I \end{bmatrix}\star G$, where $G_{\mathrm{int}}$ is the integrator with the transfer function $G_{\mathrm{int}}(s) = (1/s)I$. O
2.2 Lyapunov's direct method

Another group of methods for analyzing the stability of time-delay systems is based on the idea of the Lyapunov function. These methods are not in the center of these notes, so they are only outlined below. There is a hefty literature on this subject, which can be consulted for more details and (sometimes) insight, see e.g. [21, 27] and the references therein.

2.2.1 Ordinary differential equations

We start with a brief overview of Lyapunov's direct (aka second) method for finite-dimensional systems described by the equation
$$\dot x(t) = f(x(t)), \quad x(0) = x_0 \tag{2.16}$$
for some $x_0\in\mathbb{R}^n$ and a locally Lipschitz continuous function $f:\mathbb{R}^n\to\mathbb{R}^n$ such that $f(0)=0$. A key idea is that the Lyapunov stability of this system can be analyzed in terms of a continuously differentiable function $V:\mathbb{R}^n\to\mathbb{R}$ such that $V(0)=0$ and $V(x)>0$ for all $x\ne 0$. Such a function is known as the Lyapunov function and its derivative along trajectories of (2.16) is defined as
$$\dot V(x) := \frac{dV(x)}{dt} = \frac{\partial V(x)}{\partial x}\frac{dx}{dt} = \frac{\partial V(x)}{\partial x}f(x).$$
The following results hold:

1. If $\dot V(x)\le 0$ for all $x$, then (2.16) is Lyapunov stable.
2. If $\dot V(x)<0$ for all $x\ne 0$, then (2.16) is asymptotically stable by Lyapunov.
3. If $\dot V(x)\le 0$ for all $x$ and $\dot V(x)\equiv 0$ implies $x\equiv 0$, then (2.16) is asymptotically stable by Lyapunov.

The last condition follows by LaSalle's invariance principle, which says that if $\dot V\le 0$, then all trajectories accumulate in the set $\{x\mid \dot V(x)=0\}$. If the statements "for all $x$" above are understood as "for all $x$ in some neighborhood of the origin," then the properties are local. For global stability, holding for all initial conditions $x_0\in\mathbb{R}^n$, the statements above should be valid for all $x\in\mathbb{R}^n$, and also $\lim_{\|x\|\to\infty}V(x)=\infty$. A Lyapunov function can be interpreted as an energy function of the system.

The catch in the use of this method is to find a Lyapunov function, which might be a highly nontrivial task in its own right. The problem is perhaps best understood for linear systems, where $f(x)=Ax$ for some matrix $A\in\mathbb{R}^{n\times n}$. A handy choice in this case is a quadratic Lyapunov function of the form
$$V(x) = x'Px \tag{2.17}$$
for some $P=P'>0$. With this choice $\dot V(x) = 2x'PAx = x'(PA+A'P)x$ and the system is (globally) stable if there is $P>0$ such that $PA+A'P\le 0$. This requirement can be formulated as the existence of $P>0$ satisfying the Lyapunov equation $PA+A'P+C'C=0$ for some $C$. In this case $\dot V = -\|Cx\|^2\le 0$. If $C$ has full column rank, then $\|Cx\|>0$ for all $x\ne 0$ and we have asymptotic stability. In fact, it is sufficient to have $(C,A)$ observable, because in that case $Cx\equiv 0$ iff $x\equiv 0$ and we can apply LaSalle's invariance principle to conclude asymptotic stability. Note that although asymptotic stability does not imply that $\|x(t)\|$ is a monotonically decreasing function of $t$, the existence of a quadratic Lyapunov function implies that in some coordinate basis, namely $P^{1/2}x$, the state of an asymptotically stable system does decay monotonically. This property is useful in analyzing various classes of switched systems.
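For linear ODEs the search for a quadratic certificate reduces to a linear solve. A plain-NumPy sketch, vectorizing $A'P+PA=-Q$ via Kronecker products (the example matrix is arbitrary, chosen only to be Hurwitz):

```python
import numpy as np

def lyapunov_certificate(A, Q):
    """Solve A'P + PA = -Q for P (vec trick: vec(A'P + PA) =
    (kron(I, A') + kron(A', I)) vec(P), column-major vec).

    For Hurwitz A and Q = Q' > 0 the unique solution is P = P' > 0,
    certifying stability with V(x) = x'Px.
    """
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(M, -Q.flatten(order='F')).reshape(n, n, order='F')
    return 0.5 * (P + P.T)   # symmetrize against round-off

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2 (Hurwitz)
P = lyapunov_certificate(A, np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)                      # P > 0
assert np.allclose(A.T @ P + P @ A, -np.eye(2), atol=1e-10)   # equation holds
```

The same computation is available off-the-shelf, e.g. in SciPy's continuous Lyapunov solver; the explicit vectorized form is shown only to expose the linear-algebraic structure.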
2.2.2 Delay-differential equations

Consider now the delay system in Fig. 2.1. Because the purpose of this section is mainly to provide a flavor of the approach, in the analysis of its Lyapunov stability we consider only the (relatively simple) situation of retarded DDEs, for $D_{zw}=0$. We can therefore concentrate on the retarded DDE
$$\dot x(t) = Ax(t) + A_\tau x(t-\tau), \quad \bar x_0 = \phi, \tag{2.18}$$
where $A_\tau := B_wC_z$ and $\phi\in C^n([-\tau,0])$, i.e. is continuous. This is the autonomous version of (1.29) with a nonzero continuous initial condition. The solution of (2.18) exists on all of $\mathbb{R}_+$ and is continuous, i.e. $x\in C^n(\mathbb{R})$. We may consider $\bar x_t\in C^n([-\tau,0])$ as the state vector of this system. If $\operatorname{rank}A_\tau < n$, this is not a minimal state, cf. (2.2). Still, this choice is sufficient for our purposes.

Once the definition of state is clear, the extension of Lyapunov's direct method to it is conceptually straightforward. All we need is a function $V(\bar x)$ possessing infinite-dimensional counterparts of the properties defined in §2.2.1. The analogy would be even stronger if DDE (2.18) were cast as an operator differential equation in terms of $(x(t),\bar x_t)$, see [9, Sec. 2.4]. There are also other approaches, apparently developed via attempts to extend Lyapunov's method in terms of functions $V(x)$ to DDEs back in the '50s, which endeavor to keep conditions expressed in terms of $x(t)$ whenever possible. The two best known of them were put forward by Razumikhin [60] and N. N. Krasovskii [30]. The latter is presented below.

For a continuous function $V:C^n([-\tau,0])\to\mathbb{R}$, define its derivative along trajectories of (2.18) as
$$\dot V(\bar x_t) := \limsup_{\sigma\downarrow 0}\frac{V(\bar x_{t+\sigma}) - V(\bar x_t)}{\sigma}.$$
The following result, which is a linear version of [21, Cor. 5.3.1], can then be formulated:

Theorem 2.5 (Krasovskii). If there is a continuous $V:C^n([-\tau,0])\to\mathbb{R}$ such that
$$V(\bar x_t)\ge\alpha(\|x(t)\|) \quad\text{and}\quad \dot V(\bar x_t)\le-\beta(\|x(t)\|)$$
for functions $\alpha,\beta:\mathbb{R}_+\to\mathbb{R}_+$ such that $\lim_{\rho\to\infty}\alpha(\rho)=\infty$, then (2.18) is Lyapunov stable. If, in addition, $\beta(\rho)>0$ for all $\rho>0$, then (2.18) is asymptotically stable.

Remark 2.7 (neutral DDEs). In the neutral delay case, like in (1.30), the Lyapunov analysis is quite similar. An important fact is that, much like in the i/o stability case, the system is Lyapunov unstable whenever $\rho(E_\tau)>1$ and might be either stable or unstable if $\rho(E_\tau)=1$, see [59] for more details. If $\rho(E_\tau)<1$, then essentially the only alteration to the conditions of Theorem 2.5 is the replacement of $\alpha(\|x(t)\|)$ with $\alpha(\|x(t)-E_\tau x(t-\tau)\|)$. O

The dependence of the functions $\alpha$ and $\beta$ only on $\|x(t)\|$ is a relaxation of the expected conditions in terms of a norm of $\bar x_t$ (although the bound on the derivative can be interpreted in terms of LaSalle's invariance principle, as $x\equiv 0$ obviously implies $\bar x\equiv 0$). These conditions imply that, in principle, a choice of the Lyapunov function like in (2.17) is still legitimate, which is not quite obvious. However, such a choice would be futile in most situations, because it is not sufficiently rich for dynamics like (2.18). Hence, finding conditions on $P$ guaranteeing stability is not normally possible for it. This is why more elaborate functions are considered.
A natural general counterpart of the quadratic Lyapunov function (2.17) for DDE (2.18) would be
$$V(\bar x_t) = x'(t)P_0x(t) + 2x'(t)\int_{-\tau}^{0}P_{0\tau}(s)x(t+s)\,ds + \int_{-\tau}^{0}\int_{-\tau}^{0}x'(t+r)P_{\tau\tau}(r,s)x(t+s)\,ds\,dr \tag{2.19}$$
for some matrix $P_0=P_0'>0$ and functions $P_{0\tau}:[-\tau,0]\to\mathbb{R}^{n\times n}$ and $P_{\tau\tau}:[-\tau,0]\times[-\tau,0]\to\mathbb{R}^{n\times n}$ such that $P_{\tau\tau}(r,s)=P_{\tau\tau}'(s,r)$. Such functions are conventionally dubbed Lyapunov–Krasovskii functionals. However, the form above might be "too rich." Finding matrix functions for (2.19) might be a hard problem to handle. For this reason, simpler particular cases of the quadratic functional (2.19) are typically sought. The example below illustrates the idea (other uses are discussed in Section 6.3).
Example 2.10. Consider DDE (2.18) and select
$$V(\bar x_t) = x'(t)P_0x(t) + \int_{-\tau}^{0}x'(t+s)P_\tau x(t+s)\,ds = x'(t)P_0x(t) + \int_{t-\tau}^{t}x'(s)P_\tau x(s)\,ds$$
for matrices $P_0=P_0'>0$ and $P_\tau=P_\tau'>0$, which corresponds to $P_{0\tau}(s)=0$ and $P_{\tau\tau}(r,s)=\delta(r-s)P_\tau$ in (2.19). To take the derivative of this function along trajectories of (2.18), remember the Leibniz integral rule,
$$\frac{d}{dt}\int_{a(t)}^{b(t)}f(s,t)\,ds = \int_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(s,t)\,ds + \frac{db(t)}{dt}f(b(t),t) - \frac{da(t)}{dt}f(a(t),t). \tag{2.20}$$
It is then readily seen that
$$\begin{aligned}
\dot V(\bar x_t) &= 2x'(t)P_0\dot x(t) + x'(t)P_\tau x(t) - x'(t-\tau)P_\tau x(t-\tau) \\
&= 2x'(t)P_0Ax(t) + 2x'(t)P_0A_\tau x(t-\tau) + x'(t)P_\tau x(t) - x'(t-\tau)P_\tau x(t-\tau) \\
&= \begin{bmatrix} x'(t) & x'(t-\tau)\end{bmatrix}\begin{bmatrix} A'P_0+P_0A+P_\tau & P_0A_\tau \\ A_\tau'P_0 & -P_\tau\end{bmatrix}\begin{bmatrix} x(t) \\ x(t-\tau)\end{bmatrix}.
\end{aligned}$$
Thus, if we can find $P_0$ and $P_\tau$ such that
$$\begin{bmatrix} A'P_0+P_0A+P_\tau & P_0A_\tau \\ A_\tau'P_0 & -P_\tau\end{bmatrix} < 0,$$
then the system is stable. In fact, it is then stable for every delay $\tau$, so we end up with delay-independent stability. The condition above is linear in its two free parameters, $P_0$ and $P_\tau$. Such matrix inequalities are known as linear matrix inequalities (LMIs) and can be efficiently solved. On the downside, the condition above is normally conservative, i.e. the failure to solve the LMI above does not necessarily imply that the system is unstable for some $\tau$.

Sometimes the condition is non-conservative though. For example, in the scalar case with $A=-1$ and $A_\tau=a_\tau$ the LMI above reads
$$\begin{bmatrix} P_\tau-2P_0 & P_0a_\tau \\ a_\tau P_0 & -P_\tau\end{bmatrix} < 0 \iff P_\tau>0 \;\wedge\; P_\tau-2P_0+\frac{a_\tau^2P_0^2}{P_\tau}<0 \iff (P_\tau-P_0)^2 < (1-a_\tau^2)P_0^2,$$
where the first equivalence relation follows by (A.6). The last inequality is solvable in $P_0>0$ and $P_\tau>0$ iff $|a_\tau|<1$, as in that case an appropriate $P_\tau$ can be found for every given $P_0$. At the same time, the use of any of the methods discussed in Section 2.1 yields the conclusion that the system is delay-independent stable iff $a_\tau\in(-1,1]$. This implies that the Lyapunov–Krasovskii method misses only stability under $a_\tau=1$, i.e. it is practically non-conservative in this case. O
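The scalar LMI of the example is easy to probe numerically, e.g. by testing negative definiteness of the $2\times 2$ matrix through its eigenvalues (a sketch; note that with $P_0=P_\tau=1$ feasibility of this particular pair reduces exactly to $|a_\tau|<1$):

```python
import numpy as np

def lmi_feasible(a_tau, P0=1.0, P_tau=1.0):
    """Check the 2x2 LMI of Example 2.10 for the scalar DDE
    dx/dt = -x(t) + a_tau * x(t - tau) with the given P0, P_tau."""
    M = np.array([[P_tau - 2 * P0, P0 * a_tau],
                  [a_tau * P0, -P_tau]])
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

assert lmi_feasible(0.9) and lmi_feasible(-0.9)
assert not lmi_feasible(1.0)    # the boundary case a_tau = 1 is missed
```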
Chapter 3
Stabilization of Time-Delay Systems
Mille viæ ducunt homines per sæcula Romam
Alain de Lille, Liber Parabolarum
HAVING GRASPED BASIC IDEAS about the stability analysis of time-delay systems, we turn to stabilization problems in this chapter. We shall be mostly concerned with dead-time systems, i.e. systems comprising a finite-dimensional system connected in series with a delay element, like (1.10) on p. 4. The moral of this chapter is that a right choice of the controller architecture can substantially simplify the solution, rendering it essentially finite dimensional.
3.1 Stabilization of FOPTD systems by fixed-structure controllers

We start with a brief flirt with stabilization ideas for fixed-structure (finite-dimensional fixed-structure, to be precise) controllers. To simplify the exposition, we consider only unstable FOPTD (first-order-plus-time-delay) plants of the form
$$P_\tau(s) = \frac{e^{-\tau s}}{s-1}. \tag{3.1}$$
As discussed in §1.2.1, such models are important in applications, like process control. Moreover, in some situations this model represents the dynamics that are left after other parts, stable and stably invertible, are canceled by the controller. The choice of the plant pole at $s=1$ can be made without loss of generality. If $P(s)=a/(s-a)$ for some $a>0$, the pole can always be normalized by the substitution $s\to as$ and the scaling $\tau\to a\tau$. In other words, the delay $\tau$ in (3.1) should always be thought of as the ratio between the loop delay and the time constant $1/a$ of the finite-dimensional part of the plant dynamics.

The stabilization setup for this system is the standard unity-feedback architecture shown in Fig. 3.1. As only the stability of this system is analyzed, the exogenous signals are not relevant. The characteristic function of the closed-loop system in this case is
$$\chi_\tau(s) = (s-1)M_C(s) + N_C(s)e^{-\tau s}, \tag{3.2}$$

Fig. 3.1: Unity-feedback control setup for a FOPTD plant

where $N_C(s)$ and $M_C(s)$ are the numerator and denominator of the controller, $C(s)=N_C(s)/M_C(s)$. This is a single-delay quasi-polynomial of the form (2.4), which can be analyzed by the delay sweeping method of §2.1.5 for a given $C(s)$. In this section we consider the more challenging problem of analyzing $\chi_\tau(s)$ as a function of the controller parameters.
3.1.1 Stabilizing PI controllers

Assume that the class of controllers to be analyzed is of the form
$$C(s) = C_{\mathrm{PI}}(s) := k_p\Big(1 + \frac{1}{T_is}\Big) \tag{3.3}$$
for some nonzero real $k_p$ (known as the proportional gain) and $T_i$ (integral time or reset time). In this case $N_C(s) = k_p(T_is+1)$ and $M_C(s) = T_is$ and the characteristic function (3.2) reads
$$\chi_\tau(s) = T_is(s-1) + k_p(T_is+1)e^{-\tau s}.$$
Form the function
$$\Delta(\omega) = T_i^2\omega^2(\omega^2+1) - k_p^2(T_i^2\omega^2+1) = T_i^2\omega^4 - T_i^2(k_p^2-1)\omega^2 - k_p^2$$
according to (2.10). The only positive-real solution of this equation, the crossing frequency, satisfies
$$\omega_c^2 = \frac{1}{2}\Big(k_p^2-1+\sqrt{(k_p^2-1)^2+4k_p^2/T_i^2}\Big), \tag{3.4}$$
which is a switch because $d\Delta(\omega)/d\omega = 2T_i^2\omega(2\omega^2-(k_p^2-1)) = 0$ only at $\omega=0$ and $\omega^2=(k_p^2-1)/2$, both smaller than $\omega_c^2$. Hence, the closed-loop system can be stable for some delays only if it is stable for $\tau=0$. The zero-delay characteristic polynomial,
$$\chi_0(s) = T_is^2 + T_i(k_p-1)s + k_p,$$
is Hurwitz iff all its coefficients are nonzero and have the same sign. If $k_p<0$, then we must have $T_i<0$ and $T_i(k_p-1)<0$, which is a contradiction. Hence, all coefficients must be positive, which is the case iff
$$k_p>1 \quad\text{and}\quad T_i>0. \tag{3.5}$$
As a matter of fact, this implies that $T_i\omega_c^2 > k_p > 1$. The phase relation (2.8b) for this $\chi_\tau(s)$ reads
$$\tau\omega_c = \arg(jT_i\omega_c+1) - \arg(j\omega_c-1) - \frac{\pi}{2} + (2k-1)\pi = \arg\frac{jT_i\omega_c+1}{j\omega_c-1} + \Big(2k-\frac{3}{2}\Big)\pi = \arg\frac{(jT_i\omega_c+1)(j\omega_c+1)}{\omega_c^2+1} + \Big(2k-\frac{1}{2}\Big)\pi = \arg\big(1-T_i\omega_c^2+j(T_i+1)\omega_c\big) + \Big(2k-\frac{1}{2}\Big)\pi.$$
Because $T_i\omega_c^2>1$, we have that $\arg(1-T_i\omega_c^2+j(T_i+1)\omega_c)\in(\pi/2,\pi)$ and the smallest positive crossing delay corresponds to $k=0$. Therefore, the closed-loop system is stable iff (3.5) holds and
$$0\le\tau < \frac{1}{\omega_c}\arctan\frac{T_i\omega_c^2-1}{(T_i+1)\omega_c} < 1 \tag{3.6}$$
for $\omega_c$ given by (3.4). This is a simple condition on $\tau$ for given $k_p$ and $T_i$, but it is far from being simple in terms of controller parameters. Even in the P controller case, which corresponds to $T_i=\infty$, the condition
$$0\le\tau < \frac{\arctan\sqrt{k_p^2-1}}{\sqrt{k_p^2-1}} \tag{3.6$'$}$$
is not solvable in $k_p$. All we can do is to plot stability contours of the delay in the $(k_p,1/T_i)$-plane, shown in Fig. 3.2(a). This plot can be used to choose a suitable PI controller (3.3) for a given delay $\tau$.

Fig. 3.2: Maximum stabilizing delay contours for the PI controller (3.3): (a) in the $(k_p,1/T_i)$-plane; (b) in the $(\omega_c,1/T_i)$-plane
The crossing frequency $\omega_c$ in (3.4) is the crossover frequency of the loop $P_\tau C$. This is an important characteristic of the control system in Fig. 3.1: it roughly indicates the frequency range in which feedback is useful, as well as the closed-loop bandwidth. It follows from (3.4) that any given crossover frequency can be attained by the choice
$$k_p = T_i\omega_c\sqrt{\frac{1+\omega_c^2}{1+T_i^2\omega_c^2}},$$
which is admissible, i.e. results in $k_p>1$, iff $\omega_c>1/\sqrt{T_i}$. In other words, there is a one-to-one correspondence between $\omega_c>1/\sqrt{T_i}$ and $k_p>1$, so we can analyze the stability of the system in terms of its crossover frequency instead of the proportional gain. This analysis may actually be more informative and it enables us to end up with the following analytic bound on the integral time under a given delay $\tau>0$:
$$T_i > \frac{1+\omega_c\tan(\tau\omega_c)}{\omega_c\big(\omega_c-\tan(\tau\omega_c)\big)}, \tag{3.7}$$
which is admissible iff $0<\tau\omega_c<\pi/2$ and $\tan(\tau\omega_c)<\omega_c$. The corresponding upper bounds on the reciprocal integral time $1/T_i$ as functions of $\omega_c$ are presented in Fig. 3.2(b) for several values of $\tau$.
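Conditions (3.4) and (3.6) are straightforward to evaluate numerically; a small helper (the function name is hypothetical) returning the largest stabilizing delay for given PI parameters:

```python
import numpy as np

def pi_max_delay(kp, Ti):
    """Largest stabilizing delay (3.6) for C(s) = kp*(1 + 1/(Ti*s)) and
    the plant e^{-tau*s}/(s - 1); assumes kp > 1 and Ti > 0, cf. (3.5)."""
    wc2 = 0.5 * (kp**2 - 1 + np.sqrt((kp**2 - 1)**2 + 4 * kp**2 / Ti**2))
    wc = np.sqrt(wc2)                                # crossing frequency (3.4)
    return np.arctan((Ti * wc2 - 1) / ((Ti + 1) * wc)) / wc
```

As $T_i\to\infty$ this recovers the P-controller bound (3.6$'$): for instance, with $k_p=2$ and a huge $T_i$ the function returns $\arctan\sqrt3/\sqrt3\approx 0.6046$.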
3.1.2 Stabilizing PD controllers

Now consider the class of PD controllers of the form
$$C(s) = C_{\mathrm{PD}}(s) := k_p(1+T_ds) \tag{3.8}$$
for some nonzero proportional gain $k_p$ and real $T_d$ (derivative time). The characteristic function (3.2) for this choice is
$$\chi_\tau(s) = s-1+k_p(1+T_ds)e^{-\tau s}.$$
This is a neutral quasi-polynomial, so we first have to guarantee A2 on p. 28. This requires $|k_pT_d|<1$, which is assumed hereafter. The corresponding $\Delta(\omega)$ is then
$$\Delta(\omega) = (1-k_p^2T_d^2)\omega^2 - (k_p^2-1).$$
Fig. 3.3: Maximum stabilizing delay contours for the PD controller (3.8): (a) in the $(k_p,T_d)$-plane; (b) in the $(\omega_c,T_d)$-plane
Clearly, its solution must satisfy
$$\omega_c^2 = \frac{k_p^2-1}{1-k_p^2T_d^2}. \tag{3.9}$$
Taking into account that $|k_pT_d|<1$, this $\Delta(\omega)$ has positive roots iff $|k_p|>1$. If this is the case, then the crossing frequency is a switch, because $d\Delta(\omega)/d\omega = 2(1-k_p^2T_d^2)\omega$ is positive there. Consider now the properties of the zero-delay version of the characteristic equation,
$$\chi_0(s) = (1+k_pT_d)s + k_p - 1.$$
Because $|k_pT_d|<1$, its leading coefficient is positive, so this polynomial is Hurwitz iff $k_p>1$. Thus, the system is delay-independent unstable whenever $k_p\le 1$ and is stable for all delays below the first crossing delay if $k_p>1$. This crossing delay can be found from the phase relation (under $k_p>0$)
$$\tau\omega_c = \arg(1+jT_d\omega_c) - \arg(j\omega_c-1) + (2k+1)\pi = \arctan(T_d\omega_c) + \arctan\omega_c + 2k\pi.$$
Because each of the first two terms in the last expression takes values in $(-\pi/2,\pi/2)$ and their sum is positive (as $|T_d|<1$), the minimum positive solution of this equation corresponds to $k=0$. Hence, the system is stable iff
$$k_p>1, \quad |T_d|<\frac{1}{k_p}<1, \quad\text{and}\quad 0\le\tau<\frac{1}{\omega_c}\big(\arctan(T_d\omega_c)+\arctan\omega_c\big)<2 \tag{3.10}$$
for $\omega_c$ given by (3.9). The last relation is again not quite transparent in terms of the controller parameters for a fixed $\tau$. The level curves in the $(k_p,T_d)$-plane presented in Fig. 3.3(a) can be used to choose $k_p$ and $T_d$.

Like in the PI case, the analysis is simplified if we consider the crossover frequency $\omega_c$ as the parameter of choice instead of $k_p$. This is possible owing to the relation
$$k_p = \sqrt{\frac{1+\omega_c^2}{1+T_d^2\omega_c^2}},$$
which defines a one-to-one correspondence between $\omega_c>0$ and $k_p>1$ provided $|T_d|<1$. The bound on $\tau$ in (3.10) then translates to the following bound on the derivative time as a function of $\tau$ and $\omega_c$:
$$-1 < \frac{\tan(\tau\omega_c-\arctan\omega_c)}{\omega_c} < T_d < 1, \tag{3.11}$$
which is nonempty iff $0<\tau\omega_c<\arctan\omega_c+\pi/2<\pi$. These bounds are presented in Fig. 3.3(b) and may be useful to see the limitations, imposed by the loop delay, on the attainable crossover frequency.

Fig. 3.4: Unity-feedback system with Smith controller
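A counterpart of the PI helper above, evaluating (3.9)–(3.10) (again a hypothetical helper name; for $T_d=0$ it reproduces the P-controller bound):

```python
import numpy as np

def pd_max_delay(kp, Td):
    """Largest stabilizing delay (3.10) for C(s) = kp*(1 + Td*s) and
    the plant e^{-tau*s}/(s - 1); assumes kp > 1 and |kp*Td| < 1."""
    wc = np.sqrt((kp**2 - 1) / (1 - (kp * Td)**2))   # crossing frequency (3.9)
    return (np.arctan(Td * wc) + np.arctan(wc)) / wc
```

For instance, with $k_p=2$ one gets a larger admissible delay for $T_d=0.4$ than for $T_d=0$, illustrating that derivative action can enlarge the delay margin here.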
Although the examples above discuss simple controllers for a simple time-delay system, they can be used to appreciate problems arising in more general situations. Namely, it is not hard to extrapolate that finding precise stabilizability conditions might be tremendously challenging, and even if such conditions can be found, they are seldom transparent and it might not be trivial to find a meaningful re-parametrization for which the result is intuitive. For these reasons, the design of fixed-structure finite-dimensional stabilizing controllers is normally addressed via conservative methods, based on various kinds of approximations and the use of robustness techniques (some of them are studied in Chapter 6).
3.2 Problem-oriented controller architectures: historical developments
The design of (low-order) fixed-structure controllers is a challenge even for finite-dimensional plants with high-order dynamics. Conventional wisdom has it that the complexity of the controller should be comparable to that of the controlled process. In particular, there are systematic and intuitive methods to design at least $(n-p)$-dimensional stabilizing controllers for $n$-dimensional LTI plants with $p$ measured non-redundant outputs, whereas no such methods exist for lower-order controllers in general.
The observation above suggests that the stabilization of (infinite-dimensional) time-delay systems may require the use of infinite-dimensional controllers. This section presents developments in the control literature since the late '50s on methods to design infinite-dimensional controllers for time-delay, mainly dead-time, systems that exploit the structure of the delay element. These developments bore several fruitful concepts, resulting in intuitive design procedures and implementable controller architectures. The presentation below is technical; its main goal is to give a historical overview. The underlying insights, and somewhat more streamlined derivations of these approaches, are revealed in the next section.
3.2.1 Dead-time compensation: Smith predictor and its modifications
The first problem-oriented controller architecture for dead-time systems was proposed by Otto J. M. Smith [67] back in 1957. The controller, presented in Fig. 3.4, comprises two parts, a primary controller $\tilde C$ and a Smith predictor $P(1-\bar D_\tau)$ in the internal feedback path of the controller. The overall controller $C\colon y\mapsto u$ in this case has the transfer function
$$C(s) = \frac{\tilde C(s)}{1+\tilde C(s)P(s)(1-\mathrm e^{-\tau s})}. \qquad(3.12)$$
Although the controller is quite complex, the setup makes closed-loop properties more transparent. The underlying logic of this architecture can be seen from the behavior of the signal
$$\tilde y = y + P(1-\bar D_\tau)u = P(\bar D_\tau u + d) + P(1-\bar D_\tau)u = P(u+d),$$
Fig. 3.5: Unity-feedback system with modified Smith predictor (block diagram: primary controller $\tilde C(s)$, plant $P(s)\mathrm e^{-\tau s}$, internal feedback through $\tilde P(s)-P(s)\mathrm e^{-\tau s}$; signals $r$, $\tilde e$, $u$, $d$, $y$, $n$, $\tilde y$)
which can thus be seen as a predicted version of $y$. The use of the predicted output in lieu of $y$ leads to simpler closed-loop dynamics, as can be seen from the resulting control sensitivity,
$$T_c(s) = \frac{C(s)}{1+P(s)C(s)\mathrm e^{-\tau s}} = \frac{\tilde C(s)}{1+\tilde C(s)P(s)(1-\mathrm e^{-\tau s})+P(s)\tilde C(s)\mathrm e^{-\tau s}} = \frac{\tilde C(s)}{1+P(s)\tilde C(s)} =: \tilde T_c(s),$$
and complementary sensitivity,
$$T(s) = P(s)\mathrm e^{-\tau s}T_c(s) = \frac{P(s)\tilde C(s)}{1+P(s)\tilde C(s)}\,\mathrm e^{-\tau s} =: \tilde T(s)\mathrm e^{-\tau s},$$
transfer functions. Thus, the control sensitivity function turns completely delay free and the complementary sensitivity function is left with only an input delay. From the stabilization point of view, a key fact is that the delay is not present in the denominators of either of these closed-loop transfer functions. As a result, if the primary controller $\tilde C$ renders the delay-free $\tilde T$ and $\tilde T_c$ stable, then $C$ defined by (3.12) renders the actual $T$ and $T_c$ stable as well. This remarkable property facilitates designing the primary controller for the delay-free version of the plant, $P$, and then implementing it in the Smith form of Fig. 3.4. Also note that although the overall controller in (3.12) is infinite dimensional, it is readily implementable. All we need is to realize a delay line and a finite-dimensional plant model in the internal feedback of the controller.
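The delay-free property of the control sensitivity can be checked numerically. The sketch below uses hypothetical data, a stable plant $P(s)=1/(s+1)$ with a static primary controller $\tilde C=2$, and verifies frequency by frequency that the Smith controller (3.12) gives $T_c=\tilde T_c$:

```python
import cmath

# Numerical illustration (a sketch with hypothetical data): for the stable
# first-order plant P(s) = 1/(s+1) with delay tau and a static primary
# controller C~ = 2, the overall Smith controller (3.12) yields a control
# sensitivity C/(1 + P*C*e^{-tau s}) equal to the delay-free C~/(1 + P*C~).
tau, Ct = 1.0, 2.0
for w in (0.1, 0.7, 3.0):
    s = 1j * w
    P = 1 / (s + 1)
    E = cmath.exp(-tau * s)
    C = Ct / (1 + Ct * P * (1 - E))        # Smith controller (3.12)
    Tc = C / (1 + P * C * E)               # actual control sensitivity
    Tc_tilde = Ct / (1 + P * Ct)           # delay-free target
    assert abs(Tc - Tc_tilde) < 1e-12
```

The cancellation is exact: the term $\tilde CP\mathrm e^{-\tau s}$ added by the loop is removed by the $-\tilde CP\mathrm e^{-\tau s}$ built into the controller denominator.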
The situation is not that simple though. Consider the disturbance sensitivity transfer function
$$T_d(s) = P(s)(1-T(s)) = P(s)\left(1-\frac{P(s)\tilde C(s)}{1+P(s)\tilde C(s)}\,\mathrm e^{-\tau s}\right) = \frac{P(s)+[P(s)]^2\tilde C(s)(1-\mathrm e^{-\tau s})}{1+P(s)\tilde C(s)} =: \tilde T_d(s)+\tilde T(s)P(s)(1-\mathrm e^{-\tau s}).$$
Unless plant poles are canceled in the predictor $P(s)(1-\mathrm e^{-\tau s})$, they are poles of the disturbance sensitivity $T_d(s)$ as well. Hence, if $P(s)$ has unstable poles other than single poles at $\mathrm j2k\pi/\tau$ for $k\in\mathbb Z$, the closed-loop system is internally unstable, no matter what primary controller is chosen. This implies that the Smith controller is not applicable to unstable plants in most cases.
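This failure mode can be seen numerically. The sketch below takes a hypothetical unstable plant $P(s)=1/(s-1)$ and checks that the residue of $T_d$ at the open-loop pole $s=1$ is nonzero (it approaches $1-\mathrm e^{-\tau}$), so no primary controller removes it:

```python
import cmath

# Numerical illustration (a sketch): for the unstable P(s) = 1/(s-1) the
# Smith-scheme disturbance sensitivity
#   T_d(s) = (P + P^2*C~*(1 - e^{-tau s}))/(1 + P*C~)
# keeps the open-loop pole at s = 1: (s-1)*T_d(s) -> 1 - e^{-tau} != 0
# as s -> 1, for any primary controller C~ (here C~ = 3, hypothetical).
tau, Ct = 1.0, 3.0
for eps in (1e-3, 1e-5):
    s = 1 + eps
    P = 1 / (s - 1)
    Td = (P + P**2 * Ct * (1 - cmath.exp(-tau * s))) / (1 + P * Ct)
    assert abs((s - 1) * Td - (1 - cmath.exp(-tau))) < 1e-2
```

The residue $1-\mathrm e^{-\tau}$ vanishes only when the pole sits at one of the points $\mathrm j2k\pi/\tau$, which is exactly the exception named in the text.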
However, instability under unstable plants is not an intrinsic property of predictor-based schemes. A possible fix was proposed by Watanabe and Ito in [77]. The idea is paraphrased in Fig. 3.5, where the prediction element $\tilde P-P\bar D_\tau$ is based on a finite-dimensional system $\tilde P$, which is not necessarily the same as the plant $P$. With this choice, the signal $\tilde y=\tilde Pu+Pd$ is no longer a prediction of the zero-delay output. Still, this is a delay-free response, so that the architecture presented in Fig. 3.5 is dubbed the dead-time compensator. The four closed-loop transfer functions of interest are then
$$\begin{bmatrix}S(s)&T_d(s)\\T_c(s)&T(s)\end{bmatrix} = \begin{bmatrix}1&\tilde P(s)-P(s)\mathrm e^{-\tau s}\\0&1\end{bmatrix}\begin{bmatrix}\tilde S(s)&\tilde T_d(s)\\\tilde T_c(s)&\tilde T(s)\end{bmatrix}\begin{bmatrix}1&-\tilde P(s)+P(s)\mathrm e^{-\tau s}\\0&1\end{bmatrix}, \qquad(3.13)$$
where
$$\begin{bmatrix}\tilde S(s)&\tilde T_d(s)\\\tilde T_c(s)&\tilde T(s)\end{bmatrix} := \frac{1}{1+\tilde P(s)\tilde C(s)}\begin{bmatrix}1\\\tilde C(s)\end{bmatrix}\begin{bmatrix}1&\tilde P(s)\end{bmatrix}$$
are the closed-loop transfer functions associated with the unity-feedback interconnection of the delay-free $\tilde P$ and $\tilde C$. If the finite-dimensional system $\tilde P$ is such that the prediction block $\tilde P-P\bar D_\tau$ is itself stable, then
$$\begin{bmatrix}1&\tilde P(s)-P(s)\mathrm e^{-\tau s}\\0&1\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}1&-\tilde P(s)+P(s)\mathrm e^{-\tau s}\\0&1\end{bmatrix} = \begin{bmatrix}1&\tilde P(s)-P(s)\mathrm e^{-\tau s}\\0&1\end{bmatrix}^{-1}$$
are both bi-stable and
$$\begin{bmatrix}S&T_d\\T_c&T\end{bmatrix}\in H_\infty \iff \begin{bmatrix}\tilde S&\tilde T_d\\\tilde T_c&\tilde T\end{bmatrix}\in H_\infty.$$
This, in turn, implies that the stabilization problem for the input-delay plant $P\bar D_\tau$ reduces to that for the delay-free $\tilde P$.
Thus, the stabilization problem boils down to finding a finite-dimensional $\tilde P$ for which $\tilde P-P\bar D_\tau$ is stable. This turns out to be always possible. The choice proposed in [77], based on the state-space realization of $P(s)=C(sI-A)^{-1}B$, is
$$\tilde P(s) = C\mathrm e^{-A\tau}(sI-A)^{-1}B \qquad(3.14)$$
and has the same order and the same poles as $P(s)$. In this case the transfer function of the predictor, known as the modified Smith predictor (MSP), is
$$\tilde P(s)-P(s)\mathrm e^{-\tau s} = C(\mathrm e^{-A\tau}-\mathrm e^{-\tau s}I)(sI-A)^{-1}B = C\mathrm e^{-A\tau}\int_0^\tau \mathrm e^{-(sI-A)t}\,\mathrm dt\,B. \qquad(3.15)$$
This is an entire function, bounded in $\mathbb C_0$. Hence, it belongs to $H_\infty$ and the MSP is stable. There are other choices, some of which are proposed in [77]; all of them are based on canceling unstable poles of $P(s)$ in the predictor $\tilde P(s)-P(s)\mathrm e^{-\tau s}$. Because of those cancellations, certain care should be taken in implementing such predictors; this issue is discussed in Chapter 5. But, in any case, the use of the dead-time compensation (DTC) architecture renders the stabilization problem essentially finite dimensional, which is a clear advantage in comparison with the methods discussed in Section 3.1.
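The stability of the MSP despite an unstable plant pole can be illustrated in the scalar case. The sketch below uses the hypothetical realization $A=1$, $B=C=1$, i.e. $P(s)=1/(s-1)$, and checks that the singularity of (3.15) at $s=1$ is removable:

```python
import cmath

# Numerical illustration (a sketch): for the scalar realization A = 1,
# B = C = 1, i.e. the unstable P(s) = 1/(s-1), the MSP (3.15) is
#   Pi(s) = (e^{-tau} - e^{-tau*s})/(s - 1),
# whose singularity at s = 1 is removable: Pi(s) -> tau*e^{-tau} there,
# so the predictor is an entire function despite the unstable plant pole.
tau = 0.8
Pi = lambda s: (cmath.exp(-tau) - cmath.exp(-tau * s)) / (s - 1)
limit = tau * cmath.exp(-tau)                    # L'Hopital limit at s = 1
assert abs(Pi(1 + 1e-6) - limit) < 1e-5
# bounded along the imaginary axis (a few samples)
assert all(abs(Pi(1j * w)) < 2 for w in (0.1, 1.0, 10.0, 100.0))
```

The numerator vanishes exactly where the denominator does, which is the scalar face of the pole cancellation (and of the implementation care) mentioned in the text.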
3.2.2 Finite spectrum assignment
Consider now the system
$$\dot x(t) = Ax(t)+Bu(t-\tau), \qquad(3.16)$$
which is a dead-time system with the whole state of its delay-free part measurable. We assume that $(A,B)$ is stabilizable. Conceptually, the stabilization problem would be simple if we measured the future vector $x(t+\tau)$ at each $t$. The control law $u(t)=Fx(t+\tau)+F_vv(t)$, where $v$ is an exogenous signal and $F_v$ is some gain, is obviously stabilizing if $A+BF$ is Hurwitz. Finding an appropriate $F$ is a standard step in designing a state feedback controller for the zero-delay version of (3.16). Of course, the control law above, based on $x(t+\tau)$, is not implementable. But we may try to imitate it by replacing $x(t+\tau)$ with its prediction. This logic is reminiscent of that of observer-based feedback, where the unmeasurable state is replaced with its estimate.
A prediction of $x$ can be obtained by solving (3.16),
$$x(t+\tau) = \mathrm e^{A\tau}x(t)+\int_t^{t+\tau}\mathrm e^{A(t+\tau-r)}Bu(r-\tau)\,\mathrm dr = \mathrm e^{A\tau}x(t)+\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr.$$
This is a linear function of the state of (3.16), which is $(x(t),u_t)$ as discussed in §1.2.1, and is causally implementable. Thus, a candidate for the predictor-based feedback is
$$u(t) = F\left(\mathrm e^{A\tau}x(t)+\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr\right)+F_vv(t), \qquad(3.17)$$
where $F$ is such that $A+BF$ is Hurwitz.
To analyze the stability of the closed-loop system with the control law (3.17), rewrite it in the Laplace domain. To this end, note that the last term in it is the convolution of $u(t)$ and $\mathrm e^{At}B\mathbb 1_{[0,\tau]}(t)$, where $\mathbb 1_{[0,\tau]}(t)$ is the indicator function of the interval $[0,\tau]$. As such, its Laplace transform is the product of the Laplace transforms of each one of these functions, i.e.
$$\mathcal L\left\{\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr\right\} = \int_0^\infty\mathrm e^{At}B\mathbb 1_{[0,\tau]}(t)\,\mathrm e^{-st}\,\mathrm dt\,U(s) = \int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,BU(s). \qquad(3.18)$$
Thus, (3.17) can be written as
$$\left(I-F\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\right)U(s) = F\mathrm e^{A\tau}X(s)+F_vV(s).$$
Combining this relation with the Laplace-domain version of (3.16), we end up with the following closed-loop equations:
$$\begin{bmatrix}sI-A & -B\mathrm e^{-\tau s}\\ -F\mathrm e^{A\tau} & I-F\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\end{bmatrix}\begin{bmatrix}X(s)\\U(s)\end{bmatrix} = \begin{bmatrix}0\\F_v\end{bmatrix}V(s).$$
The stability of these dynamics can be analyzed via their characteristic function, which determines the existence of a nontrivial free motion under $v=0$. By the logic of §2.1.1, this function is
$$\chi_{\tau,\mathrm{cl}}(s) = \det\begin{bmatrix}sI-A & -B\mathrm e^{-\tau s}\\ -F\mathrm e^{A\tau} & I-F\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\end{bmatrix}. \qquad(3.19)$$
Although the task of finding its roots appears complicated, it is actually not. Just note that
$$\mathrm e^{-A\tau}(sI-A)\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B = \mathrm e^{-A\tau}B-B\mathrm e^{-\tau s},$$
cf. the second equality of (3.15). Therefore,
$$\begin{bmatrix}sI-A & -B\mathrm e^{-\tau s}\\ -F\mathrm e^{A\tau} & I-F\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\end{bmatrix} = \begin{bmatrix}sI-A & \mathrm e^{-A\tau}(sI-A)\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B-\mathrm e^{-A\tau}B\\ -F\mathrm e^{A\tau} & I-F\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\end{bmatrix} = \begin{bmatrix}sI-A & -\mathrm e^{-A\tau}B\\ -F\mathrm e^{A\tau} & I\end{bmatrix}\begin{bmatrix}I & \mathrm e^{-A\tau}\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\\ 0 & I\end{bmatrix}$$
and
$$\chi_{\tau,\mathrm{cl}}(s) = \det\begin{bmatrix}sI-A & -\mathrm e^{-A\tau}B\\ -F\mathrm e^{A\tau} & I\end{bmatrix} = \det(sI-A-\mathrm e^{-A\tau}BF\mathrm e^{A\tau}) = \det\bigl(\mathrm e^{-A\tau}(sI-A-BF)\mathrm e^{A\tau}\bigr) = \det(sI-A-BF),$$
where the second equality follows by (A.5b). In other words, $\chi_{\tau,\mathrm{cl}}(s)$ in (3.19) is actually a plain polynomial of degree $n$, whose roots are the eigenvalues of $A+BF$, exactly as in the delay-free case. For this reason, the strategy behind control law (3.17) is called finite spectrum assignment (FSA). The term was coined by Manitius and Olbrot in their 1979 paper [33], where the stabilization problem was studied in a slightly different, albeit essentially equivalent, form. The ideas behind controller (3.17) can be traced back to the late '60s, see [16, 35, 29], where similar configurations were discussed in various contexts.
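The collapse of (3.19) to a delay-free polynomial can be checked directly in the scalar case. The sketch below uses hypothetical scalar data $a$, $b$, $f$ and evaluates the $2\times2$ determinant of (3.19) at several complex points:

```python
import cmath

# Numerical check (a sketch): for the scalar system xdot = a*x + b*u(t-tau)
# the determinant (3.19),
#   (s - a)*(1 - f*I(s)*b) - b*e^{-tau*s} * f*e^{a*tau},
# with I(s) = int_0^tau e^{-(s-a)t} dt, collapses to the delay-free
# polynomial s - a - b*f at every s, i.e. the closed-loop spectrum is finite.
a, b, f, tau = 0.5, 2.0, -1.3, 0.7        # hypothetical data
for s in (0.3 + 1j, -2.0 + 0.4j, 5.0):
    I = (1 - cmath.exp(-(s - a) * tau)) / (s - a)
    det = (s - a) * (1 - f * I * b) - b * cmath.exp(-tau * s) * f * cmath.exp(a * tau)
    assert abs(det - (s - a - b * f)) < 1e-12
```

All the transcendental terms cancel identically, not just at isolated points; this is the finite spectrum assignment property in miniature.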
Remark 3.1 (multiple delays). Remarkably, the FSA approach extends to a rather general class of multiple input delay systems. Although such extensions are not as intuitive as the predictor-based approach described above, they can be deduced from the analysis above and still produce finite closed-loop spectra. For example, consider a plant described by the equation $\dot x(t)=Ax(t)+\sum_i B_iu(t-\tau_i)$ for delays $\tau_i\ge0$.
The control law
$$u(t) = F\left(x(t)+\sum_i\int_{t-\tau_i}^{t}\mathrm e^{A(t-\tau_i-r)}B_iu(r)\,\mathrm dr\right)+F_vv(t)$$
assigns the spectrum of the closed-loop system to the roots of $\det\bigl(sI-A-\sum_i\mathrm e^{-A\tau_i}B_iF\bigr)$, which is a polynomial of degree $n$, and stabilization is possible iff $\bigl(A,\sum_i\mathrm e^{-A\tau_i}B_i\bigr)$ is stabilizable. □
If only a part of the vector $x$ is measured, say if (3.16) is complemented by the measurement equation $y(t)=Cx(t)$, then (3.17) should be complemented by an observer of $x$, resulting in the control law
$$\begin{aligned}\dot{\hat x}(t) &= A\hat x(t)+Bu(t-\tau)-L\bigl(y(t)-C\hat x(t)\bigr), &(3.20\text{a})\\ u(t) &= F\left(\mathrm e^{A\tau}\hat x(t)+\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr\right)+F_vv(t), &(3.20\text{b})\end{aligned}$$
for some $L$ such that $A+LC$ is Hurwitz, which exists iff $(C,A)$ is detectable. With the standard trick of replacing $\hat x(t)$ with the observer error $e(t):=x(t)-\hat x(t)$ by a similarity transformation, the closed-loop system reads in the Laplace domain as
$$\begin{bmatrix}sI-A-LC & 0 & 0\\ 0 & sI-A & -B\mathrm e^{-\tau s}\\ F\mathrm e^{A\tau} & -F\mathrm e^{A\tau} & I-F\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\end{bmatrix}\begin{bmatrix}E(s)\\X(s)\\U(s)\end{bmatrix} = \begin{bmatrix}0\\0\\F_v\end{bmatrix}V(s).$$
Its characteristic function
$$\chi_{\tau,\mathrm{cl}}(s) = \det\begin{bmatrix}sI-A-LC & 0 & 0\\ 0 & sI-A & -B\mathrm e^{-\tau s}\\ F\mathrm e^{A\tau} & -F\mathrm e^{A\tau} & I-F\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B\end{bmatrix} = \det(sI-A-LC)\det(sI-A-BF)$$
is again a polynomial, now of degree $2n$, coinciding with that of the delay-free observer-based feedback. Controller (3.20), dubbed the observer-predictor, was analyzed by Furukawa and Shimemura in [17], although it perhaps appeared for the first time in Mayne's paper [35] of 1968 and also as a part of the LQG optimal control law in Kleinman's paper [29] of 1969.
3.2.3 Kwon–Pearson–Artstein reduction
A related, albeit a bit more general, approach was proposed by Kwon and Pearson in [31] and then extended by Artstein in [2]. Consider again (3.16) and introduce the variable
$$\tilde x(t) := x(t)+\int_{t-\tau}^{t}\mathrm e^{A(t-\tau-r)}Bu(r)\,\mathrm dr,$$
which is effectively a prediction of $\mathrm e^{-A\tau}x(t+\tau)$. Differentiating this variable results in the relation
$$\begin{aligned}\dot{\tilde x}(t) &= \dot x(t)+A\int_{t-\tau}^{t}\mathrm e^{A(t-\tau-r)}Bu(r)\,\mathrm dr+\mathrm e^{-A\tau}Bu(t)-Bu(t-\tau)\\ &= Ax(t)+A\int_{t-\tau}^{t}\mathrm e^{A(t-\tau-r)}Bu(r)\,\mathrm dr+\mathrm e^{-A\tau}Bu(t),\end{aligned}$$
where the first equality is obtained via the Leibniz integral rule (2.20) on p. 38 and the second equality follows by (3.16). Thus, the variable $\tilde x$ satisfies the ODE
$$\dot{\tilde x}(t) = A\tilde x(t)+\mathrm e^{-A\tau}Bu(t). \qquad(3.21)$$
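The prediction interpretation of $\tilde x$ can be confirmed by simulation. The sketch below (scalar, hypothetical data) uses a piecewise-constant input on a grid, so both the plant update and the integral in $\tilde x$ can be evaluated exactly, and checks $\tilde x(t)=\mathrm e^{-a\tau}x(t+\tau)$:

```python
import math

# Simulation sketch (scalar, hypothetical data): for xdot = a*x + b*u(t-tau)
# with a piecewise-constant input held over a grid of step dt, the variable
#   x~(t) = x(t) + int_{t-tau}^t e^{a(t-tau-r)} b u(r) dr
# equals e^{-a*tau} x(t+tau) exactly, confirming its prediction role.
a, b, tau, dt = -0.4, 1.0, 1.0, 0.01
N = round(tau / dt)                       # delay in steps
steps = 400
u = [math.sin(0.05 * k) for k in range(steps)]
x = [0.0]
ead = math.exp(a * dt)
for k in range(steps - 1):
    uk = u[k - N] if k >= N else 0.0      # u(t - tau), zero initial history
    x.append(ead * x[-1] + b * uk * (ead - 1) / a)   # exact step update
k = 250                                   # any index with a full input history
integral = sum(b * u[k - N + j] * math.exp(-a * j * dt) * (1 - math.exp(-a * dt)) / a
               for j in range(N))
x_pred = x[k] + integral
assert abs(x_pred - math.exp(-a * tau) * x[k + N]) < 1e-9
```

The equality holds regardless of the future input, because $u$ over $[t-\tau,t]$ is exactly what drives $x$ over $[t,t+\tau]$.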
Note that controllability-related characteristics of $(A,\mathrm e^{-A\tau}B)$ are equivalent to those of $(A,B)$. System (3.21) is called the reduced system and it can be stabilized by standard methods. For example, if the state feedback $u(t)=\tilde F\tilde x(t)+F_vv(t)$ is used for it, then the reduced system is stable iff $A+\mathrm e^{-A\tau}B\tilde F$ is Hurwitz. But then
$$x(t) = \tilde x(t)-\int_{t-\tau}^{t}\mathrm e^{A(t-\tau-r)}B\tilde F\tilde x(r)\,\mathrm dr$$
is bounded whenever $\tilde x$ is, merely by the triangle inequality and the boundedness of $\|\mathrm e^{-As}B\tilde F\|$ for $s\in[0,\tau]$. Thus, the stabilization problem for (3.16) is yet again reduced to the stabilization of a delay-free system, this time (3.21). The stabilizing controller in terms of the original vector $x$ is given by
$$u(t) = \tilde F\left(x(t)+\int_{t-\tau}^{t}\mathrm e^{A(t-\tau-r)}Bu(r)\,\mathrm dr\right)+F_vv(t), \qquad(3.22)$$
which actually equals the FSA control law (3.17) under $\tilde F = F\mathrm e^{A\tau}$.
Remark 3.2 (time-varying delays). The main difference of the reduction approach from the FSA is in the analysis of closed-loop stability. While the FSA does that via the characteristic function, the reduction approach uses time-domain arguments. As such, it can be applied to time-varying delays as well. As an example, consider the system $\dot x(t)=Ax(t)+Bu(\theta(t))$ for some absolutely continuous function $\theta(t)\le t$ such that $\dot\theta(t)>0$, which is a special case of the system analyzed in [2, Ex. 5.6]. This is an input-delay system with the time-varying delay $\tau(t)=t-\theta(t)\ge0$. Denote by $\theta^{-1}(t)$ the inverse of $\theta(t)$, i.e. the value of $r$ for which $\theta(r)=t$. It is unique whenever $\dot\theta(t)>0$ and satisfies $\theta^{-1}(t)\ge t$. The function
$$\tilde x(t) = x(t)+\int_{\theta(t)}^{t}\mathrm e^{A(t-\theta^{-1}(r))}B\,\dot\theta^{-1}(r)u(r)\,\mathrm dr$$
then satisfies the reduced equation
$$\dot{\tilde x}(t) = A\tilde x(t)+\mathrm e^{-A(\theta^{-1}(t)-t)}B\,\dot\theta^{-1}(t)\,u(t),$$
which is free of delays. Actually, the variable $\bar x(t):=\mathrm e^{A(\theta^{-1}(t)-t)}\tilde x(t)$ is the prediction of $x(\theta^{-1}(t))$, so the quantity $\theta^{-1}(t)-t\ge0$ can be thought of as the prediction horizon required for the delay $t-\theta(t)$. Moreover, it can be shown that $\dot{\bar x}(t)=A\bar x(t)+Bu(t)$, so its dynamics are time invariant. Yet, similarly to the discussion in Remark 3.1, the prediction interpretation is not evident in extending the approach to multiple-delay systems of the form $\dot x(t)=Ax(t)+\sum_iB_iu(\theta_i(t))$, in which case the reduced dynamics are intrinsically time varying. □
3.2.4 Connections
Remarkably, all three approaches studied in Section 3.2 share essentially the same structure of their controllers. Namely, they all result in controllers having internal feedback of the form presented in Fig. 3.6, where the "primary controller" $\tilde C$ is designed for a delay-free system and the "dead-time compensation" block $\Pi$ is determined by the plant (exogenous signals are taken zero for brevity). Indeed, the Smith controller in Fig. 3.4 corresponds to this form under $\Pi(s)=P(s)(1-\mathrm e^{-\tau s})$ and the primary controller $\tilde C$ designed for $P$. The MSP in Fig. 3.5 chooses $\Pi(s)$ in form (3.15). The FSA controller (3.17) can be cast in the form presented in Fig. 3.6 under $y=x$, the static $\tilde C(s)=F\mathrm e^{A\tau}$, and
$$\Pi(s) = \mathcal L\bigl\{\mathrm e^{A(t-\tau)}B\mathbb 1_{[0,\tau]}\bigr\} = \mathrm e^{-A\tau}\int_0^\tau\mathrm e^{-(sI-A)t}\,\mathrm dt\,B,$$
cf. (3.18). Likewise, the controller of (3.22) is the same modulo the choice of $\tilde C(s)=\tilde F$. In fact, the multiple-delay extension discussed in Remark 3.1 can also be cast in the form presented in Fig. 3.6 with a static $\tilde C$ and a more elaborate $\Pi$.
Fig. 3.6: Common controller structure (DTC, FSA, reduction): primary controller $\tilde C$ with internal feedback through the block $\Pi$, mapping $y\mapsto u$
Moreover, the observer-predictor in (3.20) is also of the same form. This is not obvious, but can be proved. To this end, introduce the variable
$$\bar x(t) := \mathrm e^{A\tau}\hat x(t)+\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr,$$
with which (3.20b) reads $u(t)=F\bar x(t)$ (remember, we assume here that $v=0$). Using again the Leibniz integral rule (2.20), we have:
$$\begin{aligned}\dot{\bar x}(t) &= \mathrm e^{A\tau}\bigl(A\hat x(t)+Bu(t-\tau)-L(y(t)-C\hat x(t))\bigr)+A\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr+Bu(t)-\mathrm e^{A\tau}Bu(t-\tau)\\ &= A\bar x(t)+Bu(t)-\mathrm e^{A\tau}L\bigl(y(t)-C\hat x(t)\bigr).\end{aligned}$$
Substituting $\hat x(t)=\mathrm e^{-A\tau}\bar x(t)-\mathrm e^{-A\tau}\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr$ into this expression, we end up with the following equivalent form of (3.20):
$$\begin{aligned}\dot{\bar x}(t) &= \bigl(A+\mathrm e^{A\tau}LC\mathrm e^{-A\tau}\bigr)\bar x(t)+Bu(t)-\mathrm e^{A\tau}L\left(y(t)+C\mathrm e^{-A\tau}\int_{t-\tau}^{t}\mathrm e^{A(t-r)}Bu(r)\,\mathrm dr\right), &(3.20\text{a}')\\ u(t) &= F\bar x(t). &(3.20\text{b}')\end{aligned}$$
But this control law is exactly in the form presented in Fig. 3.6, with
$$\tilde C(s) = \left[\begin{array}{c|c}A+BF+\mathrm e^{A\tau}LC\mathrm e^{-A\tau} & -\mathrm e^{A\tau}L\\\hline F & 0\end{array}\right],$$
which is the observer-based controller for $\tilde P$ given by (3.14), and $\Pi(s)$ in form (3.15). In other words, the MSP of Watanabe–Ito with the observer-based primary controller is identical to the observer-predictor controller of Furukawa–Shimemura. These two controllers were regarded as different for some time, until their equivalence was shown in [46].
3.3 Problem-oriented controller architectures: control-theoretic insight
The fact that independently developed control methods produce the same controller architecture is intriguing. A somewhat superficial explanation might be given by recalling that these methods have prediction ideas at their origin. However, prediction is not a part of the classical control-theoretic toolkit and is often viewed as an incautious business, especially from the robustness point of view. On top of this, neither the MSP nor the multiple-delay extensions of the FSA / reduction approaches are readily interpretable as predictor-based controllers. Moreover, however natural and clever the guesses discussed in this section might be, there is no indication whether they are really justified. We thus shall look for more solid control-theoretic explanations, for which we yet again turn to the discrete-time version of the problem.
3.3.1 Gaining insight via discrete-time systems: state feedback
Consider the input-delay system
$$x[t+1] = Ax[t]+Bu[t-\tau], \qquad(3.23)$$
where $x[t]\in\mathbb R^n$, $u[t]\in\mathbb R^m$, $\tau\in\mathbb N$, and the whole $x$ is measurable. To simplify the exposition, assume that $(A,B)$ is controllable. We already know, cf. (1.11) on p. 5, that the state equation of this system is
$$\underbrace{\begin{bmatrix}x[t+1]\\u[t-\tau+1]\\\vdots\\u[t-1]\\u[t]\end{bmatrix}}_{x_\tau[t+1]} = \underbrace{\begin{bmatrix}A&B&0&\cdots&0\\0&0&I&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&I\\0&0&0&\cdots&0\end{bmatrix}}_{A_\tau}\underbrace{\begin{bmatrix}x[t]\\u[t-\tau]\\\vdots\\u[t-2]\\u[t-1]\end{bmatrix}}_{x_\tau[t]}+\underbrace{\begin{bmatrix}0\\0\\\vdots\\0\\I\end{bmatrix}}_{B_\tau}u[t]. \qquad(3.24)$$
The first important observation regarding the model above is that if $x$ is assumed to be measurable, then so should be the whole state $x_\tau$ of (3.24). Indeed, the other components of $x_\tau$ are the history of the input signal $u$, which is generated by us and thus should be available in virtually every reasonable scenario. Consequently, we can implement the standard state-feedback law for (3.24), which is of the form
$$u[t] = \begin{bmatrix}F_x&F_\tau&\cdots&F_2&F_1\end{bmatrix}x_\tau[t] = F_xx[t]+\sum_{i=1}^{\tau}F_iu[t-i]. \qquad(3.25)$$
This is a dynamic control law if considered as a mapping $x\mapsto u$ and its structure is reminiscent of that in Fig. 3.6, under $y=F_xx$, $\tilde C=I$, and $\Pi(z)=\sum_iF_iz^{-i}$. This suggests that the internal feedback in Fig. 3.6 with an FIR $\Pi$, whose impulse response has support in $[0,\tau]$, is merely a static state feedback acting on the input-history component of the state of input-delay systems. This is a well-justified choice from the control-theoretic perspective.
The second important observation about model (3.24) is that not all modes of $A_\tau$ would make sense to move by feedback. To see this, note that
$$\operatorname{rank}\begin{bmatrix}A_\tau-\lambda I&B_\tau\end{bmatrix} = \operatorname{rank}\begin{bmatrix}A-\lambda I&B&0&\cdots&0&0\\0&-\lambda I&I&\cdots&0&0\\\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\0&0&0&\cdots&I&0\\0&0&0&\cdots&-\lambda I&I\end{bmatrix} = \operatorname{rank}\begin{bmatrix}A-\lambda I&B\end{bmatrix}+\tau m,$$
so the only possibly uncontrollable modes of the realization in (3.24) are those of $(A,B)$. As the latter pair is assumed to be controllable, so is $(A_\tau,B_\tau)$ and all $n+m\tau$ modes of (3.24) can be freely assigned by static state feedback of the form (3.25). But $m\tau$ eigenvalues of $A_\tau$ are already in a perfect location, at the origin. Such eigenvalues, known as deadbeat, are obviously stable and are normally considered welcome, albeit expensive to attain. Yet now we have deadbeat modes for free, so it makes perfect sense to keep them untouched, which also keeps the control energy low. With this logic in mind, the choice of the feedback gain in (3.25) should aim only at assigning a subset of the modes of $A_\tau$, namely those of $A$. To understand this kind of pole assignment problem, some preliminary results are required.
Preliminary: partial pole placement by state feedback
Consider the state-feedback problem for the $n$th-order delay-free system
$$x[t+1] = Ax[t]+Bu[t]$$
and suppose that only $\tilde n<n$ eigenvalues of $A$ should be moved by feedback, while the other $n-\tilde n$ are to remain untouched. Perhaps the easiest way to visualize this process, while avoiding the use of the invariant subspace notion, is via applying a similarity transformation $T$, bringing the realization above to the form
$$Tx[t+1] = TAT^{-1}Tx[t]+TBu[t] = \begin{bmatrix}\tilde A&0\\\star&\bar A\end{bmatrix}Tx[t]+\begin{bmatrix}\tilde B\\\bar B\end{bmatrix}u[t], \qquad(3.26)$$
where $\tilde A\in\mathbb R^{\tilde n\times\tilde n}$ contains all eigenvalues of $A$ that have to be shifted, $\bar A\in\mathbb R^{(n-\tilde n)\times(n-\tilde n)}$ contains those to be kept untouched, and "$\star$" denotes an irrelevant block. It is readily seen that uncontrollable modes of $(\tilde A,\tilde B)$ are also uncontrollable modes of $(A,B)$. An obvious choice of the required state-feedback gain for this realization is $F=\begin{bmatrix}\tilde F&0\end{bmatrix}$, where $\tilde F$ is such that the eigenvalues of $\tilde A+\tilde B\tilde F$ are assigned to the required positions.
An important fact is that the resulting control law in the original coordinates,
$$u[t] = \begin{bmatrix}\tilde F&0\end{bmatrix}Tx[t] = \tilde F\begin{bmatrix}I&0\end{bmatrix}Tx[t],$$
does not require the whole similarity transformation matrix $T$, but only its first $\tilde n$ rows. An exhaustive characterization of this part of $T$ is given by the following result:
Lemma 3.1. Let $\Lambda_{\mathrm{shift}}\subset\operatorname{spec}(A)$ contain all modes of $A$ that we need to shift by feedback. An $\tilde n\times n$ matrix $\tilde T$ is the first $\tilde n$ rows of a nonsingular $T\in\mathbb R^{n\times n}$ such that
$$TAT^{-1} = \begin{bmatrix}\tilde A&0\\\star&\bar A\end{bmatrix}$$
for some $\tilde A\in\mathbb R^{\tilde n\times\tilde n}$ such that $\operatorname{spec}(\tilde A)=\Lambda_{\mathrm{shift}}$ iff
$$\tilde TA = \tilde A\tilde T \quad\text{and}\quad \operatorname{rank}\tilde T = \tilde n. \qquad(3.27)$$
Proof. The "only if" statement follows from the relation
$$TA = \begin{bmatrix}\tilde A&0\\\star&\bar A\end{bmatrix}T \implies \begin{bmatrix}I_{\tilde n}&0\end{bmatrix}TA = \begin{bmatrix}I_{\tilde n}&0\end{bmatrix}\begin{bmatrix}\tilde A&0\\\star&\bar A\end{bmatrix}T = \tilde A\begin{bmatrix}I_{\tilde n}&0\end{bmatrix}T.$$
To show the "if" statement, assume that there is a full-rank $\tilde T$ satisfying (3.27). There is then a full-rank $\bar T\in\mathbb R^{(n-\tilde n)\times n}$ such that $T:=\begin{bmatrix}\tilde T'&\bar T'\end{bmatrix}'$ is nonsingular. In this case $\tilde TT^{-1}=\begin{bmatrix}I_{\tilde n}&0\end{bmatrix}$ and we have that
$$\begin{bmatrix}I_{\tilde n}&0\end{bmatrix}TAT^{-1} = \tilde TAT^{-1} = \tilde A\tilde TT^{-1} = \begin{bmatrix}\tilde A&0\end{bmatrix},$$
as required. ∎
Thus, all we need is to solve (3.27) in $\tilde T$ and $\tilde A$, whose spectrum coincides with the part of $\operatorname{spec}(A)$ that is planned to be moved, and implement the control law
$$u[t] = \tilde F\tilde Tx[t]$$
for $\tilde F$ assigning the $\tilde n$ eigenvalues of $\tilde A+\tilde TB\tilde F$ to desired positions.
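A minimal worked instance of this partial placement, with hypothetical data, may help. For a diagonal $A$ a left eigenvector row of $A$ already satisfies (3.27), and the resulting feedback moves only the selected mode:

```python
# Worked sketch of Lemma 3.1 (hypothetical data): A = diag(2, 0.5), B = [1;1].
# To move only the eigenvalue 2, take T~ = [1 0] (a left eigenvector row):
# T~ A = 2*T~, so A~ = 2, T~ B = 1, and F~ = -3 places A~ + T~ B F~ at -1.
# The closed-loop A + B*F~*T~ is then lower triangular with spectrum {-1, 0.5}:
# the mode 0.5 is untouched.
A = [[2.0, 0.0], [0.0, 0.5]]
B = [1.0, 1.0]                                    # column vector
Tt = [1.0, 0.0]                                   # T~, first row of some T
Ft = -3.0                                         # assigns 2 + 1*(-3) = -1
Acl = [[A[i][j] + B[i] * Ft * Tt[j] for j in range(2)] for i in range(2)]
assert Acl == [[-1.0, 0.0], [-3.0, 0.5]]
# characteristic polynomial det(lam*I - Acl) vanishes at -1 and at 0.5
char = lambda lam: (lam - Acl[0][0]) * (lam - Acl[1][1]) - Acl[0][1] * Acl[1][0]
assert char(-1.0) == 0.0 and char(0.5) == 0.0
```

The control law $u=\tilde F\tilde Tx=-3x_1$ involves only the coordinate aligned with the mode to be shifted, which is the whole point of characterizing just the first $\tilde n$ rows of $T$.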
Partial pole placement for (3.24)
Now return to the state equation (3.24). We know which part of its dynamics we need to move, so consider the following version of (3.27) for it:
$$\underbrace{\begin{bmatrix}T_x&T_\tau&\cdots&T_2&T_1\end{bmatrix}}_{\tilde T}\begin{bmatrix}A&B&0&\cdots&0\\0&0&I&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&I\\0&0&0&\cdots&0\end{bmatrix} = \underbrace{\begin{bmatrix}T_xA&T_xB&T_\tau&\cdots&T_2\end{bmatrix}}_{\tilde TA_\tau} = A\underbrace{\begin{bmatrix}T_x&T_\tau&T_{\tau-1}&\cdots&T_1\end{bmatrix}}_{\tilde T},$$
where the choice of "$\tilde A$" is an educated guess (it has to have the same spectrum as $A$, but need not be equal to $A$ in general). It is not hard to see that this equality is solved by
$$\begin{bmatrix}T_x&T_\tau&\cdots&T_2&T_1\end{bmatrix} = \begin{bmatrix}A^\tau&A^{\tau-1}B&\cdots&AB&B\end{bmatrix}, \qquad(3.28)$$
which has full rank iff $(A,B)$ has no uncontrollable modes at the origin. By the controllability assumption, the full rank property is thus guaranteed. With this choice, $\tilde TB_\tau=B$ and the controller gain $\tilde F$ is thus designed for the delay-free pair $(A,B)$. The choices of $\tilde A$ and $\tilde T$ above are not unique, but they do appear natural and they lead to an interpretable controller.
Remark 3.3 (controllability of $(A,B)$). The assumption that $(A,B)$ is controllable simplifies the exposition above, in particular the arguments about the assignment of the modes of $A$ and the proof that the choice of $\tilde T$ as in (3.28) has full rank. However, the result holds even if only the milder condition of the stabilizability of $(A,B)$ is assumed. In that case only controllable modes of $(A,B)$ can be assigned, which is obvious. Also, if $(A,B)$ has uncontrollable modes at the origin, then the matrix in (3.28) has reduced rank. But this is not a problem, as by the design logic those deadbeat modes are not intended to be shifted anyway. □
Thus, a version of (3.25) that does not touch the $m\tau$ deadbeat modes of (3.24) is
$$u[t] = F\left(A^\tau x[t]+\sum_{i=1}^{\tau}A^{i-1}Bu[t-i]\right) = F\left(A^\tau x[t]+\sum_{r=t-\tau}^{t-1}A^{t-r-1}Bu[r]\right), \qquad(3.29)$$
where $F$ assigns the eigenvalues of $A+BF$ to desired positions, which are the closed-loop eigenvalues of the input-delay system (3.23). It is readily seen that this control law is based on the predicted $x[t+\tau]$. As such, it is the perfect discrete-time counterpart of the prediction-based controller (3.17) under $v=0$. Now, however, it is derived by a conventional state-feedback rationale and thus has a solid justification. Namely, the predictive feedback is merely a static state feedback that potentially alters only the poles of the delay-free part of the open-loop system.
The same rationale applies to the continuous-time law (3.17). It is nothing but a static state-feedback law, shifting only the finite modes of the open-loop plant, which are the eigenvalues of $A$ in (3.16). This state feedback happens to be a predictor in the single-delay case, but might not be readily interpretable this way for multiple-delay systems.
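The prediction reading of (3.29) is easy to confirm by simulation. The sketch below (scalar, hypothetical data) drives $x[t+1]=ax[t]+bu[t-\tau]$ with an arbitrary input and checks that the bracketed quantity in (3.29) reproduces $x[t+\tau]$ exactly:

```python
# Simulation sketch (scalar, hypothetical data): for x[t+1] = a*x[t] + b*u[t-tau]
# the bracketed quantity in (3.29),
#   a^tau * x[t] + sum_{i=1}^{tau} a^{i-1} * b * u[t-i],
# reproduces x[t+tau] exactly, so (3.29) is a predictor feedback in disguise.
a, b, tau = 0.9, 0.5, 4
u = [((-1) ** k) * 0.3 + 0.01 * k for k in range(60)]   # arbitrary input
x = [1.0]
for t in range(59):
    x.append(a * x[t] + b * (u[t - tau] if t >= tau else 0.0))
for t in range(tau + 1, 50):
    pred = a ** tau * x[t] + sum(a ** (i - 1) * b * u[t - i] for i in range(1, tau + 1))
    assert abs(pred - x[t + tau]) < 1e-12
```

The identity holds for any input sequence, because the input history stored in $x_\tau[t]$ is exactly what propagates $x$ over the next $\tau$ steps.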
3.3.2 Gaining insight via discrete-time systems: output feedback
Now consider the system
$$\begin{aligned}x[t+1] &= Ax[t]+Bu[t-\tau],\\ y[t] &= Cx[t]+Du[t-\tau],\end{aligned} \qquad(3.30)$$
in which only a part of the delay-free plant state is measured. We assume that $(A,B)$ is stabilizable and $(C,A)$ is detectable. The state-space realization of this system given by (1.11) on p. 5 assumes that the system is an operator $u\mapsto y$. This assumption may be natural in analyzing the behavior of $y$, but does not reflect the actual measurement variables. This is because we also have measurements of the last $m\tau$ components of the state variable $x_\tau[t]$ defined in (3.24). Thus, the accurate measurement equation now is
$$\underbrace{\begin{bmatrix}y[t]\\u[t-\tau]\\\vdots\\u[t-2]\\u[t-1]\end{bmatrix}}_{y_\tau[t]} = \underbrace{\begin{bmatrix}C&D&\cdots&0&0\\0&I&\cdots&0&0\\\vdots&\vdots&\ddots&\vdots&\vdots\\0&0&\cdots&I&0\\0&0&\cdots&0&I\end{bmatrix}}_{C_\tau}\underbrace{\begin{bmatrix}x[t]\\u[t-\tau]\\\vdots\\u[t-2]\\u[t-1]\end{bmatrix}}_{x_\tau[t]}+\underbrace{\begin{bmatrix}0\\0\\\vdots\\0\\0\end{bmatrix}}_{D_\tau}u[t], \qquad(3.31)$$
which complements the state equation (3.24). It is straightforward to show that the unobservable modes of the pair $(C_\tau,A_\tau)$ are those of $(C,A)$. Hence, the realization (3.24), (3.31) is detectable.
Equation (3.31) still does not measure the whole state of the system. It is then natural to implement the state-feedback controller (3.29) in combination with a state observer. Moreover, if $u$ can be measured without noise, then it is well justified to consider a reduced-order observer [24, Sec. 4.3], reconstructing only the $x$ component of $x_\tau$ using $n$-dimensional dynamics. The structure of $A_\tau$ and $C_\tau$ facilitates the construction of a reduced-order observer for $x$ directly from (3.30). It is readily verified that
$$\hat x[t+1] = A\hat x[t]+Bu[t-\tau]-L\bigl(y[t]-C\hat x[t]-Du[t-\tau]\bigr)$$
yields the following equation for the estimation error $e[t]:=x[t]-\hat x[t]$:
$$e[t+1] = (A+LC)e[t].$$
Thus, the stabilizing controller for (3.30) of the form
$$\begin{aligned}\hat x[t+1] &= A\hat x[t]+Bu[t-\tau]-L\bigl(y[t]-C\hat x[t]-Du[t-\tau]\bigr), &(3.32\text{a})\\ u[t] &= F\left(A^\tau\hat x[t]+\sum_{r=t-\tau}^{t-1}A^{t-r-1}Bu[r]\right), &(3.32\text{b})\end{aligned}$$
results in closed-loop dynamics with the characteristic equation
$$\chi_{\tau,\mathrm{cl}}(z) = z^{m\tau}\det(zI-A-BF)\det(zI-A-LC).$$
This implies that the observer-predictor control law is merely an observer-based feedback, featuring a reduced-order observer (justified by perfect measurements of $u$) and a state feedback keeping the $m\tau$ open-loop modes at the origin untouched. This appears to be a well-justified controller architecture.
Remark 3.4 (noisy measurements of $u$). If, for whatever reason, the past control signals used in (3.32b) are corrupted by noise, then we should build a full-order observer instead of (3.32a). But even then the structure of the system dynamics in (3.24) and (3.31) can be exploited to end up with simpler solution formulae. □
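The error dynamics of the reduced-order observer can be confirmed by simulation. The sketch below (scalar, hypothetical data) runs the plant (3.30) and observer (3.32a) side by side and checks that $e=x-\hat x$ obeys $e[t+1]=(a+lc)e[t]$ exactly, independently of the input:

```python
# Simulation sketch (scalar, hypothetical data) of the reduced-order observer
# in (3.32): with plant x[t+1] = a*x[t] + b*u[t-tau], y = c*x + d*u[t-tau],
# the estimation error e = x - xhat obeys e[t+1] = (a + l*c)*e[t] exactly,
# for any input sequence.
a, b, c, d, tau = 1.2, 1.0, 1.0, 0.3, 3
l = -0.7                                  # makes a + l*c = 0.5 (stable)
u = [0.4 * (k % 5) for k in range(40)]    # arbitrary input
x, xh = [1.0], [0.0]
for t in range(39):
    ud = u[t - tau] if t >= tau else 0.0
    y = c * x[t] + d * ud
    x.append(a * x[t] + b * ud)
    xh.append(a * xh[t] + b * ud - l * (y - c * xh[t] - d * ud))
for t in range(39):
    e0, e1 = x[t] - xh[t], x[t + 1] - xh[t + 1]
    assert abs(e1 - (a + l * c) * e0) < 1e-8
```

The input terms cancel in the error equation, which is exactly why noisy measurements of $u$ (Remark 3.4) break this reduced-order construction and call for a full-order observer.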
3.3.3 Intermezzo: Fiagbedzi–Pearson reduction for systems with internal delays
The logic above can be used to address the stabilization of a substantially wider and more challenging class of systems. Below we consider the approach of Fiagbedzi and Pearson [12], applied to the general single-delay interconnection of Fig. 1.7 on p. 10, whose dynamics are described by (1.27).
Assume that the whole state of the system, which is $(x(t),z_t)$, is measurable, so only the "state dynamics" part,
$$\begin{bmatrix}\dot x(t)\\z(t)\end{bmatrix} = \begin{bmatrix}A&B_w\\C_z&D_{zw}\end{bmatrix}\begin{bmatrix}x(t)\\z(t-\tau)\end{bmatrix}+\begin{bmatrix}B_u\\D_{zu}\end{bmatrix}u(t), \qquad(3.33)$$
is required. The characteristic function $\chi_\tau(s)$ associated with this equation is given by (2.3) on p. 21. This is a quasi-polynomial with an infinite number of roots. As such, finite spectrum assignment should not be expected in general. Still, under the already familiar assumption that $\rho(D_{zw})<1$, there is only a finite number of roots of $\chi_\tau(s)$ in the closed right-half plane $\bar{\mathbb C}_0$. Hence, the ideas of FSA / reduction can be applied only to unstable open-loop poles.
Introduce the $\tilde n$-dimensional signal
$$\tilde x(t) := Qx(t)+\int_{t-\tau}^{t}\mathrm e^{\tilde A(t-r)}Rz(r)\,\mathrm dr$$
for some matrices $\tilde A\in\mathbb R^{\tilde n\times\tilde n}$, $Q\in\mathbb R^{\tilde n\times n}$, and $R\in\mathbb R^{\tilde n\times m_z}$ to be determined. Differentiating this variable and using the Leibniz integral rule (2.20) and the relations of (3.33) for $\dot x$ and $z$, we have that
$$\begin{aligned}\dot{\tilde x}(t) &= Q\dot x(t)+Rz(t)-\mathrm e^{\tilde A\tau}Rz(t-\tau)+\tilde A\int_{t-\tau}^{t}\mathrm e^{\tilde A(t-r)}Rz(r)\,\mathrm dr\\ &= Q\bigl(Ax(t)+B_wz(t-\tau)+B_uu(t)\bigr)+R\bigl(C_zx(t)+D_{zw}z(t-\tau)+D_{zu}u(t)\bigr)-\mathrm e^{\tilde A\tau}Rz(t-\tau)+\tilde A\int_{t-\tau}^{t}\mathrm e^{\tilde A(t-r)}Rz(r)\,\mathrm dr\\ &= \tilde A\tilde x(t)+(QB_u+RD_{zu})u(t)+(QA+RC_z-\tilde AQ)x(t)+(QB_w+RD_{zw}-\mathrm e^{\tilde A\tau}R)z(t-\tau).\end{aligned}$$
The dependence on $x$ and $z$ can be eliminated if $\tilde A$, $Q$, and $R$ are chosen to satisfy the equation
$$\begin{bmatrix}Q&R\end{bmatrix}\begin{bmatrix}A&B_w\\C_z&D_{zw}\end{bmatrix} = \begin{bmatrix}\tilde AQ&\mathrm e^{\tilde A\tau}R\end{bmatrix}. \qquad(3.34)$$
If such matrices exist, then $\tilde x$ verifies the finite-dimensional dynamics
$$\dot{\tilde x}(t) = \tilde A\tilde x(t)+\tilde Bu(t),$$
where $\tilde B := QB_u+RD_{zu}$.
To understand equation (3.34), pre-multiply both its sides by a left eigenvector $\psi_i\in\mathbb C^{\tilde n}$ of $\tilde A$, corresponding to some $\lambda_i\in\operatorname{spec}(\tilde A)$. Using the fact that $\psi_i'\mathrm e^{\tilde A\tau}=\mathrm e^{\lambda_i\tau}\psi_i'$, we then obtain the equality
$$\psi_i'\begin{bmatrix}Q&R\end{bmatrix}\begin{bmatrix}\lambda_iI-A&-B_w\mathrm e^{-\lambda_i\tau}\\-C_z&I-D_{zw}\mathrm e^{-\lambda_i\tau}\end{bmatrix} = 0.$$
Hence, if $\psi_i'\begin{bmatrix}Q&R\end{bmatrix}\ne0$, then every eigenvalue $\lambda_i$ of $\tilde A$ necessarily satisfies $\chi_\tau(\lambda_i)=0$, i.e. it is a root of the characteristic function of the open-loop system (3.33). For that reason equation (3.34) is referred to as the left characteristic matrix equation. But the relation above implies that (3.34) can be viewed as a counterpart of (3.27). Thus, the control law $u(t)=\tilde F\tilde x(t)+F_vv(t)$ or, equivalently,
counterpart of (3.27). Thus, the control law u.t/ D QF Qx.t/C Fvv.t/ or, equivalently,
u.t/ D QF�
Qx.t/CZ t
t��
eQA.t�r/R´.r/dr
�
C Fvv.t/ (3.35)
is a state-feedback law assigning only a part of the open-loop spectrum, that of the matrix $\tilde A$, to $\operatorname{spec}(\tilde A+\tilde B\tilde F)$.
This conclusion can be supported by analyzing the closed-loop characteristic function, which can be derived from the relation
$$\begin{bmatrix}sI-A&-B_w\mathrm e^{-\tau s}&-B_u\\-C_z&I-D_{zw}\mathrm e^{-\tau s}&-D_{zu}\\-\tilde FQ&-\tilde F\int_0^\tau\mathrm e^{-(sI-\tilde A)t}\,\mathrm dt\,R&I\end{bmatrix}\begin{bmatrix}X(s)\\Z(s)\\U(s)\end{bmatrix} = \begin{bmatrix}0\\0\\F_v\end{bmatrix}V(s),$$
which represents the system dynamics (3.33) and the control law (3.35) in the Laplace transform domain.
The closed-loop characteristic function for this system is
$$\chi_{\tau,\mathrm{cl}}(s) = \det\begin{bmatrix}sI-A&-B_w\mathrm e^{-\tau s}&-B_u\\-C_z&I-D_{zw}\mathrm e^{-\tau s}&-D_{zu}\\-\tilde FQ&-\tilde F\int_0^\tau\mathrm e^{-(sI-\tilde A)t}\,\mathrm dt\,R&I\end{bmatrix}.$$
Using equations (A.5) and the equality
$$\begin{bmatrix}Q&\int_0^\tau\mathrm e^{-(sI-\tilde A)t}\,\mathrm dt\,R\end{bmatrix} = (sI-\tilde A)^{-1}\begin{bmatrix}Q&R\end{bmatrix}\begin{bmatrix}sI-A&-B_w\mathrm e^{-\tau s}\\-C_z&I-D_{zw}\mathrm e^{-\tau s}\end{bmatrix},$$
3.3. PROBLEM-ORIENTED CONTROLLER ARCHITECTURES: CONTROL-THEORETIC INSIGHT 55
which can be verified by straightforward algebra via (3.34), we have:
\[
\begin{aligned}
\Delta_{\tau,\mathrm{cl}}(s)
&= \det\left(\begin{bmatrix} sI-A & -B_we^{-\tau s}\\ -C_z & I-D_{zw}e^{-\tau s}\end{bmatrix}
- \begin{bmatrix} B_u\\ D_{zu}\end{bmatrix}\tilde F\begin{bmatrix} Q & \int_0^\tau e^{-(sI-\tilde A)t}\,dt\,R\end{bmatrix}\right)\\
&= \det\left(I - \begin{bmatrix} B_u\\ D_{zu}\end{bmatrix}\tilde F(sI-\tilde A)^{-1}\begin{bmatrix} Q & R\end{bmatrix}\right)
\det\begin{bmatrix} sI-A & -B_we^{-\tau s}\\ -C_z & I-D_{zw}e^{-\tau s}\end{bmatrix}\\
&= \det(sI-\tilde A)^{-1}
\det\begin{bmatrix} I & 0 & B_u\tilde F\\ 0 & I & D_{zu}\tilde F\\ Q & R & sI-\tilde A\end{bmatrix}
\det\begin{bmatrix} sI-A & -B_we^{-\tau s}\\ -C_z & I-D_{zw}e^{-\tau s}\end{bmatrix}\\
&= \frac{\det(sI-\tilde A-\tilde B\tilde F)\,\Delta_\tau(s)}{\det(sI-\tilde A)}.
\end{aligned}
\]
This proves that the closed-loop spectrum under (3.35) differs from the open-loop one only in moving the $\tilde n$ characteristic roots belonging to $\operatorname{spec}(\tilde A)$ to $\operatorname{spec}(\tilde A+\tilde B\tilde F)$, as expected.

Thus, to stabilize (3.33) we need to solve the left characteristic matrix equation (3.34) for an $\tilde A$ containing all roots of $\Delta_\tau(s)$ that are required to be shifted. This set must clearly include all unstable characteristic roots of (3.33), i.e. those in $\bar{\mathbb{C}}_0$, but might contain additional modes. There are only a finite number of such roots, and finding them is by now a well-understood numerical problem. The left characteristic matrix equation can also be solved numerically, see [12] and some later developments of the same authors.

Note that the spectral properties of $\tilde A$ effectively define that matrix unambiguously, because any similarity transformation of $\tilde A$ just leads to a corresponding scaling of $Q$ and $R$. If $D_{zw}=0$, which corresponds to a retarded system, the second column of (3.34) yields the closed form $R = e^{-\tilde A\tau}QB_w$ and then a simpler version of the first column,
\[
QA + e^{-\tilde A\tau}QB_wC_z = \tilde AQ, \tag{3.34$'$}
\]
to be solved in $\tilde A$ and $Q$, and then $\tilde B = QB_u + e^{-\tilde A\tau}QB_wD_{zu}$. We illustrate the procedure with a simple example, which represents a rare case when an analytic solution to the stabilization problem is possible.
Example 3.1. Consider the system
\[
\begin{bmatrix} \dot x(t)\\ z(t)\end{bmatrix}
= \begin{bmatrix} -1 & 1\\ 1 & 0\end{bmatrix}
\begin{bmatrix} x(t)\\ z(t-\tau)\end{bmatrix}
+ \begin{bmatrix} 1\\ 0\end{bmatrix} u(t)
\]
(i.e. $\dot x(t) = -x(t) + x(t-\tau) + u(t)$ in form (1.29)), whose characteristic function is
\[
\Delta_\tau(s) = \det\begin{bmatrix} s+1 & -e^{-\tau s}\\ -1 & 1\end{bmatrix} = s + 1 - e^{-\tau s}.
\]
First, we prove that this quasi-polynomial has no roots in $\mathbb{C}_0$. To this end, note that the magnitude relation corresponding to the characteristic equation $s+1 = e^{-\tau s}$ reads $|s+1| = |e^{-\tau s}|$. This equality is contradictory in $\operatorname{Re}s>0$, because its left-hand side is larger than $1$ and its right-hand side is smaller than $1$ there. Regarding purely imaginary roots, the magnitude relation for $s=\mathrm{j}\omega$ reads $|1+\mathrm{j}\omega|=1$, which is solvable only for $\omega=0$. Thus, $\Delta_\tau(s)$ can have its only unstable root at the origin. It is easy to verify that $\Delta_\tau(0)=0$ for all $\tau$, indeed. This root is simple, which can be seen from
\[
\lim_{s\to 0}\frac{\Delta_\tau(s)}{s} = 1 + \lim_{s\to 0}\frac{1-e^{-\tau s}}{s} = 1 + \lim_{s\to 0}\frac{\tau e^{-\tau s}}{1} = 1+\tau \ne 0.
\]
Thus, to stabilize this system we only need to move its single pole at the origin. This suggests the choice $\tilde n=1$ and $\tilde A=0$. Because $D_{zw}=0$ in this case, we only solve (3.34$'$), which reads $-Q+Q=0$ and is thus solvable by any $Q\ne 0$. This yields $R=Q$ and $\tilde B=Q$, so that the reduced system is $\dot{\tilde x}(t) = Qu(t)$,
56 CHAPTER 3. STABILIZATION OF TIME-DELAY SYSTEMS
which is a plain integrator. This system is stabilizable, and the control law $u(t) = -(k/Q)\tilde x(t)$ for $k>0$ stabilizes it, assigning its pole to $s=-k$. But this implies that the control law
\[
u(t) = F_vv(t) - k\Bigl(x(t) + \int_{t-\tau}^{t} x(r)\,dr\Bigr)
\]
assigns
\[
\Delta_{\tau,\mathrm{cl}}(s) = \frac{(s+k)(s+1-e^{-\tau s})}{s},
\]
which is an entire function, although not a finite quasi-polynomial. Note that the control law above is independent of the choice of $Q$. ▽
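The closed loop of this example can also be checked by direct simulation. The sketch below is not part of the notes: it is a minimal plain-Python forward-Euler integration with a delay buffer (the step size, horizon, and constant initial history are arbitrary illustrative choices). It integrates $\dot x(t) = -x(t)+x(t-\tau)+u(t)$ under $u(t) = -k\bigl(x(t)+\int_{t-\tau}^t x(r)\,dr\bigr)$ (with $v=0$) and verifies that the state decays, consistent with the closed-loop pole at $s=-k$ and the stable nonzero roots of $s+1-e^{-\tau s}$.

```python
from collections import deque

def simulate(k=1.0, tau=1.0, dt=1e-3, T=20.0, x0=1.0):
    """Forward-Euler simulation of x'(t) = -x(t) + x(t-tau) + u(t)
    with u(t) = -k*(x(t) + integral of x over [t - tau, t])."""
    n = int(round(tau / dt))            # delay expressed in steps
    hist = deque([x0] * (n + 1))        # x on [t - tau, t]; hist[0] is the oldest sample
    integral = x0 * tau                 # left-rectangle approximation of the integral
    for _ in range(int(round(T / dt))):
        x, x_delayed = hist[-1], hist[0]
        u = -k * (x + integral)
        x_new = x + dt * (-x + x_delayed + u)
        integral += dt * (hist[-1] - hist[0])   # slide the integration window one step
        hist.popleft()
        hist.append(x_new)
    return hist[-1]

x_final = simulate()
assert abs(x_final) < 1e-2              # the pole at the origin has been moved to s = -k
```

The decay is insensitive to the particular $k>0$, as the remaining (uncontrolled) characteristic roots stay in the open left half-plane.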
In some special cases, when the open-loop characteristic function $\Delta_\tau(s)$ is a plain polynomial, the left characteristic matrix equation can be solved analytically. For instance, if applied to the input-delay system, whose state-space realization was discussed in Example 2.1 on p. 21 and for which $z=u$, it reads
\[
\begin{bmatrix} Q & R\end{bmatrix}\begin{bmatrix} A & B\\ 0 & 0\end{bmatrix}
= \begin{bmatrix} \tilde AQ & e^{\tilde A\tau}R\end{bmatrix}.
\]
If we choose $\tilde n=n$, then this equation is obviously solved by $\tilde A=A$, $Q=I$, and $R=e^{-A\tau}B$, so that $\tilde B=B$ and we end up with the control law (3.22), as expected.
Another special case with a relatively simple solution is that where $D_{zw}=0$ and the matrices $A$ and $B_wC_z$ have the following structure:
\[
A = \begin{bmatrix}
* & * & \cdots & * & *\\
0 & * & \cdots & * & *\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & * & *\\
0 & 0 & \cdots & 0 & *
\end{bmatrix}
\quad\text{and}\quad
A_\tau := B_wC_z = \begin{bmatrix}
0 & * & \cdots & * & *\\
0 & 0 & \cdots & * & *\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 0 & *\\
0 & 0 & \cdots & 0 & 0
\end{bmatrix},
\]
where the asterisk stands for a potentially nonzero element. This class of DDEs is a particular case of retarded systems described by (1.29), where the open-loop characteristic function is still a plain polynomial. We can then always choose $\tilde n=n$ and $Q=I$, so that (3.34$'$) reads
\[
\tilde A - A = e^{-\tilde A\tau}B_wC_z. \tag{3.34$''$}
\]
This equation can be solved iteratively in an upper-triangular $\tilde A$, column by column, starting from the first column. As a matter of fact, this immediately yields that $\tilde a_{ii} = a_{ii}$ for all $i\in\mathbb{Z}_{1..n}$. We provide a flavor of this procedure via an example.
Example 3.2. Consider the problem of stabilizing a chain of $n$ integrators connected in series with equal delays in between, see Fig. 3.7. This setup can be viewed as a baby-platooning problem, with no constraints on the information exchange between the involved vehicles. Assuming that all states of the integrators are measurable, this system can be described by the interconnection in Fig. 1.7 with
\[
G = \begin{bmatrix} J_n/s & e_n\\ I_n/s & 0\end{bmatrix}
= \begin{bmatrix} 0 & I_n & 0\\ J_n & 0 & e_n\\ I_n & 0 & 0\end{bmatrix},
\]
where $e_i$ stands for the $i$th standard basis vector of $\mathbb{R}^n$,
\[
J_n := \begin{bmatrix} 0 & e_1 & \cdots & e_{n-1}\end{bmatrix}
= \begin{bmatrix}
0 & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 1\\
0 & 0 & \cdots & 0
\end{bmatrix}
\]
Fig. 3.7: Chain of $n$ integrators with communication delays (the input $u$ drives $x_n$, and each link toward $x_1$ is a delayed integrator $e^{-\tau s}/s$)
is the $n$-dimensional Jordan block corresponding to the zero eigenvalue, and the state components are
\[
x(t) = \begin{bmatrix} x_1(t)\\ \vdots\\ x_{n-1}(t)\\ x_n(t)\end{bmatrix}\in\mathbb{R}^n
\quad\text{and}\quad
z(t) = \begin{bmatrix} x_2(t)\\ \vdots\\ x_n(t)\\ u(t)\end{bmatrix}\in\mathbb{R}^n.
\]
The characteristic function of this system, $\Delta_\tau(s)=s^n$, is expectedly a plain polynomial.

Equation (3.34$''$) for this system reads $\tilde A = e^{-\tilde A\tau}J_n$ and can equivalently be presented as the recursion
\[
\tilde Ae_1 = 0\quad\text{and}\quad \tilde Ae_i = e^{-\tilde A\tau}e_{i-1},\quad i\in\mathbb{Z}_{2..n}.
\]
This relation defines an upper-triangular nilpotent $\tilde A$, whose $i$th column coincides with the $(i-1)$th column of the matrix exponential $e^{-\tilde A\tau}$. By the time we evaluate the $i$th column of $\tilde A$, all previous columns are already determined. And the upper-triangular structure of $\tilde A$ implies that the first $i-1$ columns of $e^{-\tilde A\tau}$ depend only on the first $i-1$ columns of $\tilde A$. Therefore, the recursion above can be implemented.
To illustrate the procedure, consider it in detail for $n=3$. In this case the steps of the recursion above are as follows:
\[
\begin{aligned}
1\!:\ & \tilde Ae_1 = 0 \;\Longrightarrow\;
e^{-\tilde A\tau}e_1 = \exp\!\left(\begin{bmatrix} 0 & * & *\\ 0 & * & *\\ 0 & * & *\end{bmatrix}\tau\right)\! e_1
= \begin{bmatrix} 1 & * & *\\ 0 & * & *\\ 0 & * & *\end{bmatrix} e_1 = e_1\\
2\!:\ & \tilde Ae_2 = e^{-\tilde A\tau}e_1 = e_1 \;\Longrightarrow\;
e^{-\tilde A\tau}e_2 = \exp\!\left(\begin{bmatrix} 0 & -1 & *\\ 0 & 0 & *\\ 0 & 0 & *\end{bmatrix}\tau\right)\! e_2
= \begin{bmatrix} 1 & -\tau & *\\ 0 & 1 & *\\ 0 & 0 & *\end{bmatrix} e_2 = e_2 - \tau e_1\\
3\!:\ & \tilde Ae_3 = e^{-\tilde A\tau}e_2 = e_2 - \tau e_1,
\end{aligned}
\]
where "$*$" stands for not-yet-evaluated elements. Having determined $\tilde A$, we can find its matrix exponential
\[
e^{\tilde At} = \exp\!\left(\begin{bmatrix} 0 & 1 & -\tau\\ 0 & 0 & 1\\ 0 & 0 & 0\end{bmatrix} t\right)
= \begin{bmatrix} 1 & t & t(t-2\tau)/2\\ 0 & 1 & t\\ 0 & 0 & 1\end{bmatrix},
\]
required for
\[
\tilde B = e^{-\tilde A\tau}B_wD_{zu}
= \begin{bmatrix} 1 & -\tau & 3\tau^2/2\\ 0 & 1 & -\tau\\ 0 & 0 & 1\end{bmatrix} e_n
= \begin{bmatrix} 3\tau^2/2\\ -\tau\\ 1\end{bmatrix}
\]
and for the matrix function of $r$
\[
e^{\tilde A(t-r)}R = e^{\tilde A(t-r-\tau)}B_w
= \begin{bmatrix} 1 & t-r-\tau & (t-r-\tau)(t-r-3\tau)/2\\ 0 & 1 & t-r-\tau\\ 0 & 0 & 1\end{bmatrix},
\]
which appears in the control law (3.35). As $(\tilde A,\tilde B)$ is controllable, we can find $\tilde F$ assigning the eigenvalues of $\tilde A+\tilde B\tilde F$ arbitrarily. Choosing the closed-loop characteristic function as $\Delta_{\tau,\mathrm{cl}}(s) = (s+\alpha)^3$ for some $\alpha>0$, the application of Ackermann's formula yields
\[
\tilde F = -\begin{bmatrix} \alpha^3 & \alpha^2(2\tau\alpha+3) & \alpha\bigl((\tau\alpha+3)^2-3\bigr)/2\end{bmatrix}.
\]
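The pole-assignment claim can be verified numerically. The following sketch (plain Python; the values of $\tau$ and $\alpha$ are arbitrary illustrative choices) builds $\tilde A+\tilde B\tilde F$ from the formulae above and checks, via the trace/principal-minor/determinant expressions for a $3\times 3$ characteristic polynomial, that it equals $(s+\alpha)^3$.

```python
# Verify that F assigns char(Atil + Btil*F) = (s + alpha)^3 for the n = 3 chain.
tau, alpha = 0.5, 2.0                      # illustrative values

Atil = [[0.0, 1.0, -tau],
        [0.0, 0.0, 1.0],
        [0.0, 0.0, 0.0]]
Btil = [1.5*tau**2, -tau, 1.0]
F = [-alpha**3, -alpha**2*(2*tau*alpha + 3), -alpha*((tau*alpha + 3)**2 - 3)/2]

M = [[Atil[i][j] + Btil[i]*F[j] for j in range(3)] for i in range(3)]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

tr = M[0][0] + M[1][1] + M[2][2]
tr2 = sum(M[i][j]*M[j][i] for i in range(3) for j in range(3))   # trace of M^2
# det(sI - M) = s^3 + c2 s^2 + c1 s + c0
c2, c1, c0 = -tr, (tr*tr - tr2)/2.0, -det3(M)
# target: (s + alpha)^3 = s^3 + 3a s^2 + 3a^2 s + a^3
assert abs(c2 - 3*alpha) < 1e-9
assert abs(c1 - 3*alpha**2) < 1e-9
assert abs(c0 - alpha**3) < 1e-9
```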
Fig. 3.8: Internal stability setups: (a) general plant; (b) dead-time plant $P\bar D_\tau$; (c) delayed controller $K\bar D_\tau$
The control law (3.35) is then of the form
\[
\begin{aligned}
u(t) = F_vv(t) &- \alpha^3 x_1(t) - \alpha^2\Bigl((2\tau\alpha+3)x_2(t) + \alpha\int_{t-\tau}^{t} x_2(r)\,dr\Bigr)\\
&- \alpha\Bigl(\frac{(\tau\alpha+3)^2-3}{2}\,x_3(t) - \alpha\int_{t-\tau}^{t}\bigl(\alpha r - 3 - \alpha(t+\tau)\bigr)x_3(r)\,dr\Bigr)\\
&- \alpha\int_{t-\tau}^{t}\Bigl(\frac{\alpha^2}{2}r^2 - (3+\alpha t)\alpha r + \frac{(\alpha t+3)^2-3}{2}\Bigr)u(r)\,dr
\end{aligned}
\]
and it assigns all three closed-loop poles to $s=-\alpha$.
For a general $n$ the formulae are more involved, but an analytic form of $\tilde A$ and its exponential can still be derived. Namely, it can be shown that
\[
\tilde Ae_1 = 0\quad\text{and}\quad
\tilde Ae_i = -\sum_{j=1}^{i-1}\frac{(j-i)^{i-j-2}}{(i-j-1)!}\,\tau^{i-j-1}e_j,\quad i\in\mathbb{Z}_{2..n},
\]
and
\[
e^{\tilde At}e_i = \sum_{j=1}^{i}\frac{t\,(t-(i-j)\tau)^{i-j-1}}{(i-j)!}\,e_j,\quad i\in\mathbb{Z}_{1..n}
\]
(mind that $0!=1$, as customary, and that the $j=i$ summand equals $1$), which are both Toeplitz. ▽
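The recursion and the closed-form expression for $\tilde A$ can be cross-checked in exact arithmetic. The sketch below is illustrative (plain Python with `fractions`; the size $n$ and the value of $\tau$ are arbitrary): it exploits the fact that the exponential of a nilpotent matrix is a finite sum, runs the column-by-column recursion, and compares the result with the displayed formula for $\tilde Ae_i$.

```python
from fractions import Fraction
from math import factorial

n, tau = 5, Fraction(1, 2)                 # illustrative size and delay

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def nilpotent_exp(A):
    """exp(A) for a nilpotent n-by-n A via the finite series sum_k A^k / k!."""
    E = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    P = [row[:] for row in E]              # running power A^k
    for k in range(1, n):
        P = mat_mul(P, A)
        for i in range(n):
            for j in range(n):
                E[i][j] += P[i][j] / factorial(k)
    return E

# recursion: Atil e_1 = 0 and Atil e_i = exp(-Atil*tau) e_{i-1}
Atil = [[Fraction(0)]*n for _ in range(n)]
for i in range(1, n):                      # 0-based column i corresponds to e_{i+1}
    E = nilpotent_exp([[-x*tau for x in row] for row in Atil])
    for r in range(n):
        Atil[r][i] = E[r][i-1]             # i-th column = (i-1)-th column of exp(-Atil*tau)

# closed form: Atil e_i = -sum_{j<i} (j-i)^{i-j-2} tau^{i-j-1} / (i-j-1)! e_j
for i in range(1, n):
    for j in range(i):
        m = i - j
        assert Atil[j][i] == -Fraction(j - i)**(m - 2) * tau**(m - 1) / factorial(m - 1)
    assert all(Atil[r][i] == 0 for r in range(i, n))   # strictly upper triangular
```

For $n=3$ this reproduces $\tilde A$ with columns $0$, $e_1$, and $e_2-\tau e_1$, exactly as in the worked steps above.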
3.4 Loop shifting and all stabilizing controllers
In this section another technique is discussed, which shows that the dead-time compensation is actually an
intrinsic part of every stabilizing controller. This offers further insight into the stabilization of input-delay
systems and justification of the DTC architecture.
3.4.1 Internal stability and loop shifting
We start with describing some general ideas. Consider the feedback interconnection in Fig. 3.8(a), where $P$ and $K$ are an LTI plant and controller, respectively. It defines the relation
\[
\begin{bmatrix} I & -P\\ -K & I\end{bmatrix}\begin{bmatrix} y\\ u\end{bmatrix}
= \begin{bmatrix} I\\ 0\end{bmatrix}\begin{bmatrix} I & P\end{bmatrix}\begin{bmatrix} v_o\\ v_i\end{bmatrix}. \tag{3.36}
\]
We say that this interconnection is well posed if
\[
\text{there is } \alpha>0 \text{ such that } (I-P(s)K(s))^{-1} \text{ exists and is bounded for all } s\in\mathbb{C}_\alpha.
\]
If both $P$ and $K$ are finite dimensional, then this condition reads as the nonsingularity of $I-P(\infty)K(\infty)$. In any case, well-posedness requires the properness of $(I-P(s)K(s))^{-1}$ and implies that the transfer function
\[
\begin{bmatrix} I & -P(s)\\ -K(s) & I\end{bmatrix}^{-1}
= \begin{bmatrix} 0 & 0\\ 0 & I\end{bmatrix}
+ \begin{bmatrix} I\\ K(s)\end{bmatrix}(I-P(s)K(s))^{-1}\begin{bmatrix} I & P(s)\end{bmatrix},
\]
3.4. LOOP SHIFTING AND ALL STABILIZING CONTROLLERS 59
Fig. 3.9: Loop shifting by $\Pi$, with $\tilde P := P+\Pi$ and $\tilde K := K(I+\Pi K)^{-1}$: (a) equivalent of Fig. 3.8(a); (b) moving summing and pickoff points; (c) $\tilde y = y+\Pi u$ and $\tilde v_o = v_o-\Pi v_i$; (d) $\tilde u = u$ and $\tilde v_i = v_i$; (e) equivalent final loop
which can be derived by (A.3), is proper.

The feedback interconnection in Fig. 3.8(a) is said to be internally stable if all four systems $\left[\begin{smallmatrix} v_o\\ v_i\end{smallmatrix}\right]\mapsto\left[\begin{smallmatrix} y\\ u\end{smallmatrix}\right]$ are stable. The reason for considering all possible closed-loop systems, rather than only one of them, lies in the need to rule out stabilization of certain closed-loop systems via unstable cancellations in the loop. It is readily seen that the system is internally stable iff
\[
\begin{bmatrix} I & -P\\ -K & I\end{bmatrix}^{-1}\begin{bmatrix} I\\ 0\end{bmatrix}\begin{bmatrix} I & P\end{bmatrix}
= \begin{bmatrix} I\\ K\end{bmatrix}(I-PK)^{-1}\begin{bmatrix} I & P\end{bmatrix}
=: \begin{bmatrix} S_o & T_d\\ T_c & T_i\end{bmatrix}\in H_\infty,
\]
where $S_o$ is the output sensitivity, $T_i$ is the input complementary sensitivity, $T_d$ is the disturbance sensitivity, and $T_c$ is the control sensitivity system, aka the Gang of Four. It is worth emphasizing that internal stability requires the system to be well posed (otherwise $S_o$ cannot be in $H_\infty$) and $K(s)$ to be proper (otherwise $T_c$ cannot be in $H_\infty$).
Consider now the sequence of block-diagram transformations presented in Fig. 3.9. They transform the system in Fig. 3.8(a) to that in Fig. 3.9(e) by redistributing dynamics between its elements on the basis of a chosen linear $\Pi$. The transformation has different effects on the two parts of the loop: it adds $\Pi$ in parallel to the plant $P$ and in feedback around the controller $K$. Because these manipulations do not alter the loop itself, we can expect that properties of the original setup in Fig. 3.8(a) can be analyzed in terms of the transformed (shifted) setup in Fig. 3.9(e). The hope is that the latter might be easier to analyze for an appropriate choice of $\Pi$. This is an old idea, see [79, Sec. 6.2] or [11, Sec. III.6] and the references therein, which has been extensively used in nonlinear control, e.g. to introduce a required degree of passivity to the plant.

Two technical aspects of the loop-shifting technique have to be kept in mind. First, the elements of the shifted loop should themselves be well posed. This is obviously always true for $\tilde P = P+\Pi$, but might require extra attention for the feedback $\tilde K = K(I+\Pi K)^{-1}$. Second, the manipulations in Fig. 3.9 alter both exogenous and internal signals of the loop, in the sense that the signals of interest in Fig. 3.8(a) and their counterparts in Fig. 3.9(e) are related by
\[
\begin{bmatrix} \tilde y\\ \tilde u\end{bmatrix}
= \begin{bmatrix} I & \Pi\\ 0 & I\end{bmatrix}\begin{bmatrix} y\\ u\end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix} \tilde v_o\\ \tilde v_i\end{bmatrix}
= \begin{bmatrix} I & -\Pi\\ 0 & I\end{bmatrix}\begin{bmatrix} v_o\\ v_i\end{bmatrix}. \tag{3.37}
\]
These relations follow by standard block-diagram manipulation rules in the transition from Fig. 3.9(b) to Fig. 3.9(c). They suggest that $\Pi$ itself should be kept stable in order to preserve internal stability under loop shifting. But apart from the limitations above, the choice of $\Pi$ is arbitrary, which renders the method quite flexible in its abilities to affect properties of $\tilde P$.

Our interest is to exploit those abilities in the context of dead-time plants of the form $P\bar D_\tau$. To be specific, the goal is to construct $\Pi$ with which $\tilde P = P\bar D_\tau + \Pi$ is finite dimensional. This idea can be traced back to [7], where it was applied to a wider class of infinite-dimensional systems. Implicitly, it is also related to the earlier dead-time compensation developments in [77] discussed in §3.2.1, cf. (3.15).
3.4.2 Preliminary: truncation and completion operators
Consider a finite-dimensional system $G$ given by its state-space realization $G(s) = D + C(sI-A)^{-1}B$ (not necessarily minimal). Its impulse response is $g(t) = D\delta(t) + Ce^{At}B\,\mathbb{1}(t)$. By the FIR truncation operator $\pi_\tau\{G\}$ associated with this $G$ and a constant $\tau>0$ we understand the system whose impulse response is the truncation of $g(t)$ to the interval $[0,\tau]$; this operator is visualized in (3.38) as the mapping that zeroes the impulse response beyond $t=\tau$. Formally, the impulse response of this system is
\[
g(t)\bigl(\mathbb{1}(t)-\mathbb{1}(t-\tau)\bigr)
= g(t) - Ce^{At}B\,\mathbb{1}(t-\tau)
= g(t) - Ce^{A\tau}e^{A(t-\tau)}B\,\mathbb{1}(t-\tau).
\]
The last term above can be recognized as the impulse response of the dead-time system $\hat G\bar D_\tau$, where
\[
\hat G(s) = \begin{bmatrix} A & B\\ Ce^{A\tau} & 0\end{bmatrix}
= \begin{bmatrix} A & e^{A\tau}B\\ C & 0\end{bmatrix}. \tag{3.39}
\]
This immediately yields one possible representation of the transfer function of the truncation operator,
\[
\pi_\tau\biggl\{\begin{bmatrix} A & B\\ C & D\end{bmatrix}\biggr\}
= G(s) - \hat G(s)e^{-\tau s}
= \begin{bmatrix} A & B\\ C & D\end{bmatrix}
- \begin{bmatrix} A & B\\ Ce^{A\tau} & 0\end{bmatrix}e^{-\tau s}. \tag{3.40a}
\]
It can be verified by direct substitution that all singularities of the function above, which are the eigenvalues of $A$, are removable. But the same function can be represented in a different form, where no singularities are present at all. This representation follows directly from the Laplace transform of the impulse response of $\pi_\tau\{G\}$, i.e. $\mathcal{L}\{D\delta(t) + Ce^{At}B\,\mathbb{1}_{[0,\tau]}(t)\}$. Straightforward use of (B.16) then yields
\[
\pi_\tau\biggl\{\begin{bmatrix} A & B\\ C & D\end{bmatrix}\biggr\}
= D + C\int_0^\tau e^{-(sI-A)t}\,dt\,B. \tag{3.40b}
\]
This is an entire function and, because $\|e^{-(sI-A)t}\| < \|e^{At}\|$ for all $\operatorname{Re}s>0$, it is bounded in $\mathbb{C}_0$. Hence, $\pi_\tau\{G\}\in H_\infty$ regardless of the stability properties of $G$.
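These two representations can be compared numerically for a scalar, even unstable, $G$. The sketch below (plain Python with `cmath`; the realization data $a$, $b$, $c$, $d$ and the delay are arbitrary illustrative numbers) evaluates (3.40b) in closed form and checks it against $G(s) - \hat G(s)e^{-\tau s}$ from (3.40a), as well as boundedness near the removable singularity at $s=a$.

```python
import cmath

a, b, c, d, tau = 0.8, 1.0, 2.0, 0.5, 1.3   # pole at s = a > 0: G itself is unstable

def G(s):
    return d + c*b/(s - a)

def Ghat(s):                                 # (3.39): same A, B; output matrix C*exp(A*tau)
    return c*cmath.exp(a*tau)*b/(s - a)

def trunc(s):                                # (3.40b); for scalar A the integral is explicit
    if abs(s - a) < 1e-9:
        return d + c*b*tau                   # removable singularity: the limit value
    return d + c*b*(1 - cmath.exp(-(s - a)*tau))/(s - a)

s0 = 0.3 + 2.1j
assert abs(trunc(s0) - (G(s0) - Ghat(s0)*cmath.exp(-tau*s0))) < 1e-12
# entire and bounded near s = a, where G and Ghat separately blow up
assert abs(trunc(a + 1e-7) - trunc(a)) < 1e-5
```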
A dual, in a sense, operator to the truncation is the FIR completion operator $\pi_\tau\{G\bar D_\tau\}$, associated with the input-delay alteration of $G$. It is defined as the truncation of the delay-free system $\tilde G$ whose impulse response matches that of $G\bar D_\tau$ in the whole interval $(\tau,\infty)$; it can be viewed as the mapping in (3.41), which fills in the first $\tau$ time units of the delayed response. This $\tilde G$ can be determined from the impulse-response equality
\[
\tilde g(t) = g(t-\tau) = D\delta(t-\tau) + Ce^{A(t-\tau)}B\,\mathbb{1}(t-\tau) = Ce^{-A\tau}e^{At}B,\qquad \forall t>\tau.
\]
A finite-dimensional $\tilde G$ satisfying this equality has the transfer function
\[
\tilde G(s) = \begin{bmatrix} A & B\\ Ce^{-A\tau} & 0\end{bmatrix}
= \begin{bmatrix} A & e^{-A\tau}B\\ C & 0\end{bmatrix}. \tag{3.42}
\]
By definition,
\[
\pi_\tau\biggl\{\begin{bmatrix} A & B\\ C & D\end{bmatrix}e^{-\tau s}\biggr\}
= \pi_\tau\biggl\{\begin{bmatrix} A & B\\ Ce^{-A\tau} & D\end{bmatrix}\biggr\} - (1+e^{-\tau s})D,
\]
so the following two representations follow from (3.40) via the substitution $C\to Ce^{-A\tau}$:
\[
\pi_\tau\biggl\{\begin{bmatrix} A & B\\ C & D\end{bmatrix}e^{-\tau s}\biggr\}
= \tilde G(s) - G(s)e^{-\tau s}
= \begin{bmatrix} A & B\\ Ce^{-A\tau} & 0\end{bmatrix}
- \begin{bmatrix} A & B\\ C & D\end{bmatrix}e^{-\tau s} \tag{3.43a}
\]
\[
= Ce^{-A\tau}\int_0^\tau e^{-(sI-A)t}\,dt\,B - e^{-\tau s}D, \tag{3.43b}
\]
and we again have that $\pi_\tau\{G\bar D_\tau\}\in H_\infty$ regardless of the stability of $G$. Comparing this expression with (3.15), we can conclude that the MSP of Watanabe–Ito is effectively $\pi_\tau\{P\bar D_\tau\}$ for $P$ with a strictly proper transfer function.
As was already discussed in Remark 1.4 on p. 6, the FIR systems $\pi_\tau\{G\}$ and $\pi_\tau\{G\bar D_\tau\}$ are sometimes referred to as distributed-delay systems. Although this terminology is avoided throughout the notes, one can encounter it in the literature.
3.4.3 Loop shifting for dead-time systems
Return now to the internal stability problem. Consider the input-delay version of the system presented in Fig. 3.8(a), where the plant is of the form $P\bar D_\tau$, see Fig. 3.8(b). With the loop-shifting procedure in mind and a systematic way to produce stable systems $\Pi = \pi_\tau\{P\bar D_\tau\}$ rendering $P\bar D_\tau + \Pi$ finite dimensional, the following result can be formulated:

Theorem 3.2. $K$ internally stabilizes the dead-time system in Fig. 3.8(b) iff
\[
K = \tilde K\bigl(I - \pi_\tau\{P\bar D_\tau\}\tilde K\bigr)^{-1} \tag{3.44}
\]
for some $\tilde K$ internally stabilizing the finite-dimensional $\tilde P = P\bar D_\tau + \pi_\tau\{P\bar D_\tau\}$ in the setup of Fig. 3.9(e).

Proof. The result follows from the loop-shifting procedure with $\Pi = \pi_\tau\{P\bar D_\tau\}$. Because $\tilde P(s)$ constructed according to (3.42) is strictly proper, we have that $\lim_{\alpha\to\infty}\sup_{s\in\mathbb{C}_\alpha}\|\Pi(s)\| = 0$, meaning that the feedback in the interconnection generating $\tilde K$ is well posed whenever $K(s)$ is proper.
Another, algebraic, way to see the equivalence is via rewriting (3.36) as
\[
\begin{bmatrix} I & 0\\ 0 & I+K\Pi\end{bmatrix}
\begin{bmatrix} I & -\tilde P\\ -\tilde K & I\end{bmatrix}
\begin{bmatrix} \tilde y\\ \tilde u\end{bmatrix}
= \begin{bmatrix} I\\ 0\end{bmatrix}\begin{bmatrix} I & \tilde P\end{bmatrix}
\begin{bmatrix} \tilde v_o\\ \tilde v_i\end{bmatrix},
\]
where (3.37) and the formula $\tilde K = (I+K\Pi)^{-1}K$ are used. Because the chosen $\Pi\in H_\infty$, the boundedness of $y$ and $u$ for all bounded $v_o$ and $v_i$ is equivalent to that of their "tilded" versions in (3.37).
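The factorization behind this equivalence can be seen in a scalar frequency-domain check. The sketch below is illustrative (plain Python with `cmath`; the first-order plant, the delay, and the static $\tilde K$ are arbitrary choices, not data from the notes): with $\Pi = \pi_\tau\{P\bar D_\tau\}$ and $K$ given by (3.44), the return difference of the dead-time loop factors as $(1-\tilde P\tilde K)/(1-\Pi\tilde K)$, so the closed-loop poles come from the finite-dimensional pair $(\tilde P,\tilde K)$.

```python
import cmath

tau = 0.7

def P(s):   return 1.0/(s + 1.0)                # finite-dimensional part of the plant
def Pd(s):  return P(s)*cmath.exp(-tau*s)       # dead-time plant P*exp(-tau*s)
def Pt(s):  return cmath.exp(-tau)/(s + 1.0)    # completion P~ via realization (3.42)
def Pi(s):  return Pt(s) - Pd(s)                # Pi = pi_tau{P*D}, cf. (3.43a)
def Kt(s):  return -5.0                         # some K~ stabilizing P~ (static gain here)
def K(s):   return Kt(s)/(1.0 - Pi(s)*Kt(s))    # controller (3.44)

s0 = 0.4 + 1.3j
lhs = 1.0 - Pd(s0)*K(s0)
rhs = (1.0 - Pt(s0)*Kt(s0))/(1.0 - Pi(s0)*Kt(s0))
assert abs(lhs - rhs) < 1e-12
```

Since $\Pi$ is entire and bounded in $\mathbb{C}_0$, the factor $1-\Pi\tilde K$ introduces no unstable dynamics, in line with the theorem.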
It is perhaps not surprising now that the controller defined by (3.44) is exactly of the DTC-based form presented in Fig. 3.6, with the primary controller $\tilde C = -\tilde K$ and the DTC element $\Pi = \pi_\tau\{P\bar D_\tau\}$. However, unlike the clever guesses in Section 3.2 or the use of state-feedback-based architectures in Section 3.3, this architecture shows up as an outcome of an abstract procedure, not connected with any particular choice of
Fig. 3.10: All stabilizing controllers for $P\bar D_\tau$ (an LFT of $\tilde J = \left[\begin{smallmatrix}\tilde J_{11} & \tilde J_{12}\\ \tilde J_{21} & \tilde J_{22}\end{smallmatrix}\right]$, the DTC element $\Pi$, and the parameter $\tilde Q$)
the controller structure. We thus may claim that every stabilizing controller for the dead-time plant $P\bar D_\tau$ can be cast as a dead-time compensator.

This claim is further strengthened by the fact that we can parametrize all stabilizing controllers $\tilde K$ for the finite-dimensional $\tilde P$. This parametrization, known as the Youla–Kučera parametrization, can be presented in several different forms, see [44, Sec. 6.1] for details. We essentially need only one of them, that given in the result below.
Theorem 3.3 (delay-free Youla–Kučera parametrization). Consider an LTI plant $P$, given in terms of its stabilizable and detectable realization $P(s) = C(sI-A)^{-1}B$. A controller $K$ internally stabilizes $P$ iff $K = \mathcal{F}_\ell(J,Q)$ for
\[
J(s) = \begin{bmatrix} J_{11}(s) & J_{12}(s)\\ J_{21}(s) & J_{22}(s)\end{bmatrix}
= \begin{bmatrix} A+BF+LC & -L & B\\ F & 0 & I\\ -C & I & 0\end{bmatrix}, \tag{3.45}
\]
where $F$ and $L$ are any matrices such that $A+BF$ and $A+LC$ are Hurwitz, and some stable $Q$.
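For a scalar plant, the choice $Q=0$ yields the observer-based "central" controller, whose closed-loop poles are the eigenvalues of $A+BF$ and $A+LC$. The sketch below (plain Python; the plant data and gains are arbitrary illustrative numbers) checks this via the return difference of the loop:

```python
a, b, c = 1.0, 1.0, 1.0                       # unstable scalar plant P(s) = c/(s - a)
F, L = -3.0, -3.0                             # a + b*F = -2 and a + L*c = -2: Hurwitz

def P(s):
    return c/(s - a)

def K(s):                                     # central controller Fl(J, 0) from (3.45)
    return F*(-L)/(s - (a + b*F + L*c))

# (1 - P K) multiplied by the plant and controller denominators must equal
# (s - eig(A+BF)) * (s - eig(A+LC)) = (s + 2)^2 here
s0 = 0.7 + 1.1j
val = (1 - P(s0)*K(s0)) * (s0 - a) * (s0 - (a + b*F + L*c))
assert abs(val - (s0 + 2.0)**2) < 1e-9
```

The check confirms the separation structure: the state-feedback and observer poles appear unchanged in the closed loop, regardless of the (unstable) plant pole.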
The application of this result to the dead-time plant $P\bar D_\tau$ is straightforward in the light of Theorem 3.2. Indeed, that result establishes that the stabilization of $P\bar D_\tau$ reduces to that of the finite-dimensional $\tilde P$, whose state-space realization is $\tilde P(s) = Ce^{-A\tau}(sI-A)^{-1}B$, as in (3.42). If the original realization of $P$ is stabilizable and detectable, then so is the realization of $\tilde P$ above. Hence, the set of all stabilizing controllers for $\tilde P$ is $\tilde K = \mathcal{F}_\ell(\tilde J,Q)$, where
\[
\tilde J(s) = \begin{bmatrix} \tilde J_{11}(s) & \tilde J_{12}(s)\\ \tilde J_{21}(s) & \tilde J_{22}(s)\end{bmatrix}
= \begin{bmatrix} A+BF+e^{A\tau}LCe^{-A\tau} & -e^{A\tau}L & B\\ F & 0 & I\\ -Ce^{-A\tau} & I & 0\end{bmatrix}. \tag{3.46}
\]
Combining this result with that of Theorem 3.2, we end up with a complete parametrization of all stabilizing controllers in the form depicted in Fig. 3.10. This is again a DTC-based controller.
3.4.4 Potential extensions
The loop-shifting idea is not limited to the transformations of Fig. 3.9. An obvious alteration is to change the direction of the "$\Pi$" blocks in Fig. 3.9(a) or, equivalently, to swap $P$ with $K$ and adjust the involved signals appropriately. With the now familiar constraints on the stability of $\Pi$ and the well-posedness of the internal feedback loop, such a transformation would again convert the stabilization problem for the system in Fig. 3.8(a) to that in Fig. 3.9(e), but now with
\[
\tilde P = P(I+\Pi P)^{-1}\quad\text{and}\quad \tilde K = K+\Pi.
\]
This kind of loop shifting can be advantageous for some classes of systems with internal delays. For example, let $P(s) = 1/(1-ae^{-\tau s})$ and apply the transformation above with $\Pi(s) = ae^{-\tau s}$, which is
3.5. DELAY AS A CONSTRAINT: EXTRACTION 63
Fig. 3.11: General loop skewing
obviously stable and for which the feedback loop around the plant is well posed. With this choice $\tilde P(s)=1$ is rational. Of course, the "forward" and "backward" shifts can be combined by carrying them out successively.

Another class of loop transformations is the so-called multipliers, see [79, Sec. 6.1] or [11, Sec. VI.9]. Their basic idea is that the cascade $MM^{-1}$ can be introduced at any point of the loop and then split without affecting stability if the multiplier $M$ is bi-stable (i.e. both $M$ and $M^{-1}$ are stable).
A yet more general transformation is shown in Fig. 3.11. The diagram there is rotated $90^\circ$ counterclockwise to present the loop in the so-called chain-scattering (implicit) form, for which the formulae are simpler. But now $\Phi$, as well as $\Phi^{-1}$, represents a relation between a mixture of inputs and outputs, rather than the more conventional i/o relation. For instance, the relation
\[
\begin{bmatrix} z_1\\ w_2\end{bmatrix}
= \begin{bmatrix} \Phi_{11} & \Phi_{12}\\ \Phi_{21} & \Phi_{22}\end{bmatrix}
\begin{bmatrix} w_1\\ z_2\end{bmatrix}
\iff
\begin{bmatrix} z_1\\ z_2\end{bmatrix}
= \begin{bmatrix} \Phi_{11}-\Phi_{12}\Phi_{22}^{-1}\Phi_{21} & \Phi_{12}\Phi_{22}^{-1}\\ -\Phi_{22}^{-1}\Phi_{21} & \Phi_{22}^{-1}\end{bmatrix}
\begin{bmatrix} w_1\\ w_2\end{bmatrix},
\]
provided $\Phi_{22}$ is square and invertible. The second equality above is the conventional i/o relation. The loop shifting of Fig. 3.9 corresponds in this setting to the choice $\Phi = \left[\begin{smallmatrix} I & 0\\ -\Pi & I\end{smallmatrix}\right]$. An example of a nontrivially more general loop skewing is the scattering transformation of [1], popular in bilateral teleoperation. It involves two sets of transformations of the form presented in Fig. 3.11, at the master and slave sides, both with
\[
\Phi = \begin{bmatrix} \sqrt{b/2} & \sqrt{b/2}\\ -1/\sqrt{2b} & 1/\sqrt{2b}\end{bmatrix},
\]
where $b>0$ is the characteristic impedance of the transmission line. This transformation renders delayed transmission lines passive. General stability-preservation conditions, as well as relations between original and transformed signals of interest, for the transformation in Fig. 3.11 are more involved though.
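The chain-scattering conversion above is a plain linear-algebra identity, which can be sanity-checked numerically. The sketch below is illustrative (plain Python; the scattering $\Phi$ with an arbitrary $b>0$ and the test signals are made-up numbers): it maps $(w_1,z_2)$ through $\Phi$ and verifies that the conventional i/o form recovers the same $(z_1,z_2)$ from $(w_1,w_2)$.

```python
import math

b = 3.0                                       # characteristic impedance, any b > 0
r, q = math.sqrt(b/2.0), 1.0/math.sqrt(2.0*b)
phi = [[r, r], [-q, q]]                       # scattering transformation of the text

w1, z2 = 0.8, -1.4                            # arbitrary inputs of the chain form
z1 = phi[0][0]*w1 + phi[0][1]*z2
w2 = phi[1][0]*w1 + phi[1][1]*z2

# conventional i/o form derived in the text:
# [z1; z2] = [[p11 - p12 p22^{-1} p21, p12 p22^{-1}], [-p22^{-1} p21, p22^{-1}]] [w1; w2]
p11, p12, p21, p22 = r, r, -q, q
z1_io = (p11 - p12*p21/p22)*w1 + (p12/p22)*w2
z2_io = (-p21/p22)*w1 + (1.0/p22)*w2
assert abs(z1_io - z1) < 1e-12 and abs(z2_io - z2) < 1e-12
```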
3.5 Delay as a constraint: extraction
As if the plethora of ideas presented up to this point were not enough, the chapter is concluded with yet another approach to characterizing all stabilizing controllers for dead-time systems. The idea here is to treat the loop delay as a causality constraint imposed upon the controller, rather than as a part of the plant. The applicability scope of this approach is thus essentially limited to dead-time systems, where the delay can be freely moved between the plant and the controller. At the same time, the approach extends to various optimal control problems in a relatively straightforward manner, which renders it a powerful design tool in many applications.

Before presenting the approach, a technical result is required.

Lemma 3.4. If $G(s)$ is proper, i.e. uniformly bounded on $\mathbb{C}_\alpha$ for a sufficiently large $\alpha>0$, then
\[
G\in H_\infty \iff G_\tau := G\bar D_\tau\in H_\infty.
\]
Proof. The implication $G\in H_\infty \implies G_\tau\in H_\infty$ is obvious by the stability of the delay element. To prove the other direction, assume, on the contrary, that $G_\tau\in H_\infty$, whereas $G$ is not. It follows from the relation $G(s) = G_\tau(s)e^{\tau s}$ and the fact that $e^{\tau s}$ is entire that $G(s)$ is holomorphic in $\mathbb{C}_0$ [63, Rem. 10.3]. Because
$G(s)$ is proper, there is $\alpha>0$ such that it is uniformly bounded on $\mathbb{C}_\alpha$. Hence, for $G\notin H_\infty$, the function $G(s)$ must be unbounded in the strip $\mathbb{C}_0\setminus\mathbb{C}_\alpha$. But in that strip $\|G(s)\| = |e^{\tau s}|\,\|G_\tau(s)\| \le e^{\alpha\tau}\|G_\tau(s)\|$, which is a contradiction.
Consider the internal stability setup in Fig. 3.8(c). Strictly speaking, this system is not equivalent to that in Fig. 3.8(b), because the delay element $\bar D_\tau$, which is a multiplier moved from one part of the loop to another, is not bi-stable. Nonetheless, under mild well-posedness assumptions their stability properties are equivalent. Indeed, it is readily verified that
\[
\begin{bmatrix} S_{o,\text{3.8(b)}} & T_{d,\text{3.8(b)}}\\ T_{c,\text{3.8(b)}}\bar D_\tau & T_{i,\text{3.8(b)}}\end{bmatrix}
= \begin{bmatrix} S_{o,\text{3.8(c)}} & T_{d,\text{3.8(c)}}\bar D_\tau\\ T_{c,\text{3.8(c)}} & T_{i,\text{3.8(c)}}\end{bmatrix}.
\]
Thus, if $P(s)$, $K(s)$, and $(I-P(s)K(s)e^{-\tau s})^{-1}$ are proper, the stability of the system in Fig. 3.8(b) is equivalent to that of the system in Fig. 3.8(c) by Lemma 3.4.
The idea of the extraction approach is that the set of all delayed stabilizing controllers $K\bar D_\tau$ is a subset of the set of all causal stabilizing controllers, say $K_0$. Hence, we may start with characterizing the latter set first and then extract from it all $\tau$-delayed controllers. Because $P$ is finite dimensional, Theorem 3.3 yields a complete parametrization of causal stabilizing controllers for it, which is $K_0 = \mathcal{F}_\ell(J,Q)$ for $J$ given by (3.45) and an arbitrary $Q\in H_\infty$. Thus, all we need is to characterize all stable $Q$ such that $\mathcal{F}_\ell(J,Q) = K\bar D_\tau$ for a causal $K$, i.e. is a dead-time system.

To this end, let $J$ be an invertible finite-dimensional LTI system having a $2\times 2$ partition with square $J_{12}$ and $J_{21}$ sub-blocks. Define
\[
H := J^{-1} = \begin{bmatrix} H_{11} & H_{12}\\ H_{21} & H_{22}\end{bmatrix},
\]
where the partition is compatible with that of $J$, and bring in the decompositions
\[
H_{22} = \pi_\tau\{H_{22}\} + \hat H_{22}\bar D_\tau
\quad\text{and}\quad
H_{11}\bar D_\tau = \tilde H_{11} - \pi_\tau\{H_{11}\bar D_\tau\},
\]
where $\hat H_{22}$ and $\tilde H_{11}$ have strictly proper transfer functions and can be obtained by (3.39) and (3.42), respectively. We then have the following key technical result:
Lemma 3.5. If $J$ is such that $J_{12}$ and $J_{21}$ are invertible, then a causal $Q$ renders $\mathcal{F}_\ell(J,Q) = K\bar D_\tau$ under a causal $K$ iff
\[
Q = \pi_\tau\{H_{22}\} + \tilde Q\bar D_\tau
\]
for a causal $\tilde Q$. Moreover, all attainable $K$'s are then of the form shown in Fig. 3.10 with
\[
\tilde J = \begin{bmatrix} \tilde H_{11} & H_{12}\\ H_{21} & \hat H_{22}\end{bmatrix}^{-1}
\]
and an arbitrary causal $\tilde Q$.
Proof. The invertibility of $J$ and of its $(1,2)$ and $(2,1)$ sub-blocks guarantees [44, Prop. 5.6] that the mapping $Q\mapsto\mathcal{F}_\ell(J,Q)$ is bijective whenever it is well posed, with $K_0 = \mathcal{F}_\ell(J,Q) \iff Q = \mathcal{F}_u(J^{-1},K_0)$. In other words, $K_0 = K\bar D_\tau$ iff
\[
Q = H_{22} + H_{21}(I-K\bar D_\tau H_{11})^{-1}K\bar D_\tau H_{12}
= H_{22} + H_{21}(I-KH_{11}\bar D_\tau)^{-1}KH_{12}\bar D_\tau,
\]
where the latter equality follows by the time invariance of $H_{11}$ and $H_{12}$. It is readily seen that the response of this $Q$ to an impulse applied at any time $t_c$ coincides with that of $H_{22}$ on the whole interval $[t_c,t_c+\tau]$ for every causal $K$. This suggests that the required $Q$ can be connected with the truncation of $H_{22}$.
To prove that formally, consider first the system
\[
(I-KH_{11}\bar D_\tau)^{-1}K
= \bigl(I + K\pi_\tau\{H_{11}\bar D_\tau\} - K\tilde H_{11}\bigr)^{-1}K
= (I-\tilde K\tilde H_{11})^{-1}\tilde K,
\]
where
\[
\tilde K := \bigl(I + K\pi_\tau\{H_{11}\bar D_\tau\}\bigr)^{-1}K.
\]
By arguments used in §3.4.3, we know that the mapping $K\mapsto\tilde K$ is bijective and well posed for every causal $K$ (thus resulting in a causal $\tilde K$). Thus, we can equivalently rewrite the relation for $Q$ in terms of $\tilde K$ as
\[
Q = H_{22} + H_{21}(I-\tilde K\tilde H_{11})^{-1}\tilde KH_{12}\bar D_\tau
= \pi_\tau\{H_{22}\} + \bigl(\hat H_{22} + H_{21}(I-\tilde K\tilde H_{11})^{-1}\tilde KH_{12}\bigr)\bar D_\tau.
\]
The system
\[
\tilde Q := \hat H_{22} + H_{21}(I-\tilde K\tilde H_{11})^{-1}\tilde KH_{12} = \mathcal{F}_u(\tilde J^{-1},\tilde K)
\]
is well posed (because $\tilde H_{11}(s)$ is strictly proper) for every causal $\tilde K$ and is thus causal whenever so is $\tilde K$.
The next step is to show that the mapping $\tilde K\mapsto\mathcal{F}_u(\tilde J^{-1},\tilde K)$ is bijective. To this end, consider the equality
\[
\begin{bmatrix} J_{11} & J_{12}\\ J_{21} & J_{22}\end{bmatrix}
\begin{bmatrix} H_{11}\\ H_{21}\end{bmatrix}
= \begin{bmatrix} I\\ 0\end{bmatrix}.
\]
Its second row yields that $H_{11} = -J_{21}^{-1}J_{22}H_{21}$, and then the first row—that $H_{21}^{-1} = J_{12} - J_{11}J_{21}^{-1}J_{22}$. Because $J_{12} - J_{11}J_{21}^{-1}J_{22}$ is well defined, by assumption, we conclude that $H_{21}$ is invertible. The invertibility of $H_{12}$ can be shown by similar arguments. But this implies that $\tilde J$ possesses the same properties as $J$ itself, and the statement follows.
Thus, we showed that every $Q$ rendering $K_0 = K\bar D_\tau$ is of the form $\pi_\tau\{H_{22}\} + \tilde Q\bar D_\tau$ and that any causal $\tilde Q$ can be attained by a causal $\tilde K$. The result then follows by the relation $K = \tilde K\bigl(I - \pi_\tau\{H_{11}\bar D_\tau\}\tilde K\bigr)^{-1}$, which represents the system in Fig. 3.10 for $\tilde K = \mathcal{F}_\ell(\tilde J,\tilde Q)$.
The result of Lemma 3.5 applies to general, possibly time-varying, controllers. In the time-invariant case, we just need to consider time-invariant $\tilde Q$. Then the result of Lemma 3.4 and the fact that $\pi_\tau\{H_{22}\}\in H_\infty$ can be used to prove that $\tilde Q\in H_\infty \iff Q\in H_\infty$, and thus that the system in Fig. 3.10 characterizes all stabilizing controllers for $P\bar D_\tau$.
To conclude, construct a state-space realization of $\tilde J$ from that of $J$ given by (3.45). It is readily seen that
\[
J^{-1}(s) = \begin{bmatrix} A & -B & L\\ -C & 0 & I\\ F & I & 0\end{bmatrix}.
\]
Hence, by (3.39) we have that
\[
H_{22}(s) = \begin{bmatrix} A & L\\ F & 0\end{bmatrix}
\;\Longrightarrow\;
\hat H_{22}(s) = \begin{bmatrix} A & e^{A\tau}L\\ F & 0\end{bmatrix}
\]
and by (3.42)—that
\[
H_{11}(s) = \begin{bmatrix} A & -B\\ -C & 0\end{bmatrix}
\;\Longrightarrow\;
\tilde H_{11}(s) = \begin{bmatrix} A & -B\\ -Ce^{-A\tau} & 0\end{bmatrix}.
\]
Hence,
\[
\begin{bmatrix} \tilde H_{11}(s) & H_{12}(s)\end{bmatrix}
= \begin{bmatrix} A & -B & e^{A\tau}L\\ -Ce^{-A\tau} & 0 & I\end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix} H_{21}(s) & \hat H_{22}(s)\end{bmatrix}
= \begin{bmatrix} A & -B & e^{A\tau}L\\ F & I & 0\end{bmatrix},
\]
where the realization for $H_{12}$ is obtained by a similarity transformation of that in $J^{-1}$, from which
\[
\tilde J^{-1}(s) = \begin{bmatrix} A & -B & e^{A\tau}L\\ -Ce^{-A\tau} & 0 & I\\ F & I & 0\end{bmatrix}.
\]
Applying (B.23), we then end up exactly with the realization (3.46), matching the result derived by the loop-shifting approach in §3.4.3.
Chapter 4
Performance of Time-Delay Systems
Although stability is a crucial property, it is seldom the ultimate goal of control design. Rather, control systems are supposed to impose a required behavior on controlled systems, like good command following, a sufficient disturbance attenuation level, required transient characteristics, et cetera. These properties are known collectively as control performance. There are zillions of ways to express the performance of control systems, from classical loop-shaping and pole-placement ideas to more modern quantitative characteristics based on system norms or Lyapunov-based decay estimates.

This chapter studies optimization-based control performance in dead-time systems, mainly in $H_2$ and $H_\infty$ formulations, which is mostly motivated by my own preferences. Some preliminary familiarity with optimization-based control is expected, albeit the exposition attempts to avoid getting too deep into solution technicalities. Rather, the main emphasis is placed on the effects of loop delays on the controller structure and on attainable performance levels. The last section addresses certain aspects of the use of DTC-based architectures for industrial (PI) controllers, with mostly qualitative arguments.
4.1 Standard $H_2$ and $H_\infty$ problems
The standard problem corresponds to the lower LFT configuration depicted in Fig. 4.1(a). The system $G$ there is known as the generalized plant and $K$ is referred to as the controller. The goal is then to design $K$ to reduce the effect of $w$ on $z$ in the system $T_{zw} := \mathcal{F}_\ell(G,K): w\mapsto z$. The generalized plant has two inputs, $w$ and $u$, and two outputs, $z$ and $y$, whose meanings are spelled out below.

• $w$ is dubbed the exogenous input and contains all exogenous signals whose effect is important for the problem at hand. These may be a reference signal, a load disturbance, measurement noise, et cetera. In many cases these are fictitious (normalized) signals forming the actual signals of interest.

• $z$ is dubbed the regulated output and contains signals that are required to be kept "small" (in whatever sense). These may be deviations from a required behavior, like tracking or estimation errors, actuator signals and suchlike, weighted to reflect their relative importance and to emphasize important aspects of them.

• $y$ is the measured output, through which the controller acquires information about the effect of $w$ on the system behavior.

• $u$ is the control input, which is the signal generated by the controller and through which the effect of $w$ on $z$ can be affected.

The generalized plant $G$ then contains the dynamics of the controlled plant itself, sensors, and actuators (all, normally, in its $u\mapsto y$ part), as well as weighting functions, and even some fixed parts of the controller (e.g. the integral action). For more details and examples see [44, Ch. 7].
68 CHAPTER 4. PERFORMANCE OF TIME-DELAY SYSTEMS
Fig. 4.1: The standard problem setup: (a) general form; (b) delayed controller
The reduction of the effect of w on ´ can be quantified by a norm of T´w . Below we consider the H2
and H1 system norms for this purpose.
� The H2-norm of a causal LTI system G W u 7! y, whose impulse response g.t/ is square integrable,
equals
kGk2 D�1
2�
Z
R
kG.j!/k2F d!
�1=2
D�Z
RC
kg.t/k2F dt
�1=2
; (4.1)
where $\|\cdot\|_F$ denotes the Frobenius matrix norm. It also has a stochastic interpretation: if u(t) is a unit-intensity white Gaussian process, then $\|G\|_2^2$ equals the steady-state variance of y(t). The space of all $p\times m$ systems with a bounded $H_2$-norm is known as $H_2^{p\times m}$ (the dimensions are frequently omitted). It comprises all causal systems with square-integrable impulse responses. This is a Hilbert space, with the inner product
$$\langle G_1, G_2\rangle_2 := \frac{1}{2\pi}\int_{\mathbb R}\operatorname{tr}\bigl([G_2(\mathrm j\omega)]'G_1(\mathrm j\omega)\bigr)\,\mathrm d\omega = \int_{\mathbb R_+}\operatorname{tr}\bigl([g_2(t)]'g_1(t)\bigr)\,\mathrm dt.\qquad(4.2)$$
The $H_2$-norm normally fits applications where properties of exogenous signals are known sufficiently well. Examples of the standard $H_2$ problem are the infinite-horizon LQG and steady-state Kalman–Bucy filtering problems.
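For rational G, the second equality in (4.1) also admits a Gramian-based evaluation: if G has a stable, strictly proper realization (A, B, C), then $\|G\|_2^2 = \operatorname{tr}(B'QB)$, where Q is the observability Gramian solving $A'Q + QA + C'C = 0$. A minimal numerical sketch of this (SciPy is assumed tooling here, not part of the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of the stable, strictly proper system (A, B, C)."""
    # Observability Gramian: A'Q + QA + C'C = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    return float(np.sqrt(np.trace(B.T @ Q @ B)))

# G(s) = 1/(s+1): impulse response e^{-t}, so ||G||_2^2 = int_0^inf e^{-2t} dt = 1/2
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(h2_norm(A, B, C))  # ~0.7071, i.e. 1/sqrt(2)
```

The printed value agrees with the time-domain form of (4.1) evaluated analytically.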
• The $H_\infty$-norm of a causal LTI system $G : u \mapsto y$, whose transfer function $G \in H_\infty$ as defined by (B.17), equals
$$\|G\|_\infty = \operatorname*{ess\,sup}_{\omega\in\mathbb R}\|G(\mathrm j\omega)\|,$$
which is the peak gain of the frequency response of G. Its time-domain interpretation is the induced $L^2(\mathbb R_+)$-gain of G, i.e. $\|G\|_\infty = \sup_{0\ne u\in L^2(\mathbb R_+)}\|y\|_2/\|u\|_2$, where $\|\cdot\|_2$ stands for the $L^2(\mathbb R_+)$-norm of a signal (the square root of its energy). This norm fits applications where properties of exogenous signals are unknown, as well as robust stability problems. Examples of the standard $H_\infty$ problem are the weighted- and mixed-sensitivity problems and the gap optimization.
It should be emphasized that $H_\infty$ is the stability space of LTI systems, so the $H_\infty$ formulation fits naturally with the stability requirement. At the same time, not all systems with a finite $H_2$-norm are stable (although this is the case for finite-dimensional systems). This implies that some care should be taken in the stability analysis of $H_2$ problems for infinite-dimensional systems.
An important property of both the $H_2$ and $H_\infty$ standard problems is that there are complete parametrizations of all their suboptimal solutions. In both cases, the parametrizations can be expressed in the form
$$K = \mathcal F_l(J, Q)\qquad(4.3)$$
for some LTI generator of all suboptimal controllers J, which depends on the problem data, and a free parameter Q, which is only required to be stable and norm bounded. The bound on $\|Q\|$ depends on the performance that we expect to attain.

4.1. STANDARD H2 AND H∞ PROBLEMS 69

Moreover, the generator J is square and invertible, with square and invertible (1,2) and (2,1) sub-systems. As we already saw in Section 3.5, these properties ensure that the mapping $Q \mapsto K$ is bijective. This is a key property for solving delayed versions of the problem, like the one presented in Fig. 4.1(b).
4.1.1 State-space formulae
If the generalized plant G in Fig. 4.1(a) is finite dimensional, solutions can be developed in terms of its
state-space realization. Below such solutions are presented.
Standard assumptions

We assume that G is given in terms of a state-space realization representing the joint dynamics of its components, i.e. as
$$G(s) = \begin{bmatrix} G_{zw}(s) & G_{zu}(s) \\ G_{yw}(s) & G_{yu}(s) \end{bmatrix} = \left[\begin{array}{c|cc} A & B_w & B_u \\ \hline C_z & D_{zw} & D_{zu} \\ C_y & D_{yw} & D_{yu} \end{array}\right],\qquad(4.4)$$
where the input and output partitions are compatible with those in Fig. 4.1(a). It is conventional then to impose the following assumptions on its parameters:

A5: the pair $(A, B_u)$ is stabilizable,

A6: the pair $(C_y, A)$ is detectable,

A7: the realization $(A, B_u, C_z, D_{zu})$ has no invariant zeros on $\mathrm j\mathbb R$ and $D_{zu}'D_{zu} > 0$,

A8: the realization $(A, B_w, C_y, D_{yw})$ has no invariant zeros on $\mathrm j\mathbb R$ and $D_{yw}D_{yw}' > 0$.
Assumptions A5 and A6 are necessary and sufficient for the existence of an internally stabilizing controller, so they are naturally required. Assumptions A7 and A8 are technical and are required to guarantee the solvability of the two algebraic Riccati equations involved. The constraints that they impose on the corresponding feedthrough terms just say that all components of the control input u are penalized in the cost function (the full column rank of $D_{zu}$) and that all measurement channels are corrupted by noise (the full row rank of $D_{yw}$). Normally, although not always, assumptions A7 and A8 are necessary for the corresponding optimization problems to be well defined.
All $H_2$ suboptimal solutions for finite-dimensional generalized plants

The standard $H_2$ problem for the system in Fig. 4.1 can be posed as the design of an internally stabilizing K that minimizes $\|T_{zw}\|_2$. Its solution is based on the following two algebraic Riccati equations:
$$A'X + XA + C_z'C_z - (XB_u + C_z'D_{zu})(D_{zu}'D_{zu})^{-1}(B_u'X + D_{zu}'C_z) = 0\qquad(4.5\mathrm a)$$
and
$$AY + YA' + B_wB_w' - (YC_y' + B_wD_{yw}')(D_{yw}D_{yw}')^{-1}(C_yY + D_{yw}B_w') = 0,\qquad(4.5\mathrm b)$$
whose solutions X and Y are said to be stabilizing if $A + B_uF_u$ and $A + L_yC_y$ are Hurwitz, respectively, where
$$F_u := -(D_{zu}'D_{zu})^{-1}(B_u'X + D_{zu}'C_z)\quad\text{and}\quad L_y := -(YC_y' + B_wD_{yw}')(D_{yw}D_{yw}')^{-1}.\qquad(4.6)$$
If A5–8 hold true, then stabilizing solutions always exist, are unique, and are such that $X = X' \ge 0$ and $Y = Y' \ge 0$. Furthermore, $X > 0$ iff $(A, B_u, C_z, D_{zu})$ has no invariant zeros in $\mathbb C\setminus\bar{\mathbb C}_0$ and $Y > 0$ iff $(A, B_w, C_y, D_{yw})$ has no invariant zeros in $\mathbb C\setminus\bar{\mathbb C}_0$.
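As a sanity check of (4.5)–(4.6), the two AREs can be solved numerically. The sketch below uses SciPy's `solve_continuous_are` (assumed tooling, not part of the text) on scalar integrator data that anticipate the case study of §4.1.2, and forms the gains $F_u$ and $L_y$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rho, sigma = 1.0, 1.0   # control and noise weights (hypothetical values)
A  = np.array([[0.0]])  # integrator plant
Bw = np.array([[1.0, 0.0]]); Bu = np.array([[1.0]])
Cz = np.array([[1.0], [0.0]]); Dzu = np.array([[0.0], [rho]])
Cy = np.array([[-1.0]]); Dyw = np.array([[0.0, -sigma]])

# Control ARE (4.5a): Q-term Cz'Cz, R-term Dzu'Dzu, cross term Cz'Dzu
X = solve_continuous_are(A, Bu, Cz.T @ Cz, Dzu.T @ Dzu, s=Cz.T @ Dzu)
Fu = -np.linalg.solve(Dzu.T @ Dzu, Bu.T @ X + Dzu.T @ Cz)

# Filtering ARE (4.5b) solved via duality (A -> A', Bu -> Cy', etc.)
Y = solve_continuous_are(A.T, Cy.T, Bw @ Bw.T, Dyw @ Dyw.T, s=Bw @ Dyw.T)
Ly = -(Y @ Cy.T + Bw @ Dyw.T) @ np.linalg.inv(Dyw @ Dyw.T)

print(X, Y, Fu, Ly)  # X = rho, Y = sigma, Fu = -1/rho, Ly = 1/sigma
```

The returned solutions are the stabilizing ones, so $A + B_uF_u = -1/\rho$ and $A + L_yC_y = -1/\varsigma$ are Hurwitz, as the theory requires.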
Theorem 4.1. Let A5–8 hold true and $D_{zw} = 0$. Then all stabilizing controllers can be presented in the form $K = \mathcal F_l(J_2, Q)$, where
$$J_2(s) = \left[\begin{array}{c|cc} A + B_uF_u + L_yC_y + L_yD_{yu}F_u & -L_y & B_u + L_yD_{yu} \\ \hline F_u & 0 & I \\ -C_y - D_{yu}F_u & I & -D_{yu} \end{array}\right]\qquad(4.7)$$
with $F_u$ and $L_y$ as in (4.6), and for such K one has $\|T_{zw}\|_2^2 = \|\mathcal F_l(G,K)\|_2^2 = \gamma_0 + \|D_{zu}QD_{yw}\|_2^2$, where
$$\gamma_0 := \operatorname{tr}(B_w'XB_w) + \operatorname{tr}(C_zYC_z') + \operatorname{tr}(XAY + YA'X).\qquad(4.8)$$
The central controller, corresponding to Q = 0, is then the unique optimal controller, attaining $\gamma_0$.
Some remarks are in order:

Remark 4.1 (solution properties). The optimal K above is an observer-based controller comprised of the LQR state feedback with the gain $F_u$ and the Kalman–Bucy filter with the gain $L_y$. This separation is remarkable and not quite obvious. At the same time, the optimal cost is not just the sum of the LQR cost (the first term in the expression for $\gamma_0$) and the Kalman–Bucy cost (the second term). It also contains the coupling term $\operatorname{tr}(XAY + YA'X)$, which might be either positive or negative, depending on the properties of A. To explain this fact, rewrite the optimal cost as
$$\gamma_0 = \operatorname{tr}(B_w'XB_w) + \operatorname{tr}(D_{zu}F_uYF_u'D_{zu}').$$
The first term above is still the LQR cost. The second term is the cost of estimating the signal $v(t) = D_{zu}F_ux(t)$ from the measured y(t). The signal v is nothing but the LQR control law, normalized by its penalty in the cost function. O
All $H_\infty$ suboptimal solutions for finite-dimensional generalized plants

The standard $H_\infty$ problem for the system in Fig. 4.1 can be posed as the design of an internally stabilizing K that renders $\|T_{zw}\|_\infty < \gamma$ for a given $\gamma > 0$. Its solution is also based on two algebraic Riccati equations, now of the form
$$A'X + XA + C_z'C_z - \Bigl(X\begin{bmatrix}B_w & B_u\end{bmatrix} + C_z'\begin{bmatrix}D_{zw} & D_{zu}\end{bmatrix}\Bigr)\begin{bmatrix}D_{zw}'D_{zw}-\gamma^2I & D_{zw}'D_{zu}\\ D_{zu}'D_{zw} & D_{zu}'D_{zu}\end{bmatrix}^{-1}\Bigl(\begin{bmatrix}B_w'\\B_u'\end{bmatrix}X + \begin{bmatrix}D_{zw}'\\D_{zu}'\end{bmatrix}C_z\Bigr) = 0\qquad(4.9\mathrm a)$$
and
$$AY + YA' + B_wB_w' - \Bigl(Y\begin{bmatrix}C_z' & C_y'\end{bmatrix} + B_w\begin{bmatrix}D_{zw}' & D_{yw}'\end{bmatrix}\Bigr)\begin{bmatrix}D_{zw}D_{zw}'-\gamma^2I & D_{zw}D_{yw}'\\ D_{yw}D_{zw}' & D_{yw}D_{yw}'\end{bmatrix}^{-1}\Bigl(\begin{bmatrix}C_z\\C_y\end{bmatrix}Y + \begin{bmatrix}D_{zw}\\D_{yw}\end{bmatrix}B_w'\Bigr) = 0,\qquad(4.9\mathrm b)$$
whose solutions are said to be stabilizing if $A + B_wF_w + B_uF_u$ and $A + L_zC_z + L_yC_y$ are Hurwitz, where
$$\begin{bmatrix}F_w\\F_u\end{bmatrix} := -\begin{bmatrix}D_{zw}'D_{zw}-\gamma^2I & D_{zw}'D_{zu}\\ D_{zu}'D_{zw} & D_{zu}'D_{zu}\end{bmatrix}^{-1}\Bigl(\begin{bmatrix}B_w'\\B_u'\end{bmatrix}X + \begin{bmatrix}D_{zw}'\\D_{zu}'\end{bmatrix}C_z\Bigr)\qquad(4.10\mathrm a)$$
and
$$\begin{bmatrix}L_z & L_y\end{bmatrix} := -\Bigl(Y\begin{bmatrix}C_z' & C_y'\end{bmatrix} + B_w\begin{bmatrix}D_{zw}' & D_{yw}'\end{bmatrix}\Bigr)\begin{bmatrix}D_{zw}D_{zw}'-\gamma^2I & D_{zw}D_{yw}'\\ D_{yw}D_{zw}' & D_{yw}D_{yw}'\end{bmatrix}^{-1}.\qquad(4.10\mathrm b)$$
Assumptions A5–8 are necessary for the solvability of these equations, but might not be sufficient if γ is not large enough. Moreover, even if the AREs (4.9) do admit stabilizing solutions, these solutions might not be positive semi-definite, again depending on γ. At the same time, the null spaces of X and Y do not depend on γ: we still have that $\det X \ne 0$ iff $(A, B_u, C_z, D_{zu})$ has no invariant zeros in $\mathbb C\setminus\bar{\mathbb C}_0$ and $\det Y \ne 0$ iff $(A, B_w, C_y, D_{yw})$ has no invariant zeros in $\mathbb C\setminus\bar{\mathbb C}_0$. Also, the AREs in (4.9) reduce to the corresponding $H_2$ AREs in (4.5) as $\gamma\to\infty$.
Theorem 4.2. If A5–8 hold true and $\|D_{zw}\| < \gamma$, then the standard $H_\infty$ problem is solvable iff the following conditions hold:

(a) there is a stabilizing solution X to ARE (4.9a) such that $X = X' \ge 0$,

(b) there is a stabilizing solution Y to ARE (4.9b) such that $Y = Y' \ge 0$,

(c) $\rho(XY) < \gamma^2$.

In such a case, all γ-suboptimal, possibly infinite-dimensional, controllers are given by $K = \mathcal F_l(J_\infty, Q)$, where
$$J_\infty(s) = \left[\begin{array}{c|cc} A_\gamma & -Z_\gamma L_y & Z_\gamma(B_u + L_zD_{zu} + L_yD_{yu}) \\ \hline F_u & 0 & I \\ -C_y - D_{yw}F_w - D_{yu}F_u & I & -D_{yu} - D_\gamma \end{array}\right],\qquad(4.11)$$
for any $Q \in H_\infty$ such that $\|(I - \gamma^{-2}D_{zw}D_{zw}')^{-1/2}D_{zu}QD_{yw}(I - \gamma^{-2}D_{zw}'D_{zw})^{-1/2}\|_\infty < \gamma$, where
$$A_\gamma := A + B_wF_w + B_uF_u + Z_\gamma L_y(C_y + D_{yw}F_w + D_{yu}F_u),$$
$D_\gamma := D_{yw}D_{zw}'(\gamma^2I - D_{zw}D_{zw}')^{-1}D_{zu}$, and $Z_\gamma := (I - \gamma^{-2}YX)^{-1}$.
Some remarks are in order:

Remark 4.2 (solution properties). Although this fact is less evident than in the $H_2$ case, the central suboptimal controller, that corresponding to Q = 0, is also observer based. The resulting control signal is the $H_\infty$ suboptimal estimate of the $H_\infty$ state-feedback control signal $u = F_ux$ (which would be admissible under measurement of the whole state of the plant iff condition (a) of Theorem 4.2 holds). In contrast to the $H_2$ (Kalman–Bucy) case, the parameters of the $H_\infty$ estimator do depend on the signal it estimates, so the formulae are more involved and nontrivial transformations are required to decouple the estimator ARE, which depends on the state-feedback gain, from $F_u$. The coupling condition (c) of Theorem 4.2 is actually a remnant of this procedure. O
Remark 4.3 (simplifications). The formulae of Theorem 4.2 simplify considerably when $D_{zw} = 0$. Indeed, in this case the AREs (4.9) read
$$A'X + XA + C_z'C_z - (XB_u + C_z'D_{zu})(D_{zu}'D_{zu})^{-1}(B_u'X + D_{zu}'C_z) + \gamma^{-2}XB_wB_w'X = 0\qquad(4.9\mathrm a')$$
$$AY + YA' + B_wB_w' - (YC_y' + B_wD_{yw}')(D_{yw}D_{yw}')^{-1}(C_yY + D_{yw}B_w') + \gamma^{-2}YC_z'C_zY = 0,\qquad(4.9\mathrm b')$$
the control and estimation gains are
$$F_u = -(D_{zu}'D_{zu})^{-1}(B_u'X + D_{zu}'C_z)\quad\text{and}\quad F_w = \gamma^{-2}B_w'X,\qquad(4.10\mathrm a')$$
$$L_y = -(YC_y' + B_wD_{yw}')(D_{yw}D_{yw}')^{-1}\quad\text{and}\quad L_z = \gamma^{-2}YC_z',\qquad(4.10\mathrm b')$$
and $D_\gamma = 0$. Still, the assumption $D_{zw} = 0$ does not fit such important cases as the mixed- and balanced-sensitivity problems. O
[Fig. 4.2: System for optimal control design examples (delay-free case): integrator plant 1/s in negative feedback with the controller K, with load disturbance d, controlled output $y_c$, measurement noise n, and measured output $y_m = y_c + n$.]
4.1.2 Design case study

To illustrate the use of the formulae above and the implications of the choice of the generalized plant for the properties of the resulting closed-loop system, in this subsection we study a simple academic design example and analyze the design of $H_2$ and $H_\infty$ controllers for it.

Consider the simple single-loop control system shown in Fig. 4.2. The plant there is a plain integrator, subject to load disturbances d(t). To compensate for the effect of d on the controlled output $y_c$, a feedback loop is introduced on the basis of a measured version of the controlled output, $y_m = y_c + n$, where n(t) is a measurement noise signal. Our goal is to design a stabilizing controller K that attenuates the effect of the load disturbance on the controlled output, while not amplifying the effect of the measurement noise on it. Also, we would like to achieve both these goals with a "reasonable" control effort.
The first step in constructing the generalized plant for this problem is to form the exogenous input and the regulated output. The former is naturally taken of the form
$$w(t) = \begin{bmatrix} d(t) \\ n(t)/\varsigma \end{bmatrix},$$
where the weight ς reflects the relative size of the measurement noise and the load disturbance. It is tuned so that d and n/ς have comparable intensities. As a result, a decrease of ς implies putting more weight on the load disturbance, while an increase of ς renders the measurement noise more dominant. The regulated signal of choice is
$$z(t) = \begin{bmatrix} y_c(t) \\ \rho u(t) \end{bmatrix},$$
whose $L^2$-norm satisfies $\|z\|_2^2 = \|y_c\|_2^2 + \rho^2\|u\|_2^2$. In other words, we penalize both the increase of $y_c$ and that of u, where the weight ρ serves as a tuning parameter, trading off disturbance attenuation against control effort; this justifies the inclusion of the control signal u in the regulated output. The measured output is obviously $y = -y_m = -y_c - n$ and the control input is u. With these choices, the generalized plant is
$$G(s) = \left[\begin{array}{c|ccc} 0 & 1 & 0 & 1 \\ \hline 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \rho \\ \hline -1 & 0 & -\varsigma & 0 \end{array}\right].$$
It is readily seen that it satisfies all assumptions A5–8 whenever ς > 0 and ρ > 0.
Another way to interpret the generalized plant is via the relation
$$\begin{bmatrix} y_c \\ u \end{bmatrix} = \begin{bmatrix} T_d & -T \\ -T & -T_c \end{bmatrix}\begin{bmatrix} d \\ n \end{bmatrix},$$
from which
$$z = \begin{bmatrix} 1 & 0 \\ 0 & \rho \end{bmatrix}\begin{bmatrix} y_c \\ u \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & \rho \end{bmatrix}\begin{bmatrix} T_d & -T \\ -T & -T_c \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & \varsigma \end{bmatrix}w = \begin{bmatrix} T_d & -\varsigma T \\ -\rho T & -\rho\varsigma T_c \end{bmatrix}w = T_{zw}w.$$
Thus, the closed-loop system $T_{zw}$ includes the disturbance sensitivity, the complementary sensitivity (twice, weighted by ρ and by ς), and the control sensitivity weighted by ρς.
H2 design

Consider first the $H_2$ performance measure. To use Theorem 4.1, we need the stabilizing solutions to the Riccati equations (4.5a),
$$0\cdot X + X\cdot 0 + 1 - X\cdot 1\cdot\rho^{-2}\cdot 1\cdot X = 0 \implies X = \rho$$
(the stabilizing solution must be nonnegative), and (4.5b),
$$Y\cdot 0 + 0\cdot Y + 1 - Y\cdot(-1)\cdot\varsigma^{-2}\cdot(-1)\cdot Y = 0 \implies Y = \varsigma.$$
The resulting gains are then
$$F_u = -\frac1\rho\quad\text{and}\quad L_y = \frac1\varsigma,$$
for which $A + B_uF_u = -1/\rho$ and $A + L_yC_y = -1/\varsigma$ are indeed Hurwitz. Expectably, the state-feedback gain $F_u$ grows in magnitude as the control penalty decreases. Likewise, the Kalman filter gain $L_y$ increases as the measurement noise weight decreases.

The optimal controller is then
$$K(s) = \mathcal F_l(J_2, 0) = \left[\begin{array}{c|c} -1/\rho - 1/\varsigma & -1/\varsigma \\ \hline -1/\rho & 0 \end{array}\right] = \frac{1}{\rho\varsigma s + \rho + \varsigma}.$$
It is stable for all ρ > 0 and ς > 0, its static gain is a decreasing function of both ρ and ς, and its time constant $\rho\varsigma/(\rho+\varsigma)$ is an increasing function of both these parameters. In other words, a decrease of either one of these tuning parameters increases both the static gain and the bandwidth of the optimal K.
Consider now the four closed-loop systems (the gang of four) resulting from this design. We have that
$$\begin{bmatrix} T_d(s) & S(s) \\ T(s) & T_c(s) \end{bmatrix} = \frac{1}{1 + P(s)K(s)}\begin{bmatrix} 1 \\ K(s) \end{bmatrix}\begin{bmatrix} P(s) & 1 \end{bmatrix} = \frac{1}{(\rho s + 1)(\varsigma s + 1)}\begin{bmatrix} \rho\varsigma s + \rho + \varsigma \\ 1 \end{bmatrix}\begin{bmatrix} 1 & s \end{bmatrix}.$$
The closed-loop poles are at $s = -1/\rho$ and $s = -1/\varsigma$, which is consistent with the separation arguments for observer-based controllers.
The disturbance sensitivity magnitude is
$$|T_d(\mathrm j\omega)|^2 = \frac{\rho^2\varsigma^2\omega^2 + (\rho+\varsigma)^2}{(\rho^2\omega^2 + 1)(\varsigma^2\omega^2 + 1)},$$
which is a monotonically decreasing function of ω with its peak $\|T_d\|_\infty = \rho + \varsigma$ attained at ω = 0, see Fig. 4.3(a) for three choices. Clearly, an increase of either ρ or ς increases $\|T_d\|_\infty$, which is expectable, as both these constants aim at trading off the disturbance response against other closed-loop properties. At the same time, the high-frequency asymptote of $T_d(s)$, which is 1/s with unit coefficient, is independent of the tuning parameters.

The control sensitivity magnitude is
$$|T_c(\mathrm j\omega)|^2 = \frac{\omega^2}{(\rho^2\omega^2 + 1)(\varsigma^2\omega^2 + 1)}.$$
This function vanishes at both very low and very high frequencies, peaking at $\omega = 1/\sqrt{\rho\varsigma}$ with the value $\|T_c\|_\infty = 1/(\rho+\varsigma) = 1/\|T_d\|_\infty$, see Fig. 4.3(b). Thus, an increase of either ρ or ς clearly decreases $\|T_c\|_\infty$. But a decrease of either one of these parameters alone does not necessarily increase the peak of the control sensitivity; to that end both ρ and ς have to be decreased.
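The peak values and locations derived above are easy to confirm numerically; a minimal sketch on a frequency grid (NumPy assumed; the weight values are arbitrary):

```python
import numpy as np

rho, sigma = 1.0, 0.5          # hypothetical weights
w = np.logspace(-2, 2, 2000)   # frequency grid

# squared magnitudes of the disturbance and control sensitivities
Td2 = (rho**2*sigma**2*w**2 + (rho + sigma)**2) / ((rho**2*w**2 + 1)*(sigma**2*w**2 + 1))
Tc2 = w**2 / ((rho**2*w**2 + 1)*(sigma**2*w**2 + 1))

print(np.sqrt(Td2.max()))    # ~ rho + sigma (peak at the low end of the grid)
print(w[np.argmax(Tc2)])     # ~ 1/sqrt(rho*sigma)
print(np.sqrt(Tc2.max()))    # ~ 1/(rho + sigma)
```

The reciprocal relation $\|T_c\|_\infty = 1/\|T_d\|_\infty$ shows up directly in the printed values.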
The magnitude of the complementary sensitivity, the last function penalized in $T_{zw}$, is
$$|T(\mathrm j\omega)|^2 = \frac{1}{(\rho^2\omega^2 + 1)(\varsigma^2\omega^2 + 1)}.$$
[Fig. 4.3: Closed-loop frequency-response plots: (a) disturbance sensitivity, (b) control sensitivity, (c) complementary sensitivity.]
This is a monotonically decreasing function of ω, attaining its maximum $\|T\|_\infty = 1$ at ω = 0, see Fig. 4.3(c). This value does not depend on the tuning parameters, because the gain of the plant at this frequency is infinite. But the tuning parameters affect the time constants of this system and thus its bandwidth, which is
$$\omega_{\mathrm b} = \sqrt{\frac12\Biggl(\sqrt{\frac{1}{\rho^4} + \frac{6}{\rho^2\varsigma^2} + \frac{1}{\varsigma^4}} - \frac{1}{\rho^2} - \frac{1}{\varsigma^2}\Biggr)}.$$
If both ρ and ς decrease, the bandwidth of T(jω) increases.

As a matter of fact, the sensitivity magnitude,
$$|S(\mathrm j\omega)|^2 = \frac{(\rho^2\varsigma^2\omega^2 + (\rho+\varsigma)^2)\omega^2}{(\rho^2\omega^2 + 1)(\varsigma^2\omega^2 + 1)},$$
is contractive, i.e. smaller than or equal to 1, iff $\omega \le 1/\sqrt{2\rho\varsigma}$. This implies that as either ρ or ς decreases, the frequency range in which $|S(\mathrm j\omega)| \le 1$ grows. This range of frequencies, in which the feedback is effective, is one way to define the closed-loop bandwidth. This bandwidth increases faster if ρ and ς decrease simultaneously.
4.2 $H_2$ design for dead-time systems

This section studies the $H_2$ (LQG) problem for the system in Fig. 4.1(b), for which $T_{zw} = \mathcal F_l(G, K\bar D_\tau)$. We impose the same assumptions on the generalized plant G as in the delay-free case, i.e. suppose that A5–8 hold.
4.2.1 Extraction of optimal dead-time controllers

The main idea discussed in this subsection is essentially the same as that in §3.5. Namely, the starting point is the recognition of the fact that no delayed controller can attain a performance level below that attainable by its delay-free counterpart, $\gamma_0$, simply because the delay imposes additional constraints on the controller. We thus start with the parametrization of all suboptimal causal controllers in Theorem 4.1 and then seek conditions to be imposed on its free Q-parameter to render the overall controller of the form $K\bar D_\tau$.

Two facts are instrumental toward this end. First, the performance attainable by any controller of the form $K = \mathcal F_l(J_2, Q)$ for $J_2$ given by (4.7) is $\|T_{zw}\|_2^2 = \gamma_0 + \|D_{zu}QD_{yw}\|_2^2$. Because $\gamma_0$, given by (4.8), is fixed, we shall be concerned only with the second term. Second, the generator of all suboptimal controllers satisfies the assumptions of Lemma 3.5, so all Q's rendering $\mathcal F_l(J_2, Q)$ a dead-time system for
[Fig. 4.4: All $H_2$ suboptimal controllers for the standard problem in Fig. 4.1(b): the generator $J_{2,\tau}$, driven by y and producing u, in feedback with the DTC block Π and the free parameter Q.]
a given τ must be of the form $Q = \pi_\tau\{H_{2,22}\} + \tilde Q\bar D_\tau$, where
$$\begin{bmatrix} H_{2,11}(s) & H_{2,12}(s) \\ H_{2,21}(s) & H_{2,22}(s) \end{bmatrix} := J_2^{-1}(s) = \left[\begin{array}{c|cc} A & B_u & -L_y \\ \hline C_y & D_{yu} & I \\ -F_u & I & 0 \end{array}\right],$$
for some stable and causal $\tilde Q$. Thus, the $H_2$ optimization problem reduces to the problem of finding a stable and causal $\tilde Q$ minimizing $\|Q_0 + Q_\tau\bar D_\tau\|_2$, where
$$Q_0 := D_{zu}\pi_\tau\{H_{2,22}\}D_{yw}\quad\text{and}\quad Q_\tau := D_{zu}\tilde QD_{yw}.$$
A key observation about the expression above is that the two terms (systems) in the decomposition $Q_0 + Q_\tau\bar D_\tau$ have non-overlapping impulse responses. This implies that they are orthogonal in $H_2$ (provided $\tilde Q \in H_2$, of course). Indeed, let $q_0(t)$ and $q_\tau(t)$ be the impulse responses of $Q_0$ and $Q_\tau$, respectively. By definition, $q_0$ has support in $[0,\tau]$, while $q_\tau$ has support in $\mathbb R_+$. The impulse response of $Q_\tau\bar D_\tau$ is then $q_\tau(t-\tau)$, and it has support in $[\tau,\infty)$, see the discussion on p. 4. It then follows from (4.2) that
$$\langle Q_0, Q_\tau\bar D_\tau\rangle_2 = \int_{\mathbb R_+}\operatorname{tr}\bigl([q_\tau(t-\tau)]'q_0(t)\bigr)\,\mathrm dt = \int_0^\tau\operatorname{tr}\bigl([\underbrace{q_\tau(t-\tau)}_{=0}]'q_0(t)\bigr)\,\mathrm dt + \int_{\mathbb R_+}\operatorname{tr}\bigl([q_\tau(t)]'\underbrace{q_0(t+\tau)}_{=0}\bigr)\,\mathrm dt = 0.$$
Hence, by Pythagoras and the fact that a shift does not affect the $H_2$-norm, we have that
$$\|Q_0 + Q_\tau\bar D_\tau\|_2^2 = \|Q_0\|_2^2 + \|Q_\tau\bar D_\tau\|_2^2 = \|Q_0\|_2^2 + \|Q_\tau\|_2^2,$$
which clearly implies that the norm is minimal under $Q_\tau = 0$. By A7 and A8, the matrices $D_{zu}$ and $D_{yw}$ are left and right invertible, respectively, so that $Q_\tau = 0 \iff \tilde Q = 0$. This yields the optimal choice $\tilde Q = 0$, i.e. $Q = \pi_\tau\{H_{2,22}\}$, for the dead-time system and the following result:
Theorem 4.3. Let A5–8 hold true and $D_{zw} = 0$. Then all stabilizing controllers can be presented in the form shown in Fig. 4.4, where ($F_u$ and $L_y$ below are as in (4.6), again)
$$J_{2,\tau}(s) = \left[\begin{array}{c|cc} A + B_uF_u + \mathrm e^{A\tau}L_yC_y\mathrm e^{-A\tau} + \mathrm e^{A\tau}L_yD_{yu}F_u & -\mathrm e^{A\tau}L_y & B_u + \mathrm e^{A\tau}L_yD_{yu} \\ \hline F_u & 0 & I \\ -C_y\mathrm e^{-A\tau} - D_{yu}F_u & I & -D_{yu} \end{array}\right]\qquad(4.12)$$
and
$$\Pi = \pi_\tau\{G_{yu}\} = \pi_\tau\Bigl\{\Bigl[\begin{array}{c|c} A & B_u \\ \hline C_y & D_{yu} \end{array}\Bigr]\Bigr\},$$
and, for such controllers, $\|T_{zw}\|_2^2 = \|\mathcal F_l(G, K\bar D_\tau)\|_2^2 = \gamma_0 + \gamma_{\mathrm{DT}}(\tau) + \|D_{zu}QD_{yw}\|_2^2$, where the delay-free optimal performance $\gamma_0$ is given by (4.8) and
$$\gamma_{\mathrm{DT}}(\tau) := \int_0^\tau\|D_{zu}F_u\mathrm e^{At}L_yD_{yw}\|_F^2\,\mathrm dt.\qquad(4.13)$$
The central controller, corresponding to Q = 0, is the unique optimal controller, attaining $\gamma_0 + \gamma_{\mathrm{DT}}(\tau)$.
Proof. As the structure in Fig. 4.4 is the same as that in Fig. 3.10, it only remains to show that $\|Q_0\|_2^2 = \gamma_{\mathrm{DT}}(\tau)$. To this end, note that the impulse response of $Q_0$ is $q_0(t) = D_{zu}F_u\mathrm e^{At}L_yD_{yw}\mathbb 1_{[0,\tau]}(t)$. The result then follows from the second equality in (4.1).
Some remarks are in order:

Remark 4.4 ($H_2$ cost of delay). The quantity $\gamma_{\mathrm{DT}}(\tau)$ defined in (4.13) is the increment of $\gamma_0$ due to the presence of the loop delay in the setup of Fig. 4.1(b). Thus, it can be viewed as the cost of delay from the $H_2$ perspective. This is obviously a nonnegative quantity. It is also nondecreasing in τ, simply because
$$\int_0^{\tau+\delta}\|D_{zu}F_u\mathrm e^{At}L_yD_{yw}\|_F^2\,\mathrm dt = \int_0^\tau\|D_{zu}F_u\mathrm e^{At}L_yD_{yw}\|_F^2\,\mathrm dt + \int_0^\delta\|D_{zu}F_u\mathrm e^{A(t+\tau)}L_yD_{yw}\|_F^2\,\mathrm dt,$$
where the last term on the right-hand side is nonnegative for all δ > 0. There might be situations, though, in which $\gamma_{\mathrm{DT}}(\tau) = 0$. As $D_{zu}$ and $D_{yw}$ are left and right invertible, respectively, this happens iff $F_u\mathrm e^{At}L_y = 0$ for all $t\in[0,\tau]$. This, in turn, happens iff $F_u(sI - A)^{-1}L_y = 0$, which follows directly from the Gramian-based controllability and observability test [44, Thm. 4.1]. Now, it can be verified that
$$\Bigl[\begin{array}{c|c} A & L_y \\ \hline F_u & 0 \end{array}\Bigr] = 0 \iff \Bigl[\begin{array}{c|c} A + B_uF_u + L_yC_y + L_yD_{yu}F_u & L_y \\ \hline F_u & 0 \end{array}\Bigr] = \mathcal F_l(J_2(s), 0) = 0.$$
The last equality holds iff the optimal controller for the delay-free $H_2$ problem is zero. This can happen, but it is clearly not a typical situation. Hence, we may conclude that the cost of delay is normally nonzero and is a strictly increasing function of τ. O
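The cost of delay (4.13) is a scalar integral and is straightforward to evaluate by quadrature; a sketch follows (SciPy assumed). For the integrator case study of §4.1.2 one has $D_{zu}F_u = [0\;\,{-1}]'$ and $L_yD_{yw} = [0\;\,{-1}]$, so the integrand is identically one and $\gamma_{\mathrm{DT}}(\tau) = \tau$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

def delay_cost(A, DzuFu, LyDyw, tau):
    """gamma_DT(tau) = int_0^tau ||Dzu Fu e^{At} Ly Dyw||_F^2 dt  (cf. (4.13))."""
    f = lambda t: np.linalg.norm(DzuFu @ expm(A * t) @ LyDyw, 'fro')**2
    val, _ = quad(f, 0.0, tau)
    return val

A = np.array([[0.0]])                 # integrator case study data
DzuFu = np.array([[0.0], [-1.0]])
LyDyw = np.array([[0.0, -1.0]])
print(delay_cost(A, DzuFu, LyDyw, 2.0))  # = tau = 2 here
```

With a Hurwitz A the same routine shows $\gamma_{\mathrm{DT}}(\tau)$ saturating as τ grows, consistent with the integrand decaying exponentially.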
Remark 4.5 (sampled-data vs. delayed feedback). It can be shown [43, §V.A] that the optimal attainable $H_2$ performance under a sampled-data controller with sampling period h > 0 is $\gamma_0 + \gamma_{\mathrm{SD}}(h)$, where
$$\gamma_{\mathrm{SD}}(h) := \frac1h\int_0^h\!\!\int_0^{h-s}\|D_{zu}F_u\mathrm e^{At}L_yD_{yw}\|_F^2\,\mathrm dt\,\mathrm ds$$
(assuming that its A/D and D/A parts are also design parameters). Because
$$\frac1h\int_0^h\!\!\int_0^{h-s}f(t)\,\mathrm dt\,\mathrm ds \le \frac1h\int_0^h\!\!\int_0^hf(t)\,\mathrm dt\,\mathrm ds = \frac1h\int_0^h\!\!\int_0^hf(t)\,\mathrm ds\,\mathrm dt = \int_0^hf(t)\,\mathrm dt$$
for any bounded $f(t)\ge0$ (moreover, the equality holds iff f(t) = 0 for all $t\in[0,h]$), we have that
$$\gamma_{\mathrm{SD}}(h) \le \gamma_{\mathrm{DT}}(h),$$
with equality iff $\mathcal F_l(J_2, 0) = 0$. In other words, the cost of delay normally exceeds the cost of sampling in the $H_2$ case when the sampling period h coincides with the loop delay τ. O
Remark 4.6 (controller structure). The $H_2$ (sub)optimal controllers in Fig. 4.4 are clearly dead-time compensators, with the modified Smith predictor (MSP) DTC block, exactly like that introduced by Watanabe and Ito. This provides an additional justification of the DTC controller configuration. O
4.2.2 Loop shifting solution

Another way to treat the $H_2$ optimization for time-delay systems is to extend the loop-shifting idea discussed in Section 3.4. Such an extension might not be simple for general delay systems, even though it applies to the general stabilization problem. The reason is that loop transformations, like those discussed in §3.2.3 and §3.3.3, do not necessarily preserve the structure of the cost function, which is the regulated
[Fig. 4.5: Loop shifting in the general problem setup: (a) original problem; (b) moving the delay to G, $\bar D_\tau\tilde u = u$; (c) shifting the loop, $\tilde y = y + \Pi u$; (d) shifting the performance channel; (e) pulling the delay out, $\bar D_\tau\tilde z = z - \Delta w$. Here $\tilde G_{yu} = \Pi + G_{yu}\bar D_\tau$, $\tilde K = K(I + \Pi K)^{-1}$, and $G_{zw} = \Delta + \hat G_{zw}\bar D_\tau$.]
signal z in Fig. 4.1. Still, in the case of dead-time plants there is a workaround, proposed in [46], which is outlined below.

Consider the setup in Fig. 4.5(a), which is exactly that in Fig. 4.1(b) modulo moving the delay from the input of the controller to its output (so the controller is supposed to be time invariant). The first step is to move that delay from the controller to the generalized plant, as shown in Fig. 4.5(b). This adds the delay element $\bar D_\tau$ to two sub-blocks of G. The new "control input" $\tilde u$ is then the τ-previewed version of u. Next, the loop-shifting procedure of Section 3.4 with a stable Π is applied, resulting in the block diagram in Fig. 4.5(c) with $\tilde G_{yu} = \Pi + G_{yu}\bar D_\tau$ and $\tilde K = K(I + \Pi K)^{-1}$. And, as we already saw, with the choice $\Pi = \pi_\tau\{G_{yu}\bar D_\tau\}$ the system $\tilde G_{yu}$ is finite dimensional. The problem is that the generalized plant in Fig. 4.5(c) still contains a delayed sub-block, that from $\tilde u$ to z. So some more shifts are required. To this end, select any $\Delta\in H_\infty$ such that $G_{zw} = \Delta + \hat G_{zw}\bar D_\tau$ for a finite-dimensional $\hat G_{zw}$. In this case, the system can be equivalently presented in the form depicted in Fig. 4.5(d), in which both systems in the upper row are delayed. As a result, the delay element can be pulled out of the system, as shown in Fig. 4.5(e). The signal $\tilde z$ there is related to the original regulated signal via $z = \bar D_\tau\tilde z + \Delta w$.
The transformations above decompose the closed-loop system $T_{zw} = \mathcal F_l(G, K\bar D_\tau)$ as
$$T_{zw} = \Delta + \bar D_\tau T_{\tilde zw},\qquad(4.14)$$
where $T_{\tilde zw} = \mathcal F_l(\tilde G, \tilde K)$ for
$$\tilde G := \begin{bmatrix} \hat G_{zw} & G_{zu} \\ G_{yw} & \tilde G_{yu} \end{bmatrix}.$$
By the stability of Δ and Lemma 3.4, $T_{zw}\in H_\infty \iff T_{\tilde zw}\in H_\infty$, i.e. the transformation preserves stability. In light of the discussion preceding Theorem 4.3, it makes sense to separate the impulse responses of the two terms on the right-hand side of (4.14). This is always the case with the choice $\Delta = \pi_\tau\{G_{zw}\}$, for which the impulse response of Δ has support in $[0,\tau]$, and thus
$$\|T_{zw}\|_2^2 = \|\Delta\|_2^2 + \|\bar D_\tau T_{\tilde zw}\|_2^2 = \|\Delta\|_2^2 + \|T_{\tilde zw}\|_2^2.$$
This implies that the problem reduces to the standard finite-dimensional $H_2$ problem for the generalized plant $\tilde G$, which can be solved by Theorem 4.1. In fact, as the "zu" and "yw" parts of G and $\tilde G$ coincide, assumptions A5–8 hold for G iff they hold for $\tilde G$.
To see the structure of the equivalent problem, bring in the realizations
$$\hat G_{zw}(s) = \Bigl[\begin{array}{c|c} A & \mathrm e^{A\tau}B_w \\ \hline C_z & 0 \end{array}\Bigr]\quad\text{and}\quad \tilde G_{yu}(s) = \Bigl[\begin{array}{c|c} A & B_u \\ \hline C_y\mathrm e^{-A\tau} & 0 \end{array}\Bigr]$$
(by (3.39) and (3.42), respectively), so that
$$\tilde G(s) = \left[\begin{array}{c|cc} A & \mathrm e^{A\tau}B_w & B_u \\ \hline C_z & 0 & D_{zu} \\ C_y\mathrm e^{-A\tau} & D_{yw} & 0 \end{array}\right] = \left[\begin{array}{c|cc} A & B_w & \mathrm e^{-A\tau}B_u \\ \hline C_z\mathrm e^{A\tau} & 0 & D_{zu} \\ C_y & D_{yw} & 0 \end{array}\right].\qquad(4.15)$$
Because the realization of the "zu" part above is the same as that in G, the control ARE (4.5a) and the corresponding LQR gain $F_u$ for $\tilde G$ are the same as those for G. The realization of the "yw" part of $\tilde G$ is similar to that of G, with the similarity matrix $\mathrm e^{A\tau}$. As a result, the filtering ARE (4.5b) for $\tilde G$ is solved by $\mathrm e^{A\tau}Y\mathrm e^{A'\tau}$, where Y is the corresponding solution for G, with the Kalman filter gain $\mathrm e^{A\tau}L_y$ for the $L_y$ defined in (4.6) for G. Therefore, applying the formulae of Theorem 4.1 to $\tilde G$ we end up with the controllers of Theorem 4.3.
The optimal cost of the finite-dimensional problem for $\tilde G$ can be expressed as (cf. Remark 4.1)
$$\tilde\gamma_0 = \operatorname{tr}(B_w'\mathrm e^{A'\tau}X\mathrm e^{A\tau}B_w) + \operatorname{tr}(D_{zu}F_u\mathrm e^{A\tau}Y\mathrm e^{A'\tau}F_u'D_{zu}')$$
$$= \gamma_0 + \operatorname{tr}\bigl(B_w'(\mathrm e^{A'\tau}X\mathrm e^{A\tau} - X)B_w\bigr) + \operatorname{tr}\bigl(D_{zu}F_u(\mathrm e^{A\tau}Y\mathrm e^{A'\tau} - Y)F_u'D_{zu}'\bigr),$$
where $\gamma_0$ is the optimal cost for G. Now, it can be verified that if W satisfies the equation $A'W + WA = V$ for some V, then
$$\mathrm e^{A'\tau}W\mathrm e^{A\tau} - W = \int_0^\tau\mathrm e^{A't}V\mathrm e^{At}\,\mathrm dt.$$
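This identity can be sanity-checked numerically: pick a random A and a symmetric V, solve the Lyapunov equation for W, and compare both sides (SciPy assumed):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov
from scipy.integrate import quad_vec

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
V = rng.standard_normal((3, 3)); V = V + V.T   # symmetric right-hand side

W = solve_continuous_lyapunov(A.T, V)   # solves A'W + WA = V
tau = 0.7

lhs = expm(A.T * tau) @ W @ expm(A * tau) - W
rhs, _ = quad_vec(lambda t: expm(A.T * t) @ V @ expm(A * t), 0.0, tau)
print(np.allclose(lhs, rhs, atol=1e-5))  # True
```

Note that no stability of A is needed for the identity itself, only solvability of the Lyapunov equation.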
Applying this expression to the AREs in (4.5), we have
$$\tilde\gamma_0 = \gamma_0 + \operatorname{tr}\Bigl(B_w'\int_0^\tau\mathrm e^{A't}(F_u'D_{zu}'D_{zu}F_u - C_z'C_z)\mathrm e^{At}\,\mathrm dt\,B_w\Bigr) + \operatorname{tr}\Bigl(D_{zu}F_u\int_0^\tau\mathrm e^{At}(L_yD_{yw}D_{yw}'L_y' - B_wB_w')\mathrm e^{A't}\,\mathrm dt\,F_u'D_{zu}'\Bigr)$$
$$= \gamma_0 - \operatorname{tr}\Bigl(B_w'\int_0^\tau\mathrm e^{A't}C_z'C_z\mathrm e^{At}\,\mathrm dt\,B_w\Bigr) + \operatorname{tr}\Bigl(\int_0^\tau D_{zu}F_u\mathrm e^{At}L_yD_{yw}D_{yw}'L_y'\mathrm e^{A't}F_u'D_{zu}'\,\mathrm dt\Bigr)$$
$$= \gamma_0 - \|\Delta\|_2^2 + \gamma_{\mathrm{DT}}(\tau),$$
where $\gamma_{\mathrm{DT}}(\tau)$ is defined by (4.13) and the expression for $\|\Delta\|_2^2$ follows by the second equality of (4.1). Thus, we end up with $\|T_{zw}\|_2^2 = \|\Delta\|_2^2 + \tilde\gamma_0 = \gamma_0 + \gamma_{\mathrm{DT}}(\tau)$, exactly as in Theorem 4.3.
4.2.3 Design case study

Return to the problem studied in §4.1.2. Because A = 0 in this case, we have that $\mathrm e^{A\tau} = 1$ for all τ, so the central controller in Theorem 4.3 is the same as that in Theorem 4.1 and does not depend on the loop delay. Yet there is always a DTC block with the transfer function $\Pi(s) = (1 - \mathrm e^{-\tau s})/s$ present, resulting in the overall controller
$$K(s) = \frac{s}{(\rho s + 1)(\varsigma s + 1) - \mathrm e^{-\tau s}},$$
whose static gain, $K(0) = 1/(\rho + \varsigma + \tau)$, decreases as τ grows. The controller in this case is stable for all τ, which can be seen from the fact that the frequency-response gain of $1/((\rho s + 1)(\varsigma s + 1))$ is strictly
[Fig. 4.6: Closed-loop disturbance sensitivity frequency-response plots: (a) ρ = 1 and ς = 1, (b) ρ = 1 and ς = 0.05, (c) ρ = 0.05 and ς = 0.05.]
contractive for all positive frequencies. We already saw, in §3.2.1, that the use of the MSP dead-time compensator renders the control sensitivity the same as in the delay-free case and the complementary sensitivity a delayed version of its delay-free counterpart. Thus, the magnitude plots in Figs. 4.3(b) and 4.3(c) remain unchanged, and we can see the effect of the loop delay only via the disturbance sensitivity.
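The static gain $K(0) = 1/(\rho + \varsigma + \tau)$ quoted above follows from the first-order expansion of the denominator of K(s) at s = 0; a symbolic check (SymPy is assumed tooling):

```python
import sympy as sp

s, rho, sig, tau = sp.symbols('s rho sigma tau', positive=True)
# dead-time H2-optimal controller of this case study
K = s / ((rho*s + 1)*(sig*s + 1) - sp.exp(-tau*s))
print(sp.limit(K, s, 0))  # 1/(rho + sigma + tau)
```

The limit resolves the 0/0 form, since the denominator expands as $(\rho + \varsigma + \tau)s + O(s^2)$.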
The plots in Fig. 4.6 present $|T_d(\mathrm j\omega)|$ for various combinations of the weights ρ and ς and for three loop delays, namely for τ ∈ {0, 1, 2}. The case of ρ = 1 and ς = 1, which resulted in a cautious design with the narrowest bandwidth and the smallest static gain of the controller, is presented in Fig. 4.6(a). We can see that an increase of the loop delay increases $|T_d(\mathrm j\omega)|$ at low frequencies, although this increase is relatively small. The effect of the loop delay becomes more pronounced as we decrease the weights. This is most visible in the plot in Fig. 4.6(c), which corresponds to the "aggressive" choice of ρ = 0.05 and ς = 0.05. The increase of the delay from zero to 1 increases the sensitivity to step load disturbances by a factor of 11, i.e. by more than 20 dB. These observations agree with the understanding that the effect of loop delays is more harmful for systems with higher loop gains and, consequently, higher crossover frequencies.
4.2.4 Extensions to systems with multiple loop delays

It is not hard to imagine situations where different input and output channels have different delays, cf. Remark 1.3 on p. 5. We already saw that several stabilization methods studied in Chapter 3 extend to systems with multiple delays seamlessly, see Remark 3.1 on p. 46 for example. It happens that this is not quite the case for performance-oriented methods. In this subsection we touch upon the difficulties involved and possible workarounds.
Consider first the apparently simplest nontrivial extension of the single-delay setup in Fig. 4.1(b) to the multiple-delay case. Namely, assume that there are two, possibly vector, control input channels, one of which is delay free and the other delayed by τ > 0. In other words, we assume that
$$u = \begin{bmatrix} u_0 \\ u_\tau \end{bmatrix} = \begin{bmatrix} u_0 \\ \bar D_\tau\bar u_\tau \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & \bar D_\tau \end{bmatrix}Ky =: \bar D_{\{0,\tau\}}Ky.$$
Such a delay element is referred to¹ as the input adobe delay element. This situation corresponds to the setup in Fig. 4.7(a) or, equivalently, in Fig. 4.7(b). In principle, we can now shift the feedback loop in this system, similarly to what was done in Fig. 4.5(c). This step requires only minor alterations to the single-delay case:
$$G_{yu}\bar D_{\{0,\tau\}} = \begin{bmatrix} G_{yu,0} & G_{yu,\tau}\bar D_\tau \end{bmatrix} = \overbrace{\begin{bmatrix} G_{yu,0} & \tilde G_{yu,\tau} \end{bmatrix}}^{\tilde G_{yu}} - \overbrace{\begin{bmatrix} 0 & \pi_\tau\{G_{yu,\tau}\bar D_\tau\} \end{bmatrix}}^{\Pi}$$

¹The term was coined in [37], where it was used as a building block (an adobe brick) in solving more general cases.
[Fig. 4.7: $H_2$ loop shifting for input adobe delay, with $\tilde K = \Pi_u^{-1}K(I + \Pi_yK)^{-1}$: (a) original problem; (b) moving the delay to G, $\bar D_{\{0,\tau\}}\bar u = u$; (c) swapping the adobe delay with $G_{zu}$; (d) pre-shifting the loop and decomposing $G_{zw}$; (e) final setup, $\tilde y = y + \Pi_y\tilde u$ and $\bar D_z\tilde z = z - \Delta w$.]
for the respective partitioning of $G_{yu}$. But this appears to be a dead end in general. The main obstacle is that the adobe delay element does not commute with $G_{zu}$, unless this system is block diagonal. It is thus not clear how to pull the delay out of the generalized plant via the "z" channel, as done in the single-delay case in Fig. 4.5(e).

To gain insight into this issue, let us simplify the problem even more. Assume, for the time being, that
$$G_{zu} = \begin{bmatrix} G_{zu,0} & G_{zu,0\tau} \\ 0 & G_{zu,\tau} \end{bmatrix},\qquad(4.16)$$
for a square and invertible $G_{zu,0}$, where the input partition is compatible with that in $\bar D_{\{0,\tau\}}$. Consider the signal
$$z_u := G_{zu}u = \begin{bmatrix} G_{zu,0} & G_{zu,0\tau}\bar D_\tau \\ 0 & G_{zu,\tau}\bar D_\tau \end{bmatrix}\begin{bmatrix} u_0 \\ \bar u_\tau \end{bmatrix} = \begin{bmatrix} G_{zu,0}(u_0 + G_{zu,0}^{-1}G_{zu,0\tau}\bar D_\tau\bar u_\tau) \\ \bar D_\tau G_{zu,\tau}\bar u_\tau \end{bmatrix}$$
and split
$$G_{zu,0}^{-1}G_{zu,0\tau}\bar D_\tau = \tilde G_{\mathrm a} - \pi_\tau\{G_{zu,0}^{-1}G_{zu,0\tau}\bar D_\tau\} =: \tilde G_{\mathrm a} - \Pi_{u,0\tau}\qquad(4.17)$$
as in (3.43a). Denoting $\tilde G_{zu,0\tau} := G_{zu,0}\tilde G_{\mathrm a}$, which is finite dimensional, we end up with the relation
$$z_u = \begin{bmatrix} I & 0 \\ 0 & \bar D_\tau \end{bmatrix}\begin{bmatrix} G_{zu,0} & \tilde G_{zu,0\tau} \\ 0 & G_{zu,\tau} \end{bmatrix}\begin{bmatrix} I & -\Pi_{u,0\tau} \\ 0 & I \end{bmatrix}\begin{bmatrix} u_0 \\ \bar u_\tau \end{bmatrix} =: \bar D_z\tilde G_{zu}\Pi_u^{-1}\bar u.\qquad(4.18)$$
The system $\bar D_z$ is also an adobe delay element in this case, just with potentially different dimensions of the delayed block in it ($G_{zu,\tau}$ is not necessarily square). This yields the system in Fig. 4.7(c).

The rest is quite technical, albeit straightforward. Because $\Pi_u$ is bi-stable, we can use it as a multiplier and shift it to the controller side, as shown in Fig. 4.7(d). Partitioning the output of $G_{zw}$ according to the partition of z and using the decomposition in (3.40a), we can write
$$\begin{bmatrix} G_{zw,0} \\ G_{zw,\tau} \end{bmatrix} = \begin{bmatrix} G_{zw,0} \\ \pi_\tau\{G_{zw,\tau}\} + \bar D_\tau\hat G_{zw,\tau} \end{bmatrix} = \begin{bmatrix} 0 \\ \pi_\tau\{G_{zw,\tau}\} \end{bmatrix} + \begin{bmatrix} I & 0 \\ 0 & \bar D_\tau \end{bmatrix}\begin{bmatrix} G_{zw,0} \\ \hat G_{zw,\tau} \end{bmatrix} =: \Delta + \bar D_z\hat G_{zw},$$
see the generalized plant in Fig. 4.7(d). The impulse response of this Δ still does not overlap that of $\bar D_zT$ for any causal T, so it can be handled by the already familiar approaches.
4.2. H2 DESIGN FOR DEAD-TIME SYSTEMS 81
[Block diagram: primary controller $\tilde K$ with FIR blocks $\Pi_{u,0\tau}$ (interchannel feedforward) and $\Pi_{y,\tau}$ (feedback from the delayed channel $\bar D_\tau$); signals $y$, $\tilde y$, $u_0$, $u_\tau$, $\bar u_\tau$.]
Fig. 4.8: Feedforward-action Smith predictor (FASP)
It thus remains only to eliminate the infinite-dimensional part in the “$G_{yu}$” block of the generalized plant. To this end, note that
$$G_{yu}\bar D_{\{0,\tau\}}\Pi_u^{-1} = \begin{bmatrix} G_{yu,0} & G_{yu,\tau}\bar D_\tau - G_{yu,0}\Pi_{u,0\tau}\end{bmatrix} = \begin{bmatrix} G_{yu,0} & \bar G_{yu,0\tau}\bar D_\tau - G_{yu,0}\tilde G_a\end{bmatrix},$$
where (4.17) is used to decompose $\Pi_{u,0\tau}$ and $\bar G_{yu,0\tau} := G_{yu,\tau} + G_{yu,0}G_{zu,0}^{-1}G_{zu,0\tau}$ is finite-dimensional. Decomposing
$$\bar G_{yu,0\tau}\bar D_\tau = \tilde G_{yu,0\tau} - \pi_\tau\bigl\{\bar G_{yu,0\tau}\bar D_\tau\bigr\} =: \tilde G_{yu,0\tau} - \Pi_{y,\tau},$$
we end up with
$$G_{yu}\bar D_{\{0,\tau\}}\Pi_u^{-1} = \begin{bmatrix} G_{yu,0} & \tilde G_{yu,0\tau} - G_{yu,0}\tilde G_a\end{bmatrix} - \begin{bmatrix} 0 & \Pi_{y,\tau}\end{bmatrix} =: \tilde G_{yu} - \Pi_y \qquad(4.19)$$
for a finite-dimensional $\tilde G_{yu}$, which is ready for applying the loop shifting from §3.4.1. Thus, the final setup is as shown in Fig. 4.7(e) with
$$\tilde K := \Pi_u^{-1}K\bigl(I + \Pi_y\Pi_u^{-1}K\bigr)^{-1} = \Pi_u^{-1}K(I + \Pi_yK)^{-1} \qquad(4.20)$$
(because $\Pi_y\Pi_u^{-1} = \Pi_y$). Taking into account the orthogonality of $\Delta$ and $\bar D_\tau T_{\tilde w}$, the $H^2$ problem again reduces to a finite-dimensional $H^2$ standard problem, which can be solved by Theorem 4.1.
Now return to the structure of $G_{zu}$ assumed in (4.16). The structure itself is quite special and should not be expected to hold in many applications. Nonetheless, the ideas presented above do extend to a general $G_{zu}$ under the mild condition that the feedthrough term $G_{zu}(\infty)$ has full column rank. The most technically nontrivial part of the general solution is the factorization
$$G_{zu}\bar D_{\{0,\tau\}} = \Xi\tilde G_{zu}\Pi_u^{-1} \qquad(4.21)$$
for a finite-dimensional $\tilde G_{zu}$, an inner (i.e. stable and energy-preserving) $\Xi$, and a bi-stable $\Pi_u$. In fact, this factorization results in $\tilde G_{zu}$ having the same “A” matrix as $G_{zu}$ and in a block upper-triangular $\Pi_u$ with static invertible diagonal elements. Moreover, it is then always possible to decompose $G_{zw} = \Delta + \Xi\hat G_{zw}$ with $\Delta$ orthogonal to $\Xi T_{\tilde w}$ for all $T_{\tilde w}\in H^2$ and a finite-dimensional $\hat G_{zw}$ having the same “A” matrix as $G_{zw}$. Details, which are rather technical, can be found in [45, §III-B]. The input adobe delay result then extends to the output adobe delay case, where there are delay-free and delayed measurement channels, and eventually to systems with general multiple input and output delays, by running a recursion of adobe delay problems. In all these cases delayed $H^2$ problems reduce to equivalent delay-free $H^2$ problems. The details go beyond the scope of these notes though.
Remark 4.7 (FASP architecture). The controller structure induced by (4.20) is of special interest. The mapping $K\mapsto\tilde K$ defined by that relation is bijective, with $K = \Pi_u\tilde K(I - \Pi_y\tilde K)^{-1}$ for a finite-dimensional “primary controller” $\tilde K$ designed for a finite-dimensional $H^2$ problem. Taking into account the structure of the “$\Pi$” elements from (4.18) and (4.19), which is preserved in the case of a general $G_{zu}$, the controller above corresponds to the block diagram in Fig. 4.8. It has a “conventional” DTC FIR element $\Pi_{y,\tau}$ as its internal feedback, although only from the delayed control channel. On top of that, there is also the
“interchannel” DTC FIR element $\Pi_{u,0\tau}$, which, in a sense, compensates for the delay in the performance channel. This unorthodox structure is known as the FASP (feedforward-action Smith predictor).
Curiously, nothing similar to the FASP has appeared in the classical literature, where DTC architectures are typically ad hoc. In fact, the question of a proper extension of the DTC architecture to systems with multiple delays was long considered an open problem, because stabilization-induced structures produced rather poor feedback performance; see the discussion in the beginning of [23]. It was even suggested by Jerome and Ray in [23] that in many situations artificial delays should be added to channels with smaller delays to equalize loop delays, which is conceptually flawed. Only the advent of transparent solutions to optimal control problems revealed the FASP architecture. This was done for the first time in [37], as a technical by-product of the $H^\infty$ optimization procedure for systems with multiple loop delays. The arguments above are from [45] and are somewhat more transparent. ▽
It should be emphasized that the discrepancy between stability- and performance-induced DTC architectures discussed above shows up only in problems with multiple i/o delays. It is not unreasonable to expect that a similar discrepancy exists in more general multiple-delay systems, like those discussed in Remark 3.1 on p. 46. But studies of such architectures are yet to be carried out.
4.3 H∞ design for dead-time systems
Let us turn now to the $H^\infty$ problem for the system in Fig. 4.1(b). We again impose the same delay-free assumptions on the generalized plant $G$, i.e. suppose that A5–8 hold. To simplify the formulae, we also assume throughout this section that $D_{zw} = 0$, so that the formulae of Remark 4.3 on p. 71 can be used.
4.3.1 Extraction of γ-suboptimal dead-time controllers
The logic here follows that of §4.2.1, just this time we use Theorem 4.2 as the starting point. The generator of all suboptimal controllers in Theorem 4.2 satisfies the assumptions of Lemma 3.5 too, so all $Q$'s rendering $\mathcal F_\ell(J_\gamma,Q)$ a dead-time system for a given $\tau$ must be of the form $Q = \pi_\tau\{H_{\gamma,22}\} + \tilde Q\bar D_\tau$, where $H_\gamma := J_\gamma^{-1}$ with
$$\begin{bmatrix}H_{\gamma,11}(s) & H_{\gamma,12}(s)\\ H_{\gamma,21}(s) & H_{\gamma,22}(s)\end{bmatrix} = \left[\begin{array}{c|cc} A + B_wF_w - \gamma^{-2}Z_\gamma Y(XB_u + C_z'D_{zu})F_u & Z_\gamma(B_u + L_zD_{zu}) & -Z_\gamma L_y\\\hline C_y + D_{yw}F_w & D_{yu} & I\\ -F_u & I & 0\end{array}\right],$$
for some stable and causal $\tilde Q$, i.e. such that its transfer function $\tilde Q\in H^\infty$. Thus, the $H^\infty$ optimization problem reduces to the question of whether a $\tilde Q\in H^\infty$ exists such that $\|Q_0 + Q_\tau\bar D_\tau\|_\infty < \gamma$, where
$$Q_0 := D_{zu}\pi_\tau\{H_{\gamma,22}\}D_{yw}\quad\text{and}\quad Q_\tau := D_{zu}\tilde QD_{yw}.$$
Although this problem looks similar to the corresponding problem in the $H^2$ case, the elegant $H^2$ projection reasoning does not apply here. $H^\infty$ is not a Hilbert space, so there is no notion of orthogonality on it. As such, the separation of the norms of $Q_0$ and $Q_\tau$ exploited in the $H^2$ case no longer holds. In fact, zeroing $\tilde Q$, similarly to the optimal strategy in the $H^2$ case, is not the right course of action in general. This is illustrated by the following simple example:
Example 4.1. Let
$$Q_0(s) = \pi_\tau\Bigl\{\frac1s\Bigr\} = \frac{1-e^{-\tau s}}{s}$$
and consider the problem of minimizing $\|Q_0 + Q_\tau\bar D_\tau\|_\infty$ by a causal $Q_\tau$. First, in an attempt to imitate the $H^2$ solution, choose $Q_\tau = 0$. With this choice the attained norm is
$$\|Q_0\|_\infty = \sup_{\omega\in\mathbb R}\frac{|1 - e^{-j\tau\omega}|}{|\omega|} = \lim_{\omega\to0}\frac{|1 - e^{-j\tau\omega}|}{|\omega|} = \tau.$$
Choose now
$$Q_\tau(s) = \frac1s - \frac{(2\tau)^2s^2 + \pi^2}{\pi s(2\tau s + \pi e^{-\tau s})}$$
(an educated guess). This is an $H^\infty$ function. Indeed, it is clearly proper, so bounded on $\mathbb C_\alpha$ for a sufficiently large $\alpha$. Its singularities are at the origin and at the roots of the quasi-polynomial $p(s) = 2\tau s + \pi e^{-\tau s}$. The singularity at $s = 0$ is removable, because
$$\lim_{s\to0}Q_\tau(s) = \lim_{s\to0}\frac{\pi(2\tau s + \pi e^{-\tau s}) - (2\tau)^2s^2 - \pi^2}{\pi s(2\tau s + \pi e^{-\tau s})} = -\frac{\pi-2}{\pi}\,\tau.$$
The roots of $p(s)$ can be analyzed via the Nyquist criterion applied to $\pi e^{-\tau s}/(2\tau s)$. Its crossover frequency is $\omega_c = \pi/(2\tau)$ and its phase at this frequency is
$$\arg\frac{\pi e^{-j\tau\omega_c}}{j2\tau\omega_c} = -\frac\pi2 - \tau\omega_c = -\pi.$$
This implies that the quasi-polynomial $p(s)$ has all its roots in the open left half-plane $\mathbb C\setminus\bar{\mathbb C}_0$ except the pair at $\pm j\omega_c$. But at those roots we have that
$$\lim_{s\to\pm j\omega_c}Q_\tau(s) = \lim_{s\to\pm j\omega_c}\frac{\pi(2\tau s + \pi e^{-\tau s}) - (2\tau)^2s^2 - \pi^2}{\pi s(2\tau s + \pi e^{-\tau s})} = \frac{-16 \pm j\,2(\pi-2)^2}{\pi(\pi^2+4)\tau},$$
i.e. these singularities are also removable.
Now,
$$Q_0(s) + Q_\tau(s)e^{-\tau s} = \frac{2\tau}{\pi}\cdot\frac{\pi - 2\tau se^{-\tau s}}{2\tau s + \pi e^{-\tau s}} = \frac{2\tau}{\pi}\cdot\frac{e^{-\tau s}p(-s)}{p(s)}.$$
We already know that this is an $H^\infty$ function, so
$$\|Q_0 + Q_\tau\bar D_\tau\|_\infty = \sup_{\omega\in\mathbb R}\frac{2\tau}{\pi}\cdot\frac{|e^{-j\tau\omega}|\,|p(-j\omega)|}{|p(j\omega)|} = \frac{2\tau}{\pi} < \tau = \|Q_0\|_\infty.$$
Thus, it is possible to improve the $\|Q_0\|_\infty$ bound by at least a factor of $\pi/2\approx1.57$ in this case by an appropriate choice of $Q_\tau$, even though the impulse responses of $Q_0$ and $Q_\tau\bar D_\tau$ do not overlap. ▽
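The two norms computed in this example are easy to check numerically. The sketch below (plain Python, assuming $\tau = 1$) sweeps the imaginary axis and confirms that $Q_0 + Q_\tau\bar D_\tau$ is all-pass with gain $2\tau/\pi\approx0.637 < \tau$:

```python
import cmath, math

tau = 1.0

def Q0(s):
    # Q0(s) = pi_tau{1/s} = (1 - e^{-tau*s})/s
    return (1 - cmath.exp(-tau*s)) / s

def Qtau(s):
    # the "educated guess" from Example 4.1
    return 1/s - ((2*tau)**2*s**2 + math.pi**2) / (math.pi*s*(2*tau*s + math.pi*cmath.exp(-tau*s)))

ws = [0.01 * 1.2**k for k in range(60)]
mags = [abs(Q0(1j*w) + Qtau(1j*w)*cmath.exp(-1j*tau*w)) for w in ws]
print(max(mags) - 2*tau/math.pi)   # ~0: the sum is all-pass with gain 2*tau/pi
```

Since $p(-j\omega) = \overline{p(j\omega)}$, the magnitude is exactly $2\tau/\pi$ at every frequency, which the sweep reproduces to machine precision.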
The problem of finding functions $Q_\tau\in H^\infty$ such that $\|Q_0 + Q_\tau\bar D_\tau\|_\infty < \gamma$ is known as the Nehari extension problem, named after Zeev Nehari. It is known that this problem is solvable iff the Hankel norm of $Q_0'\bar D_\tau$ is smaller than $\gamma$, where $Q_0'$ is the adjoint operator, on $L^2(\mathbb R)$, of $Q_0$, i.e. the system whose impulse response is $[q_0(-t)]'$, and the Hankel norm is the induced norm of this system viewed as an operator $L^2(\mathbb R_-)\to L^2(\mathbb R_+)$. The following result from [41], presented here without proof, gives a verifiable solvability condition for the required Hankel norm to be smaller than $\gamma$ and parametrizes all solutions to the problem.
Lemma 4.4 (Nehari problem for delayed systems). Let
$$G_0(s) = \begin{bmatrix}A_0 & B_0\\ C_0 & 0\end{bmatrix}.$$
There is $Q_\tau\in H^\infty$ such that $\|\pi_\tau\{G_0\} + Q_\tau\bar D_\tau\|_\infty < \gamma$ iff $\det\Sigma_{0,22}(t)\ne0$ for all $t\in[0,\tau]$, where
$$\begin{bmatrix}\Sigma_{0,11}(t) & \Sigma_{0,12}(t)\\ \Sigma_{0,21}(t) & \Sigma_{0,22}(t)\end{bmatrix} := \exp\Bigl(\begin{bmatrix}A_0 & B_0B_0'\\ -\gamma^{-2}C_0'C_0 & -A_0'\end{bmatrix}t\Bigr).$$
If this condition holds, then all solutions of the problem can be presented in the form
$$\pi_\tau\{G_0\} + Q_\tau\bar D_\tau = \bigl(\Phi_{12} + (\Phi_{11} - \bar D_\tau I)Q_{\rm NEP}\bigr)\bigl(I + \Phi_{22} + \Phi_{21}Q_{\rm NEP}\bigr)^{-1},$$
where
$$\begin{bmatrix}\Phi_{11}(s) & \Phi_{12}(s)\\ \Phi_{21}(s) & \Phi_{22}(s)\end{bmatrix} = \pi_\tau\left\{\left[\begin{array}{cc|cc} A_0 & B_0B_0' & 0 & B_0\\ -\gamma^{-2}C_0'C_0 & -A_0' & -\gamma^{-2}\Sigma_{0,22}^{-1}(\tau)C_0' & -\Sigma_{0,22}^{-1}(\tau)\Sigma_{0,21}(\tau)B_0\\\hline C_0 & 0 & 0 & 0\\ 0 & B_0' & 0 & 0\end{array}\right]\right\}$$
and $Q_{\rm NEP}\in H^\infty$ is such that $\|Q_{\rm NEP}\|_\infty < \gamma$ but is otherwise arbitrary.
Example 4.2. Return to the problem studied in Example 4.1, in which $G_0(s) = 1/s$. For this system
$$\begin{bmatrix}\Sigma_{0,11}(t) & \Sigma_{0,12}(t)\\ \Sigma_{0,21}(t) & \Sigma_{0,22}(t)\end{bmatrix} := \exp\Bigl(\begin{bmatrix}0 & 1\\ -\gamma^{-2} & 0\end{bmatrix}t\Bigr) = \begin{bmatrix}\cos(t/\gamma) & \gamma\sin(t/\gamma)\\ -\gamma^{-1}\sin(t/\gamma) & \cos(t/\gamma)\end{bmatrix},$$
so that the problem is solvable iff $\gamma > 2\tau/\pi$. This is exactly the bound that the choice of $Q_\tau$ in Example 4.1 attains.
The “central” $Q_\tau$, the one obtained by setting $Q_{\rm NEP} = 0$, attains
$$Q(s) = \frac{\cos(\tau/\gamma)s - \gamma^{-1}\sin(\tau/\gamma) - \bigl(\cos(2\tau/\gamma)s - \gamma^{-1}\sin(2\tau/\gamma)\bigr)e^{-\tau s}}{\bigl(\cos(\tau/\gamma)s - \gamma^{-1}\sin(\tau/\gamma)\bigr)s + \bigl(\gamma^{-1}\sin(2\tau/\gamma)s + \cos(2\tau/\gamma)\bigr)e^{-\tau s}} \;\xrightarrow{\,\gamma\to2\tau/\pi\,}\; \frac{2\tau}{\pi}\cdot\frac{\pi - 2\tau se^{-\tau s}}{2\tau s + \pi e^{-\tau s}},$$
which leads to
$$Q_\tau(s) = \Bigl(\frac{2\tau}{\pi}\cdot\frac{\pi - 2\tau se^{-\tau s}}{2\tau s + \pi e^{-\tau s}} - \frac1s + \frac{e^{-\tau s}}{s}\Bigr)e^{\tau s} = \frac1s - \frac{(2\tau)^2s^2 + \pi^2}{\pi s(2\tau s + \pi e^{-\tau s})},$$
exactly as in Example 4.1. Thus, the “educated guess” there was actually the optimal solution. ▽
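The solvability threshold $\gamma > 2\tau/\pi$ can be read off the closed-form $\Sigma$ above: $\det\Sigma_{0,22}(t) = \cos(t/\gamma)$ first vanishes at $t = \gamma\pi/2$, so it keeps its sign on $[0,\tau]$ exactly when $\tau < \gamma\pi/2$. A quick numerical check (plain Python, $\tau = 1$ assumed):

```python
import math

tau = 1.0

def solvable(gamma, n=1000):
    # Lemma 4.4: need det Sigma_{0,22}(t) = cos(t/gamma) != 0 on [0, tau];
    # since cos(0) = 1, this holds iff cos stays positive along the grid
    return min(math.cos(tau*k/(n*gamma)) for k in range(n + 1)) > 0

print(solvable(1.01 * 2*tau/math.pi), solvable(0.99 * 2*tau/math.pi))  # True False
```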
The result of Lemma 4.4 is instrumental in the solution of the standard $H^\infty$ problem for the system in Fig. 4.1(b). To formulate this solution, define the matrix function
$$\Sigma(t) := \exp\Bigl(\begin{bmatrix}A & B_wB_w'\\ -\gamma^{-2}C_z'C_z & -A'\end{bmatrix}t\Bigr)$$
and the matrices
$$B_\tau := \begin{bmatrix}Y & I\end{bmatrix}\Sigma'(\tau)\begin{bmatrix}-\gamma^{-2}C_z'D_{zu}\\ B_u\end{bmatrix}\quad\text{and}\quad C_\tau := \begin{bmatrix}-D_{yw}B_w' & C_y\end{bmatrix}\Sigma(\tau)'\begin{bmatrix}-\gamma^{-2}X\\ I\end{bmatrix}.$$
The result below is the counterpart of Theorem 4.2 for dead-time systems:
Theorem 4.5. If A5–8 hold true and $D_{zw} = 0$, then the standard $H^\infty$ problem is solvable iff the following conditions hold:
(a) there is a stabilizing solution $X$ to ARE (4.9a) such that $X = X'\succeq0$,
(b) there is a stabilizing solution $Y$ to ARE (4.9b) such that $Y = Y'\succeq0$,
(c) $Z_\gamma(t)$ is well defined for all $t\in[0,\tau]$, where
$$Z_\gamma(t) := \Bigl(\begin{bmatrix}Y & I\end{bmatrix}\Sigma'(t)\begin{bmatrix}-\gamma^{-2}X\\ I\end{bmatrix}\Bigr)^{-1}.$$
In such a case, all γ-suboptimal controllers are given as the system in Fig. 4.9, where
$$J_{\gamma,\tau}(s) = \left[\begin{array}{c|cc} A + B_wF_w + B_uF_u + Z_\gamma(\tau)L_yC_\tau & -Z_\gamma(\tau)L_y & Z_\gamma(\tau)B_\tau\\\hline F_u & 0 & I\\ -C_\tau & I & 0\end{array}\right] \qquad(4.22)$$
[Block diagram: the controller consists of $J_{\gamma,\tau}$ in feedback with the FIR block $\Pi_\infty$ and a free parameter $Q$; signals $y$, $u$.]
Fig. 4.9: All $H^\infty$ suboptimal controllers for the standard problem in Fig. 4.1(b)
and
$$\Pi_\infty(s) = \pi_\tau\bigl\{\mathcal F_u\bigl(G(s),\gamma^{-2}[G_{zw}(-s)]'\bigr)e^{-\tau s}\bigr\} = \pi_\tau\left\{\left[\begin{array}{cc|c} A & B_wB_w' & B_u\\ -\gamma^{-2}C_z'C_z & -A' & -\gamma^{-2}C_z'D_{zu}\\\hline C_y & D_{yw}B_w' & D_{yu}\end{array}\right]e^{-\tau s}\right\} \qquad(4.23)$$
for any $Q\in H^\infty$ such that $\|D_{zu}QD_{yw}\|_\infty < \gamma$.
Proof. Follows by plugging the conditions and the parametrization of Lemma 4.4 into the parametrization of Theorem 4.2. The technical details are quite involved and can be found in [41].
Some remarks are in order:
Remark 4.8 ($H^\infty$ cost of delay). It is readily seen that $Z_\gamma(0)$ in Theorem 4.5 equals $Z = (I - \gamma^{-2}YX)^{-1}$ from Theorem 4.2. This implies that the solvability conditions for the dead-time $H^\infty$ problem cover those of its delay-free counterpart, which is expectable. The addition is expressed in Theorem 4.5 in terms of the matrix function $Z_\gamma(t)$. Equivalently, it can be shown that the delay problem is solvable iff the delay-free problem is solvable and, in addition, the solution to the differential Riccati equation
$$\dot P_X(t) = P_X(t)A + A'P_X(t) + C_z'C_z + \gamma^{-2}P_X(t)B_wB_w'P_X(t),\qquad P_X(0) = X,$$
exists for all $t\in[0,\tau]$ and is such that $\rho(YP_X(\tau)) < \gamma^2$. Yet another version is that the solution to the differential Riccati equation
$$\dot P_Y(t) = P_Y(t)A + A'P_Y(t) + B_wB_w' + \gamma^{-2}P_Y(t)C_z'C_zP_Y(t),\qquad P_Y(0) = Y,$$
exists for all $t\in[0,\tau]$ and is such that $\rho(P_Y(\tau)X) < \gamma^2$. Moreover, both $P_X(t)$ and $P_Y(t)$ are non-decreasing functions of $t$, in the sense that $\dot P_X(t)\succeq0$ and $\dot P_Y(t)\succeq0$ for all $t\ge0$. Normally, the performance level attainable in the delay-free case is attainable for no $\tau > 0$. But there might be situations, and not only those in which the delay-free suboptimal controller is $K = 0$, in which the addition of a finite delay does not harm the attainable performance. ▽
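For a scalar illustration of the first condition (hypothetical data: $A = 0$, $B_w = C_z = 1$ and, for simplicity, $X = 0$), the differential Riccati equation reduces to $\dot P_X = 1 + \gamma^{-2}P_X^2$, whose solution $P_X(t) = \gamma\tan(t/\gamma)$ escapes to infinity at $t = \gamma\pi/2$ — recovering the delay bound $\tau < \gamma\pi/2$ of Example 4.2. A minimal RK4 check of the closed form:

```python
import math

gamma, X = 2.0, 0.0   # hypothetical scalar data: A = 0, Bw = Cz = 1, X = 0

def PX(t, n=20000):
    # RK4 integration of  dP/dt = Cz^2 + gamma^-2 * Bw^2 * P^2,  P(0) = X
    p, h = X, t/n
    f = lambda p: 1.0 + p*p/gamma**2
    for _ in range(n):
        k1 = f(p); k2 = f(p + h*k1/2); k3 = f(p + h*k2/2); k4 = f(p + h*k3)
        p += h*(k1 + 2*k2 + 2*k3 + k4)/6
    return p

# closed form P_X(t) = gamma*tan(t/gamma); finite escape at t = gamma*pi/2
print(PX(1.0), gamma*math.tan(1.0/gamma))
```

The monotonicity noted in the remark ($\dot P_X\ge0$) is also visible: the integrated solution is increasing in $t$.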
Remark 4.9 (sampled-data vs. delayed feedback). An interesting observation is that the solvability conditions for the dead-time $H^\infty$ problem discussed in Remark 4.8 coincide with the solvability conditions for the sampled-data $H^\infty$ problem for the same generalized plant; see [43, §V.B]. In other words, the cost of delay equals the cost of sampling in the $H^\infty$ case whenever the sampling period coincides with the loop delay, which is different from the $H^2$ case. This fact is somewhat surprising, because the causality constraints imposed on the controller by sampling are less restrictive than those imposed by the delay. Nonetheless, the disturbance is capable of “outsmarting” the controller, which acts in open loop during each intersample interval, in this situation. ▽
Remark 4.10 (controller structure). The $H^\infty$ suboptimal controllers in Fig. 4.9 are dead-time compensators, again. Yet the DTC block $\Pi_\infty$ given by (4.23) is quite different from the modified Smith predictor (unless $G_{zw} = 0$ or $\gamma = \infty$). It is intriguing to understand the rationale behind this structure. To this end, write
$$\mathcal F_u(G,\gamma^{-2}G_{zw}') = G_{yu} + G_{yw}(\gamma^2I - G_{zw}'G_{zw})^{-1}G_{zw}'G_{zu},$$
where $G_{zw}'$ denotes the adjoint operator over $L^2(\mathbb R)$. Thus, the relation $y = \mathcal F_u(G,\gamma^{-2}G_{zw}')u$ can be presented in the form
$$y = G_{yw}w_\star + G_{yu}u,$$
where the exogenous input is generated as
$$w_\star = (\gamma^2I - G_{zw}'G_{zw})^{-1}G_{zw}'G_{zu}u \iff w_\star = \gamma^{-2}G_{zw}'(G_{zw}w_\star + G_{zu}u) = \gamma^{-2}G_{zw}'z.$$
It can be shown that this signal is the worst-case disturbance in the open-loop setting, viz. the signal that maximizes $\|z\|_2^2 - \gamma^2\|w\|_2^2$ for any given $u$. Thus, the $H^\infty$ DTC element compensates the loop delay under the worst-case open-loop scenario. The rationale behind this is well founded. Indeed, the whole $H^\infty$ design methodology is effectively a game in which $u$ plays against $w$, trying to minimize $\|z\|_2^2 - \gamma^2\|w\|_2^2$. But the effect of the control signal on $z$ is delayed with respect to that of $w$, so in the “prediction” part we can only act in open loop. The DTC form of the $H^\infty$ controller was first recognized by Meinsma and Zwart in [38] for the mixed-sensitivity problem and then derived, and interpreted, for the general case in [41]. ▽
4.3.2 Loop shifting approach
The loop-shifting procedure of Fig. 4.5 is not quite helpful in the $H^\infty$ case, just because systems with non-overlapping impulse responses can still alter the effect of each other; cf. Example 4.1. But there is one special case where the procedure can still be used in the $H^\infty$ context. This is the case where $G_{zw} = 0$, for which we can take $\Delta = 0$ and end up with the relation $\mathcal F_\ell(G,K\bar D_\tau) = \mathcal F_\ell(\tilde G,\tilde K)\bar D_\tau$. Because the delay does not affect the magnitude of the frequency response, we have that
$$\|\mathcal F_\ell(G,K\bar D_\tau)\|_\infty = \|\mathcal F_\ell(\tilde G,\tilde K)\|_\infty,$$
which implies that the problem reduces to a finite-dimensional $H^\infty$ problem for $\tilde G$ given by (4.15), and the DTC element is the standard MSP, like that in the $H^2$ case.
$H^\infty$ problems with $G_{zw} = 0$ are actually important. Several robust stability problems fall into this category. For example, consider a plant $P$, described as $P = P_0(I + W_2\Delta_{\rm IM}W_1)$ for a known nominal plant $P_0$ and weights $W_1,W_2\in RH^\infty$, and an uncertain element $\Delta_{\rm IM}$, of which we only know that $\Delta_{\rm IM}\in H^\infty$ and $\|\Delta_{\rm IM}\|_\infty\le\alpha$ for some $\alpha > 0$ known as the uncertainty radius. This is the so-called input multiplicative modeling uncertainty model. It follows from small-gain arguments that a controller $K$ robustly stabilizes this plant, i.e. stabilizes all $P$ of this form, iff $\|\mathcal F_\ell(G,K)\|_\infty < 1/\alpha$, where the generalized plant
$$G = \begin{bmatrix}0 & W_1\\ P_0W_2 & P_0\end{bmatrix}$$
has a zero “$G_{zw}$” part. Likewise, the generalized plants for the output multiplicative uncertainty model $P = (I + W_2\Delta_{\rm OM}W_1)P_0$ and for the additive uncertainty model $P = P_0 + W_2\Delta_{\rm AD}W_1$ are
$$G = \begin{bmatrix}0 & W_1P_0\\ W_2 & P_0\end{bmatrix}\quad\text{and}\quad G = \begin{bmatrix}0 & W_1\\ W_2 & P_0\end{bmatrix},$$
respectively, and both have zero “$G_{zw}$” parts.
[Block diagram: unity-feedback loop with plant $P(s)e^{-\tau s}$, primary controller $\tilde C(s)$, and DTC element $\Pi(s)$; signals $r$, $\tilde e$, $u$, $d$, $y$, $n$, $\tilde y$.]
Fig. 4.10: Unity-feedback setup with dead-time compensator
A consequence of this is that the MSP is actually a part of the optimal robust controller, i.e. the controller stabilizing $P$ under the maximal uncertainty radius. This might sound surprising, taking into account that the dead-time compensation procedure is intrinsically open loop. Moreover, if $P_0$ is stable, then we can always choose $\Pi = P_0(1 - \bar D_\tau)$ in the loop-shifting procedure of Fig. 4.5, resulting in the classical Smith predictor as the optimal controller. With this choice, $\tilde G_{yu} = G_{yu} = P_0$ and we conclude that the Smith controller has the very same robustness level for a dead-time plant as its primary controller has for the delay-free version of the plant, with respect to either multiplicative or additive uncertainty (in fact, even with respect to some classes of structured uncertainty). This conclusion is not that obvious.
4.4 Tuning industrial controllers
This chapter is wound up with a brief discussion of the use of the DTC controller architecture to control simple industrial processes. A representative setup is presented in Fig. 4.10, where $P$ is a low-dimensional SISO plant, $\Pi$ is a DTC element, and $\tilde C$ is a primary controller designed in a delay-free setting.
A typical design procedure for such systems is to fix the DTC element (often, as the classical Smith predictor) and then to design a fixed-structure (often, PI) primary controller for a delay-free equivalent plant. This is fundamentally different from the optimization-based approaches discussed above, where all parts of the controller are designed together. Arguably, the most important requirement on this separated design procedure is its transparency, namely the ability of the resulting closed-loop system to inherit properties of the delay-free loop for which $\tilde C$ is designed. The choice of $\Pi$ is key to a successful design toward this end. So in the discussion below the main emphasis is placed on the effect of the DTC element on the design transparency. Also, because these kinds of methods are seldom analytical, the ideas are introduced via simple examples.
One day it should be written . . .
Chapter 5
Implementation of DTC-based Controllers
WE SAW IN THE PREVIOUS TWO CHAPTERS that dead-time compensation (DTC) architectures are intrinsic to stabilization and optimal control problems for time-delay systems. At the heart of these architectures are DTC elements, which are normally infinite-dimensional FIR systems of the form
$$\Pi(s) = \pi_\tau\left\{\begin{bmatrix}A & B\\ C & D\end{bmatrix}e^{-\tau s}\right\} = \begin{bmatrix}A & e^{-A\tau}B\\ C & 0\end{bmatrix} - \begin{bmatrix}A & B\\ C & D\end{bmatrix}e^{-\tau s} = \left[\begin{array}{c|cc}A & e^{-A\tau}B & -B\\\hline C & 0 & -D\end{array}\right]\begin{bmatrix}I\\ e^{-\tau s}I\end{bmatrix} \qquad(5.1a)$$
$$= C\int_0^\tau e^{-(sI-A)t}\,\mathrm dt\; e^{-A\tau}B - De^{-\tau s} \qquad(5.1b)$$
(cf. (3.43) on p. 61). Such elements are only parts of overall controllers, like those presented in Fig. 3.10 on p. 62 or Fig. 4.8 on p. 81. However, their delayed parts do not blend easily with finite-dimensional primary controllers, and their finite-dimensional parts, like $C(sI-A)^{-1}e^{-A\tau}B$ above, cannot be separated from $\Pi$, for DTC elements must be stable themselves. For these reasons, DTC elements are implemented separately from primary controllers, as FIR systems. This chapter discusses related implementation issues.
5.1 General observations
The last equality in (5.1a) suggests that $\Pi: u\mapsto y_\Pi$ can be implemented as the following dynamics:
$$\begin{cases}\dot x_\Pi(t) = Ax_\Pi(t) + e^{-A\tau}Bu(t) - Bu(t-\tau)\\ y_\Pi(t) = Cx_\Pi(t) - Du(t-\tau)\end{cases} \qquad(5.2)$$
The only infinite-dimensional element required by this implementation is a buffer realizing the delay line needed to access past values of $u$, which is a relatively simple element. However, this implementation does not always work, as can be seen in the example below.
Example 5.1. Consider the system presented in Fig. 5.1, which is an implementation of controller (3.22)
[Block diagram: plant $e^{-\tau s}/(s-1)$ in a feedback loop with a static gain $2e^\tau$ and a DTC element built on $(e^{-\tau} - e^{-\tau s})/(s-1)$; signals $r$, $u$, $d$, $y$, $n$, $\tilde y$.]
Fig. 5.1: Benchmark system for DTC implementation
[Plot: unit-step closed-loop responses over $t\in[0,15]$; the ideal response and the diverging simulated one.]
Fig. 5.2: DTC element in Fig. 5.1 implemented via (5.2), solid line
on p. 48 for the plant with $A = 1$ and $B = 1$ and for $F = -2$. It should result in the closed-loop system $T_{yr}: r\mapsto y$ with the transfer function
$$T_{yr}(s) = \frac{e^{-\tau s}}{s+1},$$
which is readily verifiable. The expected response of the closed-loop system to the unit step $r(t) = \mathbb1(t)$ is $y(t) = (1 - e^{\tau-t})\mathbb1(t-\tau)$, which is presented in Fig. 5.2 by the red line. Yet implementing the DTC part of the controller according to (5.2) leads to an unstable closed-loop response, as shown by the cyan-blue line in Fig. 5.2. The simulations producing this result were carried out in Simulink with the variable-step ode45 solver under a maximal step size of $0.0001$, which should be small enough for this closed-loop time constant (and the results do not improve under smaller steps). ▽
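The instability is easy to reproduce without Simulink. The sketch below (a hypothetical plain-Python stand-in: forward-Euler integration of the DTC element (5.2) alone, with $A = B = C = 1$, $D = 0$, $\tau = 1$, driven by a unit step) shows that the output is correct over a short horizon, but the tiny discretization errors excited through the unstable mode of $A$ eventually diverge, in line with Fig. 5.2:

```python
import math

A, B, C, tau, h = 1.0, 1.0, 1.0, 1.0, 1e-4      # D = 0 for this element
N = round(tau/h)
buf = [0.0]*N                                   # delay line realizing u(t - tau)
x, y3, y20 = 0.0, None, None
for k in range(round(20.0/h)):
    u, ud = 1.0, buf[k % N]                     # unit-step input
    x += h*(A*x + math.exp(-A*tau)*B*u - B*ud)  # forward-Euler step of (5.2)
    buf[k % N] = u
    if k == round(3.0/h): y3 = C*x
    if k == round(20.0/h) - 1: y20 = C*x
print(y3, y20)  # y3 is near Pi(0) = 1 - 1/e; y20 has diverged
```

Here the numerical error of the integrator plays the role of $\epsilon_{\rm roff}$ in the discussion that follows.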
The results of Example 5.1 can be understood by noticing that the representation of $\Pi$ via (5.1a) actually cancels all modes of $A$. If $A$ is not Hurwitz, as in Example 5.1, these cancellations are unstable. As a result, inaccuracies in performing those cancellations, say due to roundoff errors, might be disastrous. Indeed, the output of $\Pi$ in this case is not $y_\Pi$ generated by (5.2), but rather $\tilde y_\Pi$ generated by
$$\begin{cases}\dot x_\Pi(t) = Ax_\Pi(t) + e^{-A\tau}Bu(t) - Bu(t-\tau) + \epsilon_{\rm roff}(t)\\ \tilde y_\Pi(t) = Cx_\Pi(t) - Du(t-\tau)\end{cases} \qquad(5.3)$$
where $\epsilon_{\rm roff}$ represents roundoff errors in solving the differential equation for $x_\Pi$ (accounting for errors in the second equation is not quite important here). Assuming zero initial conditions, both of $x_\Pi(0)$ and of the input history,
$$\tilde y_\Pi(t) = C\int_0^t e^{A(t-s)}\bigl(e^{-A\tau}Bu(s) - Bu(s-\tau) + \epsilon_{\rm roff}(s)\bigr)\,\mathrm ds - Du(t-\tau)$$
$$= \Bigl(C\int_{t-\tau}^t e^{A(t-s-\tau)}Bu(s)\,\mathrm ds - Du(t-\tau)\Bigr) + C\int_0^t e^{A(t-s)}\epsilon_{\rm roff}(s)\,\mathrm ds =: y_\Pi(t) + \epsilon(t) \qquad(5.4)$$
for all $t\ge0$. The error signal $\epsilon$ is the response of a system with the transfer function $C(sI-A)^{-1}$ to $\epsilon_{\rm roff}$. If the matrix $A$ is not Hurwitz, which is what we have in Example 5.1, this response diverges and we end up with unstable behavior like that in Fig. 5.2.
5.2 Implementation via reset mechanism
An elegant workaround for handling the unstable cancellations in (5.2) was proposed by Tam and Moore in [70] in the context of fixed-lag smoothing, which also involves an analog FIR part. The idea is to reset dynamics (5.2) periodically, to prevent the error signal $\epsilon$ in (5.4) from growing unbounded. A mechanical reset of (5.2) would obviously result in a response different from that of $\Pi$. But because $\Pi$ is FIR, i.e. has a finite memory, such deviations last only for $\tau$ time units after the reset. This fact can be exploited by introducing two clones of $\Pi$, which run in parallel and are reset at odd and even multiples of the delay $\tau$, respectively. In this scheme, while the first system generates the signal $y_\Pi$, the second one accumulates the right initial conditions. After $\tau$ time units the second system is ready to generate the correct $y_\Pi$, so at this point the first system is reset and the roles are interchanged.
To be specific, consider (5.3) and assume that its state (both $x_\Pi(t)$ and the input buffer) is reset to zero at $t = t_c$. In this case
$$x_\Pi(t) = \int_{t_c}^t e^{A(t-s)}\bigl(e^{-A\tau}Bu(s) - Bu(s-\tau)\mathbb1(s - t_c - \tau) + \epsilon_{\rm roff}(s)\bigr)\,\mathrm ds$$
$$= \int_{t_c}^t e^{A(t-s-\tau)}Bu(s)\,\mathrm ds - \int_{\min\{t_c+\tau,\,t\}}^t e^{A(t-s)}Bu(s-\tau)\,\mathrm ds + \int_{t_c}^t e^{A(t-s)}\epsilon_{\rm roff}(s)\,\mathrm ds$$
$$= \int_{t_c}^t e^{A(t-s-\tau)}Bu(s)\,\mathrm ds - \int_{\min\{t_c,\,t-\tau\}}^{t-\tau} e^{A(t-s-\tau)}Bu(s)\,\mathrm ds + \int_{t_c}^t e^{A(t-s)}\epsilon_{\rm roff}(s)\,\mathrm ds$$
$$= \int_{t-\tau}^t e^{A(t-s-\tau)}Bu(s)\,\mathrm ds - \int_{\min\{t_c,\,t-\tau\}}^{t_c} e^{A(t-s-\tau)}Bu(s)\,\mathrm ds + \int_{t_c}^t e^{A(t-s)}\epsilon_{\rm roff}(s)\,\mathrm ds$$
for all $t\ge t_c$. Thus, $\tilde y_\Pi(t) = y_\Pi(t) + y_\delta(t) + \epsilon(t)$, where the error caused by the (potentially wrong) zero initial conditions,
$$y_\delta(t) := -C\int_{\min\{t_c,\,t-\tau\}}^{t_c} e^{A(t-s-\tau)}Bu(s)\,\mathrm ds,$$
vanishes after $t = t_c + \tau$, when the system forgets all wrong initial conditions. The term $\epsilon$ due to roundoff errors still tends to diverge under a non-Hurwitz $A$, but if the system is reset every $T$ time units, then it remains uniformly bounded. In fact, in this case [11, Sec. II.6]
$$\|\epsilon\|_\infty \le \int_0^T\|Ce^{At}\|_\infty\,\mathrm dt\;\|\epsilon_{\rm roff}\|_\infty, \qquad(5.5)$$
where the $\infty$-norm of a matrix/vector $M$ is $\|M\|_\infty := \max_i\sum_j|M_{ij}|$ and the $L^\infty(\mathbb R_+)$-norm of a signal $v$ is $\|v\|_\infty := \sup_{t\in\mathbb R_+}\|v(t)\|_\infty$.
Thus, the implementation of $\Pi: u\mapsto y_\Pi$ by the time-varying dynamics
$$\begin{cases}\dot x_1(t) = Ax_1(t) + e^{-A\tau}Bu(t) - Bu(t-\tau), & x_1((2k-1)\tau) = 0\\ \dot x_2(t) = Ax_2(t) + e^{-A\tau}Bu(t) - Bu(t-\tau), & x_2(2k\tau) = 0\\ y_\Pi(t) = \sigma_tCx_1(t) + (1-\sigma_t)Cx_2(t) - Du(t-\tau)\end{cases} \qquad(5.6a)$$
where for all $k\in\mathbb Z_+$ the switching function
$$\sigma_t = \begin{cases}1 & \text{if } t\in[2k\tau,(2k+1)\tau)\\ 0 & \text{if } t\in[(2k+1)\tau,(2k+2)\tau)\end{cases} \qquad(5.6b)$$
results in a stable system, in the sense that the error $\epsilon$ caused by roundoff errors is uniformly bounded by (5.5), with $T = 2\tau$ (provided the roundoff errors are the same for each clone). The implementation of equations (5.6) is straightforward, with relatively low computational expense, and standard tools are applicable.
At the same time, there is not much we can do to improve the accuracy of the method. It is always a matter of how large $2\tau$ is vis-à-vis the unstable modes of $A$. In principle, it is possible to implement more than two clones of (5.2) in parallel, with round-robin scheduling. But even in this case the run time of each clone between resets is lower-bounded by $\tau$, the time required to accumulate the right initial condition. Thus, if there are $\nu\in\mathbb N\setminus\{1\}$ clones, the horizon in the error bound (5.5) decreases only to $T = \nu\tau/(\nu-1) > \tau$, which might still be insufficient if the delay is too large.
[Three plots over $t\in[0,10]$: (a) response of the first clone; (b) response of the second clone; (c) combined response.]
Fig. 5.3: Response of $\Pi$ implemented by (5.6) to a square wave of period $2\tau$ and amplitude $e/(e-1)$
Example 5.2. Return to the problem studied in Example 5.1 and consider its DTC element, whose transfer function is
$$\Pi(s) = \frac{e^{-\tau} - e^{-\tau s}}{s-1}$$
with the static gain $\Pi(0) = 1 - e^{-\tau}$. Fig. 5.3 presents the response of this $\Pi$ to a square wave with period $2\tau$ [sec] and amplitude $1/\Pi(0) = e^\tau/(e^\tau - 1)$, under $\tau = 1$. The responses of each of the two clones to the same input are depicted in Figs. 5.3(a) and 5.3(b). The dashed lines there represent intervals where proper initial conditions are accumulated after each reset, and solid lines represent intervals in which the corresponding output is used as the actual $y_\Pi$. They then combine into the solid line in Fig. 5.3(c), which is bounded and virtually coincides in this case with the expected response. The response of the closed-loop system in Fig. 5.1 is then indistinguishable from the ideal response $y(t) = (1 - e^{\tau-t})\mathbb1(t-\tau)$, which is shown by the red line in Fig. 5.2. ▽
5.3 Rational approximations
Another direction in implementing $\Pi$ given by (5.1) is to approximate it by a finite-dimensional system. This would put the DTC element on an equal footing with the primary controller, so that the overall controller can be implemented as one block, by standard tools. Several approaches to finite-dimensional approximation are outlined below.
5.3.1 Naïve Padé
The “laziest” approach is just to replace the delay element in (5.1a) with its $[n,n]$-Padé approximant studied in §1.3.1. This results in
$$\Pi_{{\rm naiveP},n}(s) = \begin{bmatrix}A & e^{-A\tau}B\\ C & 0\end{bmatrix} - \begin{bmatrix}A & B\\ C & D\end{bmatrix}R_{n,n}(s)$$
and requires only standard technical and numerical tools. But this approach does not work in the case when $A$ is not Hurwitz, at least it does not work off the shelf. Indeed, nothing guarantees that unstable modes of $A$ are canceled in the expression above. In fact, they generally are not. For example, for the DTC element in Fig. 5.1 we have that
$$\Pi_{{\rm naiveP},n}(s) = \frac{e^{-1} - R_{n,n}(s)}{s-1} = \begin{cases}\dfrac{-0.632121(s-1.00148)(s-11.9822)}{(s-1)(s^2+6s+12)} & \text{if } n=2\\[1ex] \dfrac{1.36788(s-0.99998969)(s^2-4.54542s+55.4546)}{(s-1)(s+4.64437)(s^2+7.35563s+25.8377)} & \text{if } n=3\\[1ex] \dfrac{-0.632121(s-1.00000004)(s-39.8821)(s^2-2.39698s+42.1242)}{(s-1)(s^2+8.41516s+45.9512)(s^2+11.5848s+36.5605)} & \text{if } n=4\end{cases} \qquad(5.7)$$
[Three magnitude plots of the approximation error over $\omega\in[10^0,10^1]$, for (a) $n=2$, (b) $n=3$, (c) $n=4$.]
Fig. 5.4: Normalized approximation errors for rational approximations of $\Pi(s) = (e^{-1} - e^{-s})/(s-1)$
are all unstable, which is not what we need. Still, as can be seen from the formulae above, their unstable pole at $s = 1$ is quite close to a zero and can thus be canceled without much impairing the frequency response of the resulting approximation. Thus, the approach might be a handy tool for a quick approximation if accompanied by a manual cancellation of all unstable poles of the result. This works well for the approximations in (5.7), whose approximation errors are presented in Fig. 5.4 by cyan-blue lines for the approximant degrees $n = 2,3,4$. In canceling the close pole and zero there, the static gain of the approximant was kept equal to $\Pi_{{\rm naiveP},n}(0) = \Pi(0)$. However, this approach is simple only in the SISO case and might fail if the resulting transfer functions have relatively distant poles and zeros in $\bar{\mathbb C}_0$.
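The near-cancellation for $n = 2$ can be verified by hand. With the $[2,2]$-Padé approximant $R_{2,2}(s) = (1 - s/2 + s^2/12)/(1 + s/2 + s^2/12)$, the zeros of $e^{-1} - R_{2,2}(s)$ solve a quadratic whose roots reproduce the factors in (5.7):

```python
import math

e1 = math.exp(-1.0)
# zeros of e^{-1} - R_{2,2}(s) = (e^{-1}*Q(s) - P(s))/Q(s), with
# P(s) = 1 - s/2 + s^2/12 and Q(s) = 1 + s/2 + s^2/12:
a, b, c = (e1 - 1)/12, (e1 + 1)/2, e1 - 1
d = math.sqrt(b*b - 4*a*c)
z1, z2 = (-b + d)/(2*a), (-b - d)/(2*a)
print(z1, z2)  # ~1.00148 and ~11.9822: the pole at s = 1 is only nearly canceled
```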
5.3.2 Padé with interpolation constraints
The problem above can be fixed by adding extra interpolation constraints to the Padé approximant of $e^{-\tau s}$, so that
$$R_{n,n}(\lambda_i) = e^{-\tau\lambda_i} \qquad(5.8)$$
at all eigenvalues $\lambda_i$ of $A$ in $\bar{\mathbb C}_0\setminus\{0\}$ (eigenvalues at the origin are already handled by the Padé approximant). If the algebraic multiplicity of an eigenvalue is larger than 1, then additional constraints on derivatives of the approximant at that point must be added. This ensures that all such eigenvalues are canceled in the approximant. Such approximants are special cases of so-called multipoint Padé approximants, where an approximant is designed to match a function and its derivatives at several points.
Adding constraints like (5.8) to the procedure described in §1.3.1 is a matter of replacing the last rows of (1.33) on p. 14 with appropriate linear constraints. Taking into account that (1.33) approximates $e^s$, each constraint (5.8) should be recast as $R_{n,n}(-\lambda_i\tau) = e^{-\lambda_i\tau}$, which results in the linear constraints
$$\begin{bmatrix}1 & -\lambda_i\tau & \cdots & (-1)^n(\lambda_i\tau)^n & -e^{\lambda_i\tau} & e^{\lambda_i\tau}\lambda_i\tau & \cdots & -(-1)^ne^{\lambda_i\tau}(\lambda_i\tau)^n\end{bmatrix}\begin{bmatrix}1\\ q_1\\ \vdots\\ q_n\\ p_0\\ p_1\\ \vdots\\ p_n\end{bmatrix} = 0.$$
The rest is technical. If the number of interpolation points does not exceed the number of free parameters, which is $2n+1$, then the resulting problem can be solved by inverting the last $2n+1$ columns of the counterpart of the matrix on the left-hand side of (1.33).
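For the element of Fig. 5.1 ($\lambda = 1$, $\tau = 1$, $n = 2$) the constrained system can even be solved by hand: the moment conditions give $p_0 = 1$, $p_1 = q_1 - 1$, $p_2 = q_2 - q_1 + 1/2$ and $q_2 = q_1/2 - 1/6$, after which the interpolation constraint $P(1) = e^{-1}Q(1)$ fixes $q_1$. A quick check (plain Python, names mine) that this recovers the $n = 2$ denominator of (5.9):

```python
import math

e1 = math.exp(-1.0)
# moment matching of R(s) = P(s)/Q(s) to e^{-s} up to the s^3 coefficient
# leaves one free parameter; R(1) = e^{-1} (i.e. P(1) = e^{-1}*Q(1)) fixes it:
q1 = (e1*5/6 - 1/3) / (0.5 - 1.5*e1)
q2 = q1/2 - 1/6
print(q1/q2, 1/q2)  # ~5.6387 and ~10.9161: the monic denominator in (5.9), n = 2
```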
[Magnitude plot of the approximation errors over $\omega\in[10^1,10^2]$.]
Fig. 5.5: Rational approximations of $\Pi(s) = \pi_\tau\{2e^{-s}/(2s^2-3s+1)\}$
For the DTC element in Fig. 5.1 this approach yields
$$\Pi_{{\rm intP},n}(s) = \begin{cases}\dfrac{-0.451471(s-15.28403)}{s^2+5.6387s+10.9161} & \text{if } n=2\\[1ex] \dfrac{1.23504(s^2-4.29253s+57.3393)}{(s+4.51376)(s^2+7.08771s+24.8196)} & \text{if } n=3\\[1ex] \dfrac{-0.527108(s-45.0554)(s^2-2.36514s+42.3681)}{(s^2+8.20247s+45.0126)(s^2+11.3775s+35.3632)} & \text{if } n=4\end{cases} \qquad(5.9)$$
These transfer functions are stable, so no fine tuning at the end of the procedure is needed. However, the derivations require extra effort, and the numerical calculations become problematic for large $n$ (they fail for $n = 8$ in the example above). The approximation errors are presented in Fig. 5.4 by red lines for the approximant degrees $n = 2,3,4$. The $H^\infty$-norms of the errors under all three $n$ are a bit below what we have with the naïve approximation, although in this case the gain is not quite substantial.
5.3.3 Direct Pade
Another conceptually straightforward approach is to derive the Padé approximant directly for the function $\Pi(s)$, without using the delay element as the approximation agent. If $D = 0$ in (5.1), then it makes sense to use strictly proper Padé approximants, i.e., to select $m < n$ in the procedure of §1.3.1. As a matter of fact, applied to the DTC element in Fig. 5.1 with $m = n - 1$, this produces the same approximant as that given by (5.9). However, in general these two methods result in different approximants. For example, if $\Pi(s)$ is the DTC element generated by $2e^{-s}/(2s^2 - 3s + 1)$, then the interpolating Padé and the direct Padé result in
$$
-\frac{0.4773(s^2 + 9.428s + 70.1)}{(s + 4.444)(s^2 + 6.946s + 24.32)} \quad\text{and}\quad -\frac{0.68479(s^2 + 5.264s + 87.82)}{(s + 5.472)(s^2 + 8.994s + 35.49)},
$$
respectively. The corresponding error magnitudes are presented in Fig. 5.5. The interpolating Padé produces a smaller $H_\infty$-norm of the approximation error, $0.0685$ vs. $0.0849$ for the direct approximation. But the latter method produces better approximations at low frequencies, namely for $\omega \leq 8.256$.
5.3.4 Approach of Partington–Mäkilä

Yet another approach to constructing finite-dimensional approximations of DTC elements was proposed by Partington and Mäkilä in [58]. Its key observation is that the function $\Pi(s)$ defined by (5.2) can be equivalently expressed as
$$
\Pi(s) = C\phi(sI - A)e^{-A\tau}B, \quad\text{where}\quad \phi(s) := \frac{1 - e^{-\tau s}}{s} \tag{5.1c}
$$
is the DTC element generated by the integrator $1/s$. Thus, we can approximate $\Pi(s)$ by approximating the scalar function $\phi(s)$ by its $[n-1,n]$-Padé approximant $R_{n-1,n}(s)$, or by any other method, and then using
$$
\Pi_{\text{PaMa},n}(s) = CR_{n-1,n}(sI - A)e^{-A\tau}B \tag{5.10}
$$
as a rational approximation of $\Pi(s)$. In fact, $R_{n-1,n}(s)$ can be calculated analytically via the $[n,n]$-Padé approximant of $e^{-\tau s}$ as
$$
R_{n-1,n}(s) = \frac{\tau P_{n-1}(\tau s)}{Q_n(\tau s)},
$$
where the degree-$n$ polynomial $Q_n(s)$ is given by (1.35) on p. 15 and the degree-$2\lfloor(n-1)/2\rfloor$ polynomial
$$
P_{n-1}(s) = \frac{Q_n(s) - Q_n(-s)}{s} = \sum_{i=0}^{\lfloor(n-1)/2\rfloor} 2\binom{n}{2i+1}\frac{(2n - 2i - 1)!}{(2n)!}\, s^{2i},
$$
where $\lfloor\cdot\rfloor$ stands for the floor function. It should be emphasized that the dimension of the transfer function $\Pi_{\text{PaMa},n}(s)$ in (5.10) is $n\dim(A)$, rather than $n$, in general. Consequently, the order of this approximation may be quite high even if $n$ is chosen to be small.
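These closed-form coefficients are easy to check with exact rational arithmetic. The short script below (a sketch, assuming the normalization $Q_n(0) = 1$) verifies both the Padé property of $Q_n$ and that the odd part of $Q_n$ reproduces the stated sum for $P_{n-1}$:

```python
from fractions import Fraction
from math import comb, factorial

def Q_coeffs(n):
    # Degree-n polynomial Q_n with Q_n(0) = 1 such that e^s ~ Q_n(s)/Q_n(-s)
    # is the [n,n] Pade approximant: q_j = C(n, j) (2n - j)! / (2n)!
    return [Fraction(comb(n, j) * factorial(2 * n - j), factorial(2 * n))
            for j in range(n + 1)]

def series_error(n, order):
    # Taylor coefficients of Q_n(s) - e^s * Q_n(-s), up to the given order
    Q = Q_coeffs(n)
    Qm = [(-1) ** j * c for j, c in enumerate(Q)]               # Q_n(-s)
    exp = [Fraction(1, factorial(k)) for k in range(order + 1)]
    return [(Q[k] if k <= n else Fraction(0))
            - sum(exp[k - j] * Qm[j] for j in range(min(k, n) + 1))
            for k in range(order + 1)]

for n in range(1, 7):
    # Pade property: the mismatch vanishes through order s^(2n)
    assert all(c == 0 for c in series_error(n, 2 * n)), n
    # odd part of Q_n: coefficient of s^(2i) in (Q_n(s) - Q_n(-s))/s
    Q = Q_coeffs(n)
    for i in range((n - 1) // 2 + 1):
        assert 2 * Q[2 * i + 1] == Fraction(
            2 * comb(n, 2 * i + 1) * factorial(2 * n - 2 * i - 1), factorial(2 * n))
print("P_{n-1} and Q_n formulas verified for n = 1..6")
```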
For $\Pi(s)$ as in Example 5.2 this approach yields (in this case $A$ is scalar, so the order of this approximant is exactly $n$)
$$
\Pi_{\text{PaMa},n}(s) = \begin{cases}
\dfrac{4.41455}{s^2 + 4s + 7} & \text{if } n = 2,\\[1ex]
\dfrac{0.735759(s^2 - 2s + 61)}{(s + 3.64437)(s^2 + 5.35563s + 19.4821)} & \text{if } n = 3,\\[1ex]
\dfrac{14.7152(s^2 - 2s + 43)}{(s^2 + 6.41516s + 38.536)(s^2 + 9.58484s + 25.9757)} & \text{if } n = 4,
\end{cases} \tag{5.11}
$$
whose magnitude approximation errors are presented in Fig. 5.4 by yellow lines. In all three cases of $n$ this approximation produces the lowest $H_\infty$-norm of the approximation error. But it also produces larger approximation errors at low frequencies. Thus, the comparison can be regarded as inconclusive.
Remark 5.1 (stability of approximants). It should be emphasized that no general stability results for the approximation methods presented in this whole section appear to be available. As such, the stability of the resulting approximants is not guaranteed even when all unstable eigenvalues of $A$ are canceled in the approximant, as additional poles in $\bar{\mathbb{C}}_0$ might appear. An exception is the analysis in [58] for the case where the function $\phi(s)$ in (5.1c) is approximated via some alternatives to the $[n-1,n]$-Padé. For example, if
$$
e^{-s} \approx \left(\frac{-s + 2n}{s + 2n}\right)^{n} \implies \phi(s) \approx \frac{(s + 2n)^n - (-s + 2n)^n}{s(s + 2n)^n},
$$
then the approximant in (5.11) is stable whenever $n > \bigl(\max_{s\in\operatorname{spec}(A)}\operatorname{Re} s\bigr)\tau/2$. O
5.4 Lumped-delay approximations (LDA)
The third direction that we study is motivated by the integral form of $\Pi$ in (5.1b) and the Newton–Cotes quadrature (numerical integration) rules. A general Newton–Cotes formula of degree $\nu \in \mathbb{N}$, which estimates a definite integral using a piecewise-polynomial approximation of the integrand, is of the form
$$
\int_a^b f(t)\,dt \approx \sum_{i=0}^{\nu} \lambda_i f(t_i), \quad\text{where}\quad t_i := a + i(b-a)/\nu, \tag{5.12}
$$
for some sequence $\{\lambda_i\}_{i=0}^{\nu}$, which depends on the specific approximation of $f$. Examples of popular approximation schemes are the trapezoidal rule, in which $f(t)$ is approximated by piecewise-linear functions connecting the points $f(t_i)$ and $f(t_{i+1})$ and for which
$$
\lambda_i = \frac{b-a}{2\nu}\begin{cases} 2 & \text{if } i = 1,\dots,\nu-1,\\ 1 & \text{if } i = 0,\nu, \end{cases}
$$
Fig. 5.6: DTC element in Fig. 5.1 implemented via (5.13): (a) closed-loop time response; (b) frequency responses of $\Pi$ and $\Pi_\nu$
and Simpson's rule ($\nu$ must be even), in which $f(t)$ is approximated by piecewise-quadratic functions connecting the points $f(t_{2i})$, $f(t_{2i+1})$, and $f(t_{2i+2})$ and for which
$$
\lambda_i = \frac{b-a}{3\nu}\begin{cases} 1 & \text{if } i = 0,\nu,\\ 2 & \text{if } i \text{ is even but neither } 0 \text{ nor } \nu,\\ 4 & \text{if } i \text{ is odd.} \end{cases}
$$
Applications of Newton–Cotes formulae to (5.1b) lead to easily implementable and stable approximants,
although certain care should be taken to keep system-theoretic common sense in place.
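A minimal implementation of (5.12) with these two weight sequences (plain Python with exact rational arithmetic; the integrands are arbitrary test data):

```python
from fractions import Fraction

def newton_cotes(f, a, b, nu, rule="trapezoid"):
    # Composite quadrature (5.12): sum of lambda_i * f(t_i) over the uniform
    # grid t_i = a + i(b - a)/nu, with the weight sequences quoted above
    h = Fraction(b - a, nu)
    t = [a + i * h for i in range(nu + 1)]
    if rule == "trapezoid":
        lam = [h / 2 if i in (0, nu) else h for i in range(nu + 1)]
    else:  # Simpson's rule; nu must be even
        assert nu % 2 == 0
        lam = [h / 3 if i in (0, nu) else (4 * h / 3 if i % 2 else 2 * h / 3)
               for i in range(nu + 1)]
    return sum(l * f(ti) for l, ti in zip(lam, t))

# the trapezoidal rule is exact for affine integrands, Simpson's up to cubics
assert newton_cotes(lambda t: 3 * t + 1, 0, 1, 5) == Fraction(5, 2)
assert newton_cotes(lambda t: t ** 3, 0, 2, 4, rule="simpson") == 4
print("quadrature weights check out")
```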
5.4.1 Naïve use of Newton–Cotes formulae
Let us start by applying formula (5.12) to the integral term in (5.1b) mechanically. To this end, write the integrand as $f(t) = Ce^{-(sI-A)t}e^{-A\tau}B = Ce^{A(t-\tau)}Be^{-ts}$, for which
$$
\Pi(s) \approx \Pi_\nu(s) := \sum_{i=0}^{\nu} \lambda_i Ce^{A(\tau_i-\tau)}Be^{-\tau_i s} - De^{-\tau s}, \quad\text{where}\quad \tau_i := \frac{i\tau}{\nu}. \tag{5.13}
$$
This function is a train of (lumped) delays $e^{-\tau_i s}$, so $\Pi_\nu$ is referred to as a lumped-delay approximation (LDA) of $\Pi$. Obviously, if the $\lambda_i$ are all bounded, then $\Pi_\nu \in H_\infty$ for every $\nu$, meaning that stability is an intrinsic property of this approximation method.
Example 5.3. Apply the approach presented above to the system studied in Example 5.1 with $\tau = 1$ and the trapezoidal integration rule. The LDA (5.13) then yields the approximant
$$
\Pi_\nu(s) = \frac{\tau}{2\nu}e^{-\tau} + \sum_{i=1}^{\nu-1}\frac{\tau}{\nu}\,e^{-(1-i/\nu)\tau}e^{-(\tau i/\nu)s} + \frac{\tau}{2\nu}e^{-\tau s},
$$
which is sufficiently simple. Yet simulating the system in Fig. 5.1 for this choice and $\nu = 10$ results in a slowly diverging oscillatory response, shown in Fig. 5.6(a), rather than in the expected red line. As a matter of fact, increasing the number of partitions of the integration interval, $\nu$, does not result in a stable closed-loop response (the oscillations just become faster). A change of the numerical integration rule does not alter this behavior qualitatively either [73]. O
Although the closed-loop behavior in the example above might appear counter-intuitive at first sight, it can in fact be easily explained from a system-theoretic perspective. To this end, it is important to remember that we approximate an operator, i.e., a function of $s$ in terms of its transfer function. Because $\Pi_\nu(s)$ always remains an $H_\infty$-function, we need only be concerned with its behavior on the imaginary axis, i.e., with the frequency response $\Pi_\nu(j\omega)$. And this behavior cannot be captured uniformly in $\omega$: the quadrature error is governed by the time derivatives of the integrand, and already the first derivative, which satisfies¹
$$
\|\dot f(t)\|_{\mathrm F}^2 = \|Ce^{A(t-\tau)}(A - j\omega I)Be^{-j\omega t}\|_{\mathrm F}^2 = \|Ce^{A(t-\tau)}AB\|_{\mathrm F}^2 + \omega^2\|Ce^{A(t-\tau)}B\|_{\mathrm F}^2,
$$
is unbounded as a function of $\omega$ unless $Ce^{A(t-\tau)}B \equiv 0$, which is only the case if $\Pi = 0$. As a result, $\Pi_\nu(j\omega)$ does not converge to $\Pi(j\omega)$ as $\nu \to \infty$. In more classical control-engineering terms, the LDA in (5.13) attempts to approximate a system with a vanishing high-frequency gain (the integral term in (5.1b)) by a system with a non-decaying high-frequency gain (the sum in (5.13)). For instance, the magnitude frequency responses of $\Pi$ (in red) and $\Pi_\nu$ (in cyan-blue) for the system in Example 5.3, presented in Fig. 5.6(b), clearly show that the LDA is not a good fit for $\Pi(j\omega)$ at high frequencies.
It can actually be shown that
$$
\limsup_{\omega\to\infty}|\Pi_\nu(j\omega)| = \Pi_\nu(0) = \frac{\tau(1 - e^{-\tau})(e^{\tau/\nu} + 1)}{2\nu(e^{\tau/\nu} - 1)} \xrightarrow{\ \nu\to\infty\ } 1 - e^{-\tau} = \Pi(0)
$$
in this case. Because the high-frequency gain of $\Pi$ vanishes, we have $\|\Pi - \Pi_\nu\|_\infty \geq \Pi_\nu(0) > 1 - e^{-\tau}$ in this example as well.
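This can be seen numerically: because all delays in (5.13) are multiples of $\tau/\nu$, the frequency response of the lumped-delay train is periodic along the imaginary axis, so the static gain recurs at arbitrarily high frequencies. A small sketch for Example 5.3, assuming the scalar data $C = B = A = 1$ and $D = 0$ there:

```python
import cmath
import math

tau, nu = 1.0, 10   # the data of Example 5.3

def Pi_nu(s):
    # trapezoidal LDA (5.13) for scalar data C = B = A = 1, D = 0
    lam = [tau / (2 * nu) if i in (0, nu) else tau / nu for i in range(nu + 1)]
    return sum(lam[i] * math.exp(i * tau / nu - tau) * cmath.exp(-(i * tau / nu) * s)
               for i in range(nu + 1))

closed = tau * (1 - math.exp(-tau)) * (math.exp(tau / nu) + 1) \
         / (2 * nu * (math.exp(tau / nu) - 1))
assert abs(Pi_nu(0) - closed) < 1e-12          # static gain matches the closed form
assert closed > 1 - math.exp(-tau)             # ... and exceeds Pi(0) = 1 - e^{-tau}

# the delay train repeats with period 2*pi*nu/tau along the imaginary axis,
# so the gain Pi_nu(0) recurs at arbitrarily high frequencies
for k in (1, 10, 1000):
    w = 2 * math.pi * k * nu / tau
    assert abs(abs(Pi_nu(1j * w)) - closed) < 1e-9
print("non-decaying high-frequency gain:", round(closed, 4))
```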
It should be emphasized that the lack of convergence of the approximation error to zero cannot alone explain the unstable response in Fig. 5.6(a). Instability in this example is actually the result of a lethal combination of the non-zero high-frequency gain of the approximation error $\Delta_\Pi = \Pi - \Pi_\nu$ and the poor robustness of the system in Fig. 5.1 to high-frequency uncertainty [42]. Indeed, by writing the implemented $\Pi_\nu = \Pi - \Delta_\Pi$, the loop in Fig. 5.1 can be rearranged as the feedback interconnection of $\Delta_\Pi$ and the control sensitivity, whose transfer function is $T_c(s) = 2e^{\tau}(\tau s + 1)/(s + 1)$ with the high-frequency gain $\lim_{\omega\to\infty}|T_c(j\omega)| = 2e^{\tau}$. The high-frequency gain of this loop then equals $2(e^{\tau} - 1) \approx 3.4366$. This gain is not strictly contractive, implying that the loop is extremely sensitive to high-frequency modeling errors, viz. infinitesimal high-frequency modeling uncertainties, like loop delays, cause its instability (cf. Proposition 2.2 on p. 24). This is exactly what happens in Example 5.3. A more robust system, e.g. one with a strictly proper control sensitivity, would be less sensitive to the high-frequency approximation inaccuracy.
5.4.2 Proper use of Newton–Cotes formulae
Having understood the reason for the failure of approximation (5.13), we can come up with a remedy, which turns out to be quite simple. All we need is a way to keep the high-frequency gain of the approximation error zero. To this end, note that
$$
s\int_0^\tau e^{-(sI-A)t}\,dt = -\int_0^\tau e^{At}\Bigl(\frac{\mathrm d}{\mathrm dt}e^{-st}\Bigr)dt = -e^{-(sI-A)t}\Bigr|_0^\tau + \int_0^\tau \Bigl(\frac{\mathrm d}{\mathrm dt}e^{At}\Bigr)e^{-st}\,dt = I - e^{A\tau}e^{-\tau s} + \int_0^\tau e^{-(sI-A)t}\,dt\,A,
$$
where the second equality is obtained via integration by parts. Using the obvious equality $\Pi(s) = (s\Pi(s) + \alpha\Pi(s))/(s + \alpha)$, representation (5.1b) can then be equivalently rewritten as
$$
\Pi(s) = \frac{1}{s+\alpha}\,Ce^{-A\tau}B + \frac{1}{s+\alpha}\int_0^\tau Ce^{A(t-\tau)}(\alpha I + A)Be^{-ts}\,dt - \Bigl(D + \frac{1}{s+\alpha}\,CB\Bigr)e^{-\tau s} \tag{5.1d}
$$
for every $\alpha \in \mathbb{R}$. Moreover, if $\alpha > 0$, this representation involves no unstable cancellations and can thus be used freely.
1The Frobenius norm is chosen for convenience. As all matrix norms are equivalent, this choice does not affect conclusions.
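For scalar data the equivalence of (5.1b) and (5.1d) can be verified directly, since the integral in (5.1b) then has a closed form. A quick numerical check with hypothetical values ($a = 1$, $C = B = 1$, $D = 0.3$, $\tau = 1$, $\alpha = 2$):

```python
import cmath

# hypothetical scalar data: A = a, C, B, D and the filter parameter alpha
a, C, B, D, tau, alpha = 1.0, 1.0, 1.0, 0.3, 1.0, 2.0

def J(s):
    # closed form of the integral in (5.1b) for scalar A:
    # J(s) = int_0^tau e^{a(t - tau)} e^{-s t} dt = (e^{-a*tau} - e^{-s*tau})/(s - a)
    return (cmath.exp(-a * tau) - cmath.exp(-s * tau)) / (s - a)

def Pi_1b(s):   # representation (5.1b)
    return C * J(s) * B - D * cmath.exp(-tau * s)

def Pi_1d(s):   # representation (5.1d); its integral equals (alpha + a) * J(s)
    return (C * cmath.exp(-a * tau) * B / (s + alpha)
            + C * (alpha + a) * J(s) * B / (s + alpha)
            - (D + C * B / (s + alpha)) * cmath.exp(-tau * s))

for s in (0.5 + 2j, -1.3 + 0.7j, 3.0 + 0j, 2j):
    assert abs(Pi_1b(s) - Pi_1d(s)) < 1e-12
print("(5.1b) and (5.1d) agree")
```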
Fig. 5.7: LDA for Example 5.4 (trapezoidal rule, $F(s) = 1/(s + 7.8)$, $\nu = 64$): (a) impulse response of $\Pi$; (b) frequency response of $\Pi$; (c) trapezoidal rule, $\alpha = 7.8$ and $\nu = 64$
The second term on the right-hand side of (5.1d) still contains an integral of the same form as that in (5.1b). But now this term is always post-filtered by the low-pass filter $1/(s+\alpha)$. Hence, a high-frequency mismatch in its approximation is filtered out. A general LDA formula for (5.1d) then reads as follows:
$$
\Pi_{\nu,\alpha}(s) = \frac{1}{s+\alpha}\,Ce^{-A\tau}\bigl(I + (\alpha I + A)\lambda_0\bigr)B + \frac{1}{s+\alpha}\sum_{i=1}^{\nu-1}\lambda_i\,Ce^{A(\tau_i-\tau)}(\alpha I + A)Be^{-\tau_i s} - \Bigl(D + \frac{1}{s+\alpha}\,C\bigl(I - (\alpha I + A)\lambda_\nu\bigr)B\Bigr)e^{-\tau s} \tag{5.14}
$$
for a sequence $\{\lambda_i\}_{i=0}^{\nu}$ determined by the concrete approximation scheme and with $\tau_i := i\tau/\nu$, as in (5.13). The only term of (5.14) that does not vanish at high frequencies is $De^{-\tau s}$, which originates in a non-vanishing term of (5.1b). But this term is not an approximation, so it does not affect the approximation error.
Not surprisingly, the use of (5.14) instead of (5.13) for the system in Example 5.3 results in an accurate approximation. For the very same $\nu = 10$ as in Example 5.3, the approximation error $\|\Pi - \Pi_{\nu,\alpha}\|_\infty$ is reduced by more than a factor of 30, to about $0.02$ with $\alpha = 1$. Moreover, the approximation error vanishes at high frequencies. The plots of $\Pi$ and $\Pi_{\nu,\alpha}$, as well as the closed-loop step responses under those choices, are then virtually indistinguishable, and so are the impulse responses $\pi(t)$ and $\pi_{\nu,\alpha}(t)$ of $\Pi$ and $\Pi_{\nu,\alpha}$ (for $\nu = 10$ and $\alpha = 1$). Note, however, that although the impulse response of $\Pi_{\nu,\alpha}$ appears to vanish for $t \geq \tau$, it actually does not. The system $\Pi_{\nu,\alpha}$ is not FIR in general and its impulse response approaches zero only asymptotically.
Example 5.4. The example considered throughout this chapter so far is quite simple, so any of the studied methods could be applied relatively easily. Consider now the more challenging problem of approximating the DTC element
$$
\Pi(s)\ \text{generated by}\ \frac{(80s + 808)\,e^{-s}}{(s - 1)(s^2 - 25s + 2500)}. \tag{5.15}
$$
The impulse and magnitude frequency responses of this system are shown in Figs. 5.7(a) and 5.7(b), respectively. The oscillatory behavior and three unstable poles indeed make these responses more complex and harder to approximate than those of the DTC element generated by $e^{-\tau s}/(s-1)$. In fact, rational approximations fail on this example, in the sense that if the accuracy requirements are relatively demanding, like those in (5.16) below, the order of rational approximants is high and off-the-shelf numerics no longer work.
Our goal here is to approximate this $\Pi$ by the LDA (5.14) with the trapezoidal rule so that
$$
\|\Pi - \Pi_{\nu,\alpha}\|_\infty \leq 0.05\,\|\Pi\|_\infty \tag{5.16}
$$
under the minimal possible number of lumped delays $\nu$. The filter crossover frequency $\alpha$ is then tuned manually, by a trial-and-error search, to end up with the smallest $\nu$. The best results are obtained for $\alpha = 7.8$, in which case the minimal $\nu = 64$. The impulse response of the approximation is presented in Fig. 5.7(c) by the cyan-blue line. It is now visibly non-FIR. As a result, $\alpha$ should be sufficiently large to provide a sufficiently fast decay for $t > \tau$. O
5.4.3 Beyond Newton–Cotes
Although numerical integration schemes appear natural in the context of approximating (5.1b) and (5.1d)
by lumped delays, we do not have to apply them literally. It might be conceptually cleaner to start with a
rather general LDA of the form
˘�.s/ D F.s/
�X
iD0
Nie
��i s �De��s (5.17)
for � 2 N, a sequence f�ig such that 0 D �0 < �1 < � � � < ���1 < �� D � , a strictly proper and stable
F.s/, and matrix-valued coefficients Ni , i 2 Z0;� . In this formulation the low-pass filter is not constrained
to be of the form F.s/ D 1=.s C ˛/ as in (5.14), the delays �i are not constrained to be equidistant, and
the parameters Ni are not constrained to be equal to �iC eA.�i ��/B for some scalar �i . Ideally, we would
then like to determine these F , �i , and Ni , for all i 2 Z0;� , so that this approximation is close to ˘ given
by (5.1) in whatever meaningful sense. However, such a goal might be overly ambitious and not easily
attainable. Still, several directions, in which only a part of the parameter set is designed can be exploited
and they are briefly outlined below.
First, fix the number of delays in (5.17) and the filter $F$, and assume that the delays are equidistant, $\tau_i = i\tau/\nu$. As the design goal, consider the choice of the $N_i$ minimizing the $H_2$-norm
$$
\|\Pi - \Pi_\nu\|_2. \tag{5.18}
$$
This problem can be solved [71], and in some situations (if $F$ has a square "B" matrix) it can be solved analytically. In particular, if $F(s) = 1/(s+\alpha)$ for some $\alpha > 0$, then the optimal choice is
$$
N_0 = \frac{2\alpha}{1 - e^{-2\alpha\tau/\nu}}\,C(A - \alpha I)^{-1}\bigl(e^{(A-\alpha I)\tau/\nu} - I\bigr)e^{-A\tau}B, \tag{5.19a}
$$
$$
N_i = 2\alpha\,C(A - \alpha I)^{-1}\left(\frac{e^{A\tau/\nu} + e^{-A\tau/\nu} - 2e^{-\alpha\tau/\nu}I}{e^{\alpha\tau/\nu} - e^{-\alpha\tau/\nu}} - I\right)e^{A\tau(i-\nu)/\nu}B \quad\text{for all } i \in \mathbb{Z}_{1..\nu-1}, \tag{5.19b}
$$
and
$$
N_\nu = \frac{2\alpha e^{-\alpha\tau/\nu}}{1 - e^{-2\alpha\tau/\nu}}\,C(A - \alpha I)^{-1}\bigl(I - e^{(A-\alpha I)\tau/\nu}\bigr)e^{-A\tau/\nu}B. \tag{5.19c}
$$
This choice always results in an FIR approximant, which is attained by an appropriate choice of $N_\nu$.
Example 5.5. Return to the system studied in Example 5.4 and the approximation criterion (5.16). Now try to attain it with the $H_2$-optimal choice of the $N_i$ from (5.19). It turns out that the best results are obtained
Fig. 5.8: Impulse responses for the general LDA in (5.17): (a) uniform $\{\tau_i\}$, $\alpha = 3.5$ and $\nu = 58$; (b) varying $\{\tau_i\}$, $\alpha = 0.2$ and $\nu = 14$; (c) varying $\{\tau_i\}$, $\alpha = 3.5$ and $\nu = 8$
for $\alpha = 3.5$, in which case the smallest $\nu = 58$. This is about 10% below what we obtained with the trapezoidal rule in Example 5.4, and the resulting $\Pi_\nu$, whose impulse response is shown in Fig. 5.8(a), has the same structure from the implementation point of view. In other words, the gain is purely due to the use of a problem-oriented tuning algorithm, which is better suited to the problem than general-purpose Newton–Cotes schemes. O
The next step is to optimize the delay pattern $\{\tau_i\}$. The benefits of using a nonuniform split are visible already in Fig. 5.8(a). Indeed, it can be seen that at three points, namely at $\tau_i \in \{0.672, 0.707, 0.914\}$, the jumps in the impulse response are virtually invisible. This indicates that the corresponding $N_i$ are close to zero, so these delays, whose indices are 39, 41, and 53, can be eliminated without affecting the approximation performance.
It turns out that the delay pattern can be optimized rather efficiently if the delays are restricted to the finite grid $\{i\tau/\varkappa\}_{i=0}^{\varkappa}$ for a given $\varkappa \in \mathbb{N}$. This entails no practical loss of generality, because the grid can be made arbitrarily dense, and the implementation of (5.17) would be more efficient in this case anyway. We may then pose the problem of minimizing the cost (5.18) under the constraint that the number of delays $\nu \leq \varkappa$ is fixed. A combinatorial search over all possible sets of $\nu - 1$ "internal" delays (i.e., excluding $0$ and $\tau$) would require $\binom{\varkappa-1}{\nu-1}$ combinations to be checked, which is not quite practical. Yet in the case when the "B" matrix of $F(s)$ is square, there is an approach that requires only $O(\nu\varkappa^2/2)$ checks; see [71] for details.
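To get a sense of the scale, compare the two counts for the grid used in Example 5.6 below ($\varkappa = 250$, $\nu = 14$); the numbers are purely illustrative:

```python
from math import comb

kappa, nu = 250, 14               # the grid and delay count of Example 5.6
brute = comb(kappa - 1, nu - 1)   # all placements of nu - 1 internal delays
smart = nu * kappa ** 2 // 2      # the O(nu * kappa^2 / 2) count from [71]
assert brute > 10 ** 20 and smart == 437_500
print(f"exhaustive search: {brute:.2e} combinations vs structured search: {smart} checks")
```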
Example 5.6. Let us discuss the application of the approach outlined above to the problem studied in Examples 5.4 and 5.5. It turns out that the choice $\alpha = 0.2$ is better in this case. Consider the case of $\varkappa = 250$, meaning that the minimal distance between two delays is set to $0.004$. This results in $\nu = 14$, still maintaining (5.16); see Fig. 5.8(b) for the corresponding impulse response. Comparing this result with the uniform-split results of Example 5.5, we can see a clear advantage of splitting the interval $[0, \tau]$ non-uniformly: in our example it yields a more than fourfold improvement. This is easy to explain from the form of the impulse response of $\Pi$ in Fig. 5.7(a). During the first three quarters of the interval $[0, \tau]$ the variations of $\pi(t)$ are relatively minor, so it is sufficient to have only three jumps there. In the final quarter of $[0, \tau]$ the impulse response $\pi(t)$ is considerably more oscillatory, so a denser grid is required. O
Another possible direction is to minimize the $H_\infty$-norm of the approximation error directly, together with the delay pattern $\{\tau_i\}$. Going in this direction drifts us even further from analytic solutions, but it is possible to address the problem by numerical optimization methods. Although the problem is not even convex, a trick known as the $\ell_1$-norm heuristic [6] can be used, and the result can be seen in Fig. 5.8(c). Going this way we end up with $\nu = 8$, an eightfold improvement over the direct Newton–Cotes approach with the trapezoidal rule, with which we started in Example 5.4.
5.5 Coda
Our ultimate goal is to approximate the controller, so we should be less concerned with the accuracy of approximating $\Pi$ and more with the accuracy of approximating the whole controller. And even more so with the ability to reproduce the closed-loop properties that were provided by the original DTC element.
One day it should be written . . .
Chapter 6
Robustness to Delay Uncertainty
UP TO THIS POINT, delays in the analysis and design problems studied were mostly assumed to be known and constant. These assumptions might not be realistic in applications. In fact, in all examples considered in Section 1.2 delays vary in the course of operation due to the variability of various system parameters, such as the speed of the rolls in Fig. 1.2(a) or of the conveyor belt in Fig. 1.2(b), unevenness of communication delays in networked applications, jitter in computation time, et cetera. In some of these cases variations of loop delays can be taken into account in the design process, see e.g. Remark 3.2 on p. 48. But in many situations actual values of delays are not even known, or are only known to lie in some interval. In such situations robustness analysis, i.e. the analysis of the sensitivity of a system to uncertainty in its delay element, is the approach to explore. This is the direction studied in the current chapter.
6.1 Delay margin
Consider the general interconnection presented in Fig. 6.1, where $\tau_\delta \geq -\tau_0$ is a constant deviation from a nominal delay, say $\tau_0 \geq 0$. Unlike the analysis in Chapter 2, we do not limit $G$ to be finite dimensional, primarily because it can include this nominal delay. Assuming that the nominal system, i.e. that corresponding to $\tau_\delta = 0$, is stable, the delay margin $\mu_d$ is defined as the smallest deviation of the loop delay from its nominal value under which the system becomes unstable. If the system is stable for all $\tau_\delta \in [-\tau_0, \infty)$, then we say that $\mu_d = \infty$ or that the system is delay-independent stable.
In the case when $G_{zw}$ is SISO, this is a classical control notion, frequently defined as
$$
\mu_d = \frac{\phi_{\mathrm{ph}}}{\omega_c}, \tag{6.1}
$$
where $\phi_{\mathrm{ph}}$ (in radians) is the phase margin of the loop $G_{zw}$ and $\omega_c$ (in radians per time unit) is the crossover frequency, i.e. the frequency $\omega$ at which $|G_{zw}(j\omega)| = 1$. However, this definition is not always accurate. First, as we already know from Proposition 2.2 on p. 24, if $G_{zw}$ is finite dimensional and its high-frequency gain $|G_{zw}(\infty)| > 1$, then the system is unstable for all $\tau_\delta > 0$. Thus, the system with
Fig. 6.1: General single-delay interconnection with uncertain delay ($G$ partitioned into $G_{zw}$, $G_{zu}$, $G_{yw}$, $G_{yu}$, closed through the delay element $\bar D_{\tau_\delta}$; signals $w$, $z$, $y$, $u$)
Fig. 6.2: Nichols plots of the loop frequency response for $G_{zw}(s) = 6(s^2 + 0.2s + 0.01)/(s(s+2)^2)$: (a) nominal loop; (b) loop with $\tau_\delta = 1/2$
$G_{zw}(s) = \sqrt{2}\,s/(s+1)$ has a zero delay margin, even though formula (6.1) in this case would yield $\mu_d = 1.25\pi/1 \approx 3.927$. Second, there are systems with several crossover frequencies, for which the phase margin and the corresponding crossover frequency are not a good indicator of delay tolerance. For example, consider a system with $G_{zw}(s) = 6(s^2 + 0.2s + 0.01)/(s(s+2)^2)$, whose Nichols plot is shown in Fig. 6.2(a). The phase margin in this case is $\phi_{\mathrm{ph}} \approx 1.86$ [rad], measured at the crossover frequency $\omega_{c,1} \approx 0.015$ (marked by the downward triangle). Formula (6.1) then yields $\mu_d \approx 121.16$. But this is not even close to the smallest destabilizing delay in this case. For example, Fig. 6.2(b) shows the loop gain under $\tau_\delta = 1/2$, which clearly yields an unstable system. The reason is that there are two more crossover frequencies, viz. $\omega_{c,2} \approx 0.746$ (the upward triangle) and $\omega_{c,3} \approx 5.239$ (the leftward triangle). The distances from the critical point at those frequencies are larger than that at $\omega_{c,1}$, so they do not count in the calculation of the phase margin. However, these crossover frequencies, especially $\omega_{c,3}$, are higher than $\omega_{c,1}$. As such, the phase lags contributed by the delay at those crossovers are larger. In particular, the delays at which each of these points reaches the critical point are $\tau_2 = \phi_{\mathrm{ph},2}/\omega_{c,2} \approx 3.73/0.746 \approx 5$ and $\tau_3 = \phi_{\mathrm{ph},3}/\omega_{c,3} \approx 2.26/5.239 \approx 0.432$. Thus, the true delay margin for this system is $\mu_d \approx 0.432$.
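The computation behind these numbers is easy to automate: locate every unit-gain crossing of the loop and take the smallest ratio of the negative-direction angular distance to the critical point over the corresponding crossover frequency, as in (6.2) below. A self-contained sketch:

```python
import cmath
import math

def G(s):
    return 6 * (s * s + 0.2 * s + 0.01) / (s * (s + 2) ** 2)

def gain(w):
    return abs(G(1j * w))

# locate all |G(jw)| = 1 crossings: scan a log-spaced grid, bisect sign changes
ws = [10 ** (-3 + 5 * k / 4000) for k in range(4001)]
crossings = []
for w0, w1 in zip(ws, ws[1:]):
    if (gain(w0) - 1) * (gain(w1) - 1) < 0:
        for _ in range(80):
            wm = math.sqrt(w0 * w1)
            if (gain(w0) - 1) * (gain(wm) - 1) < 0:
                w1 = wm
            else:
                w0 = wm
        crossings.append(w0)

# angular distance to the critical point in the negative direction: pi + arg G
mu_d = min(((math.pi + cmath.phase(G(1j * w))) % (2 * math.pi)) / w
           for w in crossings)
print([round(w, 3) for w in crossings], "->", round(mu_d, 3))
```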
By this logic, the delay margin with respect to positive $\tau_\delta$ should be calculated as
$$
\mu_d = \begin{cases} 0 & \text{if } \limsup_{\omega\to\infty}|G_{zw}(j\omega)| \geq 1,\\[1ex] \min_i \dfrac{\phi_{\mathrm{ph},i}}{\omega_{c,i}} & \text{otherwise,} \end{cases} \tag{6.2}
$$
where $\phi_{\mathrm{ph},i}$ is the angular distance from the critical point in the negative angle direction at the crossover frequency $\omega_{c,i}$. The delay margin with respect to negative $\tau_\delta$ is calculated similarly, modulo the replacement of the phase margins with the angular distances from the critical point in the positive direction, denoted $\tilde\phi_{\mathrm{ph},i}$, and the constraint that such a delay margin must be lower bounded by the nominal $-\tau_0$.
6.1.1 Bounds on the achievable delay margin
Consider now the particular case of the system in Fig. 6.1 under $G_{zw} = PK$, where $P$ is a given system (plant) and $K$ is a controller; see (1.25) on p. 11 and the explanation there. It is of interest to understand what is the maximum delay margin attainable in this system by $K$. If $P \in H_\infty$, then the answer is obviously $\mu_d = \infty$, which is what we get under the choice $K = 0$. The question becomes nontrivial for unstable $P$. Bounds on the attainable delay margins for several classes of unstable plants were derived in [39]. Some of these results are presented below.

We start with a result establishing that unstable poles at the origin do not impose limitations on the achievable delay margin.
Proposition 6.1. If $P(s)$ has no poles in $\bar{\mathbb{C}}_0 \setminus \{0\}$, then any $\mu_d$ is attainable by a stabilizing $K$.

Proof (outline). It will be shown in §6.2.3 that the delay margin is lower bounded by the reciprocal of the $H_\infty$-norm of the transfer function $sT(s)$, where $T$ is the complementary sensitivity function corresponding to the zero-delay case. A controller attaining an arbitrarily small $H_\infty$-norm of $sT(s)$ was designed in [39, App. C].
But any pole in $\bar{\mathbb{C}}_0 \setminus \{0\}$ imposes hard constraints on the attainable delay margin. The result below proves that for a real pole:

Proposition 6.2. If $P(s)$ has a pole at $s = a > 0$, then
$$
\mu_d < \frac{2}{a}.
$$
This bound is tight if this pole is the only pole of $P(s)$ in $\bar{\mathbb{C}}_0$ and $P(s)$ is minimum-phase.
Proof. Consider first the stabilization problem for the system in Fig. 2.5 with $\bar R_\alpha(s) := (-s+\alpha)/(s+\alpha)$, under the very same $G_{zw}$. Because $\bar R_\alpha(s)|_{\alpha=\infty} = \bar D_\tau(s)|_{\tau=0}$, any controller stabilizing this system under $\alpha = \infty$ does so for the system in Fig. 6.1 under $\tau_\delta = 0$. Then, as $\alpha$ decreases, the roots of the characteristic polynomial move continuously along the corresponding loci. What we know is that at $\alpha = a$ the system is unstable, just because $s = a$ is then a root of its characteristic polynomial no matter what $K$ is (unstable cancellations). Therefore, roots must cross the imaginary axis, say at $s = \pm j\omega_{\mathrm{cross}}$ for some $\omega_{\mathrm{cross}} > 0$, for at least one $\alpha = \alpha_{\mathrm{cross}} > a$, regardless of what $K$ stabilizing the system for $\alpha = \infty$ is used. But the arguments of Rekasius, discussed in §2.1.6, imply that there must then exist a minimal delay $\tau_{\mathrm{cross}} > 0$ for which the characteristic quasi-polynomial associated with the system in Fig. 6.1 has purely imaginary roots at the very same $s = \pm j\omega_{\mathrm{cross}}$. The corresponding destabilizing delay, cf. (2.13), satisfies
$$
\frac{\alpha_{\mathrm{cross}}}{\omega_{\mathrm{cross}}} = \cot\frac{\tau_{\mathrm{cross}}\omega_{\mathrm{cross}}}{2} \implies \tau_{\mathrm{cross}} = \frac{2}{\omega_{\mathrm{cross}}}\arctan\frac{\omega_{\mathrm{cross}}}{\alpha_{\mathrm{cross}}} < \frac{2}{\alpha_{\mathrm{cross}}} < \frac{2}{a}
$$
(the first upper bound is approached as $\omega_{\mathrm{cross}} \downarrow 0$). This yields the bound of the proposition.
To show that this bound is tight whenever $P(s)$ has no other unstable poles and no nonminimum-phase zeros, write
$$
P(s) = \frac{1}{s - a}\,P_{\mathrm{inv}}(s)
$$
for stable and minimum-phase $P_{\mathrm{inv}}(s)$. In this case the controller $K_a(s) = -P_{\mathrm{inv}}^{-1}(s)\,k_p(T_d s + a)$ renders the stabilization problem equivalent to that for the first-order $P_a(s) := 1/(s - a)$ and the PD controller $K_a = -k_p(T_d s + a)$. But this is essentially the stabilization problem studied in §3.1.2. Specifically, it was shown there that a PD controller approaches the upper-bound delay margin $2/a$ as $k_p \downarrow 1$ and $T_d \uparrow \infty$. Although this controller is not proper, a slight modification of it can attain the same bound as well; see [39, Rem. 8].
The result below extends this bound to the case of complex and purely imaginary poles.

Proposition 6.3. If $P(s)$ has poles at $s = \bigl(\zeta \pm j\sqrt{1-\zeta^2}\bigr)\omega_n$ for some $0 \leq \zeta < 1$ and $\omega_n > 0$, then
$$
\mu_d < \frac{\sqrt{1-\zeta^2}}{\omega_n}\left(\pi + 2\max\left\{\frac{\zeta}{\sqrt{1-\zeta^2}},\ \arctan\frac{\sqrt{1-\zeta^2}}{\zeta}\right\}\right)
$$
(this bound decreases from $2\pi/\omega_n$ at $\zeta = 0$ to $2/\omega_n$ as $\zeta \uparrow 1$, the two arguments of the max swapping roles at $\zeta \approx 0.652$, where the bound equals $3.69/\omega_n$). This bound is tight if these poles are the only poles of $P(s)$ in $\bar{\mathbb{C}}_0$ and $P(s)$ is minimum-phase.
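A quick numerical evaluation of this bound (a sketch with $\omega_n = 1$) confirms the limiting values $2\pi/\omega_n$ and $2/\omega_n$, and the value $3.69/\omega_n$ at $\zeta \approx 0.652$:

```python
import math

def bound(zeta, wn=1.0):
    # the upper bound of Proposition 6.3; atan2 covers the case zeta = 0
    r = math.sqrt(1 - zeta ** 2)
    return (r / wn) * (math.pi + 2 * max(zeta / r, math.atan2(r, zeta)))

assert abs(bound(0.0) - 2 * math.pi) < 1e-12     # purely imaginary poles: 2*pi/wn
assert abs(bound(1 - 1e-9) - 2.0) < 1e-3         # real-pole limit, cf. Proposition 6.2
assert abs(bound(0.652) - 3.69) < 0.01           # the switch point of the max
print("bound endpoints check out")
```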
Fig. 6.3: Smith controller with proportional primary controller: the plant is the delayed integrator $e^{-(\tau_0+\tau_\delta)s}/s$, the primary controller gain is $k_p/\tau_0$, and the DTC element is $(1 - \tau_0 s - e^{-\tau_0 s})/s$
Proof (outline). The proof for the case of $\zeta > 0$ follows the ideas of that of Proposition 6.2, just with a rational substitution of the form $\bar R_\alpha(s) = \bigl(\zeta - j\sqrt{1-\zeta^2} - s/\alpha\bigr)/\bigl(\zeta + j\sqrt{1-\zeta^2} + s/\alpha\bigr)$, which has complex parameters; see [39, Thm. 9] for details. The case of $\zeta = 0$ can be addressed by somewhat simpler arguments. Namely, in this case the loop has an infinite magnitude at $\omega = \omega_n$ and a strictly contractive high-frequency gain under any stabilizing controller. This implies that there is a crossover frequency $\omega_c > \omega_n$ with some $\phi_{\mathrm{ph}} \in (0, 2\pi)$. Hence, the delay margin $\mu_d \leq \phi_{\mathrm{ph}}/\omega_c < 2\pi/\omega_n$. Finally, controllers approaching the upper bound of the proposition are presented in [39, Sec. IV].
It should be emphasized that controllers maximizing the delay margin tend to render the crossover frequency $\omega_c \downarrow 0$. In other words, maximizing $\mu_d$ alone, without imposing additional constraints on the crossover frequency or related feedback properties, does not make much engineering sense.

Remark 6.1 (time-varying controllers). Curiously, and perhaps somewhat counterintuitively, there are no constraints on the attainable delay margin, even for an unstable LTI $P$, if time-varying linear controllers are allowed [40]. Still, the control strategy achieving that, which hinges on the ability to estimate both the state of $P$ and an appropriate prediction of it in finite time, is not quite practical either. O
6.1.2 Delay margins of DTC-based loops: case study and general considerations
Consider now the system in Fig. 6.3, which is an integrator plant controlled by a DTC-based controller with a proportional primary controller, whose gain is normalized by the nominal loop delay $\tau_0$. The DTC element $\Pi$ is effectively the Smith predictor for an integrator plant and the nominal delay, shifted by a constant to guarantee that the static gain of the DTC element is $\Pi(0) = 0$, so that the static gain of the overall controller equals $k_p/\tau_0$ (this is an idea of Watanabe and Ito from [77]). The classical Smith controller structure can be recovered by shifting the internal controller loop by the constant $-\tau_0$, leading to the primary controller $[k_p/(1-k_p)]/\tau_0$. Also note that the internal controller loop is well posed iff $k_p \neq 1$. The actual loop delay is assumed to be constant yet uncertain, of the form $\tau_0 + \tau_\delta$ for some $\tau_\delta \geq -\tau_0$.
The finite-dimensional system $\tilde P$ for which the primary controller is designed has the transfer function $\tilde P(s) = (1 - \tau_0 s)/s$, so that its characteristic polynomial under the primary controller $k_p/\tau_0$ is
$$
\tilde\chi(s) = \tau_0 s + k_p(1 - \tau_0 s) = \tau_0(1 - k_p)s + k_p.
$$
This polynomial is Hurwitz under a well-posed controller loop iff $k_p \in (0, 1)$, which is assumed hereafter. Thus, the low-frequency gain of the controller is limited by $1/\tau_0$. The nominal closed-loop complementary sensitivity function is then
$$
T(s) = \frac{k_p}{\tau_0(1 - k_p)s + k_p}\,e^{-\tau_0 s},
$$
whose bandwidth $\omega_b = [k_p/(1 - k_p)]/\tau_0$ is an increasing function of $k_p$.
Consider now the stability margins attained in the system in Fig. 6.3 for the actual system (not the equivalent delay-free one), with the loop transfer function
$$
L(s) = \frac{k_p e^{-\tau_0 s}}{\tau_0(1 - k_p)s + k_p(1 - e^{-\tau_0 s})}.
$$
Fig. 6.4: Stability margins as functions of the controller gain $k_p$: (a) gain margin $\mu_g$; (b) phase margin $\phi_{\mathrm{ph}}$; (c) normalized delay margin $\mu_d/\tau_0$
The gain and phase margins of this loop, shown in Figs. 6.4(a) and 6.4(b), respectively, as functions of the normalized controller gain $k_p$ (see [20, App. A] for details of their derivation), are expectedly monotonically decreasing and continuous. As a matter of fact, as $k_p \downarrow 0$ the gain margin grows unbounded and the phase margin approaches $90°$, whereas as $k_p \uparrow 1$ they reduce to the respectable $\mu_g = 2$ and $\phi_{\mathrm{ph}} = 60°$. The delay margin, i.e. the maximal deviation $\tau_\delta$ from the nominal loop delay $\tau_0$, is shown in Fig. 6.4(c). It is also a monotonically decreasing function of $k_p$, unbounded as $k_p \downarrow 0$, but with discontinuities. For example, at $k_p \approx 0.749$ the delay margin for positive $\tau_\delta$ drops from $1.4052$ to $0.6683$, i.e. by more than a factor of two, and the delay margin for negative $\tau_\delta$ jumps from its extreme value of $-1$ (in units of $\tau_0$; for this value the delay-free system is also stable with the DTC-based controller) to $-0.5622$. Moreover, the delay margin vanishes as $k_p \uparrow 1$, which is also qualitatively different from the behavior of $\mu_g$ and $\phi_{\mathrm{ph}}$.
The rationale behind this behavior of $\mu_d$ becomes apparent if we take a look at the loop frequency response around the borderline value $k_p \approx 0.749$. At the slightly lower gain $k_p = 0.748$ there is only one crossover frequency, at $\omega_c \approx 0.775/\tau_0$; see Fig. 6.5(a). Hence, the delay margin is calculated by (6.1) as $\mu_d \approx 1.085/(0.775/\tau_0) = 1.4\tau_0$. But at $k_p = 0.749$ another (double) crossover frequency $\omega_{c,2} \approx 5.106/\tau_0$ pops up; see Fig. 6.5(b). Hence, the delay margin is calculated by (6.2) as
$$
\mu_d = \min\left\{\frac{\phi_{\mathrm{ph},1}}{\omega_{c,1}},\ \frac{\phi_{\mathrm{ph},2}}{\omega_{c,2}}\right\} \approx \min\left\{\frac{1.085}{0.775/\tau_0},\ \frac{3.412}{5.106/\tau_0}\right\} \approx 0.668\tau_0
$$
(the phase margin and the first crossover frequency are virtually the same as those for $k_p = 0.748$). As $\omega_{c,2}$ is larger than $\omega_{c,1}$ by more than a factor of six, the delay margin drops dramatically, even though $\phi_{\mathrm{ph},1} < \phi_{\mathrm{ph},2}$, and we have the jump at around $k_p = 0.75$. Also, instability can now be caused by a
Fig. 6.5: Nichols plots of the loop frequency response for the system in Fig. 6.3: (a) $k_p = 0.748$ and $\omega_c \approx 0.775/\tau_0$; (b) $k_p = 0.749$, $\omega_{c,1} \approx 0.775/\tau_0$, and $\omega_{c,2} \approx 5.106/\tau_0$
108 CHAPTER 6. ROBUSTNESS TO DELAY UNCERTAINTY
Fig. 6.6: Closed-loop bandwidth contour on the Nichols chart
decrease of the delay, as the plot can also cross the critical point if a phase lead of $\tilde\mu_{\mathrm{ph},2} \approx 2.871$ is added at the second crossover $\omega_{c,2}$. Note that as $k_p$ increases even further, more and more crossover frequencies pop up, resulting in a further deterioration of the delay margin. In the limit, as $k_p \uparrow 1$, the loop transfer function approaches
$$L(s) = \frac{e^{-\tau_0 s}}{1 - e^{-\tau_0 s}},$$
which yields the infinite-bandwidth $T(s) = e^{-\tau_0 s}$, but also an infinite and unbounded sequence of crossover frequencies $\{\omega_{c,i}\}$ satisfying $\cos(\omega_{c,i}\tau_0) = 1/2$. Consequently, the delay margin of this loop indeed vanishes, even though the gain and phase margins are reasonably high.
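The unbounded family of crossover frequencies of this limiting loop is easy to confirm numerically. A minimal Python sketch (normalizing $\tau_0 = 1$; the number of checked crossings is an arbitrary choice):

```python
import numpy as np

def loop_gain_mag2(w):
    """|L(jw)|^2 for L(s) = e^{-s}/(1 - e^{-s}), i.e. tau_0 = 1.
    Since |e^{-jw}| = 1, |L(jw)|^2 = 1/|1 - e^{-jw}|^2 = 1/(2 - 2*cos(w))."""
    return 1.0 / (2.0 - 2.0 * np.cos(w))

# |L(jw)| = 1 exactly when 2 - 2*cos(w) = 1, i.e. cos(w) = 1/2, giving the
# unbounded sequence w = +/- pi/3 + 2*pi*k of crossover frequencies.
crossovers = [np.pi / 3 + 2 * np.pi * k for k in range(5)] \
           + [-np.pi / 3 + 2 * np.pi * k for k in range(1, 5)]
assert all(abs(loop_gain_mag2(w) - 1.0) < 1e-12 for w in crossovers)
```

Each crossing contributes a candidate ratio of phase margin over crossover frequency to the delay margin, and these ratios shrink as the crossover frequencies grow, which is why $\mu_d$ vanishes in the limit.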
A natural question in this respect is whether this kind of behavior, namely the proliferation of crossover frequencies, is generic in the context of dead-time compensation. The answer is yes, whenever the primary controller attempts to increase the closed-loop bandwidth beyond a certain level. Specifically, define the closed-loop bandwidth as the largest $\omega_b$ for which $|T(j\omega)| \ge 1/\sqrt2$ in the whole frequency range $\omega \in [0,\omega_b]$. The loop frequency response $L(j\omega)$ then lies in the shaded area of the Nichols chart in Fig. 6.6, determined by the corresponding M-circle, for all $\omega \in [0,\omega_b]$. It is readily seen that if $|T(0)| > 1/\sqrt2$ and the closed-loop system is stable, then there is more than one crossover frequency whenever one of the two conditions below holds:

1. the phase lag of the open-loop system exceeds $270^\circ$ at $\omega = \omega_b$, i.e. $\arg L(j\omega_b) < -3\pi/2$;

2. $L(s)$ has at least two unstable poles, so that its frequency response must encircle the critical point.

Moreover, additional phase lag and/or unstable poles increase the number of crossover frequencies. Classical control has some accepted rules of thumb saying that the presence of a loop delay imposes limitations on the attainable bandwidth, see [18, §8.6.2]. However, such limitations are less visible in DTC-based settings. Indeed, as follows from (3.13) on p. 44, the complementary sensitivity transfer function in a general DTC-based setup, like that in Fig. 3.5, is
$$T(s) = \frac{\tilde P(s)\tilde K(s)}{1 + \tilde P(s)\tilde K(s)}\, e^{-\tau_0 s} =: T_{\tau_0}(s)\, e^{-\tau_0 s},$$
where $T_{\tau_0}$ is assigned by a delay-free design of the primary controller $\tilde K$. Thus, the closed-loop bandwidth does not depend on the delay element, as $|T(j\omega)| = |T_{\tau_0}(j\omega)|$, so one might be tempted to increase it without accounting for the phase lag of the plant at $\omega_b$ due to the delay. As a result, either the loop has a large phase lag at $\omega_b$ or the controller introduces unstable modes to add enough phase lead and compensate for the phase lag introduced by $e^{-j\omega\tau_0}$. In either case, extra crossover frequencies arise and $\mu_d$ deteriorates.
Remark 6.2 (closed-loop bandwidth). A moral of the discussion above is that it might be safer to define the closed-loop bandwidth via the gain of the sensitivity function, say as the largest $\omega_b$ for which $|S(j\omega)| \le \epsilon_b$
6.2. EMBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 109
Fig. 6.7: Embedding into a wider class of uncertain systems: (a) original system with the delay element $\bar D_{\tau_\delta}$, $\tau_\delta \in [0,\bar\tau]$, closing the loop from $z$ to $w$ around the blocks $G_{zw}$, $G_{zu}$, $G_{yw}$, $G_{yu}$; (b) covering system, with $\bar D_{\tau_\delta}$ replaced by $\Delta$
in the whole range $\omega \in [0,\omega_b]$, for some $\epsilon_b \in (0,1)$. If $\epsilon_b$ is sufficiently small, say $\epsilon_b \le 1 - 1/\sqrt2 \approx 0.293$, then this measure requires not only the gain of $T$ to be close to one (because $1 - \epsilon_b \le |T(j\omega)| \le 1 + \epsilon_b$, by the triangle inequality), but also its phase to be close to zero. Thinking this way may help to resist the temptation of using an overly aggressive design of the primary controller. O
6.2 Embedding uncertain delays into less structured uncertainty classes
A precise analysis of the delay margin might not be an easy problem to address and even harder to incor-
porate into control design problems. A pragmatic alternative is to analyze sensitivity to uncertain delay via
general-purpose robustness tools, which are quite well understood and can be embedded into optimization
design frameworks. This is especially so for robust stability analises under unstructured, and sometimes
atructured, complex uncertainty. In this section methods of the embedding of uncertain delays into wider
uncertainty formalisms are studied.
6.2.1 Underlying idea
Let us return to the general case presented in Fig. 6.1 and consider the problem of finding conditions,
under which it is stable for all �ı from a given interval, i.e. robustly. Without loss of generality, we may
assume that this interval is �ı 2 Œ0; N�� for a given N� > 0. Otherwise, the lower bound on �ı can just be
absorbed into the nominal delay, �0.
A rather obvious, but nevertheless important, observation is that the system in Fig. 6.7(a) is robustly stable if yet another system, that in Fig. 6.7(b), is stable for every $\Delta \in \boldsymbol\Delta_{\bar\tau}$, where $\boldsymbol\Delta_{\bar\tau}$ is a class of systems containing all admissible delay elements, i.e. such that $\bar D_{\tau_\delta} \in \boldsymbol\Delta_{\bar\tau}$ for every $\tau_\delta \in [0,\bar\tau]$. It is supposed that the set of all admissible delays $\bar D_{\tau_\delta}$ is a proper subset of $\boldsymbol\Delta_{\bar\tau}$. The rationale behind such an expansion of the family of considered systems is that the robust stability problem for the system in Fig. 6.7(b) with $\Delta \in \boldsymbol\Delta_{\bar\tau}$ is easier to analyze than that with $\Delta = \bar D_{\tau_\delta}$. The price paid for this simplification is that any resulting stability condition is only sufficient, i.e. potentially conservative. Namely, the robust instability of the system in Fig. 6.7(b) does not imply that of the original system in Fig. 6.7(a).
Thus, the basic idea is to embed a system of interest into a wider class of systems whose analysis is simpler. A key to the success of this approach is the choice of the uncertainty set $\boldsymbol\Delta_{\bar\tau}$. On the one hand, it should cover the original uncertain element $\bar D_{\tau_\delta}$ tightly, to reduce conservatism. On the other hand, it should result in a simple, and preferably design-friendly, analysis. The quest for an easily tunable and transparent tradeoff here is perhaps still under way, with no clear winner and plenty of barely discriminable results [80]. It is probably safe to claim that the conservatism can be effectively eliminated at the price of increased computational complexity. However, such low-conservatism methods tend to be limited to analysis problems, where the controller is given. Design-friendly approaches normally use simpler uncertainty sets and are more conservative.
Fig. 6.8: Robust stability setups for an $m\times m$ stable $\Delta$ and a scalar stable $\delta$: (a) unstructured uncertainty, $T_{zw}$ in feedback with $\Delta$; (b) structured uncertainty (one repeated scalar block), $T_{zw}$ in feedback with $\delta I_m$; (c) scaled structured uncertainty, $M T_{zw} M^{-1}$ in feedback with $\delta I_m$
6.2.2 Preliminary: robust stability with respect to norm-bounded uncertainty
Arguably, the handiest class of uncertaities is the class of unstructured norm-bounded ones. The simplest
such class, which can be used as a building block for more sophisticated extensions, is the unit ball inH1,
Bm�m1 ´
˚
� 2 Hm�m1 j k�k1 � 1
(if the dimension is irrelevant of clear from the context, we write B1). In the LTI setting, this set describes
stable systems �, whose frequency response is contractive at every frequency, i.e. k�.j!/k � 1, 8! 2 R.
With some abuse of notation, we also use the nomenclature B1 for possibly time-varying or even nonlinear
systems, which are stable as operators L2.RC/ ! L2.RC/ and whoseL2.RC/-induced norm is contractive.
Consider now the internal stability interconnection in Fig. 6.8(a) for a known m � m LTI system T´w
and an uncertainty element�, which is only know to belong to Bm�m1 . The following result is fundamental:
Theorem 6.4 (Robust Stability Theorem). The feedback interconnection in Fig. 6.8(a) is internally stable
for all � 2 Bm�m1 iff T´w 2 H1 and kT´wk1 < 1.
Proof. Sufficiency follows by the celebrated Small Gain Theorem, see [79, §4.4.2], [11, Sec. III.2], or [9, Thm. 9.1.7] for example. The stability of $T_{zw}$ is obviously necessary, because $\Delta = 0$ is admissible. To prove the necessity of the norm bound on $T_{zw}$, assume, on the contrary, that $\|T_{zw}\|_\infty = \gamma \ge 1$. This implies that at least one of the following two conditions holds: (i) $\limsup_{\omega\to\infty} \|T_{zw}(j\omega)\| = \gamma$ or (ii) there is $\omega_0 \in \mathbb R$ such that $\|T_{zw}(j\omega_0)\| = \gamma$. If the high-frequency gain of $T_{zw}$ is not strictly contractive, the system is destabilized by the admissible $\Delta = \bar D_\tau$ for some $\tau > 0$, see §2.1.3. If condition (ii) holds, then there are unit vectors $u_0, v_0 \in \mathbb C^m$ such that $u_0' T_{zw}(j\omega_0) v_0 = \gamma$ (obtained via the SVD). If $\omega_0 = 0$, then $u_0$ and $v_0$ are real and we choose $\Delta(s) = v_0 u_0'/\gamma$. If $\omega_0 \ne 0$, consider an LTI $\Delta \in H_\infty$ with the transfer function
$$\Delta(s) = \frac1\gamma \begin{bmatrix} |v_{01}|\, e^{-s(\arg \bar v_{01})/\omega_0} \\ \vdots \\ |v_{0m}|\, e^{-s(\arg \bar v_{0m})/\omega_0} \end{bmatrix} \begin{bmatrix} |u_{01}|\, e^{-s(\arg u_{01})/\omega_0} & \cdots & |u_{0m}|\, e^{-s(\arg u_{0m})/\omega_0} \end{bmatrix},$$
where $\bar x$ stands for the complex conjugate of $x \in \mathbb C$ and the arguments are taken to be nonnegative. In either case $\|\Delta\|_\infty = 1/\gamma \le 1$, so that $\Delta$ is admissible, and $\Delta(j\omega_0) = v_0 u_0'/\gamma$. Hence, we have that
$$u_0'\bigl(I - T_{zw}(j\omega_0)\Delta(j\omega_0)\bigr) = u_0' - u_0' T_{zw}(j\omega_0) v_0 u_0'/\gamma = 0,$$
i.e. $I - T_{zw}(s)\Delta(s)$ has a singularity on the imaginary axis. But this implies that the closed-loop system is unstable, as it has a pure imaginary pole. Thus, there is a destabilizing admissible $\Delta$ and the system is not robustly stable.
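The necessity part of the proof can be replayed numerically in a toy SISO case. The first-order $T_{zw}$ below is an illustrative choice, not taken from the text; its peak is at $\omega_0 = 0$, so the destabilizing $\Delta$ constructed in the proof degenerates to a real constant:

```python
import numpy as np

gamma = 3.0                       # ||T_zw||_inf for T_zw(s) = gamma/(s + 1)
w = np.linspace(0.0, 100.0, 100001)
hinf = np.max(np.abs(gamma / (1j * w + 1.0)))   # grid estimate; peak at w = 0
assert abs(hinf - gamma) < 1e-9

# Peak attained at w0 = 0 with real u0 = v0 = 1, so the proof's recipe gives
# the constant (stable, contractive) Delta = v0 * u0' / gamma = 1/gamma.
delta = 1.0 / gamma
# Closed loop: 1 - T_zw(s)*delta = 0  <=>  (s + 1) - gamma*delta = 0  <=>  s = 0,
# i.e. a pole on the imaginary axis, so the loop is not robustly stable.
pole = np.roots([1.0, 1.0 - gamma * delta])
assert abs(pole[0]) < 1e-12
```

The admissible $\Delta = 1/3$ thus drags a closed-loop pole onto the imaginary axis, exactly as the proof predicts for any $\|T_{zw}\|_\infty \ge 1$.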
The result above implies that robust stability with respect to the set of contractive uncertainty elements is equivalent to an $H_\infty$ optimization problem. This renders it a convenient framework for analysis. Indeed, the bound on the $H_\infty$ norm can be verified by several different approaches, such as checking $\|T_{zw}(j\omega)\|$ over a chosen frequency grid, solving an eigenvalue problem associated with a Hamiltonian matrix built from a realization of $T_{zw}(s)$ [44, Prop. 4.17] (if $T_{zw}$ is finite dimensional), or solving a Linear Matrix Inequality resulting from the KYP lemma [44, Thm. 4.18] (also for a finite-dimensional $T_{zw}$), et cetera. Moreover, casting the problem as a bound on the $H_\infty$-norm of a known ("certain") system facilitates casting robust design as a standard $H_\infty$ problem of the form presented in Fig. 4.1(a) on p. 68: just remove the "$\Delta$" block from the system in Fig. 6.7(b) and connect $u$ with $y$ via a controller $K$.
We may add more structure to the uncertainty set by considering the setup in Fig. 6.8(b) for a scalar $\delta \in \mathcal B_\infty^{1\times1}$. Such a configuration is a particular case of the general $\mu$ (structured singular value) framework, where structured uncertainty is described by block-diagonal elements [54]. The case of only one "repeated scalar" block is sufficient for our purposes, as it fits well the embedding procedure of §6.2.1 in the single-delay case, when the signals $z$ and $w$ in Fig. 6.7(a) are not scalar.

First, it should be clear that $\delta I_m \in \mathcal B_\infty^{m\times m}$ for all $\delta \in \mathcal B_\infty^{1\times1}$. Hence, Theorem 6.4 can be used to end up with sufficient robust stability conditions. However, such conditions might be arbitrarily conservative, as illustrated by the example below.
Example 6.1. Consider the system in Fig. 6.8(b) for the static
$$T_{zw}(s) = \begin{bmatrix} \alpha & \beta \\ 0 & \alpha \end{bmatrix} \quad\text{for } \alpha, \beta \in \mathbb R$$
and an LTI $\delta \in \mathcal B_\infty^{1\times1}$. Standard imaginary-axis crossing arguments yield that the closed-loop system is stable iff $I_2 - \delta(j\omega) T_{zw}(j\omega)$ is nonsingular for all $\omega \in \mathbb R$. This, in turn, is equivalent to the condition $|\alpha| < 1$, which does not depend on $\beta$. At the same time, Theorem 6.4 yields the robust stability condition
$$\left\|\begin{bmatrix} \alpha & \beta \\ 0 & \alpha \end{bmatrix}\right\| = \frac{\sqrt{4\alpha^2 + \beta^2} + |\beta|}{2} < 1 \iff |\alpha| < \sqrt{1 - |\beta|}.$$
This condition is non-conservative only if $\beta = 0$. The degree of conservatism of this condition, i.e. the gap between it and the actual condition $|\alpha| < 1$, increases with $|\beta|$, and the problem becomes infeasible altogether if $|\beta| \ge 1$. O
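The closed-form norm used in the example can be cross-checked against a direct singular-value computation; a quick sketch:

```python
import numpy as np

def spectral_norm_closed_form(alpha, beta):
    # largest singular value of [[alpha, beta], [0, alpha]], cf. Example 6.1
    return (np.sqrt(4 * alpha**2 + beta**2) + abs(beta)) / 2

rng = np.random.default_rng(0)
for _ in range(100):
    a, b = rng.uniform(-3, 3, size=2)
    T = np.array([[a, b], [0.0, a]])
    assert abs(np.linalg.norm(T, 2) - spectral_norm_closed_form(a, b)) < 1e-10
    # the actual stability condition depends only on the spectral radius, |a|
    assert abs(max(abs(np.linalg.eigvals(T))) - abs(a)) < 1e-10
```

The growing gap between the spectral norm and the spectral radius as $|\beta|$ increases is precisely the conservatism quantified in the example.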
Conservatism can be reduced by noticing that $\delta I = M^{-1}(\delta I)M$ for every LTI and nonsingular $M$. If $M$ is bi-stable (i.e. $M, M^{-1} \in H_\infty$), then the internal stability of the system in Fig. 6.8(b) is equivalent to that of the system in Fig. 6.8(c). The freedom in the choice of the multiplier $M$ (cf. the discussion in §3.4.4) can then be used to reduce the $H_\infty$ norm of $M T_{zw} M^{-1}$. The following result, which is a particular case of the general $\mu$ theory [54], holds:

Proposition 6.5. The following statements are equivalent:

1. The feedback interconnection in Fig. 6.8(b) is internally stable for all $\delta \in \mathcal B_\infty^{1\times1}$.

2. $T_{zw} \in H_\infty$ and there is $M \in H_\infty^{m\times m}$ such that $M^{-1} \in H_\infty^{m\times m}$ and $\|M T_{zw} M^{-1}\|_\infty < 1$.

3. $T_{zw} \in H_\infty$ and $\rho(T_{zw}(j\omega)) < 1$ for all $\omega \in \mathbb R$, where $\rho(\cdot)$ denotes the spectral radius.
In the SISO case, i.e. if $m = 1$, the conditions of Proposition 6.5 clearly reduce to those of Theorem 6.4.

Example 6.2. Return to Example 6.1. The third condition of Proposition 6.5 obviously reads $|\alpha| < 1$. To apply the second condition, let us select $M = \begin{bmatrix} \epsilon & 0 \\ 0 & 1 \end{bmatrix}$, which is obviously bi-stable for all $\epsilon \in \mathbb R \setminus \{0\}$. The stability condition then reads
$$\left\|\begin{bmatrix} \epsilon & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \alpha & \beta \\ 0 & \alpha \end{bmatrix}\begin{bmatrix} 1/\epsilon & 0 \\ 0 & 1 \end{bmatrix}\right\| = \left\|\begin{bmatrix} \alpha & \epsilon\beta \\ 0 & \alpha \end{bmatrix}\right\| < 1 \iff |\alpha| < \sqrt{1 - |\epsilon\beta|}.$$
As $\epsilon$ is almost arbitrary (any nonzero value will do), the quantity $|\epsilon\beta|$ can be made arbitrarily small, so we can recover the non-conservative condition $|\alpha| < 1$ again. O
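The effect of the scaling is easy to observe numerically; the values of $\alpha$, $\beta$, and the sequence of scalings below are arbitrary illustrative choices:

```python
import numpy as np

def scaled_norm(alpha, beta, eps):
    """||M T M^{-1}|| for M = diag(eps, 1) and T = [[alpha, beta], [0, alpha]]."""
    M = np.diag([eps, 1.0])
    T = np.array([[alpha, beta], [0.0, alpha]])
    return np.linalg.norm(M @ T @ np.linalg.inv(M), 2)

alpha, beta = 0.9, 5.0            # the unscaled norm is far above 1 here
norms = [scaled_norm(alpha, beta, eps) for eps in (1.0, 1e-2, 1e-4, 1e-6)]
assert norms[0] > 1.0                     # small-gain test fails without scaling
assert all(n1 > n2 for n1, n2 in zip(norms, norms[1:]))   # scaling helps
assert abs(norms[-1] - abs(alpha)) < 1e-3  # and approaches |alpha| = 0.9 < 1
```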
The scaling in Proposition 6.5 requires dynamic systems $M$ in general. Finding such a system typically amounts to fitting a finite number of matrix parameters over a chosen frequency grid. This is not particularly elegant and normally leads to rather high-order conditions. However, if only static scaling matrices $M$ are used, the contractiveness of $M T_{zw} M^{-1}$ can be cast as a standard LMI (as a matter of fact, the use of only static scalings is enough if $\delta$ is allowed to be time varying [65]).
Proposition 6.6. Let $T_{zw}(s) = D + C(sI - A)^{-1}B$ for a Hurwitz $A$ be an $m \times m$ transfer function. There is a nonsingular $M \in \mathbb R^{m\times m}$ such that $\|M T_{zw} M^{-1}\|_\infty < 1$ iff there are $X, Y > 0$ such that
$$\begin{bmatrix} A'X + XA + C'YC & XB + C'YD \\ B'X + D'YC & D'YD - Y \end{bmatrix} < 0. \qquad(6.3)$$
Proof. Because
$$M T_{zw}(s) M^{-1} = \left[\begin{array}{c|c} A & BM^{-1} \\ \hline MC & MDM^{-1} \end{array}\right],$$
the condition $\|M T_{zw} M^{-1}\|_\infty < 1$ is equivalent [44, §4.3.4] to the existence of $X = X' > 0$ such that
$$\begin{bmatrix} A'X + XA + C'M'MC & XBM^{-1} + C'M'MDM^{-1} \\ M^{-\prime}B'X + M^{-\prime}D'M'MC & M^{-\prime}D'M'MDM^{-1} - I \end{bmatrix} < 0.$$
This condition does not change if its left-hand side is pre-multiplied by $\begin{bmatrix} I & 0 \\ 0 & M' \end{bmatrix}$ and post-multiplied by $\begin{bmatrix} I & 0 \\ 0 & M \end{bmatrix}$ (as $M$ is nonsingular). This yields (6.3) with $Y = M'M > 0$.
Return now to the uncertainty set $\mathcal B_\infty$. It might appear that in most situations this is a rather rough description of uncertain elements. Indeed, it is hard to imagine a realistic uncertain element whose frequency response lies in the unit disk of the complex plane at all frequencies. A finer model should accommodate different modeling-mismatch sizes over different frequencies, say smaller errors at low frequencies and larger ones at high frequencies. Moreover, in many situations uncertain elements are not centered at the origin, but rather at some nonzero nominal value, which might also drift with frequency.

It turns out, however, that the set $\mathcal B_\infty$ can easily generate such models as well. In a sense, it serves as a base for many more sophisticated, and more realistic, uncertainty models, whose properties are shaped by frequency-dependent weights. A general, albeit quite abstract, model for such uncertainty sets is
$$\mathcal F_u(W, \mathcal B_\infty^{m\times m}) := \bigl\{\Delta \mid \exists \Delta' \in \mathcal B_\infty^{m\times m} \text{ such that } \Delta = \mathcal F_u(W, \Delta')\bigr\} \qquad(6.4)$$
for a given weight $W$. Likewise, the set $\mathcal F_u(W, \mathcal B_\infty^{1\times1} I_m)$ can be used for structured uncertainty elements. Several concrete simple examples of weights, which are more tangible than the abstract model, are presented below:
Constant scaling: the choice
$$W = \begin{bmatrix} 0 & \alpha I \\ I & 0 \end{bmatrix} \implies \mathcal F_u(W, \mathcal B_\infty) = \mathcal B_\infty\, \alpha,$$
which is merely a uniform scaling of the uncertain element by $\alpha > 0$. In the SISO case, frequency responses of elements from this set lie in the disk of radius $\alpha$ centered at the origin at all frequencies, i.e. in $\alpha\bar{\mathbb D}$.

Frequency-dependent scaling: the choice
$$W = \begin{bmatrix} 0 & W_\alpha \\ I & 0 \end{bmatrix} \implies \mathcal F_u(W, \mathcal B_\infty) = \mathcal B_\infty W_\alpha,$$
which scales the uncertain element by a dynamic, i.e. frequency-dependent, weight $W_\alpha$. In the SISO case, frequency responses of elements from this set at each frequency $\omega$ lie in the disk of radius $|W_\alpha(j\omega)|$ centered at the origin, i.e. in $|W_\alpha(j\omega)|\bar{\mathbb D}$.
Fig. 6.9: Embedded robustness setup for Fig. 6.1 with $\delta \in \mathcal B_\infty^{1\times1}$ and $\tilde G = W \star G$
Shifted frequency-dependent scaling: the choice
$$W = \begin{bmatrix} 0 & W_\alpha \\ I & W_\varkappa \end{bmatrix} \implies \mathcal F_u(W, \mathcal B_\infty) = W_\varkappa + \mathcal B_\infty W_\alpha,$$
which effectively takes $W_\varkappa$ as the "nominal model" of the uncertain element and scales the deviation from this nominal model by a dynamic, i.e. frequency-dependent, weight $W_\alpha \in H_\infty$. In the SISO case, frequency responses of elements from this set at each frequency $\omega$ lie in the disk of radius $|W_\alpha(j\omega)|$ centered at $W_\varkappa(j\omega)$, i.e. in $W_\varkappa(j\omega) + |W_\alpha(j\omega)|\bar{\mathbb D}$.
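All three constructions are instances of the upper LFT $\mathcal F_u(W, \Delta) = W_{22} + W_{21}\Delta(I - W_{11}\Delta)^{-1}W_{12}$. A pointwise (fixed-frequency, scalar) sanity check, with arbitrary complex numbers standing in for the weight values:

```python
def f_upper(W, delta):
    """Upper LFT F_u(W, delta) for a scalar partition W = [[W11, W12], [W21, W22]]."""
    W11, W12, W21, W22 = W[0][0], W[0][1], W[1][0], W[1][1]
    return W22 + W21 * delta * W12 / (1.0 - W11 * delta)

Wa = 0.3 - 0.2j     # stand-in for the value W_alpha(jw) at some fixed frequency
Wk = 0.8 + 0.1j     # stand-in for W_kappa(jw) at the same frequency
delta = 0.5j        # some contractive uncertainty value, |delta| <= 1

# (frequency-dependent) scaling: delta is mapped to delta * W_alpha
assert abs(f_upper([[0.0, Wa], [1.0, 0.0]], delta) - delta * Wa) < 1e-15
# shifted scaling: delta is mapped to W_kappa + delta * W_alpha
assert abs(f_upper([[0.0, Wa], [1.0, Wk]], delta) - (Wk + delta * Wa)) < 1e-15
```

With $W_{11} = 0$ in all three weights, the Möbius map discussed next degenerates to an affine map of the disk, which is why the images are again disks.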
In general, the mapping $\Delta \mapsto \mathcal F_u(W, \Delta)$ can be viewed as a Möbius transformation, which maps the unit disk $\bar{\mathbb D}$ into yet another disk in the complex plane $\mathbb C$, provided $|W_{11}(j\omega)| < 1$ for all $\omega$. The latter condition just says that $1 \notin W_{11}(j\omega)\bar{\mathbb D}$. If it does not hold, then a disk may be transformed into a half plane or the exterior of a disk. If $W_{11} \ne 0$, the transformed disk may be rotated and warped, in the sense that the distance between points is not preserved under the transformation (these properties can perhaps be exploited to reduce the conservatism of the approach, although I am not aware of such results).

The analysis of systems with the uncertainty model (6.4) is not conceptually different from that of systems with the plain unit-disk model. Indeed, the feedback connection of $T_{zw}$ and $\mathcal F_u(W, \Delta)$ is equivalent to the feedback connection of $\mathcal F_l(W, T_{zw})$ and $\Delta$. Thus, we return to the setup studied earlier and can use Theorem 6.4, just with $\mathcal F_l(W, T_{zw})$ in place of $T_{zw}$. Structured uncertainty from the set $\mathcal F_u(W, \mathcal B_\infty^{1\times1} I_m)$ can be treated in the same vein. Thus, taking into account that $\mathcal F_u(G_1, \mathcal F_u(G_2, \Delta)) = \mathcal F_u(G_2 \star G_1, \Delta)$, where "$\star$" stands for the Redheffer star product, the uncertain delay setup in Fig. 6.7(a) can be embedded into the system in Fig. 6.9 under $\tilde G = W \star G$.
In what follows, we are interested in this setup for $W$ as in the shifted frequency-dependent scaling model above and
$$G = \begin{bmatrix} 0 & I_m \\ P\bar D_{\tau_0} & 0 \end{bmatrix},$$
which corresponds to the input-delay system with a plant $P$ and a nominal delay $\tau_0 \ge 0$, cf. (1.22) on p. 9. In this case
$$\tilde G = \begin{bmatrix} 0 & W_\alpha \\ I_m & W_\varkappa \end{bmatrix} \star \begin{bmatrix} 0 & I_m \\ P\bar D_{\tau_0} & 0 \end{bmatrix} = \begin{bmatrix} 0 & W_\alpha \\ P\bar D_{\tau_0} & P W_\varkappa \bar D_{\tau_0} \end{bmatrix}. \qquad(6.5)$$
If a controller $K : y \mapsto u$ is applied, the closed-loop system $T_{zw} = \mathcal F_l(\tilde G, K) : w \mapsto z$ becomes
$$T_{zw} = W_\alpha (I - K P W_\varkappa \bar D_{\tau_0})^{-1} K P \bar D_{\tau_0}. \qquad(6.6)$$
As discussed in §4.3.2, the $H_\infty$ problem for $\tau_0 \ne 0$ is particularly simple for a generalized plant like that in (6.5).
6.2.3 Covering models for the uncertain delay element
Having acquired tools to handle problems with unstructured and structured norm-bounded uncertainty, consider now their use in the robust stability analysis of systems with delay uncertainty in Fig. 6.1. Our goal is to embed the uncertain delay element $\bar D_{\tau_\delta}$ into a family of stable systems whose frequency responses lie in disks in the complex plane and include those of $\bar D_{\tau_\delta}$ for all $\tau_\delta \in [0,\bar\tau]$. This, in turn, embeds the problem into the robust stability setup in Fig. 6.9.
To construct such families, consider the areas in the complex plane $\mathbb C$ that are covered by the uncertain frequency response $e^{-j\omega\tau_\delta}$, again for $\tau_\delta \in [0,\bar\tau]$. At every frequency $\omega$ this is a circular arc of unit radius, centered at the origin and subtending the angle $\bar\tau\omega$ [rad] in the clockwise direction starting from $1$ (which corresponds to $\tau_\delta = 0$). Some of these arcs, for $\bar\tau\omega \in \{0, 1, 2, \pi, 4\}$ and for $\bar\tau\omega \ge 2\pi$ (when the arc is the whole unit circle), are depicted in (6.7); each starts at the point $1$ and grows clockwise along the unit circle. (6.7)
The arc reduces to the single point $1$ at $\omega = 0$ and grows with $\omega$, until it becomes the whole unit circle $\mathbb T$ for all $\omega \ge 2\pi/\bar\tau$. Our goal is to cover these arcs by the tightest disks of the form $\varkappa(\omega) + \alpha(\omega)\bar{\mathbb D}$, where $\varkappa(\omega) \in \mathbb C$ is the center and $\alpha(\omega) \ge 0$ is the radius, at each frequency. This task clearly depends on the choice of the disk center $\varkappa(\omega)$, so we consider several possible choices for it below.
Covering with the center at $\varkappa(\omega) = 0$

Perhaps the simplest choice is to place the center at the origin for all frequencies, which yields the closed unit disk $\bar{\mathbb D}$ as the frequency-independent covering, see Fig. 6.10(a). This implies that the system in Fig. 6.1 for every $\tau_\delta$ is covered by the system in Fig. 6.9 with $\tilde G = G$. By Proposition 6.5, the system in Fig. 6.9 is then robustly stable iff
$$\tilde G_{zw} \in H_\infty \text{ and } \exists M, M^{-1} \in H_\infty \text{ such that } \|M \tilde G_{zw} M^{-1} \bar D_\tau\|_\infty = \|M \tilde G_{zw} M^{-1}\|_\infty < 1.$$
Hence, the conditions above also guarantee the stability of the system in Fig. 6.1 for all $\tau_\delta \ge -\tau_0$, i.e. the system is delay-independent stable. In the particular case of the standard feedback control system with a plant $P$, a controller $K$, and $\tau_0 = 0$, we have that $\tilde G_{zw} = PK$ is the loop gain (cf. (1.25)) and the closed-loop system is stable for every constant loop delay if the loop gain is stable and its peak frequency-response gain is strictly contractive. In the SISO case, i.e. for $m = 1$, this recovers Tsypkin's criterion [72].
Covering with the center at $\varkappa(\omega) = 1$

Delay-dependent conditions can be obtained by choosing $\varkappa(\omega) \ne 0$. The best-studied choice is apparently $\varkappa(\omega) = 1$, which effectively assumes the nominal value $\tau_\delta = 0$. The tightest covering disk for this choice does depend on $\bar\tau$, see Figs. 6.10(b) and 6.10(c). The radii of such disks at each frequency are derived as
$$\alpha(\omega) = \max_{\tau_\delta \in [0,\bar\tau]} |1 - e^{-j\omega\tau_\delta}| = \begin{cases} |1 - e^{-j\bar\tau\omega}| & \text{if } \bar\tau\omega \in [0,\pi] \\ |1 - e^{-j\pi}| = 2 & \text{otherwise.} \end{cases}$$
Taking into account that $|1 - e^{-j\bar\tau\omega}|^2 = 2(1 - \cos(\bar\tau\omega)) = 4\sin^2(\bar\tau\omega/2)$, we end up with $\alpha(\omega) = \alpha_1(\bar\tau\omega)$, where
$$\alpha_1(\omega) := \begin{cases} 2\sin(\omega/2) & \text{if } \omega \in [0,\pi] \\ 2 & \text{otherwise} \end{cases} \qquad(6.8)$$
(this formula was apparently first derived by Owens and Raya in [53]). The function $\alpha_1(\omega)$ is monotonically increasing, vanishing as $\omega \downarrow 0$. This agrees with the intuition that the uncertainty level of this
Fig. 6.10: Covering disks for uncertain delays: (a) $\varkappa(\omega) = 0$; (b) $\varkappa(\omega) = 1$ for $\bar\tau\omega = 1$; (c) $\varkappa(\omega) = 1$ for $\bar\tau\omega = 4$, where $\alpha(\omega) = 2$; (d) $\varkappa(\omega) = e^{-j\bar\tau\omega/2}$ for $\bar\tau\omega = 2$; (e) $\varkappa(\omega) = e^{-j\bar\tau\omega/2}$ for $\bar\tau\omega = 4$; (f) comparison of the disks for $\bar\tau\omega = 2$; (g) $\varkappa(\omega) = \varkappa_3(\bar\tau\omega)$ from (6.12) for $\bar\tau\omega = 2$; (h) $\varkappa(\omega) = \varkappa_3(\bar\tau\omega)$ from (6.12) for $\bar\tau\omega = 4$
element decreases at low frequencies. Moreover, the range of frequencies in which the uncertainty radius $\alpha_1(\bar\tau\omega)$ is small increases as $\bar\tau$ decreases, which is also intuitive.

The use of $\alpha_1$ as the uncertainty radius for the system in Fig. 6.9 is hampered by the fact that it is not a rational function of $\omega$, i.e. it cannot be the magnitude frequency response of a finite-dimensional system. So we shall approximate $\alpha_1(\omega)$ by the frequency response of a rational transfer function $W_{\alpha_1}(s)$ such that $|W_{\alpha_1}(j\omega)| \ge \alpha_1(\omega)$. Possible choices are
$$W_{\alpha_1,0}(s) := s, \quad W_{\alpha_1,1}(s) := \frac{2\sqrt3\, s}{s + 2\sqrt3}, \quad\text{and}\quad W_{\alpha_1,3}(s) := \frac{2.007\, s}{s + 2}\cdot\frac{s^2 + 3.695 s + 5.56}{s^2 + 3.026 s + 5.56}, \qquad(6.9)$$
see Fig. 6.11(a). We can see that the use of $W_{\alpha_1,3}$ yields a close approximation of $\alpha_1$, so a further increase of the approximation order is normally not required. It should be emphasized that $|W_{\alpha_1,1}(j\omega)| < |W_{\alpha_1,0}(j\omega)|$ for all $\omega \ne 0$, which means that the approximation with $W_{\alpha_1,1}$ always yields less conservative results than that with $W_{\alpha_1,0}$. At the same time, the gain of $W_{\alpha_1,3}$ is not always smaller than that of $W_{\alpha_1,1}$, so there might be situations when the use of this third-order covering leads to more conservative results than for the first-order covering.
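That the three transfer functions in (6.9) indeed upper-bound $\alpha_1(\omega)$ pointwise, and that the first-order gain lies below that of $W_{\alpha_1,0}$, can be verified on a frequency grid; a sketch (the grid choices are arbitrary):

```python
import numpy as np

def alpha1(w):
    """Radius (6.8) of the tightest covering disk centered at 1."""
    return np.where(w <= np.pi, 2 * np.sin(w / 2), 2.0)

w = np.linspace(1e-6, 50.0, 200001)
s = 1j * w
W0 = s
W1 = 2 * np.sqrt(3) * s / (s + 2 * np.sqrt(3))
W3 = 2.007 * s / (s + 2) * (s**2 + 3.695 * s + 5.56) / (s**2 + 3.026 * s + 5.56)

# each rational weight must upper-bound alpha1 pointwise to be a valid cover
for W in (W0, W1, W3):
    assert np.all(np.abs(W) >= alpha1(w) - 1e-9)
# the first-order cover is tighter than the crude bound |jw| for all w > 0
assert np.all(np.abs(W1) < np.abs(W0) + 1e-12)
```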
With finite-dimensional and stable $W_\varkappa$ and $W_\alpha$ selected according to
$$W_\varkappa(s) = 1 \quad\text{and}\quad W_\alpha(s) = W_{\alpha_1}(\bar\tau s)$$
Fig. 6.11: Rational approximations of the radii of the tightest covering disks: (a) $\alpha_1(\omega)$ from (6.8) for $\varkappa(\omega) = 1$, together with $|W_{\alpha_1,0}(j\omega)|$, $|W_{\alpha_1,1}(j\omega)|$, and $|W_{\alpha_1,3}(j\omega)|$; (b) $\alpha_3(\omega)$ from (6.14) for $\varkappa(\omega) = W_{\varkappa_3}(j\bar\tau\omega)$ from (6.13), together with $|W_{\alpha_3,1}(j\omega)|$ and $|W_{\alpha_3,3}(j\omega)|$
(mind the scaling of the Laplace variable in $W_\alpha$), we can embed the robust stability problem for the system in Fig. 6.1 into the robust stability problem for the system in Fig. 6.9 with $W = \begin{bmatrix} 0 & W_\alpha \\ I & I \end{bmatrix}$. For the problem with an uncertain loop delay this yields $\tilde G$ as in (6.5), which in the case of $W_\varkappa = 1$ is actually the generalized plant for the robust stability problem with the input multiplicative uncertainty model discussed in §4.3.2. The controlled system in (6.6) then reads
$$T_{zw} = W_\alpha (I - KP)^{-1} K P = W_\alpha T,$$
i.e. it is the weighted complementary sensitivity. Thus, Proposition 6.5 leads to the conclusion that the system in Fig. 6.1 is stable for all $\tau_\delta \in [0,\bar\tau]$ if $T \in H_\infty$ and there are $M, M^{-1} \in H_\infty$ such that $\|M T M^{-1} W_\alpha\|_\infty < 1$. In the SISO case, this reduces to the condition $\|W_\alpha T\|_\infty < 1$.

With the choice $W_\alpha(s) = W_{\alpha_1,0}(\bar\tau s) = \bar\tau s$, the condition above reads as the strict contractiveness of the $H_\infty$-norm of $\bar\tau s T(s)$. Thus, the delay margin is lower-bounded by the reciprocal of the $H_\infty$-norm of $s T(s)$. Even though this condition is based on the loosest upper bound on $\alpha_1(\omega)$, it might still be useful, see the proof of Proposition 6.1. The reason is that it is a closed-form expression, independent of $\bar\tau$ itself, unlike the conditions produced from the other bounds in (6.9).
Example 6.3. Consider a SISO plant $P$ whose only unstable pole is located at $s = a > 0$. The problem addressed in this example is to find conditions under which a stabilizing $K$ exists such that $\|W_\alpha T\|_\infty < 1$. This would yield a (conservative) guarantee on the attainable delay margin $\mu_d$, which is yet another take on the problem studied in §6.1.1. It is not hard to see that $T(a) = -1$ for all stabilizing controllers. Therefore, we have that $\|W_\alpha T\|_\infty \ge |W_\alpha(a)|$. In fact, in the case of a single unstable pole this bound can be approached arbitrarily closely by stabilizing controllers, see [13, Sec. 6.1]. Thus, we have that the system can be robustly stabilized iff
$$|W_{\alpha_1}(\bar\tau a)| < 1. \qquad(6.10)$$
If $W_\alpha(s) = W_{\alpha_1,0}(\bar\tau s) = \bar\tau s$, then condition (6.10) renders the robust stability problem feasible iff $\bar\tau < 1/a$. If $W_\alpha(s) = W_{\alpha_1,1}(\bar\tau s)$ from (6.9), then we have $\bar\tau < 2(6+\sqrt3)/(11a) \approx 1.406/a$, while if $W_\alpha(s) = W_{\alpha_1,3}(\bar\tau s)$, then all $\bar\tau < 1.702/a$ can be attained. All these bounds are more conservative than the upper bound $2/a$ from Proposition 6.2, although the conservatism level decreases as the approximation accuracy increases. O
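The thresholds $1.406/a$ and $1.702/a$ quoted above can be reproduced by solving $W_{\alpha_1,i}(x) = 1$ for $x = \bar\tau a$ on the positive real axis; a sketch using SciPy:

```python
import numpy as np
from scipy.optimize import brentq

def W_a1_1(x):    # W_{alpha1,1} from (6.9), evaluated at real s = x > 0
    return 2 * np.sqrt(3) * x / (x + 2 * np.sqrt(3))

def W_a1_3(x):    # W_{alpha1,3} from (6.9), evaluated at real s = x > 0
    return 2.007 * x / (x + 2) * (x**2 + 3.695 * x + 5.56) / (x**2 + 3.026 * x + 5.56)

# feasibility boundary of (6.10): W(tau_bar * a) = 1, solved for x = tau_bar * a
x1 = brentq(lambda x: W_a1_1(x) - 1, 0.1, 3.0)
x3 = brentq(lambda x: W_a1_3(x) - 1, 0.1, 3.0)
print(round(x1, 3), round(x3, 3))                   # 1.406 1.702
assert abs(x1 - 2 * (6 + np.sqrt(3)) / 11) < 1e-9   # the closed-form value
```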
Covering with the center at $\varkappa(\omega) = e^{-j\bar\tau\omega/2}$

Another seemingly natural choice is to place the center at the middle of the uncertain arc at each frequency, i.e. at $\varkappa(\omega) = e^{-j\bar\tau\omega/2}$. This effectively assumes the nominal value of the delay at $\tau_\delta = \bar\tau/2$. The tightest disks for this choice, see Figs. 6.10(d) and 6.10(e), have the radii
$$\alpha(\omega) = \max_{\tau_\delta \in [0,\bar\tau]} |e^{-j\bar\tau\omega/2} - e^{-j\omega\tau_\delta}| = \max_{\tau_\delta \in [0,\bar\tau]} |1 - e^{-j\omega(\tau_\delta - \bar\tau/2)}| = \begin{cases} |1 - e^{-j\bar\tau\omega/2}| & \text{if } \bar\tau\omega \in [0, 2\pi] \\ |1 - e^{-j\pi}| = 2 & \text{otherwise.} \end{cases}$$
Thus, $\alpha(\omega) = \alpha_1(\bar\tau\omega/2)$ for $\alpha_1(\omega)$ defined by (6.8), and we can use all its finite-dimensional approximations from (6.9). It is readily seen that $\alpha_1(\bar\tau\omega/2) < \alpha_1(\bar\tau\omega)$ for all $\bar\tau\omega \in (0, 2\pi)$. Hence, the covering disks in this case have smaller radii than those for $\varkappa(\omega) = 1$ whenever $\bar\tau\omega \in (0, 2\pi)$. With the use of
$$W_\varkappa(s) = e^{-\bar\tau s/2} \quad\text{and}\quad W_\alpha(s) = W_{\alpha_1}(\bar\tau s/2)$$
for the input-delay problem, we end up with (6.5), which includes the nonzero loop delay $\bar D_{\tau_0 + \bar\tau/2}$. This delay can be compensated by the standard MSP though, so its presence does not hamper the design of controllers minimizing $\|T_{zw}\|_\infty$.
Example 6.4. Consider the problem studied in Example 6.3, where a SISO plant has only one unstable pole at $s = a$. The delay margin $\mu_d = \bar\tau$ is guaranteed if there is a stabilizing $K$ such that $\|T_{zw}\|_\infty < 1$, where $T_{zw} = W_\alpha P K/(1 - P W_\varkappa K)$. Because $|T_{zw}(a)| = |W_\alpha(a)|/|W_\varkappa(a)|$ for every stabilizing $K$, we have that $\|T_{zw}\|_\infty \ge |W_\alpha(a)|/|W_\varkappa(a)| = e^{\bar\tau a/2}|W_\alpha(a)|$ and this bound can be approached. Hence, the system can be robustly stabilized iff
$$e^{\bar\tau a/2}|W_{\alpha_1}(\bar\tau a/2)| < 1. \qquad(6.11)$$
For the rational approximations of $\alpha_1(\omega)$ given by (6.9), inequality (6.11) results in the stability conditions $\bar\tau < 1.134/a$, $\bar\tau < 1.259/a$, and $\bar\tau < 1.305/a$, respectively. The first of them, corresponding to the cover with $W_{\alpha_1,0}(s) = s$, is less conservative than the corresponding condition in Example 6.3. The second and the third conditions are actually more conservative, even though the uncertainty disks have smaller radii. O
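The three thresholds follow from solving $e^{x}\,W_{\alpha_1,i}(x) = 1$ in $x = \bar\tau a/2$, so that the bound on $\bar\tau a$ is $2x$; a sketch:

```python
import numpy as np
from scipy.optimize import brentq

W0 = lambda x: x                                          # W_{alpha1,0}
W1 = lambda x: 2 * np.sqrt(3) * x / (x + 2 * np.sqrt(3))  # W_{alpha1,1}
W3 = lambda x: (2.007 * x / (x + 2)                       # W_{alpha1,3}
                * (x**2 + 3.695 * x + 5.56) / (x**2 + 3.026 * x + 5.56))

# boundary of (6.11): e^{x} * W(x) = 1 in x = tau_bar * a / 2
bounds = [2 * brentq(lambda x, W=W: np.exp(x) * W(x) - 1, 0.01, 2.0)
          for W in (W0, W1, W3)]
print([round(b, 3) for b in bounds])    # [1.134, 1.259, 1.305]
```

For the crude cover $W_{\alpha_1,0}$, the boundary equation $x e^{x} = 1$ is solved by the Lambert-W-type constant $x \approx 0.567$, whence the $1.134/a$ bound.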
The outcome of Example 6.4, namely that a smaller covering disk might yield a more conservative bound, might appear counterintuitive at first sight. Yet it is actually not, as explained in [76]. To understand why, compare the disks for $\varkappa(\omega) = 1$ and $\varkappa(\omega) = e^{-j\bar\tau\omega/2}$ at $\bar\tau\omega = 2$, which are shown in Fig. 6.10(f). These disks cover different areas and the smaller disk is not inscribed in the larger one. Consequently, it might happen that the worst uncertainty $\delta$ belongs to the area covered by the smaller disk, but not to that covered by the larger one.
Covering with a center motivated by the smallest covering disks

Neither $\varkappa(\omega) = 1$ nor $\varkappa(\omega) = e^{-j\bar\tau\omega/2}$ results in the smallest disks, i.e. those having the smallest radii, covering the arcs in (6.7). The smallest disks are obtained by placing the center at the midpoint of the chord connecting the end points of the arc for every frequency until the arc becomes a semicircle, and at the origin afterwards, see Figs. 6.10(g) and 6.10(h). Algebraically, such centers satisfy $\varkappa(\omega) = \varkappa_3(\bar\tau\omega)$, where
$$\varkappa_3(\omega) := \frac{1 + e^{-j\omega}}{2}\,\mathbb 1_{[0,\pi]}(\omega) = \begin{cases} (1 + e^{-j\omega})/2 & \text{if } \omega \in [0,\pi] \\ 0 & \text{otherwise.} \end{cases} \qquad(6.12)$$
The radii of such disks are
$$\alpha(\omega) = \max_{\tau_\delta \in [0,\bar\tau]} \left| \frac{1 + e^{-j\bar\tau\omega}}{2} - e^{-j\omega\tau_\delta} \right| = \frac{|1 - e^{-j\bar\tau\omega}|}{2} = \sin\frac{\bar\tau\omega}{2}$$
if $\bar\tau\omega \in [0,\pi]$, and $\alpha(\omega) = 1$ if $\bar\tau\omega \ge \pi$. Thus, we end up with $\alpha(\omega) = \alpha_1(\bar\tau\omega)/2$, where $\alpha_1(\omega)$ is given by (6.8). Moreover, the resulting disks are always inscribed in the disks corresponding to the choice $\varkappa(\omega) = 1$. This implies that centering the disks as in (6.12) can never produce more conservative results than centering them at $\varkappa(\omega) = 1$.
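Both claims, that the disks centered per (6.12) cover the arcs with radius $\alpha_1(\bar\tau\omega)/2$ and that they are inscribed in the disks centered at $1$, can be checked numerically; a sketch (grids arbitrary, with $t = \bar\tau\omega$):

```python
import numpy as np

def kappa3(t):   # center (6.12)
    return (1 + np.exp(-1j * t)) / 2 if t <= np.pi else 0.0

def alpha1(t):   # radius (6.8)
    return 2 * np.sin(t / 2) if t <= np.pi else 2.0

for t in np.linspace(0.01, 10.0, 1000):
    arc = np.exp(-1j * t * np.linspace(0.0, 1.0, 2001))   # e^{-j t q}, q in [0, 1]
    r_small = alpha1(t) / 2
    # the smallest disk indeed covers the whole arc ...
    assert np.max(np.abs(arc - kappa3(t))) <= r_small + 1e-6
    # ... and is inscribed in the disk centered at 1 of radius alpha1(t)
    assert abs(kappa3(t) - 1) + r_small <= alpha1(t) + 1e-12
```

Note that the inscription holds with equality (the small disk is internally tangent to the large one), which is consistent with the radii differing exactly by the factor of two.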
However, the use of (6.12) in robust control problems is not easy. What we need is a causal system $W_\varkappa$ whose frequency response coincides with $\varkappa(\omega)$. The finite support of $\varkappa(\omega)$, which bears some resemblance to the ideal low-pass filter, renders this goal unattainable. But motivated by the "optimal" choice above, consider placing the center at $\varkappa(\omega) = W_{\varkappa_3}(j\bar\tau\omega)$, where
$$W_{\varkappa_3}(s) = \frac{s^2 + 12}{s^2 + 6s + 12}, \qquad(6.13)$$
which is the $[2,2]$-Padé approximant of $(1 + e^{-s})/2$. The radii of covering disks for this choice are given in the following result:
Lemma 6.7. The tightest covering disks for $\varkappa(\omega) = W_{\varkappa_3}(j\bar\tau\omega)$ have the radii $\alpha(\omega) = \alpha_3(\bar\tau\omega)$, where
$$\alpha_3(\omega) := \sqrt{2 - 2\,\frac{18\omega^2 + (12 - \omega^2)\beta(\omega)}{\omega^4 + 12\omega^2 + 144}}, \qquad(6.14)$$
in which
$$\beta(\omega) = \begin{cases} (12 - \omega^2)\cos\omega + 6\omega\sin\omega & \text{if } \omega \in [0, 2\sqrt3] \\ \sqrt{\omega^4 + 12\omega^2 + 144} & \text{otherwise.} \end{cases}$$
Proof. Without loss of generality assume that \(\bar\tau = 1\). Denote
\[
\phi(\theta) := \bigl|e^{-j\omega\theta} - W_{\phi3}(j\omega)\bigr|^2
= 2 - 2\,\frac{18\omega^2 + (12-\omega^2)\bigl((12-\omega^2)\cos(\omega\theta) + 6\omega\sin(\omega\theta)\bigr)}{\omega^4 + 12\omega^2 + 144}
\]
for \(\theta \in \mathbb{R}_+\) and analyze its maximum for \(\theta \in [0,1]\). This is a periodic continuous function of \(\theta\) with
\[
\frac{d\phi}{d\theta} = \frac{2\omega(\omega^2-12)\bigl(6\omega\cos(\omega\theta) + (\omega^2-12)\sin(\omega\theta)\bigr)}{\omega^4 + 12\omega^2 + 144}
\]
and \(\dot\phi(\theta) = 0\) at every
\[
\theta = \theta_k := \frac{1}{\omega}\arctan\frac{6\omega}{12-\omega^2} + \frac{\pi}{\omega}k, \quad k \in \mathbb{Z}.
\]
If \(\omega^2 < 12\), then \(\theta_0\) is a decreasing function of \(\omega\), starting from \(0.5\) at \(\omega = 0\) and approaching \(\pi/(4\sqrt3) \approx 0.45\) as \(\omega \uparrow \sqrt{12}\). Because in this frequency range \(\pi/\omega \ge \pi/\sqrt{12} > 0.9\), we have that \(\theta_k > 1\) if \(k\) is positive and \(\theta_k < 0\) if \(k\) is negative. Thus, the only \(\theta_k \in [0,1]\) is \(\theta_0\). It can be verified that \(\theta = \theta_0\) is a local minimum point. Hence, the maximum, required for calculating \(\alpha(\omega)\), is attained either at \(\theta = 0\) or at \(\theta = 1\). Direct calculations show that for all \(\omega \in [0, 2\sqrt3]\) we should pick the latter, so that \(\alpha_3(\omega) = \sqrt{\phi(1)}\).

If \(\omega^2 > 12\), then \(\theta_0 < 0\) and \(\theta_1\) is a decreasing function of \(\omega\) lying in \((0, \pi/(4\sqrt3))\). Moreover, \(\theta_k\) is a local maximum point for all odd \(k\). Because for all odd \(k\)
\[
(12-\omega^2)\cos(\omega\theta_k) + 6\omega\sin(\omega\theta_k) = \sqrt{\omega^4 + 12\omega^2 + 144}
\]
is independent of \(k\) and \(\theta_1 \in [0,1]\) for all \(\omega > 2\sqrt3\), we may always take \(\alpha_3(\omega) = \sqrt{\phi(\theta_1)}\).
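The closed-form radius in Lemma 6.7 is easy to validate numerically. The sketch below (Python/NumPy, not part of the original notes) compares \(\alpha_3(\omega)\) from (6.14) with a brute-force maximization of \(|e^{-j\omega\theta} - W_{\phi3}(j\omega)|\) over \(\theta \in [0,1]\), and also checks that the first-order rational bound \(W_{\alpha3,1}(s) = 2s/(s+4)\) from (6.15) majorizes \(\alpha_3\):

```python
import numpy as np

def W_phi3(s):
    # [2,2]-Pade approximant of (1 + e^{-s})/2, Eq. (6.13)
    return (s**2 + 12) / (s**2 + 6*s + 12)

def alpha3(w):
    # closed-form covering radius from Lemma 6.7 (normalized tau_bar = 1)
    D = w**4 + 12*w**2 + 144
    if w <= 2*np.sqrt(3):
        beta = (12 - w**2)*np.cos(w) + 6*w*np.sin(w)
    else:
        beta = np.sqrt(D)
    return np.sqrt(2 - 2*(18*w**2 + (12 - w**2)*beta)/D)

def alpha_brute(w, n=20001):
    # brute-force max of |e^{-j*w*theta} - W_phi3(j*w)| over theta in [0, 1]
    theta = np.linspace(0.0, 1.0, n)
    return np.max(np.abs(np.exp(-1j*w*theta) - W_phi3(1j*w)))

for w in [0.3, 1.0, 2.0, 3.0, 2*np.sqrt(3), 5.0, 10.0]:
    assert abs(alpha3(w) - alpha_brute(w)) < 1e-3
    # the first-order bound W_alpha3,1(jw) = 2jw/(jw+4) covers alpha3
    assert abs(2j*w/(1j*w + 4)) >= alpha3(w) - 1e-3
```

The sample frequencies deliberately straddle the branch point \(\omega = 2\sqrt3\) of \(\beta(\omega)\).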
Similarly to the procedure for \(\phi(\omega) = 1\), it is possible to approximate the function \(\alpha_3(\omega)\) by the magnitude frequency responses of rational transfer functions. Two possible choices are
\[
W_{\alpha3,1}(s) := \frac{2s}{s+4} \quad\text{and}\quad
W_{\alpha3,3}(s) := W_{\alpha3,1}(s)\,\frac{s^2 + 1.92s + 12.781}{s^2 + 2.393s + 12.781}, \qquad (6.15)
\]
see Fig. 6.11(b) on p. 116. The first-order approximation, \(|W_{\alpha3,1}(j\omega)|\), is tight at both low and high frequencies. This is in contrast to \(|W_{\alpha1,1}(j\omega)|\), which is loose at high frequencies. In fact, it can be verified, numerically, that
\[
W_{\phi3}(j\omega) + |W_{\alpha3,1}(j\omega)|\,\bar{\mathbb{D}} \subset 1 + |W_{\alpha1,1}(j\omega)|\,\bar{\mathbb{D}}
\]
6.2. EMBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 119
for all \(\omega\), where \(\bar{\mathbb{D}}\) is the closed unit disk. Hence, covering the uncertain delay \(\bar{D}_{\tau_\delta}\) with disks centered at \(W_{\phi3}\) and having the radius \(|W_{\alpha3,1}(j\omega)|\) is less conservative than covering with those centered at 1 and having the radius \(|W_{\alpha1,1}(j\omega)|\). Also, it is readily seen that \(|W_{\alpha3,1}(j\omega)| > |W_{\alpha3,3}(j\omega)|\) for all \(\omega > 0\), so the third-order upper bound is always less conservative than the first-order one in this case. This observation is confirmed by the example below.
Example 6.5. Return to the system in Examples 6.3 and 6.4. Now the problem is solved for
\[
W_\phi(s) = W_{\phi3}(\bar\tau s) \quad\text{and}\quad W_\alpha(s) = W_{\alpha3}(\bar\tau s).
\]
By the arguments already used in Example 6.4, the problem is solvable if
\[
|W_{\alpha3}(j\bar\tau a)| < |W_{\phi3}(j\bar\tau a)|.
\]
This yields the conditions \(\bar\tau < 1.691/a\) and \(\bar\tau < 1.769/a\) under \(W_{\alpha3} = W_{\alpha3,1}\) and \(W_{\alpha3} = W_{\alpha3,3}\), respectively. These conditions are less conservative than those in both Example 6.3 and Example 6.4, under compatible orders of \(W_\alpha\). In fact, the condition \(\bar\tau < 1.769/a\) is about 88% of the upper bound in Proposition 6.2, which is not bad for a conservative covering. ○
6.2.4 Case study
Return to the system studied in §6.1.2, where a proportional primary controller was designed for the plain integrator with a nominal delay \(\tau_0\), see Fig. 6.3. Consider again its delay margin, i.e. the minimal destabilizing \(\tau_\delta = \bar\tau > 0\) (for the sake of simplicity, only an increase of the loop delay is addressed below). But unlike the analysis in §6.1.2, where the actual delay margin was calculated, the problem is now analyzed from the general uncertainty-embedding point of view. To this end, consider the system \(T_{zw}\) from (6.6) for
\[
P(s) = \frac{1}{s} \quad\text{and}\quad K(s) = -\frac{k_p s}{\tau_0(1-k_p)s + k_p(1 - e^{-\tau_0 s})},
\]
where \(k_p \in (0,1)\). Given stable \(W_\phi(s)\) and \(W_\alpha(s)\), the resulting \(T_{zw}\) has the transfer function
\[
T_{zw}(s) = -\frac{W_\alpha(s)e^{-\tau_0 s}}{\sigma_0 s + 1 - e^{-\tau_0 s} + W_\phi(s)e^{-\tau_0 s}} =: -T_0(s)W_\alpha(s)e^{-\tau_0 s},
\]
where \(\sigma_0 := \tau_0(1-k_p)/k_p > 0\). The robust stability condition then reads \(\|T_0 W_\alpha\|_\infty < 1\). As a matter of fact, the latter condition in this case is equivalent to \(T_0 \in H_\infty\) and \(|T_0(j\omega)| < 1/|W_\alpha(j\omega)|\) for all \(\omega\). Because this is not a design problem, we can use the actual disk radius function \(\alpha(\omega)\) rather than its rational approximation \(|W_\alpha(j\omega)|\), so the condition to verify is
\[
T_0 \in H_\infty \quad\text{and}\quad |T_0(j\omega)| < \frac{1}{\alpha(\omega)}, \quad \forall\omega \in \mathbb{R}. \qquad (6.16)
\]
A brute-force search over a chosen frequency grid can then be used to check the second condition above. It can be shown, via the Poisson integral formula, that an \(H_\infty\) function \(W_\alpha(s)\), actually an outer one, exists such that \(|W_\alpha(j\omega)| = \alpha(\omega)\), see [56, Eqn. (1.7)]. Hence, the replacement above is well justified.
First, consider the covering under \(W_\phi(s) = 1\). With this choice the delay is eliminated from \(T_0(s)\), yielding the rational
\[
T_0(s) = \frac{1}{\sigma_0 s + 1},
\]
which is clearly stable for all \(k_p \in (0,1)\). With \(\alpha(\omega) = \alpha_1(\bar\tau\omega)\) defined by (6.8), condition (6.16) reads
\[
\frac{1}{\sqrt{1 + \sigma_0^2\omega^2}} < \frac{1}{2}
\begin{cases}
1/\sin(\bar\tau\omega/2) & \text{if } \bar\tau\omega \in [0,\pi] \\
1 & \text{otherwise.}
\end{cases}
\]
Fig. 6.12: Normalized delay margins obtained via embedding
In fact, because the left-hand side above is a strictly decreasing function of \(\omega\), we only need to check the condition in the frequency range \(\bar\tau\omega \in [0,\pi]\). The stability condition is then
\[
2\sin(\bar\tau\omega/2) < \sqrt{1 + \sigma_0^2\omega^2}, \quad \forall\,\bar\tau\omega \in [0,\pi]
\]
or, equivalently,
\[
\bar\tau < \min_{\omega\in[0,\sqrt3/\sigma_0]} \frac{2}{\omega}\arcsin\frac{\sqrt{1+\sigma_0^2\omega^2}}{2}
= \min_{\tilde\omega\in[0,\sqrt3]} \frac{2\sigma_0}{\tilde\omega}\arcsin\frac{\sqrt{1+\tilde\omega^2}}{2}
\approx 1.4775\,\sigma_0.
\]
Thus, we end up with the condition
\[
\frac{\bar\tau}{\tau_0} < 1.4775\,\frac{1-k_p}{k_p},
\]
which is easy to verify. This bound is shown in Fig. 6.12 by the cyan-blue line (the actual margin from Fig. 6.4(c) is now shown by the dashed black line). It is quite conservative for \(k_p < 0.75\), but becomes very accurate after the first jump at \(k_p \approx 0.749\). However, the approach does not capture the discontinuity of the actual delay margin in this case.
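The constant 1.4775 in the bound above can be reproduced by a direct numerical minimization; a minimal NumPy sketch (the grid resolution is an arbitrary choice, and the argument of arcsin is clipped to guard against roundoff at the endpoint):

```python
import numpy as np

# min over omega~ in (0, sqrt(3)] of (2/omega~)*arcsin(sqrt(1 + omega~^2)/2),
# which the text evaluates as approximately 1.4775 (to be scaled by sigma_0)
x = np.linspace(1e-4, np.sqrt(3), 200001)
f = 2/x*np.arcsin(np.clip(np.sqrt(1 + x**2)/2, 0.0, 1.0))
m = f.min()
assert abs(m - 1.4775) < 1e-3
```

The minimizer lies at an interior point \(\tilde\omega \approx 1.34\); at the endpoints the objective is larger.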
Now, let us choose \(W_\phi(s) = W_{\phi3}(\bar\tau s)\) from (6.13). In this case
\[
T_0(s) = \frac{1}{\sigma_0 s + 1 - (1 - W_{\phi3}(\bar\tau s))e^{-\tau_0 s}}
= \frac{1}{\sigma_0 s + 1 - V(\bar\tau s)e^{-\tau_0 s}},
\]
where \(V(s) := 6s/(s^2 + 6s + 12)\). This transfer function is actually stable for all admissible parameters. Indeed, it can be viewed as resulting from the feedback interconnection of \(1/(\sigma_0 s + 1)\) and \(V(\bar\tau s)e^{-\tau_0 s}\). Both these transfer functions are stable and have contractive frequency responses. Moreover, the first one has unit magnitude only at \(\omega = 0\), whereas the second one only at \(\bar\tau\omega = 2\sqrt3 > 0\). Hence, the loop gain is strictly contractive at all frequencies and \(T_0 \in H_\infty\) for all \(\tau_0 \ge 0\) and \(\bar\tau > 0\) by the Small Gain Theorem.
The second condition of (6.16) is more involved than that for the choice \(\phi(\omega) = 1\) and there is probably no analytic solution to it. Still, it is readily verified numerically, resulting in the bound shown in Fig. 6.12 by the solid red line. It is visibly less conservative than the previous bound for \(k_p < 0.749\) and is virtually indistinguishable from the precise margin for \(k_p > 0.749\). It also does capture the discontinuity of the delay margin.
As a matter of fact, the small-gain arguments used to prove the stability of \(T_0\) above can also be applied in the case when the actual \(\phi_3\) from (6.12) (or, more accurately, its outer extension) is used instead of \(W_{\phi3}\). The resulting bound is shown by the dotted red line in Fig. 6.12. This curve is almost the same as the solid red line, except for the interval \(k_p \in (0.673, 0.749)\).
6.2.5 Time-varying delays
Time-varying delays can also be embedded into the general robustness setup in Fig. 6.9, now with a potentially time-varying uncertainty element \(\delta\) in the unit ball \(\mathcal{B}^{1\times1}\). As already discussed at the beginning of §6.2.2, by this unit ball we understand the set of all stable systems that are contractive in the \(L_2(\mathbb{R})\)-induced norm.

Covering time-varying delays is a more delicate, and less visual, task. The fact that the time-varying delay operator \(\bar{D}_{\tau(t)}\) might be unstable, see Remark 1.2 on p. 3, adds even more complications. Nonetheless, covering norm-bounded uncertainty sets can be found, and the following two results, which should apparently be attributed to [19, §8.6.1], are instrumental toward this end.
Lemma 6.8. If \(\tau(t) \in [0,\bar\tau]\) and \(|\dot\tau(t)| \le \nu < 1\), then \(\bar{D}_{\tau(t)}\) is a bounded operator on \(L_2(\mathbb{R}_+)\), with
\[
\|\bar{D}_{\tau(t)}\|_{L_2(\mathbb{R}_+)\to L_2(\mathbb{R}_+)} = \frac{1}{\sqrt{1-\nu}} > 1.
\]
Proof. Let \(\rho(t) := t - \tau(t)\). Because \(\dot\rho(t) \ge 1 - \nu > 0\), the function \(\eta(t) := \rho^{-1}(t)\) is well defined, with \(t \le \eta(t) \le t + \bar\tau\) and \(0 < \dot\eta(t) \le 1/(1-\nu)\). Now, if \(w = \bar{D}_{\tau(t)} z\), then we have that
\[
\|w\|_2^2 = \int_0^\infty [z(t-\tau(t))]^2\,dt \Bigr|_{t=\eta(s)}
= \int_{\rho(0)}^\infty [z(s)]^2\,\dot\eta(s)\,ds
= \int_0^\infty [z(s)]^2\,\dot\eta(s)\,ds
\le \frac{1}{1-\nu}\int_0^\infty [z(s)]^2\,ds = \frac{1}{1-\nu}\,\|z\|_2^2
\]
(the facts that \(t = \eta(s) \iff s = \rho(t)\) and \(z(t) = 0\) whenever \(t < 0\) were used). Now it only remains to prove that the bound is tight. To this end, let
\[
z(t) = \mathbb{1}_{[0,(1/\nu-1)\bar\tau]}(t) \quad\text{and}\quad
\tau(t) = \begin{cases} \nu t & \text{if } 0 \le t \le \bar\tau/\nu \\ \bar\tau & \text{otherwise,} \end{cases}
\]
for which \(w(t) = \mathbb{1}_{[0,\bar\tau/\nu]}(t)\) and
\[
\frac{\|w\|_2^2}{\|z\|_2^2} = \frac{\bar\tau/\nu}{(1/\nu - 1)\bar\tau} = \frac{1}{1-\nu}.
\]
This is what we need.
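The worst-case pair \((z,\tau)\) constructed in the proof can be checked by direct quadrature; a small NumPy sketch (not part of the original notes) with the arbitrary values \(\nu = 0.5\) and \(\bar\tau = 1\):

```python
import numpy as np

# Tightness example from the proof of Lemma 6.8:
# z = 1 on [0, (1/nu - 1)*tau_bar] and tau(t) = min(nu*t, tau_bar) give
# w(t) = z(t - tau(t)) = 1 on [0, tau_bar/nu], so ||w||^2/||z||^2 = 1/(1 - nu).
nu, tau_bar = 0.5, 1.0
t = np.linspace(0.0, 5.0, 500001)
dt = t[1] - t[0]
z = lambda s: ((s >= 0) & (s <= (1/nu - 1)*tau_bar)).astype(float)
tau = np.minimum(nu*t, tau_bar)
w = z(t - tau)
ratio = np.sum(w**2)*dt / (np.sum(z(t)**2)*dt)
assert abs(ratio - 1/(1 - nu)) < 1e-2   # = 2 for nu = 0.5
```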
The second result was also independently shown in [25]:

Lemma 6.9. If \(\tau(t) \in [0,\bar\tau]\), then the system \(\Delta_{\bar\tau} : z \mapsto w\) such that
\[
w(t) = \int_{t-\tau(t)}^{t} z(s)\,ds = \int_{-\tau(t)}^{0} z(t+s)\,ds
\]
is a bounded operator on \(L_2(\mathbb{R}_+)\) with \(\|\Delta_{\bar\tau}\|_{L_2(\mathbb{R}_+)\to L_2(\mathbb{R}_+)} = \bar\tau\).
Proof. By the Cauchy–Schwarz inequality,
\[
w^2(t) = \Bigl(\int_{-\tau(t)}^{0} z(t+s)\,ds\Bigr)^2
\le \Bigl(\int_{-\tau(t)}^{0} 1^2\,ds\Bigr)\Bigl(\int_{-\tau(t)}^{0} z^2(t+s)\,ds\Bigr)
= \tau(t)\int_{-\tau(t)}^{0} z^2(t+s)\,ds
\le \bar\tau\int_{-\bar\tau}^{0} z^2(t+s)\,ds
\]
for every \(t\), so that (mind that it is assumed that \(z(t) = 0\) whenever \(t < 0\))
\[
\|w\|_2^2 \le \int_0^\infty \bar\tau\int_{-\bar\tau}^{0} z^2(t+s)\,ds\,dt
= \bar\tau\int_{-\bar\tau}^{0}\int_0^\infty z^2(t+s)\,dt\,ds
= \bar\tau\int_{-\bar\tau}^{0}\|z\|_2^2\,ds.
\]
Thus, \(\|w\|_2^2 \le \bar\tau^2\|z\|_2^2\). This bound is tight because in the particular case of \(\tau(t) \equiv \bar\tau\) the system \(\Delta_{\bar\tau}\) is actually the LTI system with the transfer function \((1 - e^{-\bar\tau s})/s\), whose \(H_\infty\)-norm equals \(\bar\tau\).
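For the constant delay \(\tau(t) \equiv \bar\tau\) invoked at the end of the proof, the norm can be seen on the frequency response \((1 - e^{-j\bar\tau\omega})/(j\omega)\), whose magnitude \(2|\sin(\bar\tau\omega/2)|/\omega\) is bounded by \(\bar\tau\) and approaches it as \(\omega \to 0\). A quick numerical confirmation (not part of the original notes; the value of \(\bar\tau\) and the grid are arbitrary):

```python
import numpy as np

# |(1 - e^{-j*tau_bar*w})/(j*w)| = 2|sin(tau_bar*w/2)|/w <= tau_bar,
# with the supremum attained in the limit w -> 0
tau_bar = 1.7
w = np.linspace(1e-6, 100.0, 1000001)
mag = np.abs((1 - np.exp(-1j*tau_bar*w)) / (1j*w))
assert mag.max() <= tau_bar + 1e-9
assert abs(mag[0] - tau_bar) < 1e-4
```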
Lemma 6.8 is readily applicable to the problem of finding stability conditions under uncertain time-varying delays assumed to satisfy \(|\dot\tau_\delta(t)| \le \nu\) for a given \(0 \le \nu < 1\). The procedure is then almost identical to the use of disks with centers at the origin in the LTI case, like that in Fig. 6.10(a). The only deviation is a different scaling, i.e. we now need the static \(W_\alpha = 1/\sqrt{1-\nu} \ge 1\) instead of \(W_\alpha = 1\). This leads to delay-independent results in terms of the size of \(\tau_\delta(t)\), requiring \(\tilde{G}_{zw} = \frac{1}{\sqrt{1-\nu}} K P \bar{D}_{\tau_0}\) to be stable and strictly contractive, which reads \(\|PK\|_\infty < \sqrt{1-\nu} \le 1\).
The use of Lemma 6.9 is also similar to the LTI case. To see this, observe that the operator \(\Delta_{\bar\tau}\) defined in the lemma can be viewed as \(1 - \bar{D}_{\tau(t)}\) acting on the integrated signal \(z\). In other words,
\[
\Delta_{\bar\tau} = F_l\Bigl(\begin{bmatrix} -G_{\mathrm{int}} & 1 \\ G_{\mathrm{int}} & 0 \end{bmatrix}, \bar{D}_{\tau(t)}\Bigr),
\]
where \(G_{\mathrm{int}}\) is the integrator element having the transfer function \(G_{\mathrm{int}}(s) = 1/s\). But this implies that
\[
\bar{D}_{\tau(t)} = F_u\Bigl(\begin{bmatrix} 0 & G_{\mathrm{int}}^{-1} \\ 1 & 1 \end{bmatrix}, \Delta_{\bar\tau}\Bigr)
= F_u\Bigl(\begin{bmatrix} 0 & W_\alpha \\ 1 & W_\phi \end{bmatrix}, \Delta_{\bar\tau}/\bar\tau\Bigr),
\]
where \(W_\phi(s) = 1\) and \(W_\alpha(s) = \bar\tau s\), exactly as in the approximation of \(\alpha_1\) for the very same \(W_\phi(s) = 1\), see (6.8), by the rational bound \(W_{\alpha1,0}(s)\) from (6.9). Because \(\Delta_{\bar\tau}/\bar\tau \in \mathcal{B}\) by Lemma 6.9, we have that
\[
\bar{D}_{\tau(t)} \in F_u\Bigl(\begin{bmatrix} 0 & W_\alpha \\ 1 & W_\phi \end{bmatrix}, \mathcal{B}\Bigr), \quad \forall\tau(t) \in [0,\bar\tau],
\]
and the rest is by now standard. For example, this means that the result of Example 6.3 under this \(W_\alpha\), which says that any system whose only unstable pole is at \(s = a\) is stable for all \(\tau_\delta \le \bar\tau\) if \(\bar\tau < 1/a\), remains valid for arbitrary time-varying \(\tau_\delta(t)\). In fact, it should be less conservative in the time-varying case.
6.2.6 Beyond simple coverings
So far, only approaches leading directly to (scaled) Small Gain Theorem applications have been discussed. The advantage of this kind of result is its relative simplicity and suitability for off-the-shelf design methods, like \(H_\infty\) optimization. However, the outcome might be rather conservative. The conservatism can be expected to be reduced if more sophisticated covering options are used.

One trick toward this end is the use of multiple covering areas. To grasp the idea, let us take a look at Fig. 6.13. It illustrates covering uncertain delays by two convex regions, a disk and a half-plane. The darkest region in each of these plots is the tightest convex region in \(\mathbb{C}\) containing the uncertain constant delays, so we may expect it to be less conservative than each of the covering methods in Fig. 6.10. Both involved areas, the unit disk and a half-plane, can be generated by quadratic forms in the frequency domain and
Fig. 6.13: Covering uncertain delays by multiple areas (panels: (a) \(\bar\tau\omega = 1\); (b) \(\bar\tau\omega = 2\); (c) \(\bar\tau\omega = 4\); (d) \(\bar\tau\omega \ge 2\pi\))
6.3. ANALYSIS BASED ON LYAPUNOV–KRASOVSKII METHODS 123
thus fall into the general IQC (integral quadratic constraint) framework introduced in [36]. The use of integral quadratic constraints in the context of the robustness analysis of delay systems can be found in [26], where various bounds are derived and the conservatism is shown to be reduced. However, IQCs, and even more so those with irrational weights, are not quite easy to incorporate into design procedures. Still, some approaches for doing that are available, see [74] and the references therein.
6.3 Analysis based on Lyapunov–Krasovskii methods
Lyapunov’s direct method, studied in Section 2.2, can also be used to analyze the robustness of systems to delay uncertainty. The literature on this subject is vast, with a great variety of results and a not always clear differentiation between them. Both Lyapunov–Krasovskii and Lyapunov–Razumikhin functions are used. Below, only a flavor of the Lyapunov–Krasovskii approach is provided.
Consider the DDE
\[
\dot{x}(t) = Ax(t) + A_\tau x(t-\tau), \quad \breve{x}_0 = 0, \qquad (6.17)
\]
for \(A \in \mathbb{R}^{n\times n}\) and \(A_\tau \in \mathbb{R}^{n\times n}\). This is a version of (2.18) under zero initial conditions. Our goal is to characterize conditions under which this DDE is stable for all \(\tau \in [0,\bar\tau]\) for some \(\bar\tau \ge 0\). A typical modus operandi is to transform the system equation into a Lyapunov–Krasovskii-friendly form, then choose a structure of the Lyapunov–Krasovskii functional, and then analyze its derivative along the trajectories of the system via upper-bounding cross terms. Each of these stages has plenty of options to play with. The model transformation stage appears to be abandoned in more modern treatments, but exploring that direction goes beyond the scope of this text. The choice of an appropriate Lyapunov–Krasovskii functional is normally a tradeoff between tractability and conservatism. The general functional of the form (2.19) might be too much to digest, as it does not appear to lead to manageable robustness formulations. Bounding cross terms might not be quite intuitive, but this is where several clever and helpful tricks were introduced.
So we start with rewriting DDE (6.17) in the form
\[
\dot{x}(t) = (A + A_\tau)x(t) - A_\tau\bigl(x(t) - x(t-\tau)\bigr)
= (A + A_\tau)x(t) - A_\tau\int_{-\tau}^{0}\dot{x}(t+s)\,ds. \qquad (6.18)
\]
It turns out to be advantageous (the idea is from [15]) to rewrite this equation in the descriptor form
\[
\underbrace{\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}}_{\tilde E}
\underbrace{\begin{bmatrix} \dot{x}(t) \\ \dot{y}(t) \end{bmatrix}}_{\dot{\tilde x}(t)}
= \underbrace{\begin{bmatrix} 0 & I \\ A + A_\tau & -I \end{bmatrix}}_{\tilde A}
\underbrace{\begin{bmatrix} x(t) \\ y(t) \end{bmatrix}}_{\tilde x(t)}
- \underbrace{\begin{bmatrix} 0 \\ A_\tau \end{bmatrix}}_{\tilde B}
\int_{-\tau}^{0} y(t+s)\,ds
\]
by adding the auxiliary variable
\[
y(t) := \dot{x}(t) = \underbrace{\begin{bmatrix} 0 & I \end{bmatrix}}_{\tilde C}\tilde x(t).
\]
Choose now a Lyapunov–Krasovskii functional of the form
\[
V(t) = V_1(t) + V_2(t) := \tilde x'(t)P'\tilde E\tilde x(t)
+ \int_{-\tau}^{0}\int_{t+s}^{t} y'(r)Ry(r)\,dr\,ds
\]
for \(R > 0\) and
\[
P = \begin{bmatrix} P_1 & 0 \\ P_2 & P_3 \end{bmatrix}, \quad P_1 > 0. \qquad (6.19)
\]
Note that \(\tilde x'(t)P'\tilde E\tilde x(t) = x'(t)P_1x(t)\), so that \(V\) is indeed a positive function of the state \((x, \breve{y}_t)\). The derivatives are
\[
\dot V_1(t) = 2\tilde x'(t)P'\tilde E\dot{\tilde x}(t)
= 2\tilde x'(t)P'\Bigl(\tilde A\tilde x(t) - \tilde B\int_{-\tau}^{0} y(t+s)\,ds\Bigr)
= \tilde x'(t)\bigl(P'\tilde A + \tilde A'P\bigr)\tilde x(t)
- 2\int_{-\tau}^{0}\tilde x'(t)P'\tilde B\,y(t+s)\,ds
\]
and, using the Leibniz integral rule (2.20),
\[
\dot V_2(t) = \int_{-\tau}^{0}\bigl(y'(t)Ry(t) - y'(t+s)Ry(t+s)\bigr)ds
= \tau\,\tilde x'(t)\tilde C'R\tilde C\tilde x(t) - \int_{-\tau}^{0} y'(t+s)Ry(t+s)\,ds.
\]
Taking into account that \(\tau \le \bar\tau\), we have:
\[
\dot V(t) \le \tilde x'(t)\bigl(P'\tilde A + \tilde A'P + \bar\tau\tilde C'R\tilde C\bigr)\tilde x(t)
- \int_{-\tau}^{0} y'(t+s)Ry(t+s)\,ds
- 2\int_{-\tau}^{0}\tilde x'(t)P'\tilde B\,y(t+s)\,ds.
\]
The last term on the right-hand side above is the cross term to be bounded. To handle it, note that for all \(Q > 0\) and vectors \(v_1\) and \(v_2\) we have \(0 \le (v_1 + Q^{-1}v_2)'Q(v_1 + Q^{-1}v_2) = v_1'Qv_1 + v_2'Q^{-1}v_2 + 2v_2'v_1\) or, equivalently,
\[
-2v_2'v_1 \le v_1'Qv_1 + v_2'Q^{-1}v_2. \qquad (6.20)
\]
Thus
\[
-2\int_{-\tau}^{0}\tilde x'(t)P'\tilde B\,y(t+s)\,ds
\le \int_{-\tau}^{0}\bigl(\tilde x'(t)P'\tilde BQ^{-1}\tilde B'P\tilde x(t) + y'(t+s)Qy(t+s)\bigr)ds
\le \bar\tau\,\tilde x'(t)P'\tilde BQ^{-1}\tilde B'P\tilde x(t) + \int_{-\tau}^{0} y'(t+s)Qy(t+s)\,ds,
\]
which is true for all \(Q > 0\), in particular for \(Q = R\). Thus,
\[
\dot V(t) \le \tilde x'(t)\bigl(P'\tilde A + \tilde A'P + \bar\tau\tilde C'R\tilde C + \bar\tau P'\tilde BR^{-1}\tilde B'P\bigr)\tilde x(t)
\]
and \(\dot V(t) < 0\) for all \(\tilde x \ne 0\) if
\[
P'\tilde A + \tilde A'P + \bar\tau\tilde C'R\tilde C + \bar\tau P'\tilde BR^{-1}\tilde B'P < 0.
\]
Equivalently, using the Schur complement formula (A.6), we end up with the condition that \(\dot V(t) < 0\) if the LMI
\[
\begin{bmatrix}
P'\tilde A + \tilde A'P + \bar\tau\tilde C'R\tilde C & \bar\tau P'\tilde B \\
\bar\tau\tilde B'P & -\bar\tau R
\end{bmatrix} < 0 \qquad (6.21)
\]
is solvable in \(0 < R \in \mathbb{R}^{n\times n}\) and \(P \in \mathbb{R}^{2n\times 2n}\) as in (6.19). This condition is readily verifiable for a fixed \(\bar\tau\) and can be used as a basis for a parametric search over the scalar \(\bar\tau\) to maximize the delay margin.
Remark 6.3 (connections with embedding). Equation (6.17) can be represented as the feedback interconnection of the uncertain delay \(\bar{D}_\tau\) with a system having the transfer function \(C_z(sI-A)^{-1}B_w\), where \(C_z\) and \(B_w\) are any matrices in the rank decomposition \(A_\tau = B_wC_z\). Choosing now the covering with \(W_\phi(s) = 1\) and \(W_\alpha(s) = \bar\tau s\), the robust stability problem converts to the form presented in Fig. 6.8(b) for
\[
T_{zw}(s) = F_l\Bigl(\begin{bmatrix} 0 & \bar\tau sI \\ I & I \end{bmatrix}, C_z(sI-A)^{-1}B_w\Bigr)
= \bar\tau sC_z(sI - A - B_wC_z)^{-1}B_w
= \bar\tau C_zB_w + \bar\tau C_z(sI - A - A_\tau)^{-1}(A + A_\tau)B_w,
\]
which is a standard system. We can then apply Propositions 6.5 and 6.6 to end up with the LMI condition
\[
\begin{bmatrix}
(A+A_\tau)'X + X(A+A_\tau) + \bar\tau^2C_z'YC_z & X(A+A_\tau)B_w + \bar\tau^2C_z'YC_zB_w \\
B_w'(A+A_\tau)'X + \bar\tau^2B_w'C_z'YC_z & \bar\tau^2B_w'C_z'YC_zB_w - Y
\end{bmatrix} < 0, \qquad (6.22)
\]
which is simpler than (6.21), i.e. has fewer decision variables, with \(X \in \mathbb{R}^{n\times n}\) and \(Y \in \mathbb{R}^{\operatorname{rank}A_\tau\times\operatorname{rank}A_\tau}\). But now ignore the possibility of exploiting a potential rank deficiency of \(A_\tau\) and choose \(C_z = I\) and \(B_w = A_\tau\).
Also, consider an alternative, descriptor, representation of \(T_{zw}\) above, namely
\[
T_{zw}(s) = \bar\tau\begin{bmatrix} 0 & I \end{bmatrix}
\Bigl(s\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 & I \\ A+A_\tau & -I \end{bmatrix}\Bigr)^{-1}
\begin{bmatrix} 0 \\ A_\tau \end{bmatrix}
= \bar\tau\tilde C(s\tilde E - \tilde A)^{-1}\tilde B,
\]
which can be verified by straightforward algebra. It can then be shown, using the LMI characterization of the \(H_\infty\) norm of descriptor systems from [34], that determining whether there is \(M\) such that \(\|MT_{zw}M^{-1}\|_\infty < 1\) is equivalent to solving the very LMI (6.21). In other words, (6.22) is merely a more economical version of (6.21). It should also be emphasized that the covering disks chosen in the derivation of (6.22) are not the tightest ones. ○
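The claimed coincidence of the two representations of \(T_{zw}\) (with \(C_z = I\) and \(B_w = A_\tau\)) is straightforward to confirm by evaluating both formulas at a few complex frequencies; a NumPy sketch with randomly generated matrices (not part of the original notes):

```python
import numpy as np

# T_zw(s) = tau_bar*s*(sI - A - A_tau)^{-1}*A_tau (with Cz = I, Bw = A_tau)
# versus the descriptor form tau_bar*Ctil*(s*Etil - Atil)^{-1}*Btil
rng = np.random.default_rng(1)
n, tau_bar = 3, 0.8
A = rng.standard_normal((n, n))
A_tau = rng.standard_normal((n, n))
I, Z = np.eye(n), np.zeros((n, n))
Etil = np.block([[I, Z], [Z, Z]])
Atil = np.block([[Z, I], [A + A_tau, -I]])
Btil = np.vstack([Z, A_tau])
Ctil = np.hstack([Z, I])
for s in [0.3 + 1.0j, 2.0 - 0.5j, 1.0j]:
    T1 = tau_bar * s * np.linalg.solve(s*I - A - A_tau, A_tau)
    T2 = tau_bar * Ctil @ np.linalg.solve(s*Etil - Atil, Btil)
    assert np.allclose(T1, T2)
```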
Note that inequality (6.20) can be made less conservative, e.g. if replaced with Park’s inequality [55]
\[
-2v_2'v_1 \le
\begin{bmatrix} v_1' & v_2' \end{bmatrix}
\begin{bmatrix} Q & QS \\ S'Q & (I + S'Q)Q^{-1}(I + QS) \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix},
\]
which follows from (6.20) via the substitution \(v_1 \to v_1 + Sv_2\). The additional variable \(S\), whose choice as \(S = 0\) recovers (6.20), can be used to tighten the bound. The LMI conditions resulting from such a replacement can again be shown to be equivalent to those resulting from the embedding approach [81].
There are more directions for refining the analysis using Lyapunov–Krasovskii functionals, like the use of functionals with more design parameters, exploiting Jensen’s, Wirtinger’s, or Bessel’s inequalities, et cetera; see the surveys [64, 82] and the references therein. Still, the vast majority of these methods apply only to the analysis problem, where a controller, actually a finite-dimensional one, is given and the stability is verified. But in such situations it might be easier to use graphical tests, like the Nyquist criterion, to estimate the delay margin in a system. The real need for analytical methods is in design problems, where a controller is to be constructed to satisfy certain performance requirements, including robustness. It appears that Lyapunov-based methods are not there yet. They also appear to be less transparent in revealing sources of conservatism than methods based on embedding delay uncertainty into general-purpose robustness problems.
Chapter 7
Exploiting Delays
THE DELAY ELEMENT HAS SOME FAVORABLE PROPERTIES, which might be exploited in certain situations. A handful of these ideas are outlined in this chapter. It should be emphasized that the delay element has very rich dynamics. For that reason, possible side effects of its use, which might not be immediately visible, should be taken into account. It is therefore well advised to be extra cautious in using such methods, especially when delays are introduced into feedback loops.
7.1 Dead-beat open-loop control
Apparently the safest use of delays is in generating or processing reference signals. These operations are performed outside of feedback loops and are therefore relatively safe: in the worst case only accuracy is lost, not stability. So we start with exposing several such methods, assuming scalar inputs for simplicity.
7.1.1 Posicast control
Consider a cart with a pendulum mounted on it, which is capable of moving in one dimension, say along
the x-axis. The control input is supposed to be the position of the cart x.t/ itself. The problem is to move
the cart from its equilibrium at x D 0 to another equilibrium, say at x D xf ¤ 0, quickly, but without
exciting oscillations in the pendulum. This problem can be motivated by the task of moving a gantry crane
with a payload from a pick-up point to a drop-off point without causing oscillations of the payload.
The posicast control strategy1 for such a system, proposed by Otto J. M. Smith in [68], is illustrated
by the control sequence depicted in Fig. 7.1. It assumes that the cart movements do not apply longitudinal
forces to the pendulum end (i.e. a linearized behavior) and that there is no friction. The cart first leaps half
1It abbreviates “positive-cast”: anglers cast so that the bait drops into the water at the maximum-position, zero-velocity instant.
Fig. 7.1: Posicast control of a cart with a pendulum with the period \(T_p\) (panels: (a) \(t < 0\); (b) \(t = 0\); (c) \(0 < t < T_p/2\); (d) \(t \ge T_p/2\))
128 CHAPTER 7. EXPLOITING DELAYS
a way, to the point \(x_f/2\), as shown in Fig. 7.1(b), and then waits there for half of the pendulum period. During this wait the pendulum swings forth to its rightmost point, whose angle coincides with that at the beginning of the swing, so that its \(x\)-axis position is exactly \(x_f\) with zero velocity, see Fig. 7.1(c). At this moment the cart leaps to the destination point \(x_f\), where the pendulum already stands still, thus bringing the whole system to its required equilibrium. This sequence of actions corresponds to the control signal
\[
x(t) = \bigl(\mathbb{1}(t) + \mathbb{1}(t - T_p/2)\bigr)\frac{x_f}{2},
\]
where Tp denotes the pendulum period. In other words, the solution is the response of the open-loop
controller
\[
K_{\mathrm{pcast}}(s) = \frac{1 + e^{-T_ps/2}}{2}, \qquad (7.1)
\]
connected in series with the plant, to the scaled step \(x_f\mathbb{1}(t)\).
The intuition above can be expressed in control-theoretic terms as well. Assuming that the pendulum is
a point mass located at the distance l from its pivot point, the linearized model of the system in Fig. 7.1(a)
from the cart position \(x\) to the pendulum angle \(\theta\) is
\[
P(s) = \frac{s^2}{ls^2 + g},
\]
where \(g\) is the standard gravity. Its poles at \(s = \pm j\sqrt{g/l}\) cause undamped oscillations if excited. Hence,
we should avoid exciting them during the required maneuver. The only way to prevent such oscillations in the open-loop architecture is to cancel these poles by zeros of the controller (setting aside, for now, the internal stability problems this cancellation creates). This can be done in various ways, one of which is the posicast
controller in (7.1). Indeed, its zeros are at \(j(2i+1)2\pi/T_p\) for all \(i \in \mathbb{Z}\). The period of the pendulum is \(T_p = 2\pi\sqrt{l/g}\), so these zeros include the poles of \(P(s)\). In fact, the function \(P(s)K_{\mathrm{pcast}}(s)\) has no poles and its step response
\[
\theta(t) = \mathcal{L}^{-1}\Bigl\{\frac{s(1 + e^{-T_ps/2})}{2(ls^2 + g)}\Bigr\}
= \frac{1}{2l}\cos\Bigl(\frac{2\pi}{T_p}t\Bigr)\mathbb{1}_{[0,T_p/2]}(t)
\]
has a finite duration, a property known as being dead-beat.
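The dead-beat cancellation is transparent in the time domain: the response to the second, half-period-delayed leap is the negative of the ongoing oscillation. A small NumPy sketch (not part of the original notes; \(l\) and \(g\) are arbitrary picks) confirming that the superposition vanishes for \(t > T_p/2\):

```python
import numpy as np

# response of P to a unit step in cart position is (1/l)*cos(2*pi*t/Tp);
# the posicast input is two half-steps, the second delayed by Tp/2
l, g = 1.0, 9.81
Tp = 2*np.pi*np.sqrt(l/g)
t = np.linspace(0.0, 3*Tp, 10001)
step = lambda s: (s >= 0).astype(float)
resp = lambda s: (1/l)*np.cos(2*np.pi*s/Tp)*step(s)
phi = 0.5*resp(t) + 0.5*resp(t - Tp/2)
assert np.max(np.abs(phi[t > Tp/2 + 1e-9])) < 1e-9   # dead-beat after Tp/2
assert abs(phi[0] - 1/(2*l)) < 1e-9                  # initial value 1/(2l)
```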
The approach readily extends to damped pendulums, with transfer functions of the form
\[
P(s) = \frac{s^2}{ls^2 + 2cs + g} \qquad (7.2)
\]
for some \(c^2 < gl\) (i.e. the response is assumed to be underdamped). This transfer function has a pair of stable poles at \(s = -\sigma \pm j\omega\), where
\[
\sigma := \frac{c}{l} \quad\text{and}\quad \omega := \sqrt{\frac{g}{l} - \sigma^2} = \frac{2\pi}{T_p}, \qquad (7.3)
\]
where \(T_p\) is still the period of the pendulum and \(\sigma\) can be interpreted as the decay rate of its oscillations. We consider a controller of the form
\[
K_{\mathrm{pcast}}(s) = \eta_0 + \eta_1 e^{-\theta s}, \qquad (7.4)
\]
with the goal of designing its three real parameters \(\eta_0\), \(\eta_1\), and \(\theta\) so as to guarantee that \(K_{\mathrm{pcast}}(0) = 1\), to have the required steady-state \(x\), and \(K_{\mathrm{pcast}}(-\sigma \pm j\omega) = 0\), to have an FIR \(PK_{\mathrm{pcast}}\), whose oscillations vanish in finite time. The first requirement yields \(\eta_0 + \eta_1 = 1\) and the second one reads
\[
\eta_0 + \eta_1e^{\theta\sigma}e^{\mp j\theta\omega}
= \eta_0 + \eta_1e^{\theta\sigma}\cos(\theta\omega) \mp j\eta_1e^{\theta\sigma}\sin(\theta\omega) = 0.
\]
7.1. DEAD-BEAT OPEN-LOOP CONTROL 129
Obviously, \(\eta_1 \ne 0\), so the condition above is solvable in real variables iff \(\sin(\theta\omega) = 0\) or, equivalently, iff \(\theta = k\pi/\omega\) for some \(k \in \mathbb{Z}\). It is natural to choose the smallest \(\theta\) solving this equality. The choice \(\theta = 0\) yields \(\eta_0 + \eta_1 = 0\), which contradicts the requirement on the static gain above. The next smallest is
\[
\theta = \frac{\pi}{\omega} = \frac{T_p}{2},
\]
which yields the following linear relations for \(\eta_0\) and \(\eta_1\):
\[
\begin{bmatrix} 1 & 1 \\ 1 & -e^{\sigma T_p/2} \end{bmatrix}
\begin{bmatrix} \eta_0 \\ \eta_1 \end{bmatrix}
= \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\implies
\begin{bmatrix} \eta_0 \\ \eta_1 \end{bmatrix}
= \begin{bmatrix} e^{\sigma T_p/2} \\ 1 \end{bmatrix}\frac{1}{1 + e^{\sigma T_p/2}}.
\]
With these choices, the controller has the transfer function
\[
K_{\mathrm{pcast}}(s) = \frac{e^{\sigma T_p/2} + e^{-T_ps/2}}{1 + e^{\sigma T_p/2}} \qquad (7.5)
\]
and results in the following step response of \(PK_{\mathrm{pcast}}\):
\[
\theta(t) = \frac{\beta}{(1 + e^{-\sigma T_p/2})\,l}\,e^{-\sigma t}
\cos\Bigl(\frac{2\pi}{T_p}t + \arccos\frac{1}{\beta}\Bigr)\mathbb{1}_{[0,T_p/2]}(t),
\]
where \(\beta := \sqrt{1 + (\sigma T_p)^2/(2\pi)^2} = 1/\sqrt{1 - c^2/(gl)} \ge 1\). Note that \(\eta_0 > \eta_1\) for all \(\sigma > 0\), i.e. the first
leap of the cart is always larger than the second one for a damped pendulum. The overall time required
for settling at the required equilibrium also increases if the damping is nonzero. This is intuitive, taking
into account that the period of the pendulum increases with damping.
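The two design requirements behind (7.5) are easily verified numerically; a short Python sketch (not part of the original notes) with arbitrary underdamped parameters \(l\), \(c\), \(g\):

```python
import numpy as np

# damped posicast (7.5): unit static gain and zeros at the plant poles
l, c, g = 1.0, 0.3, 9.81          # c^2 < g*l, so the pendulum is underdamped
sigma = c/l
omega = np.sqrt(g/l - sigma**2)
Tp = 2*np.pi/omega
K = lambda s: (np.exp(sigma*Tp/2) + np.exp(-Tp*s/2)) / (1 + np.exp(sigma*Tp/2))
assert abs(K(0) - 1) < 1e-12              # K(0) = 1
assert abs(K(-sigma + 1j*omega)) < 1e-12  # K(-sigma + j*omega) = 0
assert abs(K(-sigma - 1j*omega)) < 1e-12  # K(-sigma - j*omega) = 0
```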
Of course, in realistic situations we should not expect the match between the zeros of the controller and the poles of the plant to be perfect. The response is then no longer FIR and there are residual oscillations of the pendulum. For example, assume that the period of the pendulum \(T_p\) is known, but the decay rate \(\sigma\) used in the controller design is different from the actual \(\tilde\sigma\), which is realistic. In this case the residual oscillations, which can be derived via the residues of the response \(P(s)K_{\mathrm{pcast}}(s)/s\) at \(s = -\tilde\sigma \pm j2\pi/T_p\), are
\[
\theta_{\mathrm{res}}(t) = \frac{1 - e^{(\tilde\sigma-\sigma)T_p/2}}{1 + e^{-\sigma T_p/2}}\,\theta_{\mathrm{step}}(t),
\]
where \(\theta_{\mathrm{step}}(t)\) is the step response of the uncontrolled \(P\). The maximum amplitude of this response decays linearly with the mismatch \(|\tilde\sigma - \sigma|\) if the latter is sufficiently small.
7.1.2 Generating continuous-time FIR responses by a chain of delays

The problem of selecting the parameters of the posicast controller in (7.4) can be seen as a particular case of the more general problem of finding a chain of \(\nu\) delay elements,
\[
K = \sum_{i=0}^{\nu}\eta_i\bar{D}_{\theta_i}, \quad 0 = \theta_0 < \theta_1 < \cdots < \theta_\nu, \qquad (7.6)
\]
for a given finite-dimensional \(P\) so that \(PK\) is an FIR system whose impulse response has support in \([0,\theta_\nu]\). The problem may be considered for a given delay sequence \(\{\theta_i\}\), when only the parameters \(\eta_i \in \mathbb{R}^{m\times q}\) are to be found, or in a completely general setting, when the delays can also be chosen freely. Sometimes, additional constraints on \(K\), e.g. on its static gain \(K(0)\), are incorporated. Such constraints normally also rule out the trivial solution \(\eta_i = 0\) for all \(i \in \{0,\dots,\nu\}\).

The following result characterizes all solutions to this problem.
Lemma 7.1. If the realization \(P(s) = D + C(sI-A)^{-1}B\) is minimal and \(K\) is as in (7.6), then \(PK\) is FIR iff
\[
\begin{bmatrix} e^{A\theta_\nu}B & \cdots & e^{A(\theta_\nu-\theta_{\nu-1})}B & B \end{bmatrix}\Phi = 0, \qquad (7.7)
\]
where \(\Phi := \begin{bmatrix} \eta_0' & \cdots & \eta_{\nu-1}' & \eta_\nu' \end{bmatrix}' \ne 0\), and then the impulse response of \(PK\) is zero for all \(t > \theta_\nu\).
Proof. Decompose
\[
P\eta_i\bar{D}_{\theta_i}
= \bigl(\pi_{\theta_\nu-\theta_i}P + \hat{P}_i\bar{D}_{\theta_\nu-\theta_i}\bigr)\eta_i\bar{D}_{\theta_i}
= \bigl(\pi_{\theta_\nu-\theta_i}P\bigr)\eta_i\bar{D}_{\theta_i} + \hat{P}_i\eta_i\bar{D}_{\theta_\nu},
\]
where the truncation operator \(\pi\) is defined in §3.4.2 on p. 60 and \(\hat{P}_i(s) = C(sI-A)^{-1}e^{A(\theta_\nu-\theta_i)}B\), cf. (3.39). Hence,
\[
PK = \sum_{i=0}^{\nu}\bigl(\pi_{\theta_\nu-\theta_i}P\bigr)\eta_i\bar{D}_{\theta_i}
+ \sum_{i=0}^{\nu}\hat{P}_i\eta_i\bar{D}_{\theta_\nu}.
\]
The first term on the right-hand side above is an FIR system whose impulse response has support in \([0,\theta_\nu]\). The second term is a \(\theta_\nu\)-delayed system. If its finite-dimensional part \(\sum_i\hat{P}_i\eta_i \ne 0\), the impulse response of this system has support in \((\theta_\nu,\infty)\) and \(PK\) cannot be FIR. Thus, the FIR property holds iff
\[
\sum_{i=0}^{\nu}\hat{P}_i(s)\eta_i
= C(sI-A)^{-1}\sum_{i=0}^{\nu}e^{A(\theta_\nu-\theta_i)}B\eta_i = 0.
\]
Condition (7.7), which reads \(\sum_{i=0}^{\nu}e^{A(\theta_\nu-\theta_i)}B\eta_i = 0\), is then clearly sufficient and its necessity follows from the observability of \((C,A)\).
If \(P\) is \(n\)-dimensional, then there are nontrivial solutions to (7.7) iff the rank of
\[
M_\nu := \begin{bmatrix} e^{A\theta_\nu}B & \cdots & e^{A(\theta_\nu-\theta_{\nu-1})}B & B \end{bmatrix}
\in \mathbb{R}^{n\times(\nu+1)}
\]
is smaller than \(\nu+1\). In other words, there must exist a nontrivial kernel of \(M_\nu\), and then every \(\Phi \in \ker M_\nu\) is admissible. This condition holds true whenever \(\nu \ge n\), which is thus a sufficient condition for the existence of a required \(K\). Nevertheless, there might also be situations when (7.7) is solvable by \(\Phi \ne 0\) even if \(\nu < n\).
To see that, return to the plant with the transfer function (7.2). Its minimal realization is
\[
P(s) = \frac{1}{l}
\left[\begin{array}{cc|c}
-\sigma & \omega & 0 \\
-\omega & -\sigma & 1 \\ \hline
\sigma^2/\omega - \omega & -2\sigma & 1
\end{array}\right].
\]
Choosing \(\nu = n = 2\) we obtain
\[
M_\nu = \begin{bmatrix}
e^{-\theta_2\sigma}\sin(\theta_2\omega) & e^{-(\theta_2-\theta_1)\sigma}\sin((\theta_2-\theta_1)\omega) & 0 \\
e^{-\theta_2\sigma}\cos(\theta_2\omega) & e^{-(\theta_2-\theta_1)\sigma}\cos((\theta_2-\theta_1)\omega) & 1
\end{bmatrix}.
\]
Thus, unless \(\sin(\theta_2\omega) = \sin((\theta_2-\theta_1)\omega) = 0\), this is a rank-two matrix with
\[
\ker M_\nu = \operatorname{Im}
\begin{bmatrix}
\sin((\theta_2-\theta_1)\omega) \\ -e^{-\theta_1\sigma}\sin(\theta_2\omega) \\ e^{-\theta_2\sigma}\sin(\theta_1\omega)
\end{bmatrix}
\implies
\begin{bmatrix} \eta_0 \\ \eta_1 \\ \eta_2 \end{bmatrix}
=
\begin{bmatrix}
\sin((\theta_2-\theta_1)\omega) \\ -e^{-\theta_1\sigma}\sin(\theta_2\omega) \\ e^{-\theta_2\sigma}\sin(\theta_1\omega)
\end{bmatrix}\alpha
\]
for an arbitrary \(\alpha \ne 0\). If we pick \(\theta_1 = \pi/\omega\) now, then \(\sin(\theta_1\omega) = 0\) and we have that \(\eta_2 = 0\), i.e. effectively \(\nu = 1\). Because with this choice \(\sin((\theta_2-\theta_1)\omega) = -\sin(\theta_2\omega)\), we then recover the posicast controller (7.5) under \(\alpha = -\csc(\theta_2\omega)/(1 + e^{-\sigma\pi/\omega})\).
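The kernel computation above can be confirmed numerically, using the closed-form \(e^{At}B = e^{-\sigma t}[\sin\omega t,\ \cos\omega t]'\) for this realization; a NumPy sketch (not part of the original notes) with arbitrary \(\sigma\), \(\omega\), \(\theta_1\), \(\theta_2\):

```python
import numpy as np

# closed-form e^{At}B for A = [[-sigma, omega], [-omega, -sigma]], B = [0, 1]'
sigma, omega = 0.3, 3.0
th1, th2 = 0.4, 1.1
eAtB = lambda t: np.exp(-sigma*t)*np.array([np.sin(omega*t), np.cos(omega*t)])
M = np.column_stack([eAtB(th2), eAtB(th2 - th1), eAtB(0.0)])  # = M_nu
eta = np.array([np.sin((th2 - th1)*omega),
                -np.exp(-th1*sigma)*np.sin(th2*omega),
                np.exp(-th2*sigma)*np.sin(th1*omega)])
assert np.linalg.matrix_rank(M) == 2   # rank-two, so ker M_nu is 1-dimensional
assert np.allclose(M @ eta, 0)         # the stated vector spans the kernel
```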
If in the example above \(\theta_1\) is such that \(\sin(\theta_1\omega) \ne 0\), then two delays are used and the end point can be reached faster. For example, let \(\theta_1 = \pi/(4\omega)\) and \(\theta_2 = \pi/(2\omega)\). In this case \(\eta_0 = \alpha/\sqrt2\), \(\eta_1 = -e^{-\sigma\pi/(4\omega)}\alpha\), and \(\eta_2 = e^{-\sigma\pi/(2\omega)}\alpha/\sqrt2\). The condition \(K(0) = 1\) then reads
\[
\alpha = \frac{\sqrt2}{1 - \sqrt2\,e^{-\sigma\pi/(4\omega)} + e^{-\sigma\pi/(2\omega)}}
= \frac{\sqrt2}{(1 - e^{-\sigma\pi/(4\omega)})^2 + (2 - \sqrt2)e^{-\sigma\pi/(4\omega)}}.
\]
To simplify the expressions, assume the undamped case, i.e. that \(\sigma = 0\). The controller is then
\[
K(s) = \Bigl(1 + \frac{1}{\sqrt2}\Bigr)\bigl(1 - \sqrt2\,e^{-T_ps/8} + e^{-T_ps/4}\bigr)
\]
and its step response is \(x(t) \approx 1.707\cdot\mathbb{1}(t) - 2.414\cdot\mathbb{1}(t - T_p/8) + 1.707\cdot\mathbb{1}(t - T_p/4)\). Thus, the cart position first overshoots by about 71% at \(t = 0\), then undershoots by about 71% at \(t = T_p/8\), and finally settles at the steady-state value at \(t = T_p/4\). This produces the following response of the pendulum:
\[
\theta(t) = \frac{2 + \sqrt2}{2l}\Bigl(
\cos\Bigl(\frac{2\pi}{T_p}t\Bigr)\mathbb{1}_{[0,T_p/8]}(t)
- \sin\Bigl(\frac{2\pi}{T_p}t\Bigr)\mathbb{1}_{[T_p/8,T_p/4]}(t)\Bigr),
\]
which settles twice as fast as in the posicast controller case. The price is higher amplitudes of both the cart and the pendulum.
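A quick check of this two-delay controller (Python; \(\sigma = 0\) as in the text, \(\omega\) an arbitrary pick): its static gain is one and it cancels the undamped poles at \(\pm j\omega\):

```python
import numpy as np

# two-delay controller with theta1 = Tp/8, theta2 = Tp/4 in the undamped case
omega = 2.0
Tp = 2*np.pi/omega
K = lambda s: (1 + 1/np.sqrt(2))*(1 - np.sqrt(2)*np.exp(-Tp*s/8)
                                  + np.exp(-Tp*s/4))
assert abs(K(0) - 1) < 1e-12        # unit static gain
assert abs(K(1j*omega)) < 1e-12     # pole at +j*omega canceled
assert abs(K(-1j*omega)) < 1e-12    # pole at -j*omega canceled
```

The cancellation at \(s = j\omega\) amounts to \(1 - \sqrt2\,e^{-j\pi/4} + e^{-j\pi/2} = 0\).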
7.1.3 Input shaping
The idea behind posicast control became popular through numerous input shaping schemes, where the control input is the convolution of a reference signal, like a step, with shaping impulses; see [66] for an overview of related techniques. Those shaping impulses can be seen as generated by an array of delays, exactly as in (7.6). Thus, the posicast controller (7.5) can be seen as a special case of an input shaper.

Apart from canceling nominal lightly damped poles, input shapers are also designed to be less sensitive to the location of those poles, i.e. to be robust. There are several approaches to improve robustness, normally at the expense of a more involved controller and a longer settling to the final equilibrium. They thus typically use an increased number of delay elements in the controller, like (7.6) with \(\nu = 2\) for plant (7.2) instead of \(\nu = 1\) as in (7.4). The extra parameters are then used to bring in additional properties, reducing the sensitivity of the scheme to uncertain resonance parameters.
As an example of the robustifying ideas, consider the ZVD (zero vibration and derivative) approach. Its key idea is to require that the canceling zeros of the transfer function of the controller have a higher multiplicity than that of the canceled plant poles. To illustrate this approach, return to the damped pendulum (7.2) and consider the choice of parameters in controller (7.6) under \(\nu = 2\) for it. Additional zeros at \(s = -\sigma \pm j\omega\) imply the requirement \(K'(-\sigma \pm j\omega) = 0\), where \(K'(s) := dK(s)/ds\). This requirement can be accounted for either explicitly, via adding the constraint
\[
\begin{bmatrix} 0 & -\theta_1e^{\theta_1\sigma}e^{-j\theta_1\omega} & -\theta_2e^{\theta_2\sigma}e^{-j\theta_2\omega} \end{bmatrix}\Phi = 0
\]
to \(M_\nu\Phi = 0\), or implicitly, via solving the problem for the fourth-order \(P(s)/(ls^2 + 2cs + g)\). In either case, the problem is solvable iff \(\sin(\theta_1\omega) = \sin(\theta_2\omega) = 0\). For instance, the solution for \(\theta_1 = \pi/\omega\) and \(\theta_2 = 2\pi/\omega\) is
\[
K_{\mathrm{ZVD}}(s) = \Bigl(\frac{e^{\sigma T_p/2} + e^{-T_ps/2}}{1 + e^{\sigma T_p/2}}\Bigr)^2. \qquad (7.8)
\]
The step response of \(PK_{\mathrm{ZVD}}\) is then
\[
\theta(t) = \frac{\beta}{(1 + e^{-\sigma T_p/2})^2\,l}\,e^{-\sigma t}
\cos\Bigl(\frac{2\pi}{T_p}t + \arccos\frac{1}{\beta}\Bigr)
\bigl(\mathbb{1}_{[0,T_p/2]}(t) - \mathbb{1}_{[T_p/2,T_p]}(t)\bigr),
\]
where \(\beta := \sqrt{1 + (\sigma T_p)^2/(2\pi)^2}\). This algorithm takes twice the time required by the standard posicast to
move to the final point. What we gain is a lower sensitivity to modeling inaccuracies. To see that, assume again that the decay ratio is uncertain, namely, that its value $\sigma$ used to calculate (7.8) differs from its actual value $\tilde\sigma$. In this case the poles at $s = -\tilde\sigma \pm j\omega$ are not canceled and there are residual oscillations. For controller (7.8) they are
\[
\phi_{\mathrm{res}}(t) = \left(\frac{1 - e^{(\tilde\sigma - \sigma)T_p/2}}{1 + e^{-\sigma T_p/2}}\right)^{2} \phi_{\mathrm{step}}(t),
\]
where $\phi_{\mathrm{step}}(t)$ is the step response of the uncontrolled $P$. Their maximum amplitude now decays quadratically with the mismatch for small deviations, i.e. as $(\tilde\sigma - \sigma)^2$. Such a function grows more slowly than a linear function of the error and is thus less sensitive to uncertainty.
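The quadratic-versus-linear sensitivity is easy to check numerically. The sketch below (Python; the values $\sigma = 0.5$, $T_p = 1$ and the function names are illustrative) evaluates the residual factor of (7.8) together with the analogous first-order factor of the single (posicast) shaper: halving a small mismatch halves the latter but only quarters the former.

```python
import math

def res_zv(mismatch, sigma=0.5, Tp=1.0):
    # residual factor of the single (posicast) shaper: linear in the mismatch
    return abs(1 - math.exp(mismatch * Tp / 2)) / (1 + math.exp(-sigma * Tp / 2))

def res_zvd(mismatch, sigma=0.5, Tp=1.0):
    # residual factor of the ZVD shaper (7.8): the same bracket, squared
    return res_zv(mismatch, sigma, Tp) ** 2

# halving a small mismatch roughly halves the ZV residual but quarters the ZVD one
d = 1e-3
print(res_zv(d) / res_zv(d / 2), res_zvd(d) / res_zvd(d / 2))
```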
Another possible direction of improving the robustness of input shaping algorithms is to place zeros
of controller (7.6) not exactly at each lightly-damped pole of the nominal plant, but rather at two different
points nearby. For example, we may consider the problem for
\[
P(s) = \frac{s^2}{(ls^2 + 2(c + c_\delta)s + g)(ls^2 + 2(c - c_\delta)s + g)}
\]
for a small $c_\delta > 0$. By doing this, we sacrifice the nominal response, which will exhibit small residual oscillations, but can guarantee a low gain of the controller in a wider neighborhood of each lightly damped pole.
Input-shaping ideas extend to systems with multiple flexible modes. Multi-mode shapers can be con-
structed either via the series interconnection of input shapers for each of those modes or by choosing
parameters of (7.6) directly. The former approach is simpler from the design point of view, as each factor
can be chosen to be of standard forms, like (7.5) or (7.8). The latter approach has a potential advantage in that it may produce more economical controllers, with faster responses. But the parameters of (7.6) are typically harder to tune to satisfy (7.7) for general systems. State-of-the-art methods appear to use various
numerical optimization schemes.
Remark 7.1 (why dead-beat). An intriguing question is why input-shaping solutions endeavor to reach a dead-beat response of the controlled signal. In principle, lightly damped modes can also be canceled by finite-dimensional controllers. For example, we can use
\[
K(s) = \frac{ls^2 + 2cs + g}{g(Ts + 1)^2}
\]
for some $T > 0$, or whatever other denominator polynomial of order at least $2$, instead of (7.5), to control a system with the transfer function (7.2). This would yield $P(s)K(s) = s^2/(g(Ts + 1)^2)$, whose
step response decays aperiodically, with a faster decay for a smaller T . Such a controller might be easier
to design, especially in the multi-mode case, and it might be less sensitive to modeling mismatches. But
a finite-dimensional K normally has a slower settling under comparable magnitudes of involved signals.
Still, it appears that a comprehensive comparison of these approaches is yet to be carried out. O
7.1.4 Time-optimal control
Another related approach to generating reference signals, similar in the structure of the resulting signals, is the use of a time-optimal strategy under constraints on the various process variables. Specifically, this is the
7.1. DEAD-BEAT OPEN-LOOP CONTROL 133
case for the problem of steering the state of the $n$-dimensional system
\[
\dot x(t) = Ax(t) + Bu(t)
\]
from a given initial condition $x(0) = x_0$ to a given final state $x(t_{\mathrm f}) = x_{\mathrm f}$ in the minimum possible time $t_{\mathrm f}$ under the constraint that $|u(t)| \le 1$ for all $t$. We assume that both $x_0$ and $x_{\mathrm f}$ are admissible equilibria of the system, i.e. are such that $Ax_0 + Bu_0 = Ax_{\mathrm f} + Bu_{\mathrm f} = 0$ for some $|u_0| \le 1$ and $|u_{\mathrm f}| \le 1$. This problem is well defined iff $(A, B)$ is controllable, and the optimal solution is of the bang-bang form, i.e. the optimal $u(t)$ takes values only from the set $\{-1, 1\}$, with a finite number of switches between $u(t) = -1$ and $u(t) = 1$, see [3, Sec. 6.5] for details. Moreover, if all eigenvalues of $A$ are real, then the number of switches is bounded above by $n - 1$.
It is readily seen that the bang-bang strategy for $t \ge 0$ can be expressed as the step response of (7.6) for $|\lambda_0| = 1$ and $\lambda_i = (-1)^i 2\lambda_0$ for all $i \in \mathbb Z_{1..\nu-1}$. We can thus still use (7.7), now for an effectively given $\Lambda$ and with the delays $\tau_i$ as the parameters to determine, together with the constraint
\[
\lim_{s \to 0}\,(sI - A)^{-1}(sx_0 + BK(s)) = x_{\mathrm f},
\]
which guarantees that $\lim_{t \to \infty} x(t) = x_{\mathrm f}$.
To illustrate this procedure, consider the time-optimal control of a DC motor connected to a rigid
mechanical load, having the moment of inertia J and the viscous friction coefficient f . Its dynamics can
be described as
\[
\begin{bmatrix} \dot\theta_{\mathrm m}(t) \\ \dot\omega_{\mathrm m}(t) \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ 0 & -a \end{bmatrix}
\begin{bmatrix} \theta_{\mathrm m}(t) \\ \omega_{\mathrm m}(t) \end{bmatrix}
+ \begin{bmatrix} 0 \\ b \end{bmatrix} u(t),
\]
where $\theta_{\mathrm m}$ is the angle of the shaft, $\omega_{\mathrm m}$ is its angular velocity, $u$ is the normalized input voltage (so that its maximal peak corresponds to $\pm 1$), $a = (Rf + K_{\mathrm m}^2)/(RJ) > 0$, and $b = K_{\mathrm m}/(RJ) > 0$, where $R$ is the armature resistance and $K_{\mathrm m}$ is the torque constant of the motor. Assume that the initial condition is $\theta_{\mathrm m}(0) = \omega_{\mathrm m}(0) = 0$ and the goal is to steer the shaft to the equilibrium $\theta_{\mathrm m}(t_{\mathrm f}) = \theta_{\mathrm f}$ and $\omega_{\mathrm m}(t_{\mathrm f}) = 0$. Both these states correspond to $u = 0$, i.e. they are admissible equilibria. As both poles of the system are real, we know that there may be at most one switch in $(0, t_{\mathrm f})$. Hence, consider (7.6) for $\nu = 2$. Because
\[
\exp\left(\begin{bmatrix} 0 & 1 \\ 0 & -a \end{bmatrix} t\right)
= \begin{bmatrix} 1 & (1 - e^{-at})/a \\ 0 & e^{-at} \end{bmatrix},
\]
condition (7.7) reads
\[
b \begin{bmatrix} (1 - e^{-at_{\mathrm f}})/a & (1 - e^{-a(t_{\mathrm f}-t_{\mathrm s})})/a & 0 \\ e^{-at_{\mathrm f}} & e^{-a(t_{\mathrm f}-t_{\mathrm s})} & 1 \end{bmatrix}
\begin{bmatrix} \lambda_0 \\ -2\lambda_0 \\ \lambda_2 \end{bmatrix}
= b \begin{bmatrix} 1/a & -e^{-at_{\mathrm f}}/a \\ 0 & e^{-at_{\mathrm f}} \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ 1 & e^{at_{\mathrm s}} & e^{at_{\mathrm f}} \end{bmatrix}
\begin{bmatrix} \lambda_0 \\ -2\lambda_0 \\ \lambda_2 \end{bmatrix} = 0
\]
for $|\lambda_0| = 1$, and the final value condition reads
\[
\lim_{s \to 0} \begin{bmatrix} s & -1 \\ 0 & s + a \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ b \end{bmatrix} K(s)
= \begin{bmatrix} \theta_{\mathrm f} \\ 0 \end{bmatrix}
\implies
\begin{cases} K'(0) = a\theta_{\mathrm f}/b \\ K(0) = 0 \end{cases}
\]
Note that the first row of this version of (7.7) is equivalent to $K(0) = 0$, so one of these conditions is redundant. Thus, we end up with the two conditions
\[
1 - 2e^{at_{\mathrm s}} + e^{at_{\mathrm f}} = 0 \quad\text{and}\quad (2t_{\mathrm s} - t_{\mathrm f})\lambda_0 = a\theta_{\mathrm f}/b. \qquad (7.9)
\]
The first of these conditions reads $e^{at_{\mathrm s}} = (e^{at_{\mathrm f}} + 1)/2$. Because the function $e^t$ is strictly convex, the latter equality implies that $2t_{\mathrm s} - t_{\mathrm f} > 0$. The second condition of (7.9) then yields that $\lambda_0 = \operatorname{sign}\theta_{\mathrm f}$ and
[Figure: panels (a)–(d) plotting $u(t)$, $\omega_{\mathrm m}(t)$, and $\theta_{\mathrm m}(t)$ against $t$, with $t_{\mathrm s}$, $t_{\mathrm f}$, and $\theta_{\mathrm f}$ marked.]
Fig. 7.2: Time-optimal trajectories for a DC motor
$2t_{\mathrm s} - t_{\mathrm f} = a|\theta_{\mathrm f}|/b$ or, equivalently, $t_{\mathrm f} = 2t_{\mathrm s} - a|\theta_{\mathrm f}|/b$. Substituting this value into the first equality of (7.9), we get the quadratic equation
\[
e^{-a^2|\theta_{\mathrm f}|/b} e^{2at_{\mathrm s}} - 2e^{at_{\mathrm s}} + 1 = 0
\implies
e^{at_{\mathrm s}} = e^{a^2|\theta_{\mathrm f}|/b}\Bigl(1 \pm \sqrt{1 - e^{-a^2|\theta_{\mathrm f}|/b}}\Bigr).
\]
We are interested in the “$+$” solution, as it is the only one for which $t_{\mathrm s} < t_{\mathrm f}$. Thus, we end up with
\[
t_{\mathrm s} = \frac{a}{b}|\theta_{\mathrm f}| + \frac{1}{a}\ln\Bigl(1 + \sqrt{1 - e^{-a^2|\theta_{\mathrm f}|/b}}\Bigr)
\quad\text{and}\quad
t_{\mathrm f} = \frac{a}{b}|\theta_{\mathrm f}| + \frac{2}{a}\ln\Bigl(1 + \sqrt{1 - e^{-a^2|\theta_{\mathrm f}|/b}}\Bigr).
\]
As expected, these are monotonically increasing functions of $|\theta_{\mathrm f}|$. The resulting trajectories of $u(t)$, $\omega_{\mathrm m}(t)$, and $\theta_{\mathrm m}(t)$ for various $\theta_{\mathrm f}$ are presented in Fig. 7.2.
Time-optimal trajectories may provide an implicit justification for the input-shaping strategy. In fact,
time-optimal trajectories can be used as possible input shapers. However, they are hard to derive in general
and are known to be quite sensitive to modeling uncertainty.
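The closed-form $t_{\mathrm s}$ and $t_{\mathrm f}$ can be cross-checked by propagating the bang-bang input through the exact solution of the motor equations. A minimal sketch, assuming the illustrative data $a = b = 1$ and $\theta_{\mathrm f} = 1$ (so $\lambda_0 = 1$):

```python
import math

a, b, theta_f = 1.0, 1.0, 1.0   # illustrative motor data, sign(theta_f) = +1

r = math.log(1 + math.sqrt(1 - math.exp(-a**2 * abs(theta_f) / b)))
ts = a * abs(theta_f) / b + r / a       # switching time
tf = a * abs(theta_f) / b + 2 * r / a   # final (minimum) time

# phase 1: u = +1 on [0, ts], starting from the origin
w_ts = b / a * (1 - math.exp(-a * ts))                 # omega_m(ts)
th_ts = b / a * (ts - (1 - math.exp(-a * ts)) / a)     # theta_m(ts)

# phase 2: u = -1 on [ts, tf]
d = tf - ts
w_tf = -b / a + (w_ts + b / a) * math.exp(-a * d)
th_tf = th_ts - b / a * d + (w_ts + b / a) * (1 - math.exp(-a * d)) / a

print(w_tf, th_tf)   # should be ~0 and ~theta_f
```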
7.1.5 Generating continuous-time FIR responses by general FIR systems
The controller architecture (7.6) is not the only possible one to produce an FIR $PK$ for a finite-dimensional $P$. This goal can be attained by various other controller structures. In this subsection we study such problems in a general setting, namely, as a general tracking setup.
A general open-loop tracking problem can be formulated as the two-block model-matching problem,
where the error system to be reduced is of the form
\[
G_e = G_w + G_uK, \qquad (7.10)
\]
where $G_w$ and $G_u$ are finite-dimensional systems, given by their joint state-space realization
\[
\begin{bmatrix} G_w(s) & G_u(s) \end{bmatrix}
= \left[\begin{array}{c|cc} A & B_w & B_u \\ \hline C & D_w & D_u \end{array}\right] \qquad (7.11)
\]
and $K$ is a controller to find. In the particular case of $G_w = I$ and $G_u = -P$, this error system generates the tracking error $e = r - y$ in the standard open-loop setting, where $K \colon r \mapsto u$ and the goal is to render the plant output $y$ close to a measured reference signal $r$. We may add a weighting function $W_e$ to emphasize important frequency ranges for the reduction of the tracking error or to account for a priori information about properties of $r$. In such a case $G_w = W_e$ and $G_u = -W_eP$. It is also possible to include the control signal $u$, maybe shaped by yet another weight $W_u$, as a part of the error signal. This yields
\[
G_w = \begin{bmatrix} W_e \\ 0 \end{bmatrix}
\quad\text{and}\quad
G_u = \begin{bmatrix} -W_eP \\ W_u \end{bmatrix}
\]
and allows the weights to trade off tracking performance and control effort. In some situations, disturbance attenuation problems under measured disturbances can also be cast in the form (7.10).
The general tracking setup is often used to pose tracking problems as optimization problems, with the requirement to minimize a norm, e.g. $H_2$ or $H_\infty$, of $G_e$. Any such problem should start with a suitable definition of the class of admissible controllers $K$. Normally, admissibility is understood as the stability of the controller itself, viz. $K \in H_\infty$, and of the error system, viz. $G_e \in H_\infty$. But in some situations we might also want to strengthen the requirements on the controller and the error system by demanding that they be FIR, with the supports of their impulse responses in a given interval $[0, \tau]$. The stability of both these systems is then guaranteed if the impulse response of $K$ is bounded, which is easy to see.
All such admissible solutions are characterized in the following result:
Lemma 7.2. If $(C, A)$ in realization (7.11) is observable, then there is an FIR $K$ with the impulse response $k(t)$ supported in $[0, \tau]$ such that $G_e$ is also FIR with the same support iff
\[
B_w + \int_0^\tau e^{-As}B_u k(s)\,\mathrm ds = 0. \qquad (7.12)
\]
Proof. If $K$ is FIR, then the impulse response of the error system is
\[
g_e(t) = D_w\delta(t) + Ce^{At}B_w + D_uk(t) + C\int_0^t e^{A(t-s)}B_uk(s)\,\mathrm ds \qquad (7.13)
\]
\[
\phantom{g_e(t)} = D_w\delta(t) + Ce^{At}B_w + D_uk(t)\mathbf 1_{[0,\tau]}(t) + C\int_0^{\min\{t,\tau\}} e^{A(t-s)}B_uk(s)\,\mathrm ds.
\]
We require $g_e(t) = 0$ for all $t > \tau$, which then reads
\[
g_e(t) = Ce^{At}\left(B_w + \int_0^\tau e^{-As}B_uk(s)\,\mathrm ds\right) = 0, \quad \forall t > \tau.
\]
Equality (7.12) is obviously sufficient for that, and its necessity follows from the observability of $(C, A)$.
Condition (7.12) can be interpreted in terms of attaining the state $x(\tau) = -e^{A\tau}B_w$ from the zero initial conditions by the system $\dot x = Ax + B_uu$, with the resulting $u$ taken as the impulse response of the controller $K$. This is essentially a controllability question. If it is solvable, which is always the case if $(A, B_u)$ is controllable, there are infinitely many solutions. This degree of freedom can be exploited in different ways. For example, if the final state is supposed to be attained in minimum time under amplitude constraints on $u$, we end up with the time-optimal solution discussed in §7.1.4. Another possibility is to endeavor to attain the final point with the minimum-energy $u$:
Proposition 7.3. If $(A, B_u)$ in (7.11) is controllable, then
\[
k(t) = k_{\mathrm{ME}}(t) := -B_u'e^{-A't}\left(\int_0^\tau e^{-As}B_uB_u'e^{-A's}\,\mathrm ds\right)^{-1} B_w\,\mathbf 1_{[0,\tau]}(t)
\]
has the minimum energy among all solutions to (7.12).
Proof. If $(A, B_u)$ is controllable, then $k_{\mathrm{ME}}(t)$ is well defined and satisfies (7.12), which can be shown by direct substitution. Assume that $k = k_{\mathrm{ME}} + k_\delta$ solves (7.12) as well. This necessarily implies that $k_\delta$ satisfies the equation
\[
\int_0^\tau e^{-As}B_uk_\delta(s)\,\mathrm ds = 0.
\]
Hence,
\[
\|k\|_2^2 = \|k_{\mathrm{ME}}\|_2^2 + \|k_\delta\|_2^2 + 2\langle k_\delta, k_{\mathrm{ME}}\rangle
= \|k_{\mathrm{ME}}\|_2^2 + \|k_\delta\|_2^2 + 2\int_0^\tau [k_{\mathrm{ME}}(t)]'k_\delta(t)\,\mathrm dt
\]
\[
\phantom{\|k\|_2^2} = \|k_{\mathrm{ME}}\|_2^2 + \|k_\delta\|_2^2 - 2B_w'\left(\int_0^\tau e^{-As}B_uB_u'e^{-A's}\,\mathrm ds\right)^{-1}\int_0^\tau e^{-At}B_uk_\delta(t)\,\mathrm dt
= \|k_{\mathrm{ME}}\|_2^2 + \|k_\delta\|_2^2.
\]
Thus, $\|k_{\mathrm{ME}}\|_2 \le \|k\|_2$ for every $k$ satisfying (7.12) and the equality holds iff $k_\delta = 0$.
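Proposition 7.3 is straightforward to try out numerically. The sketch below (a double-integrator example of my own choosing, not the pendulum of this section) builds $k_{\mathrm{ME}}$ from the Gramian formula in closed form and verifies condition (7.12) by trapezoidal quadrature.

```python
# double integrator: A = [[0,1],[0,0]], Bu = [0,1]', Bw = [1,0]', horizon tau = 1
tau = 1.0

def v(s):
    # v(s) = e^{-A s} Bu = [-s, 1]'  (since e^{-As} = [[1,-s],[0,1]])
    return (-s, 1.0)

# Gramian W = int_0^tau v(s) v(s)' ds, here in closed form
W = ((tau**3 / 3, -tau**2 / 2), (-tau**2 / 2, tau))
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
Winv = ((W[1][1] / det, -W[0][1] / det), (-W[1][0] / det, W[0][0] / det))

Bw = (1.0, 0.0)
c = (Winv[0][0] * Bw[0] + Winv[0][1] * Bw[1],
     Winv[1][0] * Bw[0] + Winv[1][1] * Bw[1])

def k_me(t):
    # k_ME(t) = -Bu' e^{-A' t} Winv Bw = -v(t)' c
    vt = v(t)
    return -(vt[0] * c[0] + vt[1] * c[1])

# check (7.12): Bw + int_0^tau e^{-As} Bu k_ME(s) ds = 0 (trapezoidal rule)
N = 4000
h = tau / N
I = [0.0, 0.0]
for i in range(N + 1):
    s = i * h
    wgt = h if 0 < i < N else h / 2
    vs, k = v(s), k_me(s)
    I[0] += wgt * vs[0] * k
    I[1] += wgt * vs[1] * k
residual = (Bw[0] + I[0], Bw[1] + I[1])
print(residual)   # ≈ (0, 0)
```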
To illustrate the minimum-energy algorithm of Proposition 7.3, return to the pendulum-on-a-cart problem presented in Fig. 7.1. To formulate it as a general tracking problem, consider a step reference signal $r$ for the cart position $x$ and choose as regulated signals the tracking error $r - x$ and the pendulum angle $\phi$. The system $w \mapsto \begin{bmatrix} r - x \\ \phi \end{bmatrix}$, where $w(t) = \dot r(t)$, has the transfer function
\[
\begin{bmatrix} G_w(s) & G_u(s) \end{bmatrix}
= \begin{bmatrix} 1/s & -1/s \\ 0 & P(s)/s \end{bmatrix}
= \left[\begin{array}{ccc|cc}
0 & 0 & 0 & 1 & -1 \\
0 & -\sigma & \omega & 0 & 0 \\
0 & -\omega & -\sigma & 0 & 1 \\ \hline
1 & 0 & 0 & 0 & 0 \\
0 & -\sigma/(l\omega) & 1/l & 0 & 0
\end{array}\right],
\]
where $P(s)$ is given by (7.2) and $\sigma$ and $\omega$ are defined by (7.3). The pair $(A, B_u)$ in the realization above is controllable, so the formulae of Lemma 7.2 apply. To compare them with the response of the posicast control, choose $\tau = \pi/\omega = T_p/2$, so that the response settles in the very same $T_p/2$. This choice yields
\[
k_{\mathrm{ME}}(t) = \gamma\left(\frac{1 - e^{\sigma T_p/2}}{\sigma T_p} + \frac{2}{\pi}\, e^{\sigma t}\sin\frac{2\pi}{T_p}t\right)\mathbf 1_{[0,T_p/2]}(t),
\]
where $\gamma := 2\sigma(\sigma^2T_p^2 + 4\pi^2)/\bigl(8\sigma T_p(1 + e^{\sigma T_p/2}) + (\sigma^2T_p^2 + 4\pi^2)(1 - e^{\sigma T_p/2})\bigr)$, which corresponds to the transfer function
\[
K_{\mathrm{ME}}(s) = \gamma\left(\frac{1 - e^{\sigma T_p/2}}{\sigma T_p}\,\frac{1 - e^{-T_ps/2}}{s} + \frac{4}{T_p}\,\frac{1 + e^{\sigma T_p/2}e^{-T_ps/2}}{(s - \sigma)^2 + 4\pi^2/T_p^2}\right)
\xrightarrow{\;\sigma\downarrow 0\;}
\frac{16\pi^2}{\pi^2 - 8}\left(\frac{1 - e^{-T_ps/2}}{8T_ps} - \frac{1 + e^{-T_ps/2}}{T_p^2s^2 + 4\pi^2}\right)
\]
rather than (7.5). To simplify the formulae, consider now the undamped pendulum, i.e. assume that $\sigma = 0$.
The cart position then evolves as
\[
x(t) = x_{\mathrm f} + \frac{4x_{\mathrm f}}{\pi^2 - 8}\left(1 - \frac{\pi^2}{4}\Bigl(1 - \frac{2}{T_p}t\Bigr) + \cos\frac{2\pi}{T_p}t\right)\mathbf 1_{[0,T_p/2]}(t),
\]
which settles at $x_{\mathrm f}$ already at $t = T_p/2$, instead of evolving in two posicast steps, and the step response of $PK_{\mathrm{ME}}$ is
�.t/ D �
.�2 � 8/l
�
1� 4
Tpt
�
sin
�2�
Tpt
�
1Œ0;Tp=2�.t / D0
Tp=2:
Both $x(t)$ and $\phi(t)$ above are smoother than the corresponding signals in the posicast (bang-bang) case. This is especially important regarding $x$, as the movements of the cart should be feasible under actuator limitations.
Yet another direction to exploit the freedom in the choice of $K$ satisfying (7.12) is to consider the $H_2$ (quadratic) optimization problem associated with the model-matching setup (7.10). The result below presents the solution to this problem.
[Figure]
Fig. 7.3: $H_2$-optimal FIR control of the pendulum-on-cart system in Fig. 7.1 under $\varrho = 1/500$: (a) cart position $x(t)$; (b) pendulum angle $\phi(t)$
Proposition 7.4. If $(A, B_u)$ in (7.11) is controllable, $D_w = 0$, and $D_u$ has full column rank, then
\[
k(t) = -(D_u'D_u)^{-1}\begin{bmatrix} D_u'C & B_u' \end{bmatrix} e^{Ht}
\left(\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ I & 0 \end{bmatrix} e^{H\tau}\right)^{-1}
\begin{bmatrix} B_w \\ 0 \end{bmatrix}\mathbf 1_{[0,\tau]}(t),
\]
where
\[
H := \begin{bmatrix} A & 0 \\ -C'C & -A' \end{bmatrix}
- \begin{bmatrix} B_u \\ -C'D_u \end{bmatrix}(D_u'D_u)^{-1}\begin{bmatrix} D_u'C & B_u' \end{bmatrix},
\]
is the impulse response of the unique FIR controller rendering $G_e$ FIR and minimizing its $H_2$ norm. The optimal cost is
\[
\|G_e\|_2^2 = \operatorname{tr}\left(B_w'\begin{bmatrix} 0 & I \end{bmatrix}
\left(\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ I & 0 \end{bmatrix} e^{H\tau}\right)^{-1}
\begin{bmatrix} I \\ 0 \end{bmatrix} B_w\right).
\]
Moreover, the solution in the case of $C = 0$ and $D_u'D_u = I$ recovers that of Proposition 7.3.
Proof. Follows by applying the Projection Theorem [32, Sec. 3.3] under a barrier associated with (7.12).
Details, which are quite technical, can be found in [48] (it corresponds to the case of � D 0 there).
To illustrate this approach, consider again the problem studied throughout this section and define the error system as the mapping
\[
w \mapsto \begin{bmatrix} \sqrt{1-\theta}\,(r - x) \\ \sqrt{\theta}\,\phi \\ \sqrt{\varrho}\,\dot x \end{bmatrix},
\]
where $w(t) = \dot r(t)$, the weight $\theta \in [0, 1]$ serves to reach a required balance between settling the cart position and oscillations of the pendulum, and the weight $\varrho > 0$ is used to regularize the problem ($\varrho = 0$ would render $D_u = 0$). An alternative to such a regularization might be the use of $\dot\phi$ instead of $\dot x$, as the transfer function of the system $w \mapsto \dot\phi$ is also bi-proper. This yields the following
\[
\begin{bmatrix} G_w(s) & G_u(s) \end{bmatrix}
= \begin{bmatrix} \sqrt{1-\theta}/s & -\sqrt{1-\theta}/s \\ 0 & \sqrt{\theta}\,P(s)/s \\ 0 & \sqrt{\varrho} \end{bmatrix}
= \left[\begin{array}{ccc|cc}
0 & 0 & 0 & 1 & -1 \\
0 & -\sigma & \omega & 0 & 0 \\
0 & -\omega & -\sigma & 0 & 1 \\ \hline
\sqrt{1-\theta} & 0 & 0 & 0 & 0 \\
0 & -\sqrt{\theta}\,\sigma/(l\omega) & \sqrt{\theta}/l & 0 & 0 \\
0 & 0 & 0 & 0 & \sqrt{\varrho}
\end{array}\right]. \qquad (7.14)
\]
The optimal solutions for several choices of $\theta$ and $\varrho = 0.002$ are presented in Fig. 7.3. The choice $\theta = 1$ penalizes only the pendulum angle. In this case the response, shown by cyan-blue lines, becomes close to that of the posicast control (dashed lines). The other extreme, $\theta = 0$, penalizes only the deviation of the cart position from its destination. It yields a smaller error energy, but larger pendulum oscillations, see the yellow lines in Fig. 7.3.
The solution of Proposition 7.4 might not be numerically stable, as the matrix $H$ typically has eigenvalues in $\mathbb C_0$. As a result, the coefficients of its matrix exponential grow rapidly as $\tau$ increases. For instance, the computations in the example above fail for $\varrho \le 10^{-3}$ because one eigenvalue of $H$ then grows above $30$, leading to the presence of terms $O(e^{30\tau})$ in $e^{H\tau}$. Nonetheless, this hurdle can be overcome by exploiting the Hamiltonian structure of the matrix $H$. Namely, bring in the algebraic Riccati equation
\[
A'X + XA + C'C - (XB_u + C'D_u)(D_u'D_u)^{-1}(B_u'X + D_u'C) = 0, \qquad (7.15)
\]
which is the same as (4.5a) on p. 69, modulo notational differences. If the realization $(A, B_u, C, D_u)$ has no pure imaginary invariant zeros, then [44, Sec. B.2] this ARE has a unique stabilizing solution $X = X' \ge 0$, i.e. one such that $A + B_uF$ is Hurwitz for $F := -(D_u'D_u)^{-1}(B_u'X + D_u'C)$. We also need the matrix Lyapunov equation
\[
(A + B_uF)W + W(A + B_uF)' + B_u(D_u'D_u)^{-1}B_u' = 0, \qquad (7.16)
\]
whose solution $W > 0$ because $(A + B_uF, B_u(D_u'D_u)^{-1/2})$ is controllable, which follows from the controllability of $(A, B_u)$.
Proposition 7.5. If $(A, B_u)$ in (7.11) is controllable, $D_w = 0$, $D_u$ has full column rank, and $(A, B_u, C, D_u)$ has no pure imaginary invariant zeros, then
\[
K(s) = \left[\begin{array}{c|c} A + B_uF & V_\tau B_w \\ \hline F & 0 \end{array}\right]
- \left[\begin{array}{c|c} -(A + B_uF)' & W^{-1}e^{(A+B_uF)\tau}V_\tau B_w \\ \hline FW + (D_u'D_u)^{-1}B_u' & 0 \end{array}\right] e^{-\tau s}, \qquad (7.17)
\]
where $V_\tau := \bigl(I - We^{(A+B_uF)'\tau}W^{-1}e^{(A+B_uF)\tau}\bigr)^{-1}$, is the unique FIR controller rendering $G_e$ FIR and minimizing its $H_2$ norm. The optimal cost is
\[
\|G_e\|_2^2 = \operatorname{tr}\bigl(B_w'(X + W^{-1}(V_\tau - I))B_w\bigr).
\]
Proof (outline). It can be verified that
\[
H = \begin{bmatrix} I & -W \\ X & I - XW \end{bmatrix}
\begin{bmatrix} A + B_uF & 0 \\ 0 & -(A + B_uF)' \end{bmatrix}
\begin{bmatrix} I - WX & W \\ -X & I \end{bmatrix},
\]
so that
\[
e^{Ht} = \begin{bmatrix} I & -W \\ X & I - XW \end{bmatrix}
\begin{bmatrix} e^{(A+B_uF)t} & 0 \\ 0 & e^{-(A+B_uF)'t} \end{bmatrix}
\begin{bmatrix} I - WX & W \\ -X & I \end{bmatrix}.
\]
The formulae of the proposition then follow from those of Proposition 7.4 by a routine use of (A.3b).
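The outer factors in this factorization are mutually inverse for any $X$ and $W$, which can be sanity-checked even with scalar stand-ins:

```python
# scalar stand-ins for X and W; the identity
#   [I, -W; X, I-XW] [I-WX, W; -X, I] = I
# holds for arbitrary X and W
x, w = 0.7, 1.3
T  = ((1.0, -w), (x, 1.0 - x * w))
Ti = ((1.0 - w * x, w), (-x, 1.0))
prod = [[sum(T[i][k] * Ti[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)   # identity matrix
```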
Proposition 7.5 offers a numerically stable alternative to the formulae of Proposition 7.4. Indeed, the impulse response of the controller in (7.17),
\[
k(t) = Fe^{(A+B_uF)t}V_\tau B_w\mathbf 1_{[0,\tau]}(t)
- \bigl(FW + (D_u'D_u)^{-1}B_u'\bigr)e^{(A+B_uF)'(\tau-t)}W^{-1}e^{(A+B_uF)\tau}V_\tau B_w\mathbf 1_{[0,\tau]}(t),
\]
contains the matrix exponentials of $A + B_uF$ and of its transpose only with nonnegative arguments. Because the matrix $A + B_uF$ is Hurwitz, these exponentials do not have growing components. Moreover, as $\tau$ grows unbounded, the second term on the right-hand side of (7.17) vanishes and we end up with the IIR controller $K(s) = F(sI - A - B_uF)^{-1}B_w$. The optimal $\|G_e\|_2^2$ is a monotonically decreasing function of $\tau$, approaching $\operatorname{tr}(B_w'XB_w)$ as $\tau \to \infty$.
7.2 Preview control
There are situations where delays are actually our allies, rather than foes. This happens, for example, in applications where previewed information about reference or disturbance signals is available. Just think of the frequently encountered situation of handling road potholes or speed bumps while driving, where the ability to see those disturbances ahead of their effect is of great help. Questions associated with exploiting previewed information arise in many applications, such as signal processing, automotive control, robotics, control of wind turbines, and so on. From the analysis viewpoint, it might be convenient to treat such problems via introducing time delays into regulated signals, which renders them a natural subject of these notes.
We again consider the general tracking setting introduced in §7.1.5, but now with the error system $G_e \colon w \mapsto e$ of the form
\[
G_e = G_w\bar D_\tau + G_uK, \qquad (7.18)
\]
where $G_w$ and $G_u$ are finite-dimensional systems given by (7.11) and $\tau > 0$ is the preview. The goal is to design a causal and stable controller $K$ minimizing a size of $G_e$. The term “preview tracking” becomes apparent in the case when $G_w = I$ and $G_u = -P$. The error system in this case, $G_e = \bar D_\tau - PK$, defines the error signal as $e(t) = w(t - \tau) - y(t)$. This implies that the plant output $y$ is expected to track a past reference, which was available at the controller side $\tau$ time units ago. In other words, the controller has a previewed version of the reference signal $w$.
A key fact in addressing energy-based preview problems is that the $L_2$-norm of the error signal generated by (7.18) coincides with the $L_2$-norm of the output of another system, having no preview, for every exogenous input $w$. To formulate this result, we need the matrix
\[
H = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}
:= \begin{bmatrix} A & 0 \\ -C'C & -A' \end{bmatrix}
- \begin{bmatrix} B_u \\ -C'D_u \end{bmatrix}(D_u'D_u)^{-1}\begin{bmatrix} D_u'C & B_u' \end{bmatrix} \qquad (7.19)
\]
(we already saw it in Proposition 7.4) and its exponential
\[
\Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}
:= \exp\left(\begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix}\tau\right),
\]
in which $\det\Sigma_{22} \ne 0$, see [47, Prop. 3.1]. We also need the matrices
\[
\tilde B_w := \Sigma_{22}'B_w + \Sigma_{12}'C'D_w
\quad\text{and}\quad
\tilde C := C + D_u(D_u'D_u)^{-1}\bigl(D_u'C(\Sigma_{22}' - I) - B_u'\Sigma_{21}'\bigr).
\]
The following lemma, whose proof can be found in [45], is a key technical result towards handling preview tracking problems:

Lemma 7.6. Given $G_w$ and $G_u$ as in (7.11) with a full column rank $D_u$, consider yet another error system $\tilde G_e \colon w \mapsto \tilde e$ such that $\tilde G_e = \tilde G_w + \tilde G_u(K - \Pi)$, where
\[
\begin{bmatrix} \tilde G_w(s) & \tilde G_u(s) \end{bmatrix}
= \left[\begin{array}{c|cc} A & \tilde B_w & B_u \\ \hline \tilde C\Sigma_{22}^{-\prime} & D_w & D_u \end{array}\right] \qquad (7.20a)
\]
and
\[
\Pi(s) = (D_u'D_u)^{-1}\left(\pi_\tau\!\left\{\left[\begin{array}{cc|c}
H_{11} & H_{12} & B_w - B_u(D_u'D_u)^{-1}D_u'D_w \\
H_{21} & H_{22} & -C'(I - D_u(D_u'D_u)^{-1}D_u')D_w \\ \hline
D_u'C & B_u' & D_u'D_w
\end{array}\right] e^{-\tau s}\right\} - D_u'D_w\right). \qquad (7.20b)
\]
If $K$ is such that $G_e \in H_\infty$, then so is $\tilde G_e$ and, moreover, $\|G_ew\|_2 = \|\tilde G_ew\|_2$ for all $w \in L_2(\mathbb R_+)$.
Lemma 7.6 fits both $H_2$ and $H_\infty$ optimization settings. Consider the $H_2$ case first. As follows from (4.1), the squared $H_2$-norm of the error system equals the sum of the squared $L_2(\mathbb R_+)$-norms of its responses to Dirac $\delta$-impulses applied in each input direction. By Lemma 7.6, $\|G_ee_i\delta\|_2 = \|\tilde G_ee_i\delta\|_2$ for every $i$ and every controller, so that $\|G_e\|_2 = \|\tilde G_e\|_2$ for all $K$. Hence, a controller minimizing $\|G_e\|_2$ does that for $\|\tilde G_e\|_2$ as well. In the $H_\infty$ case, the cost equals the $L_2(\mathbb R_+)$-norm of the output of the error system under a worst-case unit-energy exogenous signal $w$. Because the $L_2(\mathbb R_+)$-norms of $G_ew$ and $\tilde G_ew$ coincide for every $w$ and every $K$, they also coincide for a worst-case $w$. This implies that $\|G_e\|_\infty = \|\tilde G_e\|_\infty$ for every $K$, so that the $H_\infty$ problems for $G_e$ and $\tilde G_e$ are equivalent.
One day it should be written . . .
7.3 Stabilizing delays
We saw, see e.g. the discussions in Section 2.1, that loop delays normally have a destabilizing effect on feedback loops. Nevertheless, there are a number of examples in the literature demonstrating that added delays may have a stabilizing effect on control loops. We consider one of them below.
Example 7.1. This example apparently originates in an SNL report of 1991, which was later published as [62]. Consider an undamped mass–spring system $u \mapsto y$ having the transfer function
\[
P(s) = \frac{1}{ms^2 + k}
\]
for some known mass $m > 0$ and spring coefficient $k > 0$. Obviously, this system cannot be stabilized by any proportional controller. But it can be stabilized by a delayed feedback of the form $u(t) = -k_py(t - \tau)$ for $k_p \ne 0$. Indeed, in this case the characteristic quasi-polynomial is
\[
\chi_\tau(s) = ms^2 + k + k_pe^{-\tau s},
\]
which is of the form (2.7). Let us use the delay sweeping method studied in §2.1.5 in its analysis. Assumptions A1–4 on p. 28 hold iff $k_p \ne -k$, and the delay-free system is always unstable. The magnitude relation (2.8a) in this case has the form $|m\omega^2 - k| = |k_p|$, which is solved by
\[
\omega^2 = \frac{k \pm |k_p|}{m}.
\]
If $|k_p| \ge k$, then there is one nonzero crossing frequency and it is always a switch. Hence, such a $k_p$ is stabilizing for no $\tau$. If $0 < |k_p| < k$, then there are two positive crossing frequencies,
\[
\omega_1 = \sqrt{\frac{k + |k_p|}{m}}
\quad\text{and}\quad
\omega_2 = \sqrt{\frac{k - |k_p|}{m}},
\]
so that $m\omega_1^2 > k$ and $m\omega_2^2 < k$ for all such $k_p$. The corresponding phase relation (2.8b) reads
\[
\tau\omega = \arg k_p - \arg(k - m\omega^2) + (2i - 1)\pi, \quad i \in \mathbb Z.
\]
This then leads to the crossing delays
\[
\tau_{1i} = \frac{2\pi}{\omega_1}i + \frac{1}{\omega_1}\begin{cases} \pi & \text{if } k_p < 0 \\ 0 & \text{if } k_p > 0 \end{cases}
\qquad\text{and}\qquad
\tau_{2i} = \frac{2\pi}{\omega_2}i + \frac{1}{\omega_2}\begin{cases} 0 & \text{if } k_p < 0 \\ \pi & \text{if } k_p > 0 \end{cases}
\]
for all $i \in \mathbb Z_+$, where the $\tau_{1i}$ correspond to switches and the $\tau_{2i}$ correspond to reversals. Because the zero-delay system is unstable and the step between consecutive switches is always smaller than that between reversals, there are stabilizing $k_p$'s only if $\tau_{20} < \tau_{10}$. This is always the case if $k_p < 0$ and never the case if $k_p > 0$. Thus, we must have $k_p \in (-k, 0)$, which implies positive feedback. And then we have the following chain of crossing delays
\[
\tau_j = \left(0,\; \frac{\pi\sqrt m}{\sqrt{k - k_p}},\; \frac{2\pi\sqrt m}{\sqrt{k + k_p}},\; \frac{3\pi\sqrt m}{\sqrt{k - k_p}},\; \frac{4\pi\sqrt m}{\sqrt{k + k_p}},\; \frac{5\pi\sqrt m}{\sqrt{k - k_p}},\; \ldots\right)
\]
(the reversals here are $0$ and the even multiples $2i\pi\sqrt m/\sqrt{k + k_p}$). As long as there is one reversal between two subsequent switches, the system becomes stable at each reversal. This alternation of stabilizing and destabilizing delay intervals ends at the point where $\tau_{2i} \ge \tau_{1i}$ or, equivalently, where
\[
\frac{2i\pi\sqrt m}{\sqrt{k + k_p}} \ge \frac{(2i + 1)\pi\sqrt m}{\sqrt{k - k_p}}
\iff
i \ge \left\lceil \frac{\sqrt{1 + k_p/k}}{2\bigl(\sqrt{1 - k_p/k} - \sqrt{1 + k_p/k}\bigr)} \right\rceil.
\]
Thus, the system can always be stabilized by a (positive) delayed feedback. O
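The crossing frequencies and delays derived in this example can be verified directly: the quasi-polynomial $\chi_\tau$ must vanish at $s = j\omega_1$ for every switch delay and at $s = j\omega_2$ for every reversal delay. A small check with illustrative data $m = k = 1$, $k_p = -0.4$:

```python
import cmath, math

m, k, kp = 1.0, 1.0, -0.4           # kp in (-k, 0): positive delayed feedback
w1 = math.sqrt((k + abs(kp)) / m)    # switch crossing frequency
w2 = math.sqrt((k - abs(kp)) / m)    # reversal crossing frequency

def chi(s, tau):
    # characteristic quasi-polynomial m s^2 + k + kp e^{-tau s}
    return m * s**2 + k + kp * cmath.exp(-tau * s)

# for kp < 0: switches at (2i+1)*pi/w1, reversals at 2i*pi/w2
switches  = [(2 * i + 1) * math.pi / w1 for i in range(4)]
reversals = [2 * i * math.pi / w2 for i in range(4)]
errs = [abs(chi(1j * w1, t)) for t in switches] + \
       [abs(chi(1j * w2, t)) for t in reversals]
print(max(errs))   # ≈ 0
```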
There are other examples in the literature where delays are intentionally introduced into feedback loops controlling finite-dimensional systems. It appears that there are essentially two classes of problems for which such approaches are considered.
1. One use of delayed feedback is in problems of actively damping lightly damped systems, like that considered in Example 7.1 above. It is known, at least since [78, 62], that it might be advantageous to add a phase lag into the feedback loop in such problems, cf. the loops in Fig. 2.3 on p. 26. A skillful use of this approach can yield superior closed-loop performance, as demonstrated by the solutions to the ECC'95 benchmark problem [50, 28], where finite-dimensional lag elements are used. Delays can also provide such lags. An example of that is the concept of delayed resonators, introduced in [52]. Still, it is not quite clear to me what advantage such delays can have over more orthodox rational lag elements. The analysis of delayed systems is more complex, and the phase lag introduced by the delay element grows unbounded, which might boomerang at high frequencies. Finite-dimensional lags are less prone to such problems, as they can be analyzed by conventional tools and their lags are bounded.
2. Another direction may be associated with the use of delays in approximating derivative terms, essentially as
\[
s \approx \phi_\tau(s) := \frac{1 - e^{-\tau s}}{\tau} \qquad (7.21)
\]
for a sufficiently small $\tau > 0$. This is done either explicitly, like in the problem of stabilizing a chain of integrators in [49], or implicitly, e.g. in the proportional-minus-delay controller, proposed in [69] as an alternative to PD. But it is not clear to me, again, what the advantages of this approximation are over more conventional solutions, like a rational approximation of the form
\[
s \approx \phi_{\mathrm r,T}(s) := \frac{s}{Ts + 1} \qquad (7.22)
\]
for a sufficiently small $T > 0$. For example, let us compare the accuracy of these approximations over a given low-frequency range, say $[0, \tilde\omega]$. To render such a comparison fair, assume that both approximations have equal high-frequency gains, i.e. that $\limsup_{\omega\to\infty}|\phi_\tau(j\omega)| = \lim_{\omega\to\infty}|\phi_{\mathrm r,T}(j\omega)|$. This requirement yields the relation $2/\tau = 1/T$ or, equivalently, $T = \tau/2$. As the measure of the approximation accuracy, consider then the quantities
\[
\Delta_\tau(\tilde\omega) := \max_{\omega\in[0,\tilde\omega]}\left|1 - \frac{\phi_\tau(j\omega)}{j\omega}\right|
\quad\text{and}\quad
\Delta_{\mathrm r,\tau/2}(\tilde\omega) := \max_{\omega\in[0,\tilde\omega]}\left|1 - \frac{\phi_{\mathrm r,\tau/2}(j\omega)}{j\omega}\right|.
\]
The plots of these functions of $\tilde\omega$ are presented in Fig. 7.4. It is clear that from this point of view the rational approximation (7.22) is a better choice than (7.21) for all frequency ranges and all $\tau$. And, on top of this, adding delays into feedback loops renders their analysis more involved.
[Figure: plots of $\Delta_\tau$ and $\Delta_{\mathrm r,\tau/2}$ against $\tau\tilde\omega$.]
Fig. 7.4: Errors of approximating the derivative by (7.21) and (7.22)
The situation might be different for infinite-dimensional plants. We already saw that DTC elements, which contain delays, play an important role in the control of dead-time systems. As another example, consider the LTI system $G_x$ defined via its transfer function in (1.21) on p. 9, which describes the acoustic pressure in a duct of length $L$, measured at the distance $x$ from the actuation point. For $x = L$ and the static reflectance $R(s) = 1$ it becomes
\[
G_L(s) = \frac{2Z_{\mathrm m}}{1 - e^{-2\tau s}}\,e^{-\tau s},
\]
where $\tau = L/c$. It might not be easy to design a finite-dimensional stabilizing controller for it. However, this system is clearly stabilized by the delayed control law $u(t) = -p_L(t - \tau)/(2Z_{\mathrm m})$, which actually renders the closed-loop system FIR. Still, this is more of a toy example, as the delayed control law above has zero tolerance to uncertainty in $\tau$.
7.4 Delays in the regulator problem: repetitive control
The term “regulator problem” is referred to the problem of rejecting the effect of non-decaying distur-
bances, generated as the response of known autonomous models to nonzero initial conditions, on con-
trolled variables. Common models for such disturbances are the integrator 1=s, which generates constant
signals, the double integrator 1=s2, which generates ramp signals, and the harmonic oscillator 1=.s2 C!20/,
which generates sine waves with the frequency !0. The internal model principle [14], by which models
of exogenous signals should be incorporated into the controller, is one of fundamental principles of the
control theory. For example, the need in the use of intergal actions to counteract the effect of constant
or slowly varying unmeasured disturbances has been known for aeons. Likewise, harmonic disturbances
of a known frequency can be asymptotically canceled if the feedback loop includes a harmonic oscil-
lator. Incorporating models of disturbances into feedback loops reduces the redulator problem to mere
stabilization problems.
The use of delays can generalize this idea to a substantially wider class of exogenous signals, viz. all periodic signals of a known period. Indeed, consider the system described by the autonomous difference equation
\[
w(t) = w(t - \tau), \qquad \bar w_0 = \psi, \qquad (7.23)
\]
for some function $\psi(t)$ defined on $[-\tau, 0]$, where $\bar w_t$ is defined according to (1.6) on p. 2. It is readily seen that this equation generates the $\tau$-periodic function $w(t) = \psi(t - \tau\lceil t/\tau\rceil)$. Thus, any $\tau$-periodic signal can be generated by (7.23) by an appropriate choice of the initial condition function $\psi$.
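The generator (7.23) is easy to mimic directly from the formula $w(t) = \psi(t - \tau\lceil t/\tau\rceil)$; the sketch below builds $w$ from an arbitrary initial function $\psi$ and checks $\tau$-periodicity (all values illustrative).

```python
import math

tau = 1.5

def psi(t):
    # arbitrary initial-condition function on [-tau, 0)
    return math.sin(3 * t) + 0.5 * t

def w(t):
    # w(t) = psi(t - tau*ceil(t/tau)) repeats psi with period tau
    return psi(t - tau * math.ceil(t / tau))

# periodicity check at a few non-multiple-of-tau instants
checks = [abs(w(t) - w(t + tau)) for t in (0.1, 0.7, 2.3, 5.05)]
print(max(checks))   # ≈ 0
```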
The application of the internal model principle to (7.23) is conceptually straightforward. Namely, this system has the transfer function $1/(1 - e^{-\tau s})$, so it should be incorporated into the controller. The application of this logic in the unity-feedback setting is shown in Fig. 7.5(a), where $P$ is a SISO plant and $K$ is a free part of the controller to be designed. This is a paraphrase of the architecture proposed in [22], dubbed repetitive control. As with any other controller constructed according to the internal model principle, it guarantees asymptotic rejection of all $\tau$-periodic disturbances $d$ and asymptotically perfect
[Figure: unity-feedback loops with signals $r$, $e$, $u$, $d$, $y$, comprising the plant $P(s)$, the controller part $K(s)$, and positive feedback through (a) $e^{-\tau s}$ (idealized setup) or (b) $F(s)e^{-\tau s}$ (more practical setup).]
Fig. 7.5: Unity-feedback repetitive controller
tracking of all $\tau$-periodic reference signals $r$, provided the closed-loop system is stable and $P(s)K(s)$ has no zeros at the poles of the repetitive element, i.e. at $s = j2k\pi/\tau$, $k \in \mathbb Z$. A way to see that is via the closed-loop disturbance sensitivity transfer function
\[
T_d(s) = \frac{P(s)(1 - e^{-\tau s})}{1 - e^{-\tau s} + P(s)K(s)},
\]
which then has zeros at $s = j2k\pi/\tau$ for all $k \in \mathbb Z$. Every piecewise smooth periodic disturbance can be represented via its Fourier series as
\[
d(t) = \sum_{k\in\mathbb Z} d_ke^{j(2k\pi/\tau)t}
\]
for some Fourier coefficients $d_k \in \mathbb C$. By the frequency response theorem, if the closed-loop system is stable, then its steady-state response to this disturbance is
\[
y_{\mathrm{ss}}(t) = \sum_{k\in\mathbb Z} T_d(j2k\pi/\tau)\,d_ke^{j(2k\pi/\tau)t} = 0. \qquad (7.24)
\]
The tracking error can be analyzed in a similar manner via the sensitivity transfer function, which also has all pure imaginary poles of the controller as its zeros.
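The harmonic zeros of $T_d$ can be confirmed numerically for a concrete (purely illustrative) choice $P(s) = 1/s$, $K(s) = 1$; closed-loop stability is a separate matter and is not checked here.

```python
import cmath, math

tau = 2.0

def P(s): return 1 / s
def K(s): return 1.0

def Td(s):
    # disturbance sensitivity of the idealized repetitive loop
    rep = 1 - cmath.exp(-tau * s)
    return P(s) * rep / (rep + P(s) * K(s))

# T_d vanishes at every nonzero harmonic s = j*2*k*pi/tau
vals = [abs(Td(1j * 2 * k * math.pi / tau)) for k in (1, 2, 3, -1)]
print(vals)   # all ≈ 0
```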
However, the stability of the repetitive control system happens to be a challenge. To see why, assume that $P$ and $K$ are finite dimensional and rearrange the setup in Fig. 7.5(a) into the form presented in Fig. 2.1 on p. 19. Taking $\begin{bmatrix} r \\ d \end{bmatrix}$ as the “$u$” input there and $y$ as its namesake, this corresponds to
\[
\begin{bmatrix} G_{zw} & G_{zu} \\ G_{yw} & G_{yu} \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix}
+ \begin{bmatrix} 1 \\ -1 \end{bmatrix}\frac{1}{1 + PK}\begin{bmatrix} 1 & 1 & -P \end{bmatrix}.
\]
If $P(s)K(s)$ is strictly proper, which is a reasonable assumption to make, then $G_{zw}(\infty) = 1$ and we have the case of $\rho(D_{zw}) = 1$ in terms of the discussion in §2.1.3. Hence, no matter what $K$ is selected, the closed-loop system is practically unstable.
The situation is not hopeless though. There are workarounds, endeavoring to regain stability at the expense of relaxing the perfect regulation requirements. For example, one can use the architecture presented in Fig. 7.5(b) for a stable filter $F$ such that $|F(\infty)| < 1$. In this case the equivalent
\[
\begin{bmatrix} G_{zw} & G_{zu} \\ G_{yw} & G_{yu} \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix}
+ \begin{bmatrix} F \\ -1 \end{bmatrix}\frac{1}{1 + PK}\begin{bmatrix} 1 & 1 & -P \end{bmatrix},
\]
so that $G_{zw}(\infty) = F(\infty)$ satisfies $\rho(D_{zw}) < 1$ and the system can be stabilized. For example, we may consider designing $K$ so that $\|F/(1 + PK)\|_\infty < 1$. In this case the closed-loop system in Fig. 7.5(b) is stable for all constant delays $\tau$, by the arguments of §6.2.3 corresponding to $\kappa = 0$. Other, more refined, approaches are also possible, see Example 7.2 below.
The price we pay is in the ability to reject all periodic disturbances. Indeed, the disturbance sensitivity now reads
\[
  T_d(s) = \frac{P(s)\,(1 - F(s)e^{-\tau s})}{1 - F(s)e^{-\tau s} + P(s)K(s)}.
\]
144 CHAPTER 7. EXPLOITING DELAYS
It has a zero at s = j2kπ/τ only if F(j2kπ/τ) = 1, which can happen only at a finite number of k's. Still, we can choose F so that |1 − F(j2kπ/τ)| ≪ 1 at all k ∈ Z of interest. A well-justified requirement is to maintain this condition for a number of small indices k. This could effectively eliminate low-frequency Fourier terms in (7.24), while high-frequency components are normally filtered out by the plant itself. If F(0) = 1, then the controller still has integral action. Thus, it might make sense to choose a low-pass F with unit static gain. The bandwidth of F can then be a tuning parameter, with which the range of rejected signals can be selected.
Example 7.2. Consider the problem of rejecting a τ-periodic load disturbance for the plain integrator plant with the transfer function P(s) = 1/s. Employing the repetitive controller in the form shown in Fig. 7.5(b) for some low-pass F ∈ H_∞, we are left with one problem to solve: how to design a stabilizing K for the infinite-dimensional system
\[
  P_{\mathrm{rep}} = \frac{P}{1 - F \bar D_\tau}.
\]
The trick is to rewrite its transfer function in the form
\[
  P_{\mathrm{rep}}(s) = \frac{P(s)}{1 - P(s)\, sF(s)e^{-\tau s}},
\]
which is the feedback interconnection of P and Π_rep ∈ H_∞, where Π_rep(s) = sF(s)e^{−τs}. This suggests that we may use the loop-shifting extensions discussed in §3.4.4. Specifically, loop-shifting arguments yield that K stabilizes P_rep iff K̃ = K − Π_rep stabilizes P. The latter problem is simple, e.g. it can be solved with any proportional controller K̃(s) = k_p. Thus, we end up with
\[
  K(s) = k_p + sF(s)e^{-\tau s},
\]
which is easy to implement. The resulting disturbance sensitivity transfer function
\[
  T_d(s) = \frac{P(s)}{1 + P(s)\tilde K(s)}\,\bigl(1 - F(s)e^{-\tau s}\bigr)
         = \frac{1}{s + k_p}\,\bigl(1 - F(s)e^{-\tau s}\bigr)
\]
is reminiscent of the Smith controller in that the response is that of the system without the repetitive block, multiplied by the repetitive action.

Consider now the choice
\[
  F(s) = \frac{2\pi\alpha}{\tau s + 2\pi\alpha},
\]
which is a low-pass filter with bandwidth 2πα/τ, i.e. α times the fundamental frequency of d. Here α is a tuning parameter, affecting the behavior of F(jω) at the frequencies ω = 2kπ/τ for k ∈ Z. It is readily seen that
\[
  |1 - F(j2k\pi/\tau)| = \frac{1}{\sqrt{1 + \alpha^2/k^2}},
\]
which is a strictly decreasing function of α and a strictly increasing function of k, approaching 1 as k grows.
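This attenuation formula is easy to verify numerically. A minimal sketch (assuming `numpy` is available; the function name is ours, not from the notes):

```python
import numpy as np

def attenuation(alpha, k, tau=1.0):
    """|1 - F(j 2 k pi / tau)| for F(s) = 2 pi alpha / (tau s + 2 pi alpha)."""
    w = 2 * np.pi * k / tau
    F = 2 * np.pi * alpha / (1j * w * tau + 2 * np.pi * alpha)
    return abs(1 - F)

for alpha in (10.0, 100.0):
    for k in (1, 5, 50):
        # closed-form value derived in the text
        assert abs(attenuation(alpha, k) - 1 / np.sqrt(1 + alpha**2 / k**2)) < 1e-12
        # tau cancels at the harmonics, so the attenuation is independent of it
        assert abs(attenuation(alpha, k, tau=3.7) - attenuation(alpha, k)) < 1e-12
```

Note that τ drops out at the harmonic frequencies, so only the ratio α/k matters, exactly as the closed-form expression says.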
Simulations of the closed-loop disturbance response for this filter under k_p = 1 and τ = π are presented in Fig. 7.6. The first row in Fig. 7.6 shows responses to a square-wave disturbance and the second row to a sawtooth d, both of unit amplitude. The simulations agree with the analysis above: a higher-bandwidth F results in better attenuation of periodic load disturbances. Note that the disturbance response in the first interval t ∈ [0, τ] does not depend on α. This is because during this interval the initial conditions of the controller are zero and the repetitive part is not yet effective. The controller then "learns" the actual waveform of the disturbance and compensates for it.
Fig. 7.6: Disturbance responses under k_p = 1: (a)–(c) square wave with α = 10, 100, 1000; (d)–(f) sawtooth wave with α = 10, 100, 1000.
Appendix A
Background on Linear Algebra
THIS APPENDIX collects some advanced material on matrices. These results are required throughout these notes, yet are not always a part of basic Linear Algebra courses.
A.1 Schur complement
Let M be a square matrix partitioned as
\[
  M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix},
  \tag{A.1}
\]
with square M11 and M22. If M11 is nonsingular, then a direct substitution yields the decomposition
\[
  \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}
  = \begin{bmatrix} I & 0 \\ M_{21}M_{11}^{-1} & I \end{bmatrix}
    \begin{bmatrix} M_{11} & 0 \\ 0 & M_{22} - M_{21}M_{11}^{-1}M_{12} \end{bmatrix}
    \begin{bmatrix} I & M_{11}^{-1}M_{12} \\ 0 & I \end{bmatrix}.
  \tag{A.2a}
\]
It follows from this equality that M is nonsingular iff so is the matrix
\[
  \Delta_{11} := M_{22} - M_{21}M_{11}^{-1}M_{12},
\]
which is called the Schur complement of M11 in M. Similarly, if M22 is nonsingular, then
\[
  \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}
  = \begin{bmatrix} I & M_{12}M_{22}^{-1} \\ 0 & I \end{bmatrix}
    \begin{bmatrix} M_{11} - M_{12}M_{22}^{-1}M_{21} & 0 \\ 0 & M_{22} \end{bmatrix}
    \begin{bmatrix} I & 0 \\ M_{22}^{-1}M_{21} & I \end{bmatrix}
  \tag{A.2b}
\]
and the Schur complement of M22 is defined as
\[
  \Delta_{22} := M_{11} - M_{12}M_{22}^{-1}M_{21}.
\]
Using equations (A.2), the following formulae for the inversion of block 2 × 2 matrices can be derived:
\[
  \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}^{-1}
  = \begin{bmatrix} I & -M_{11}^{-1}M_{12} \\ 0 & I \end{bmatrix}
    \begin{bmatrix} M_{11}^{-1} & 0 \\ 0 & \Delta_{11}^{-1} \end{bmatrix}
    \begin{bmatrix} I & 0 \\ -M_{21}M_{11}^{-1} & I \end{bmatrix}
  \qquad (\text{if } \det M_{11} \neq 0)
\]
\[
  = \begin{bmatrix}
      M_{11}^{-1} + M_{11}^{-1}M_{12}\Delta_{11}^{-1}M_{21}M_{11}^{-1} & -M_{11}^{-1}M_{12}\Delta_{11}^{-1} \\
      -\Delta_{11}^{-1}M_{21}M_{11}^{-1} & \Delta_{11}^{-1}
    \end{bmatrix}
  \tag{A.3a}
\]
\[
  = \begin{bmatrix} I & 0 \\ -M_{22}^{-1}M_{21} & I \end{bmatrix}
    \begin{bmatrix} \Delta_{22}^{-1} & 0 \\ 0 & M_{22}^{-1} \end{bmatrix}
    \begin{bmatrix} I & -M_{12}M_{22}^{-1} \\ 0 & I \end{bmatrix}
  \qquad (\text{if } \det M_{22} \neq 0)
\]
\[
  = \begin{bmatrix}
      \Delta_{22}^{-1} & -\Delta_{22}^{-1}M_{12}M_{22}^{-1} \\
      -M_{22}^{-1}M_{21}\Delta_{22}^{-1} & M_{22}^{-1} + M_{22}^{-1}M_{21}\Delta_{22}^{-1}M_{12}M_{22}^{-1}
    \end{bmatrix}.
  \tag{A.3b}
\]
The formulae above are particularly simple in the case of block-triangular matrices:
\[
  \begin{bmatrix} M_{11} & M_{12} \\ 0 & M_{22} \end{bmatrix}^{-1}
  = \begin{bmatrix} M_{11}^{-1} & -M_{11}^{-1}M_{12}M_{22}^{-1} \\ 0 & M_{22}^{-1} \end{bmatrix}
  \tag{A.4a}
\]
and
\[
  \begin{bmatrix} M_{11} & 0 \\ M_{21} & M_{22} \end{bmatrix}^{-1}
  = \begin{bmatrix} M_{11}^{-1} & 0 \\ -M_{22}^{-1}M_{21}M_{11}^{-1} & M_{22}^{-1} \end{bmatrix}.
  \tag{A.4b}
\]
Another useful consequence of the decompositions in (A.2) is the following formulae for the determinant of block 2 × 2 matrices:
\[
  \det M_{11} \neq 0 \implies \det M = \det M_{11} \det \Delta_{11}
  \tag{A.5a}
\]
and
\[
  \det M_{22} \neq 0 \implies \det M = \det M_{22} \det \Delta_{22}
  \tag{A.5b}
\]
(the determinants of the triangular factors in (A.2) obviously equal one).
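The factorization (A.2a), the inversion formula (A.3a), and the determinant identity (A.5a) are easy to confirm on a random example. A quick numerical sketch (assuming `numpy`; the block sizes and matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 2
M11 = rng.standard_normal((n1, n1)) + 3 * np.eye(n1)  # keep M11 well away from singular
M12 = rng.standard_normal((n1, n2))
M21 = rng.standard_normal((n2, n1))
M22 = rng.standard_normal((n2, n2)) + 3 * np.eye(n2)
M = np.block([[M11, M12], [M21, M22]])

S11 = M22 - M21 @ np.linalg.inv(M11) @ M12            # Schur complement of M11

# the three factors of (A.2a)
L = np.block([[np.eye(n1), np.zeros((n1, n2))],
              [M21 @ np.linalg.inv(M11), np.eye(n2)]])
D = np.block([[M11, np.zeros((n1, n2))],
              [np.zeros((n2, n1)), S11]])
U = np.block([[np.eye(n1), np.linalg.inv(M11) @ M12],
              [np.zeros((n2, n1)), np.eye(n2)]])
assert np.allclose(L @ D @ U, M)

# determinant identity (A.5a)
assert np.isclose(np.linalg.det(M), np.linalg.det(M11) * np.linalg.det(S11))

# the (2,2) block of the inverse is the inverse of the Schur complement, per (A.3a)
assert np.allclose(np.linalg.inv(M)[n1:, n1:], np.linalg.inv(S11))
```

The last assertion is the practical takeaway: inverting M never requires more than inverting M11 and its Schur complement.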
A.2 Sign-definite matrices
An n × n matrix M = M′ is said to be positive definite, denoted M > 0, if
\[
  x'Mx > 0 \quad \forall x \neq 0.
\]
It is positive semi-definite, denoted M ≥ 0, if
\[
  x'Mx \geq 0 \quad \forall x.
\]
The notions of negative definite (M < 0) and negative semi-definite (M ≤ 0) matrices can be defined in a similar way, or via the relations M < 0 ⟺ −M > 0 and M ≤ 0 ⟺ −M ≥ 0. Compatibly dimensioned symmetric (or Hermitian) matrices can be compared: we say that M1 > M2 if M1 − M2 > 0 and that M1 ≥ M2 if M1 − M2 ≥ 0. A criterion of positive definiteness (positive semi-definiteness) is that all eigenvalues of M = M′, which are real, are positive (nonnegative). Unlike scalars, matrices need not be either positive or negative (semi-)definite. There are matrices that are not sign definite, like
\[
  M = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.
\]
If a matrix T ∈ R^{n×m}, where m ≤ n, has full rank, then M > 0 only if T′MT > 0. Indeed, if T′MT ≯ 0, then there is y ≠ 0 such that y′T′MTy ≤ 0. But then x′Mx ≤ 0 for x = Ty ≠ 0. If m = n, then the relation goes both ways, namely M > 0 ⟺ T′MT > 0. These properties help to simplify the analysis of block 2 × 2 matrices of the form (A.1). For example, it is readily seen that
\[
  \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix} > 0
  \implies M_{11} > 0 \;\wedge\; M_{22} > 0.
\]
Moreover, it follows from (A.2) that
\[
  \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix} > 0
  \iff M_{11} > 0 \wedge \Delta_{11} > 0
  \iff M_{22} > 0 \wedge \Delta_{22} > 0
  \tag{A.6}
\]
(these relations use the fact that the positive definiteness of a block-diagonal matrix is equivalent to that of each of its diagonal blocks).
A.3 Linear matrix equations
Let A1 ∈ R^{n1×n1}, A2 ∈ R^{n2×n2}, and Q ∈ R^{n1×n2}. The linear matrix equation
\[
  A_1 X - X A_2 + Q = 0
  \tag{A.7}
\]
is called the Sylvester equation. It has a unique solution X ∈ R^{n1×n2} iff
\[
  \operatorname{spec}(A_1) \cap \operatorname{spec}(A_2) = \emptyset,
\]
i.e. iff none of the eigenvalues of A1 is also an eigenvalue of A2. If this condition fails to hold, then the Sylvester equation might have either no solutions or an infinite number of solutions (depending on Q). The following result establishes the connection between the solvability of (A.7) and the block-diagonalizability of certain block-triangular matrices:
Proposition A.1 (Roth's removal rule). Equation (A.7) is solvable iff the matrices
\[
  \begin{bmatrix} A_1 & Q \\ 0 & A_2 \end{bmatrix}
  \quad\text{and}\quad
  \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix}
\]
are similar.
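Both (A.7) and Roth's rule can be verified numerically by vectorizing the equation with Kronecker products. A sketch (assuming `numpy`; the matrices are illustrative, with disjoint spectra so the solution is unique):

```python
import numpy as np

n1, n2 = 3, 2
A1 = np.diag([1.0, 2.0, 3.0])
A2 = np.diag([-1.0, -2.0])              # spec(A1) and spec(A2) are disjoint
Q = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, -1.0]])

# vec(A1 X - X A2) = (I (x) A1 - A2' (x) I) vec(X), with column-stacking vec
K = np.kron(np.eye(n2), A1) - np.kron(A2.T, np.eye(n1))
X = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n1, n2), order="F")
assert np.allclose(A1 @ X - X @ A2 + Q, 0)

# Roth: T diag(A1, A2) T^{-1} = [[A1, Q], [0, A2]] for T = [[I, X], [0, I]]
T = np.block([[np.eye(n1), X], [np.zeros((n2, n1)), np.eye(n2)]])
D = np.block([[A1, np.zeros((n1, n2))], [np.zeros((n2, n1)), A2]])
R = np.block([[A1, Q], [np.zeros((n2, n1)), A2]])
assert np.allclose(T @ D @ np.linalg.inv(T), R)
```

The similarity transform built from X is exactly the "removal" in Roth's rule: it wipes the off-diagonal block Q out of the block-triangular matrix.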
An important particular case of the Sylvester equation is the so-called (continuous-time) Lyapunov equation, which is defined as
\[
  AX + XA' + Q = 0
  \tag{A.8}
\]
for given A ∈ R^{n×n} and Q ∈ R^{n×n}. If Q is symmetric (i.e. Q = Q′), then the solution X is symmetric too. Furthermore, if A is Hurwitz, i.e. if A has all its eigenvalues in the open left half-plane C \ C̄0, then
\[
  X = \int_{\mathbb R_+} e^{At} Q\, e^{A't}\, dt
  \tag{A.9}
\]
exists and is the solution of (A.8). Indeed, it is readily seen that
\[
  A\, e^{At}Qe^{A't} + e^{At}Qe^{A't} A' = \frac{d}{dt}\bigl(e^{At}Qe^{A't}\bigr).
\]
Thus, if the integral in (A.9) exists,
\[
  A \int_{\mathbb R_+} e^{At}Qe^{A't} dt
  + \int_{\mathbb R_+} e^{At}Qe^{A't} dt\, A' + Q
  = \int_{\mathbb R_+} d\bigl(e^{At}Qe^{A't}\bigr) + Q = 0,
\]
where the fact that \(\lim_{t\to\infty} e^{At}Qe^{A't} = 0\) was used.
The following result reveals an important connection between the existence of a positive definite solution of a Lyapunov equation and the stability of matrices:

Proposition A.2. A matrix A ∈ R^{n×n} is Hurwitz iff the solution X ∈ R^{n×n} of the Lyapunov equation (A.8) satisfies X = X′ > 0 whenever Q = Q′ > 0.

Proof. First, assume that A is Hurwitz. Clearly, (A, Q) is controllable (this follows from the nonsingularity of Q), so that X > 0 by (A.9). Now, let X > 0 for some Q > 0 and assume that A is not Hurwitz, i.e. that it has an eigenvalue λ such that Re λ ≥ 0. Denote by η ≠ 0 a corresponding eigenvector of A′ (note that spec(A′) = spec(A)) and pre- and post-multiply (A.8) by η′ and η, respectively. We have
\[
  -\eta' Q \eta = \eta'(AX + XA')\eta
  = \bar\lambda\, \eta' X \eta + \lambda\, \eta' X \eta
  = 2\operatorname{Re}\lambda \cdot \eta' X \eta.
\]
This, in turn, implies that Re λ · η′Xη < 0, which is a contradiction.
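Both the integral formula (A.9) and Proposition A.2 lend themselves to a quick numerical check. The sketch below (assuming `numpy`; the matrices are illustrative) solves the Lyapunov equation by Kronecker vectorization, compares the result with a Riemann-sum approximation of (A.9), and confirms that X > 0 certifies stability:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)

# solve A X + X A' + Q = 0 by vectorization:
# vec(A X) = (I (x) A) vec(X) and vec(X A') = (A (x) I) vec(X)
n = A.shape[0]
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
X = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
assert np.allclose(A @ X + X @ A.T + Q, 0)

# compare with a Riemann-sum approximation of the integral (A.9);
# e^{At} is computed via the eigendecomposition of A (diagonalizable here)
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
expA = lambda t: (V @ np.diag(np.exp(lam * t)) @ Vinv).real
dt = 1e-3
Xint = sum(expA(t) @ Q @ expA(t).T * dt for t in np.arange(0.0, 20.0, dt))
assert np.allclose(X, Xint, atol=1e-2)

# Proposition A.2: X = X' > 0 certifies that A is Hurwitz
assert np.allclose(X, X.T) and np.all(np.linalg.eigvalsh(X) > 0)
assert np.all(np.linalg.eig(A)[0].real < 0)
```

For this A and Q = I the exact solution is X = [[1, −0.5], [−0.5, 0.5]], which is indeed positive definite.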
Proposition A.2 actually says that the stability of A is equivalent to the existence of a matrix X = X′ > 0 such that
\[
  AX + XA' < 0.
  \tag{A.10}
\]
Inequality (A.10) belongs to the class of so-called Linear Matrix Inequalities (LMIs), for the verification of which efficient numerical methods are available. For this reason (A.10) can be considered an alternative to the conventional verification of the eigenvalues of A. More important is that the LMI (A.10) can be incorporated into many other analysis and design problems that also reduce to LMIs.
Appendix B
Background on Linear Systems
THE PURPOSE of this appendix is to collect some basic facts about (mostly finite-dimensional and time-invariant) linear systems. The material, which is required throughout the notes, is presented in a condensed manner. More details can be found in [44], whose philosophy is followed below, as well as in many other textbooks.
B.1 Signals and systems in time domain
Signals are functions of independent variables (mainly time), conveying information about changing phenomena. Systems are constraints imposed on interdependent signals, like those between forces and positions in mechanical systems, between voltages and currents in electrical systems, or between pressures and volume flow rates in hydraulic systems, et cetera. As customary, throughout these notes we treat one group of signals as an action (inputs) and another as a reaction (outputs), which is sufficiently general in many situations, especially for processes with delays. A system G with an input u and an output y can then be viewed as a mapping G : u ↦ y.
B.1.1 Continuous-time signals and systems
Continuous-time signals are viewed as functions defined on some subset of the real axis R. For many good reasons, control applications are mostly concerned with signals defined on the positive semi-axis R+, in which case an n-dimensional continuous-time signal f(t) is understood as
\[
  f : \mathbb R_+ \to \mathbb R^n.
\]
The i-th component of f(t), denoted f_i(t), is a scalar function of time t. Signal spaces are used to constrain the set of considered signals in some way, formalizing the notion of admissible signals. We mostly use the space
\[
  L_2^n(\mathbb R_+) := \Bigl\{\, f : \mathbb R_+ \to \mathbb R^n \;\Big|\;
  \|f\|_2 := \Bigl(\int_{\mathbb R_+} \|f(t)\|^2\, dt\Bigr)^{1/2} < \infty \,\Bigr\}
  \tag{B.1}
\]
for this purpose. The squared L2-norm, ‖f‖₂², can be interpreted as the energy of f. Thus, L2 can be seen as the space of finite-energy signals.
Linear continuous-time systems with m-dimensional inputs and p-dimensional outputs are understood as linear operators G : D_G ⊂ L2^m(R+) → L2^p(R+) for some domain D_G. A system G is said to be stable (or i/o stable) if D_G = L2^m(R+) and ‖G‖_{L2→L2} := sup_{‖u‖₂=1} ‖Gu‖₂ < ∞. The norm of G defined by the last expression is called its L2-induced norm. We say that G : u ↦ y is causal if y(t) = 0 for all t ≤ t_c whenever u(t) = 0 for all t ≤ t_c, for every t_c ∈ R+. In other words, a system is causal if its output at
every time instance t_c can only depend on the past and present inputs. A linear system G is said to be time invariant (abbreviated LTI) if G S_τ = S_τ G for all τ > 0, where S_τ is the τ-shift operator, defined via
\[
  (S_\tau u)(t) =
  \begin{cases}
    0 & \text{if } 0 \le t < \tau \\
    u(t - \tau) & \text{if } t \ge \tau.
  \end{cases}
  \tag{B.2}
\]
This effectively says that a time-shifted input produces a time-shifted, but otherwise unchanged, output.
A fairly general class of p × m causal LTI systems G : u ↦ y can be described by the convolution integral
\[
  y(t) = \int_{\mathbb R_+} g(t-s)u(s)\, ds
       = \int_{\mathbb R_+} \tilde g(t-s)u(s)\, ds
       + \sum_{i\in\mathbb Z_+} g_i\, u(t - \kappa_i),
  \tag{B.3}
\]
where
\[
  g(t) = \tilde g(t) + \sum_{i\in\mathbb Z_+} g_i\, \delta(t - \kappa_i)
  \tag{B.4}
\]
for a function g̃ : R → R^{p×m} that is bounded on any bounded subset of R and satisfies g̃(t) = 0 whenever t < 0, bounded g_i ∈ R^{p×m}, and a strictly increasing sequence {κ_i}_{i∈Z+} such that κ_0 = 0 and κ_i − κ_{i−1} ≥ ε for all i ∈ Z+ and some constant ε > 0, independent of i. The (generalized) function g is called the impulse response of G, i.e. it is the response of G to the Dirac delta impulse¹ applied at the time instance t = 0.
An important class of LTI systems are finite-dimensional systems. These are systems for which there are n ∈ Z+ and matrices A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, and D ∈ R^{p×m} such that
\[
  g(t) = D\delta(t) + C e^{At} B\, \mathbb 1(t)
  \tag{B.5}
\]
(i.e. g̃(t) = Ce^{At}B·1(t), g_0 = D, and g_i = 0 for all i > 0 in (B.4)). Given any t_c ≥ 0, the response of such a system at all t ≥ t_c can be expressed as
\[
  y(t) = Du(t) + \int_0^t C e^{A(t-s)} B u(s)\, ds
       = Du(t) + \int_{t_c}^t C e^{A(t-s)} B u(s)\, ds + C e^{A(t-t_c)} x(t_c),
  \tag{B.6}
\]
where the n-dimensional vector
\[
  x(t) := \int_0^t e^{A(t-s)} B u(s)\, ds
  \tag{B.7}
\]
is called the state vector of G. The meaning of the second relation in (B.6) is that the knowledge of x(t_c) is sufficient to account for the effect of the past, t < t_c, on the behavior of the system afterwards. In other words, the state vector is a history accumulator. By differentiating (B.7), we get the state-space description
\[
  G :
  \begin{cases}
    \dot x(t) = Ax(t) + Bu(t), & x(0) = 0 \\
    y(t) = Cx(t) + Du(t),
  \end{cases}
  \tag{B.8}
\]
which is a differential, rather than integral, equation. The state-space representation (B.8) might be more convenient to handle than the convolution integral (B.3). The quadruple (A, B, C, D) is referred to as a state-space realization of a system G. In some situations it is more convenient to take the initial condition x(0) equal to some x₀ ≠ 0. This aims at accounting for the behavior of the system in t < 0. In such a case we can treat G as an operator D_G ⊂ R^n × L2^m(R+) → L2^p(R+).
There is also an alternative notion of stability for systems given by their state equation (B.8). A linear system is said to be Lyapunov stable if for every ε > 0 there is δ = δ(ε) such that ‖x(t)‖ < ε for all t ∈ R+ whenever ‖x(0)‖ < δ. Asymptotic stability requires in addition that lim_{t→∞} ‖x(t)‖ = 0. It is known that G is Lyapunov stable iff A has no eigenvalues in the open right-half plane C₀ and its pure
¹More precisely, the j-th column of g(t) is the response of G to the input e_j δ(t), where e_j is the j-th standard basis vector of R^m.
imaginary eigenvalues are simple. G is asymptotically stable iff A has no eigenvalues in the closed right-half plane C̄₀. Note that these notions do not account for exogenous input signals explicitly², considering the system autonomous for t > 0. This is unlike the i/o stability notion, which assumes equilibrium at t = 0 and is driven by u. Nonetheless, the i/o and Lyapunov stability properties are essentially equivalent for finite-dimensional systems. Namely, under a mild minimality assumption, G is i/o stable iff its state equation is asymptotically stable. The situation is less trivial for infinite-dimensional systems though.
B.1.2 Discrete-time signals and systems
Discrete-time signals are functions defined on some subset of the integer axis Z. An n-dimensional discrete-time signal f[t] on the nonnegative semi-axis Z+ is understood as
\[
  f : \mathbb Z_+ \to \mathbb R^n.
\]
The discrete counterpart of the L2 space is
\[
  \ell_2^n(\mathbb Z_+) := \Bigl\{\, f : \mathbb Z_+ \to \mathbb R^n \;\Big|\;
  \|f\|_2 := \Bigl(\sum_{t\in\mathbb Z_+} \|f[t]\|^2\Bigr)^{1/2} < \infty \,\Bigr\}
  \tag{B.9}
\]
and the squared ℓ2-norm, ‖f‖₂², can also be interpreted as the energy of f, rendering ℓ2 the space of finite-energy signals.
Linear discrete-time systems with m-dimensional inputs and p-dimensional outputs are understood as linear operators G : D_G ⊂ ℓ2^m(Z+) → ℓ2^p(Z+) for some domain D_G. A system G is said to be stable if D_G = ℓ2^m(Z+) and ‖G‖_{ℓ2→ℓ2} := sup_{‖u‖₂=1} ‖Gu‖₂ < ∞. The norm of G defined by the last expression is called its ℓ2-induced norm. We say that G : u ↦ y is causal (strictly causal) if y[t] = 0 for all t ≤ t_c whenever u[t] = 0 for all t ≤ t_c (all t < t_c), for every t_c ∈ Z+. In other words, a system is causal if its output at every time instance t_c can only depend on the past and present inputs, and strictly causal if its output at every t_c can only depend on the past inputs. A linear system G is said to be shift invariant (abbreviated LSI) if G S₁ = S₁ G, where the unit shift S₁ is defined similarly to the continuous-time shift in (B.2) for τ = 1.
A general class of p × m causal LSI systems G : u ↦ y can be described by the convolution sum
\[
  y[t] = \sum_{s\in\mathbb Z} g[t-s]\, u[s]
  \tag{B.10}
\]
for a function g : Z → R^{p×m} that is bounded on any bounded subset of Z and satisfies g[t] = 0 whenever t < 0. The function g, which is the response of the system to the unit pulse applied at t = 0, is called the impulse response of G. Finite-dimensional LSI systems are those with
\[
  g[t] = D\delta[t] + C A^{t-1} B\, \mathbb 1[t-1]
  \tag{B.11}
\]
for some n ∈ Z+ and matrices A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, and D ∈ R^{p×m}. For such systems there is a (non-unique) n-dimensional state vector
\[
  x[t] := \sum_{s=0}^{t-1} A^{t-1-s} B u[s].
  \tag{B.12}
\]
It is a history accumulator, exactly like in the continuous-time case. Finite-dimensional systems can be presented in their state-space description
\[
  G :
  \begin{cases}
    x[t+1] = Ax[t] + Bu[t], & x[0] = 0 \\
    y[t] = Cx[t] + Du[t],
  \end{cases}
  \tag{B.13}
\]
which is a difference equation, similarly to the differential equation in the continuous-time case.
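The difference equation form is straightforward to simulate, which also makes the connection between the realization (A, B, C, D) and the impulse response concrete: iterating the state equations from x[0] = 0 under a unit pulse reproduces g[0] = D and g[t] = CA^{t−1}B for t ≥ 1. A sketch (assuming `numpy`; the realization is illustrative):

```python
import numpy as np

A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[2.0]])

def simulate(u, T):
    """Iterate the state equations (B.13) from x[0] = 0."""
    x = np.zeros((2, 1))
    y = []
    for t in range(T):
        y.append((C @ x + D @ u(t)).item())
        x = A @ x + B @ u(t)
    return y

T = 10
pulse = lambda t: np.array([[1.0]]) if t == 0 else np.array([[0.0]])
y = simulate(pulse, T)

# impulse response: g[0] = D, g[t] = C A^{t-1} B for t >= 1
g = [D.item()] + [(C @ np.linalg.matrix_power(A, t - 1) @ B).item()
                  for t in range(1, T)]
assert np.allclose(y, g)
```

Note that y[0] = D, i.e. D is the direct feedthrough; the dynamics of A show up only from t = 1 on.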
²If we think of the variables in (B.8) as deviations from an equilibrium, then a possibly constant u is involved in the definition.
B.2 Signals and systems in transformed domains
The Fourier transform of a continuous-time signal f : R+ → F^n is defined as
\[
  \mathcal F\{f\} = F(j\omega) := \int_{\mathbb R_+} f(t)\, e^{-j\omega t}\, dt,
  \tag{B.14}
\]
where ω ∈ R is called the (angular) frequency and is measured in radians per time unit (e.g. per second). The signal F{f} is called the frequency-domain representation of f. The term "frequency" becomes apparent in the inverse Fourier transform formula,
\[
  \mathcal F^{-1}\{F\} = f(t) = \frac{1}{2\pi} \int_{\mathbb R} F(j\omega)\, e^{j\omega t}\, d\omega,
  \tag{B.15}
\]
which effectively says that f(t) is a superposition of harmonic signals e^{jωt} with frequencies ω. The Fourier transform F(jω), also called the spectrum of f, can then be viewed as the weight of the harmonic e^{jωt} in f(t). The set of Fourier-transformable signals is quite limited: the integral in (B.14) converges, in norm, essentially only for L2(R+) signals. The one-sided (or unilateral) Laplace transform of f : R+ → C^n, defined as
\[
  \mathcal L\{f\} = F(s) := \int_{\mathbb R_+} f(t)\, e^{-st}\, dt
  \tag{B.16}
\]
for all s ∈ C for which the integral converges, can be viewed as a generalization of the Fourier transform, rendering a wider class of signals transformable. The set of s ∈ C for which (B.16) exists is called the region of convergence (or RoC) of F(s).
An important property of integral transforms is that they turn convolutions into products. Namely, if G : u ↦ y is LTI, then
\[
  \mathcal F\{y\} = \mathcal F\{g\}\, \mathcal F\{u\}
  \quad\text{or}\quad
  Y(j\omega) = G(j\omega) U(j\omega),
\]
provided F{g} and F{u} exist. The Fourier transform of the impulse response g of an LTI G is called its frequency response and is denoted G(jω). Likewise,
\[
  \mathcal L\{y\} = \mathcal L\{g\}\, \mathcal L\{u\}
  \quad\text{or}\quad
  Y(s) = G(s) U(s),
\]
for all s in the RoCs of both G(s) and U(s). The Laplace transform of the impulse response g of an LTI G is called its transfer function, denoted G(s).
Properties of LTI systems can be analyzed in terms of their transfer functions. For example, an LTI continuous-time system is causal and stable iff its transfer function G(s) belongs to the space
\[
  H_\infty^{p\times m} := \bigl\{\, G : \mathbb C_0 \to \mathbb C^{p\times m} \mid
  G \text{ is holomorphic in } \mathbb C_0 \text{ and }
  \|G\|_\infty := \sup_{s\in\mathbb C_0} \|G(s)\| < \infty \,\bigr\},
  \tag{B.17}
\]
where ‖G(s)‖ stands for the matrix spectral norm on C^{p×m}. The H∞ system norm defined in (B.17) actually equals the L2-induced norm of G, i.e. ‖G‖∞ = ‖G‖_{L2→L2}. Also, every G ∈ H∞ has a unique boundary function G̃ ∈ L∞(jR) such that G̃(jω) = lim_{σ↓0} G(σ + jω) for almost all ω and ‖G̃‖∞ = ‖G‖∞, where
\[
  L_\infty^{p\times m}(j\mathbb R) := \bigl\{\, \tilde G : j\mathbb R \to \mathbb C^{p\times m} \mid
  \|\tilde G\|_\infty := \operatorname*{ess\,sup}_{\omega\in\mathbb R} \|\tilde G(j\omega)\| < \infty \,\bigr\}.
  \tag{B.18}
\]
It is customary to identify G with G̃ and regard H∞ as a closed subspace of L∞(jR), in which case
\[
  \|G\|_\infty = \operatorname*{ess\,sup}_{\omega\in\mathbb R} \|G(j\omega)\|.
  \tag{B.19}
\]
It should be emphasized that this equality holds only for functions G ∈ H∞ as defined by (B.17).
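For rational stable G, formula (B.19) suggests a simple way to approximate ‖G‖∞ numerically: evaluate |G(jω)| on a dense frequency grid and take the maximum. A sketch (assuming `numpy`; the lightly damped second-order G below is our example, with the peak location computed by calculus):

```python
import numpy as np

def G(s):
    # a stable SISO transfer function: G(s) = 1 / (s^2 + 0.2 s + 1)
    return 1.0 / (s**2 + 0.2 * s + 1.0)

# (B.19): approximate ess sup_w |G(jw)| on a dense grid
w = np.linspace(0.0, 10.0, 200001)
hinf = np.abs(G(1j * w)).max()

# the sup is attained at w^2 = 0.98 (set d|G(jw)|^{-2}/d(w^2) = 0)
peak_exact = 1 / np.sqrt(0.02**2 + 0.04 * 0.98)
assert abs(hinf - peak_exact) < 1e-5
```

Gridding is adequate for a quick estimate of a scalar rational G; for MIMO systems one would take the spectral norm of G(jω) at each grid point, and production tools use bisection on the Hamiltonian matrix instead of a grid.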
The counterpart of the Laplace transform for discrete sequences is the z-transform, defined as
\[
  \mathcal Z\{f\} = F(z) := \sum_{t\in\mathbb Z_+} f[t]\, z^{-t}
  \tag{B.20}
\]
for all z ∈ C for which the sum converges (the RoC). The z-transform of the impulse response g of a discrete-time LSI system G is also called its transfer function, denoted G(z). In full analogy with the continuous-time case, the relation between the z-transforms of the input and output of G reads Y(z) = G(z)U(z) in the z-domain, and G is causal and stable iff its transfer function is holomorphic and bounded in the exterior of the closed unit disk, i.e. in C \ D̄.
Transfer functions, in both the Laplace and z-domains, can be expressed in terms of corresponding state-space realizations (A, B, C, D) as
\[
  G(s) = D + C(sI - A)^{-1}B
  =: \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]
  := D + C(zI - A)^{-1}B = G(z).
\]
Transfer functions of finite-dimensional causal systems are rational functions of s and z and are proper, in the sense that their limits as Re s → ∞ and |z| → ∞, respectively, exist. Finite-dimensional systems are stable iff they are proper and all poles of their transfer functions are in the open left-half plane C \ C̄₀ (or in the open unit disk D in the discrete-time case). All these poles are among the eigenvalues of the "A" matrix of a state-space realization of G, cf. the relation between the i/o and Lyapunov asymptotic stability notions.
B.3 State-space techniques
The transfer function of the parallel interconnection G1 + G2 is
\[
  \left[\begin{array}{c|c} A_1 & B_1 \\ \hline C_1 & D_1 \end{array}\right]
  + \left[\begin{array}{c|c} A_2 & B_2 \\ \hline C_2 & D_2 \end{array}\right]
  = \left[\begin{array}{cc|c} A_1 & 0 & B_1 \\ 0 & A_2 & B_2 \\ \hline C_1 & C_2 & D_1 + D_2 \end{array}\right].
  \tag{B.21}
\]
The transfer function of the cascade (series) interconnection G2 G1 is
\[
  \left[\begin{array}{c|c} A_2 & B_2 \\ \hline C_2 & D_2 \end{array}\right]
  \left[\begin{array}{c|c} A_1 & B_1 \\ \hline C_1 & D_1 \end{array}\right]
  = \left[\begin{array}{cc|c} A_1 & 0 & B_1 \\ B_2C_1 & A_2 & B_2D_1 \\ \hline D_2C_1 & C_2 & D_2D_1 \end{array}\right]
  = \left[\begin{array}{cc|c} A_2 & B_2C_1 & B_2D_1 \\ 0 & A_1 & B_1 \\ \hline C_2 & D_2C_1 & D_2D_1 \end{array}\right].
  \tag{B.22}
\]
The transfer function of the inverse system is
\[
  \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]^{-1}
  = \left[\begin{array}{c|c} A - BD^{-1}C & BD^{-1} \\ \hline -D^{-1}C & D^{-1} \end{array}\right],
  \tag{B.23}
\]
and it exists, as a proper transfer function, iff det(D) ≠ 0.
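The cascade formula (B.22) and the inverse formula (B.23) can be confirmed by evaluating both sides at a few points in the complex plane. A sketch (assuming `numpy`; the concrete realizations are illustrative, with D1 invertible as (B.23) requires):

```python
import numpy as np

def tf(A, B, C, D, s):
    """Evaluate D + C (sI - A)^{-1} B."""
    n = A.shape[0]
    return D + C @ np.linalg.solve(s * np.eye(n) - A, B)

A1 = np.array([[0.0, 1.0], [-2.0, -3.0]]); B1 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 1.0]]);               D1 = np.array([[1.0]])
A2 = np.array([[-1.0, 0.5], [0.0, -2.0]]); B2 = np.array([[1.0], [1.0]])
C2 = np.array([[0.5, -1.0]]);              D2 = np.array([[0.5]])

# cascade G2 G1, first realization in (B.22)
Ac = np.block([[A1, np.zeros((2, 2))], [B2 @ C1, A2]])
Bc = np.vstack([B1, B2 @ D1])
Cc = np.hstack([D2 @ C1, C2])
Dc = D2 @ D1

# inverse realization (B.23), valid since det(D1) != 0
Di = np.linalg.inv(D1)
Ai, Bi, Ci = A1 - B1 @ Di @ C1, B1 @ Di, -Di @ C1

for s in (1.5, 2.0 + 1.0j):
    G1, G2 = tf(A1, B1, C1, D1, s), tf(A2, B2, C2, D2, s)
    assert np.allclose(tf(Ac, Bc, Cc, Dc, s), G2 @ G1)
    assert np.allclose(tf(Ai, Bi, Ci, Di, s), np.linalg.inv(G1))
```

Since two rational functions that agree on an open set agree everywhere, matching at a handful of generic points is strong evidence that the realization formulas are right.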
Bibliography
[1] R. J. Anderson and M. W. Spong, “Bilateral control of teleoperators with time delay,” IEEE Trans.
Automat. Control, vol. 34, no. 5, pp. 494–501, 1989.
[2] Z. Artstein, “Linear systems with delayed control: A reduction,” IEEE Trans. Automat. Control,
vol. 27, no. 4, pp. 869–879, 1982.
[3] M. Athans and P. L. Falb, Optimal Control: An Introduction to the Theory and Its Applications. NY:
McGraw-Hill, 1966.
[4] R. Bellman and K. L. Cooke, Differential-Difference Equations. New York: Academic Press, 1963.
[5] C. Bonnet, A. R. Fioravanti, and J. R. Partington, “Stability of neutral systems with commensurate
delays and poles asymptotic to the imaginary axis,” SIAM J. Control Optim., vol. 49, no. 2, pp.
498–516, 2011.
[6] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, UK: Cambridge University Press,
2004.
[7] R. F. Curtain and K. Glover, “Robust stabilization of infinite dimensional systems by finite dimen-
sional controllers,” Syst. Control Lett., vol. 7, pp. 41–47, 1986.
[8] R. F. Curtain and K. Morris, “Transfer functions of distributed parameter systems: A tutorial,” Auto-
matica, vol. 45, no. 5, pp. 1101–1116, 2009.
[9] R. F. Curtain and H. Zwart, An Introduction to Infinite-Dimensional Linear Systems Theory. New
York: Springer-Verlag, 1995.
[10] R. Datko, “A procedure for determination of the exponential stability of certain differential-
difference equations,” Quart. Appl. Math., vol. 36, pp. 279–292, 1978.
[11] C. A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties. New York: Aca-
demic Press, 1975.
[12] Y. A. Fiagbedzi and A. E. Pearson, “Feedback stabilization of linear autonomous time lag systems,”
IEEE Trans. Automat. Control, vol. 31, no. 9, pp. 847–855, 1986.
[13] B. A. Francis, A Course in H∞ Theory, ser. Lecture Notes in Control and Inform. Sci. New York: Springer-Verlag, 1987, vol. 88.
[14] B. A. Francis and W. M. Wonham, “The internal model principle for linear multivariable regulators,”
Appl. Math. Opt., vol. 2, no. 2, pp. 170–412, 1975.
[15] E. Fridman, “New Lyapunov–Krasovskii functionals for stability of linear retarded and neutral type
systems,” Syst. Control Lett., vol. 43, pp. 309–319, 2001.
[16] A. T. Fuller, “Optimal nonlinear control of systems with pure delay,” Int. J. Control, vol. 8, no. 2, pp.
145–168, 1968.
[17] T. Furukawa and E. Shimemura, “Predictive control for systems with time delay,” Int. J. Control,
vol. 37, no. 2, pp. 399–412, 1983.
[18] G. C. Goodwin, S. F. Graebe, and M. E. Salgado, Control System Design. Englewood Cliffs, NJ:
Prentice-Hall, 2000.
[19] K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay Systems. Boston: Birkhauser, 2003.
[20] R. Gudin and L. Mirkin, “On the delay margin of dead-time compensators,” Int. J. Control, vol. 80,
no. 8, pp. 1316–1332, 2007.
[21] J. K. Hale and S. M. Verduyn Lunel, Introduction to Functional Differential Equations. New York:
Springer-Verlag, 1993.
[22] T. Inoue, M. Nakano, and S. Iwai, “High accuracy control of servomechanism for repeated con-
touring,” in Proc. 10th Annual Symp. Incremental Motion Control Systems and Devices, Urbana–
Champaign, IL, 1981, pp. 285–292.
[23] N. F. Jerome and W. H. Ray, “High-performance multivariable control strategies for systems having
time delays,” AIChE Journal, vol. 32, no. 6, pp. 914–931, 1986.
[24] T. Kailath, Linear Systems. Englewood Cliffs, NJ: Prentice-Hall, 1980.
[25] C.-Y. Kao and B. Lincoln, “Simple stability criteria for systems with time-varying delays,” Automat-
ica, vol. 40, no. 8, pp. 1429–1434, 2004.
[26] C.-Y. Kao and A. Rantzer, “Stability analysis of systems with uncertain time-varying delays,” Auto-
matica, vol. 43, no. 6, pp. 959–970, 2007.
[27] V. L. Kharitonov, Time-Delay Systems: Lyapunov Functionals and Matrices. NY: Birkhauser, 2013.
[28] O. Kidron and O. Yaniv, “Robust control of uncertain resonant systems,” European J. Control, vol. 1,
no. 2, pp. 104–112, 1995.
[29] D. L. Kleinman, “Optimal control of linear systems with time-delay and observation noise,” IEEE
Trans. Automat. Control, vol. 14, no. 5, pp. 524–527, 1969.
[30] N. N. Krasovskii, Stability of Motion: Applications of Lyapunov's Second Method to Differential Systems and Equations With Delay. Stanford, CA: Stanford Univ. Press, 1963.
[31] W. H. Kwon and A. E. Pearson, “Feedback stabilization of linear systems with delayed control,”
IEEE Trans. Automat. Control, vol. 25, no. 2, pp. 266–269, 1980.
[32] D. G. Luenberger, Optimization by Vector Space Methods. NY: John Wiley & Sons, 1969.
[33] A. Z. Manitius and A. W. Olbrot, “Finite spectrum assignment problem for systems with delay,”
IEEE Trans. Automat. Control, vol. 24, pp. 541–553, 1979.
[34] I. Masubuchi, Y. Kamitane, A. Ohara, and N. Suda, “H∞ control for descriptor systems: A matrix inequalities approach,” Automatica, vol. 33, no. 4, pp. 669–673, 1997.
[35] D. Q. Mayne, “Control of linear systems with time delay,” Electronics Letters, vol. 4, no. 20, pp.
439–440, 1968.
[36] A. Megretski and A. Rantzer, “System analysis via integral quadratic constraints,” IEEE Trans. Au-
tomat. Control, vol. 42, no. 6, pp. 819–830, 1997.
[37] G. Meinsma and L. Mirkin, “H∞ control of systems with multiple I/O delays via decomposition to adobe problems,” IEEE Trans. Automat. Control, vol. 50, no. 2, pp. 199–211, 2005.
[38] G. Meinsma and H. Zwart, “On H∞ control for dead-time systems,” IEEE Trans. Automat. Control, vol. 45, no. 2, pp. 272–285, 2000.
[39] R. H. Middleton and D. E. Miller, “On the achievable delay margin using LTI control for unstable
plants,” IEEE Trans. Automat. Control, vol. 52, no. 7, pp. 1194–1207, 2007.
[40] D. E. Miller and D. E. Davison, “Stabilization in the presence of an uncertain arbitrarily large delay,”
IEEE Trans. Automat. Control, vol. 50, no. 8, pp. 1074–1089, 2005.
[41] L. Mirkin, “On the extraction of dead-time controllers and estimators from delay-free parametriza-
tions,” IEEE Trans. Automat. Control, vol. 48, no. 4, pp. 543–553, 2003.
[42] ——, “On the approximation of distributed-delay control laws,” Syst. Control Lett., vol. 55, no. 5,
pp. 331–342, 2004.
[43] ——, “Intermittent redesign of analog controllers via the Youla parameter,” IEEE Trans. Automat.
Control, vol. 62, no. 4, pp. 1838–1851, 2017.
[44] ——, “Linear Control Systems,” course notes, Faculty of Mechanical Eng., Technion—IIT, 2019.
[Online]. Available: http://leo.technion.ac.il/Courses/LCS/LCSnotes19.pdf
[45] L. Mirkin, Z. J. Palmor, and D. Shneiderman, “Dead-time compensation for systems with multiple
I/O delays: A loop shifting approach,” IEEE Trans. Automat. Control, vol. 56, no. 11, pp. 2542–2554,
2011.
[46] L. Mirkin and N. Raskin, “Every stabilizing dead-time controller has an observer-predictor-based
structure,” Automatica, vol. 39, no. 10, pp. 1747–1754, 2003.
[47] L. Mirkin, H. Rotstein, and Z. J. Palmor, “H2 and H∞ design of sampled-data systems using lifting. Part II: Properties of systems in the lifted domain,” SIAM J. Control Optim., vol. 38, no. 1, pp. 197–218, 1999.
[48] L. Mirkin and G. Tadmor, “Imposing FIR structure on H2 preview tracking and smoothing solu-
tions,” SIAM J. Control Optim., vol. 48, no. 4, pp. 2433–2460, 2009.
[49] S.-I. Niculescu and W. Michiels, “Stabilizing a chain of integrators using multiple delays,” IEEE
Trans. Automat. Control, vol. 49, no. 4, pp. 802–807, 2004.
[50] M. Nordin and P.-O. Gutman, “Digital QFT design for the benchmark problem,” European J. Con-
trol, vol. 1, no. 2, pp. 97–103, 1995.
[51] J. E. Normey-Rico and E. F. Camacho, Control of Dead-time Processes. London: Springer-Verlag,
2007.
[52] N. Olgac and B. T. Holm-Hansen, “A novel active vibration absorption technique: delayed res-
onator,” J. Sound Vibration, vol. 176, no. 1, pp. 93–104, 1994.
[53] D. H. Owens and A. Raya, “Robust stability of Smith predictor controllers for time-delay systems,”
IEE Proc. – Control Theory Appl., vol. 129, no. 6, pp. 298–304, 1982.
[54] A. Packard and J. Doyle, “The complex structured singular value,” Automatica, vol. 29, no. 1, pp.
71–109, 1993.
[55] P. Park, “A delay-dependent stability criterion for systems with uncertain time-invariant delays,”
IEEE Trans. Automat. Control, vol. 44, no. 4, pp. 876–877, 1999.
[56] J. R. Partington, Linear Operators and Linear Systems: An Analytical Approach to Control Theory.
Cambridge, UK: Cambridge University Press, 2004.
[57] J. R. Partington and C. Bonnet, “H1 and BIBO stabilization of delay systems of neutral type,” Syst.
Control Lett., vol. 52, no. 8, pp. 283–288, 2004.
[58] J. R. Partington and P. M. Makila, “Rational approximation of distributed-delay controllers,” Int. J.
Control, vol. 78, no. 16, pp. 1295–1301, 2005.
[59] R. Rabah, G. M. Sklyar, and A. V. Rezounenko, “Stability analysis of neutral type systems in Hilbert
space,” J. Differential Equations, vol. 214, no. 2, pp. 391–428, 2005.
[60] B. S. Razumikhin, “On the stability of delay systems,” Prikl. Mat. i Meh., vol. 20, no. 4, pp. 500–512,
1956, (in Russian).
[61] Z. V. Rekasius, “A stability test for systems with delays,” in Proc. 1980 Joint Automatic Control
Conf., vol. II, San Francisco, CA, 1980, p. TP9 A.
[62] R. D. Robinett, B. J. Petterson, and J. C. Fahrenholtz, “Time-domain validation for sample-data
uncertainty models,” J. Intell. Robot. Syst., vol. 21, pp. 277–285, 1998.
[63] W. Rudin, Real and Complex Analysis, 3rd ed. New York: McGraw-Hill, 1987.
[64] A. Seuret, F. Gouaisbaut, and L. Baudouin, “D1.1 - Overview of Lyapunov methods for
time-delay systems,” LAAS-CNRS, Research Report no. 16308, Sep. 2016. [Online]. Available:
https://hal.archives-ouvertes.fr/hal-01369516
[65] J. S. Shamma, “Robust stability with time-varying structured uncertainty,” IEEE Trans. Automat.
Control, vol. 39, no. 4, pp. 714–724, 1994.
[66] W. Singhose, “Command shaping for flexible systems: A review of the first 50 years,” Int. J. Precis.
Eng. Manuf., vol. 10, no. 4, pp. 153–168, 2009.
[67] O. J. M. Smith, “Closer control of loops with dead time,” Chem. Eng. Progress, vol. 53, no. 5, pp.
217–219, 1957.
[68] ——, “Posicast control of damped oscillatory systems,” Proc. IRE, vol. 45, no. 9, pp. 1249–1255,
1957.
[69] I. H. Suh and Z. Bien, “Proportional minus delay controller,” IEEE Trans. Automat. Control, vol. 24,
no. 2, pp. 370–372, 1979.
[70] P. K. S. Tam and J. B. Moore, “Stable realization of fixed-lag smoothing equations for continuous-
time signals,” IEEE Trans. Automat. Control, vol. 19, no. 1, pp. 84–87, 1974.
[71] O. Troeng and L. Mirkin, “Toward a more efficient implementation of distributed-delay elements,”
in Proc. 52nd IEEE Conf. Decision and Control, Florence, Italy, 2013, pp. 294–299.
[72] Y. Z. Tsypkin, “Stability of systems with delayed feedback,” Avtomatika i Telemekhanika, vol. 7, no.
2–3, pp. 107–129, 1946.
[73] V. Van Assche, M. Dambrine, J.-F. Lafay, and J.-P. Richard, “Some problems arising in the im-
plementation of distributed-delay control laws,” in Proc. 38th IEEE Conf. Decision and Control,
Phoenix, AZ, 1999, pp. 4668–4672.
[74] J. Veenman and C. W. Scherer, “IQC-synthesis with general dynamic multipliers,” Int. J. Robust and
Nonlinear Control, vol. 24, no. 17, pp. 3027–3056, 2014.
[75] K. Walton and J. E. Marshall, “Direct method for TDS stability analysis,” IEE Proc. – Control Theory
Appl., vol. 134, pp. 101–107, 1987.
[76] Z. Q. Wang, P. Lundstrom, and S. Skogestad, “Representation of uncertain time delays in the H∞
framework,” Int. J. Control, vol. 59, no. 3, pp. 627–638, 1994.
[77] K. Watanabe and M. Ito, “A process-model control for linear systems with delay,” IEEE Trans.
Automat. Control, vol. 26, no. 6, pp. 1261–1269, 1981.
[78] B. Wie and K.-W. Byun, “New generalized structural filtering concept for active vibration control
synthesis,” J. Guidance, Control, and Dynamics, vol. 12, no. 2, pp. 147–154, 1989.
[79] J. C. Willems, The Analysis of Feedback Systems. Cambridge, MA: The MIT Press, 1971.
[80] S. Xu and J. Lam, “On equivalence and efficiency of certain stability criteria for time-delay systems,”
IEEE Trans. Automat. Control, vol. 52, no. 1, pp. 95–101, 2007.
[81] J. Zhang, C. R. Knospe, and P. Tsiotras, “Stability of time-delay systems: Equivalence between
Lyapunov and scaled small-gain conditions,” IEEE Trans. Automat. Control, vol. 46, no. 3, pp. 482–
486, 2001.
[82] X.-M. Zhang, Q.-L. Han, A. Seuret, F. Gouaisbaut, and Y. He, “Overview of recent advances in
stability of linear systems with time-varying delays,” IET Control Theory Appl., vol. 13, no. 1, pp.
1–16, 2018.
Index
Characteristic function, 21
Crossover frequency, 27
Dead-time compensator, 44
FASP, 82
implementation
lumped-delay approximation, 96
reset mechanism, 91
MSP, 45
Smith predictor, 43
Dead-time systems, 4
Delay-differential equation, 11
neutral, 21
retarded, 21
Delay element
continuous-time, 2
frequency response, 3
impulse response, 3
state, 2
transfer function, 3
discrete-time, 1
impulse response, 1
state, 1
state-space realization, 2
transfer function, 1
distributed, 6
input adobe, 79
multiple, 5
Delay margin, 103
Equation
algebraic Riccati
H2, 69
H∞, 70
stabilizing solution, 69, 70
Lyapunov, 149
Sylvester, 149
Feedforward-action Smith predictor (FASP), 82
Finite spectrum assignment, 46
FIR completion operator, 60
FIR truncation operator, 60
Left characteristic matrix equation, 54
Leibniz integral rule, 38
Loop shifting
H2 performance, 77
internal stability, 59
Lumped-delay approximation, 96
Lyapunov–Krasovskii functional, 38
Lyapunov function, 36
quadratic, 36
Matrix
Hurwitz, 149
Modified Smith predictor (MSP), 45
Nehari extension problem, 83
Norm
H2 (system norm), 68
H∞ (system norm), 154
L2 (signal norm), 151
`2 (signal norm), 153
Observer-predictor, 47, 53
Pade approximant, 13
multipoint, 93
Posicast control, 127
Problem
standard, 67
Quasi-polynomial, 9, 21
advanced, 21
neutral, 21
retarded, 21
Repetitive control, 142
Robust Stability Theorem, 110
Roots of quasi-polynomials
asymptotic properties
neutral chains, 23
retarded chains, 23
migration
reversal, 27
switch, 27
stability analysis
bilinear (Rekasius) transformation, 33
delay sweeping (direct method), 27
Roth’s removal rule, 149
Schur complement, 147
Smith controller
predictor, 43
primary controller, 43
Stability
delay-independent, 103
internal, 59
L2, 151
`2, 153
Lyapunov, 152
State vector, 152
discrete, 153
System
causal, 151, 153
convolution representation, 152, 153
i/o stable, 151, 153
Lyapunov stable, 152
asymptotically, 152
shift invariant, 153
state vector, 152
discrete, 153
time invariant, 152
transfer function, 154
Transfer function, 154
Transform
z (one-sided), 155
continuous-time Fourier, 154
Laplace (one-sided), 154
Wave equation, 7
Well posedness of feedback, 58