Model-based Predictive Control -
project
MbPC
S.l. dr. ing. Constantin Florin Caruntu
www.ac.tuiasi.ro/~caruntuc
State-space model predictive control
Outline
5. Introduction in SS-MPC
6. State-space MPC
7. Stability analysis
8. Robustness analysis
13.01.2014
15:23
State-space model predictive control
Lecture 5 – Introduction in SS-MPC
• The sample period is T, the sample number is k, and time is t = kT.
• s(t) → continuous-time signal
• s(kT) → discrete-time signal, written s(k)
5. Introduction in SS-MPC – 5.1. State-space models
Digital control
• A linear continuous-time (CT) state-space system:
  \dot{x} = A_c x + B_c u
  y = C_c x + D_c u
  z = H_c x
• x ∈ ℝ^n → state vector
• u ∈ ℝ^m → input vector (manipulated variables)
• y ∈ ℝ^p → output vector (measured variables)
• z ∈ ℝ^q → controlled variables
• In many cases H = C, so that y = z
5. Introduction in SS-MPC – 5.1. State-space models
Continuous-time
• With a zero-order hold (ZOH) at the output of the controller:
  u(t) = u(kT) for the interval kT ≤ t < (k+1)T
• The discrete-time (DT) state-space model
  x(kT + T) = A x(kT) + B u(kT)
  y(kT) = C x(kT) + D u(kT)
  z(kT) = H x(kT)
  is an exact representation of the sampled CT system if
  A = e^{A_c T},   B = \int_0^T e^{A_c \tau} d\tau \, B_c
5. Introduction in SS-MPC – 5.1. State-space models
Discrete-time
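As a quick numerical check of the ZOH formulas above, the exact discretization can be computed with scipy. The continuous-time system below (a double integrator with T = 1) is an illustrative assumption, not one of the lecture's numerical examples.

```python
import numpy as np
from scipy.signal import cont2discrete

# CT double integrator: x1' = x2, x2' = u (illustrative system)
Ac = np.array([[0.0, 1.0],
               [0.0, 0.0]])
Bc = np.array([[0.0],
               [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))
T = 1.0  # sample period

# Exact ZOH discretization: A = e^(Ac*T), B = (int_0^T e^(Ac*tau) dtau) Bc
A, B, *_ = cont2discrete((Ac, Bc, C, D), T, method='zoh')

print(A)  # [[1. 1.], [0. 1.]]
print(B)  # [[0.5], [1. ]]
```

For this system the matrix exponential can be evaluated by hand (A_c is nilpotent), which confirms the printed values.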
• With a ZOH at the output of the controller, an exact representation of the CT system can be obtained if:
  - the CT system is linear, or
  - the CT system is linear with input saturation
• It is generally not possible to get an exact DT representation of a nonlinear CT system:
  \dot{x} = f(x, u)
  y = h(x, u)
• For a nonlinear model, a DT approximation or numerical integration can be used to find x(kT)
5. Introduction in SS-MPC – 5.1. State-space models
Discrete-time
• Definition (stabilizability): the matrix pair (A, B) is stabilizable if there exists a matrix K such that (A + BK) is stable.
• Definition (detectability): the matrix pair (C, A) is detectable if there exists a matrix L such that (A + LC) is stable.
Testing the properties
• Let Λ be the set of eigenvalues on or outside the unit circle:
  Λ := { λ_i(A) : |λ_i(A)| ≥ 1 }
• Proposition (stabilizability): the pair (A, B) is stabilizable if and only if
  [ (A - λI)  B ]
  has full row rank for all λ ∈ Λ.
• Proposition (detectability): the pair (C, A) is detectable if and only if
  \begin{bmatrix} A - λI \\ C \end{bmatrix}
  has full column rank for all λ ∈ Λ.
5. Introduction in SS-MPC – 5.1. State-space models
Stabilizability and detectability
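The two rank propositions (a PBH-type test) can be sketched directly in numpy. The example pair at the bottom is an assumption chosen for illustration.

```python
import numpy as np

def is_stabilizable(A, B, tol=1e-9):
    """PBH-type test: [A - lambda*I, B] must have full row rank for
    every eigenvalue of A on or outside the unit circle."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if abs(lam) >= 1 - tol:
            M = np.hstack([A - lam * np.eye(n), B])
            if np.linalg.matrix_rank(M, tol) < n:
                return False
    return True

def is_detectable(C, A, tol=1e-9):
    """By duality, (C, A) is detectable iff (A^T, C^T) is stabilizable."""
    return is_stabilizable(A.T, C.T, tol)

# Illustrative pair (assumed): one unstable mode, reachable through B.
A = np.array([[2.0, 0.0], [0.0, 0.5]])
B = np.array([[1.0], [0.0]])
print(is_stabilizable(A, B))                  # True
print(is_stabilizable(A, np.zeros((2, 1))))   # False: cannot move lambda = 2
```

Only eigenvalues with |λ| ≥ 1 are checked, exactly as in the propositions: stable modes need not be controllable or observable.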
• Advantages:
  - Flexible process modeling
    · multivariable
    · linear or nonlinear
    · deterministic, stochastic or fuzzy
  - Incorporates constraints on states and inputs
    · physical, safety, environmental, economical
  - Optimal closed-loop performance
    · horizon-dependent, cost-function optimization
• Disadvantages:
  - Requires online optimization
    · nonlinear/uncertain processes → computational power
5. Introduction in SS-MPC – 5.2. State-space MPC strategy
Motivation
• All physical systems have constraints:
  - Physical constraints (actuator limits)
  - Performance constraints (overshoot)
  - Safety constraints (temperature/pressure limits)
• Optimal operating points are often near constraints
• Most control methods address constraints a posteriori:
  - Anti-windup methods, "trial and error"
5. Introduction in SS-MPC – 5.2. State-space MPC strategy
Constraints
• Classical control:
  - Does not take constraints into account
  - Reference far from constraints
  - Suboptimal operation
• Predictive control:
  - Takes constraints into account from the design phase
  - Reference close to constraints
  - Improved operation
5. Introduction in SS-MPC – 5.2. State-space MPC strategy
Optimal operation and constraints
5. Introduction in SS-MPC – 5.2. State-space MPC strategy
Receding horizon principle
[Four figure-only slides illustrating the receding horizon principle; the figures are not recoverable from the text.]
• Predictive control: the receding horizon principle
• At each sampling period, the predictive controller:
  1) Takes a measurement of the system state/output
  2) Computes a finite-horizon control sequence that:
     a) uses an internal model to predict system behaviour
     b) minimizes some cost function
     c) does not violate any constraint
  3) Implements the first part of the optimal sequence
• ⇒ This is a feedback control law
5. Introduction in SS-MPC – 5.2. State-space MPC strategy
Summary
• Is it a new idea?
  - NO – standard optimal control with a finite horizon
  - YES – online optimization
• Main problems:
  - The optimization has to be fast enough
  - Satisfying constraints over an infinite horizon
  - The resulting control law may not be stabilizing
• Advantages:
  - Systematic method of handling constraints
  - Flexible performance specifications
5. Introduction in SS-MPC – 5.2. State-space MPC strategy
Properties
State-space model predictive control
Lecture 6 – State-space MPC
• DT system:
  x(k+1) = A x(k) + B u(k)
  y(k) = C x(k)
  z(k) = H x(k)
• Assumptions:
  - (A, B) is stabilizable and (C, A) is detectable
  - C = I ⇒ state feedback
  - H = C ⇒ all outputs/states are controlled variables
    · the goal is to regulate the states around the origin
  - No delays, disturbances, model errors or noise
6. State-space MPC – 6.1. Unconstrained predictive control
Assumptions
• Given the DT system model
  x(k+1) = A x(k) + B u(k)
  the problem is to design a state-feedback control law u(k) = K x(k) such that the origin of the closed-loop system
  x(k+1) = (A + BK) x(k)
  is globally asymptotically stable ⇒ requires that (A + BK) is stable
6. State-space MPC – 6.1. Unconstrained predictive control
State-space control
• Problem: given an initial state x(0) at time k = 0, compute and implement an input sequence
  { u(0), u(1), ... }
  that minimizes the infinite-horizon cost function
  \sum_{k=0}^{\infty} \left( x(k)^T Q x(k) + u(k)^T R u(k) \right)
  - The state weight Q ⪰ 0 penalizes non-zero states
  - The input weight R ≻ 0 penalizes non-zero inputs
  - Generally Q and R are diagonal and positive definite
6. State-space MPC – 6.1. Unconstrained predictive control
LQR
• The infinite-horizon LQR problem has an infinite number of decision variables { u(0), u(1), ... }
• A simple, closed-form solution exists if:
  - Q is positive semidefinite (Q ⪰ 0)
  - R is positive definite (R ≻ 0)
  - the pair (Q^{1/2}, A) is detectable
• A finite-horizon version of the LQR problem, with the same assumptions as above, will be solved for use in MPC
6. State-space MPC – 6.1. Unconstrained predictive control
LQR
1. Obtain a measurement of the current state x.
2. Compute the optimal finite-horizon input sequence U^*(x) = { u_0^*(x), u_1^*(x), ..., u_{N-1}^*(x) }.
3. Implement the first part of the optimal input sequence: u(k) := u_0^*(x).
4. Return to step 1.
6. State-space MPC – 6.1. Unconstrained predictive control
Receding horizon principle
• Problem: given an initial state x = x(k), compute a finite-horizon input sequence
  { u_0, u_1, ..., u_{N-1} }
  that minimizes the finite-horizon cost function
  V(x, u_0, ..., u_{N-1}) = x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
  where
  x_0 = x,   x_{i+1} = A x_i + B u_i,   i = 0, 1, ..., N-1
• V(·) is a function of the initial state x and the first N inputs u_i, not of the time index k or the predicted states x_i
6. State-space MPC – 6.1. Unconstrained predictive control
Finite horizon optimal control
• Terminology:
  - The vector x_i is the prediction of x(k+i) given the current state x(k) and the inputs u(k+i) = u_i for all i = 0, 1, ..., N-1
  - N ∈ ℕ is the control horizon
  - The matrix P ∈ ℝ^{n×n} is the terminal weight, with P ⪰ 0
• The stability and performance of a receding-horizon control law based on this problem are determined by the parameters Q, R, P and N
6. State-space MPC – 6.1. Unconstrained predictive control
Finite horizon optimal control
• Note that u_i ∈ ℝ^m and x_i ∈ ℝ^n, and that x := x_0 = x(k) is known.
• Define the stacked vectors U ∈ ℝ^{Nm} and X ∈ ℝ^{Nn} as
  U := \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix},   X := \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}
• Stacked output and controlled-variable vectors Y ∈ ℝ^{Np} and Z ∈ ℝ^{Nq} can be defined in a similar way.
6. State-space MPC – 6.1. Unconstrained predictive control
Notations
• The cost function is defined as
  V(x, U) = x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
• The value function is defined as
  V^*(x) = \min_U V(x, U)
• The optimal input sequence is defined as
  U^*(x) := \arg\min_U V(x, U) = { u_0^*(x), u_1^*(x), ..., u_{N-1}^*(x) }
6. State-space MPC – 6.1. Unconstrained predictive control
Notations
• Compute prediction matrices Φ and Γ such that X = Φ x + Γ U
• Rewrite the cost function V(·) in terms of x and U
• Compute the gradient ∇_U V(x, U)
• Set ∇_U V(x, U) = 0 and solve for U^*(x)
• The MPC control law is the first part of the optimal input sequence:
  u_0^*(x) = [ I_m  0  ⋯  0 ] U^*(x)
• When there are no constraints, this can be done analytically.
6. State-space MPC – 6.1. Unconstrained predictive control
Control law design
• Want to find matrices Φ and Γ such that X = Φ x + Γ U, starting from
  x_1 = A x_0 + B u_0
  x_2 = A x_1 + B u_1
  ⋮
6. State-space MPC – 6.1. Unconstrained predictive control
Construction of prediction matrices
• Want to find matrices Φ and Γ such that X = Φ x + Γ U:
  x_1 = A x_0 + B u_0
  x_2 = A (A x_0 + B u_0) + B u_1 = A^2 x_0 + A B u_0 + B u_1
  ⋮
  x_N = A^N x_0 + A^{N-1} B u_0 + ⋯ + A B u_{N-2} + B u_{N-1}
6. State-space MPC – 6.1. Unconstrained predictive control
Construction of prediction matrices
• Collect terms to get the matrix form
  \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix} =
  \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix} x_0 +
  \begin{bmatrix} B & 0 & \cdots & 0 \\ AB & B & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A^{N-1}B & A^{N-2}B & \cdots & B \end{bmatrix}
  \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix}
• Recalling that x := x_0, the prediction matrices Φ and Γ are:
  Φ := \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix},
  Γ := \begin{bmatrix} B & 0 & \cdots & 0 \\ AB & B & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A^{N-1}B & A^{N-2}B & \cdots & B \end{bmatrix}
6. State-space MPC – 6.1. Unconstrained predictive control
Construction of prediction matrices
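The construction of Φ and Γ can be sketched in a few lines of numpy. The system below (a double integrator) and the horizon are illustrative assumptions; the check at the end verifies X = Φ x + Γ U against a direct simulation of the recursion x_{i+1} = A x_i + B u_i.

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build Phi (Nn x n) and Gamma (Nn x Nm) such that X = Phi x0 + Gamma U."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):          # block row i corresponds to x_{i+1}
        for j in range(i + 1):  # contribution of u_j is A^(i-j) B
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma

# Illustrative system (assumed): a double integrator.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
N = 3
Phi, Gamma = prediction_matrices(A, B, N)

# Cross-check against a direct simulation of x_{i+1} = A x_i + B u_i.
x0 = np.array([1.0, -1.0])
U = np.array([0.5, -0.2, 0.1])
X = Phi @ x0 + Gamma @ U
x = x0.copy()
for i in range(N):
    x = A @ x + B.flatten() * U[i]
    assert np.allclose(X[i*2:(i+1)*2], x)
```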
• Recall that the cost function is
  V(x, U) := x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
• Recalling x := x_0, it can be rewritten in matrix form as
  V(x, U) = x^T Q x + X^T Ω X + U^T Ψ U
  where
  Ω := \mathrm{blkdiag}(Q, ..., Q, P),   Ψ := \mathrm{blkdiag}(R, ..., R)
• Note that:
  P ⪰ 0 and Q ⪰ 0 ⇒ Ω ⪰ 0,   R ≻ 0 ⇒ Ψ ≻ 0
6. State-space MPC – 6.1. Unconstrained predictive control
Construction of cost function
• Recall that
  V(x, U) = x^T Q x + X^T Ω X + U^T Ψ U,   X = Φ x + Γ U
  Then
  V(x, U) = x^T Q x + (Φx + ΓU)^T Ω (Φx + ΓU) + U^T Ψ U
          = x^T Q x + x^T Φ^T Ω Φ x + U^T Γ^T Ω Γ U + U^T Ψ U + x^T Φ^T Ω Γ U + U^T Γ^T Ω Φ x
          = x^T (Q + Φ^T Ω Φ) x + U^T (Ψ + Γ^T Ω Γ) U + 2 U^T Γ^T Ω Φ x
6. State-space MPC – 6.1. Unconstrained predictive control
Construction of cost function
• The cost function can thus be written as
  V(x, U) = ½ U^T G U + U^T F x + x^T (Q + Φ^T Ω Φ) x
  where
  G := 2 (Ψ + Γ^T Ω Γ) ≻ 0  (since Ψ ≻ 0 and Ω ⪰ 0),   F := 2 Γ^T Ω Φ
• Important: this is a convex quadratic function of U.
  - The unique global minimum occurs at the point where
    ∇_U V(x, U) = G U + F x = 0
  - The optimal input sequence is therefore
    U^*(x) = -G^{-1} F x
6. State-space MPC – 6.1. Unconstrained predictive control
Finding the solution
• The optimal input sequence is U^*(x) = -G^{-1} F x
• The MPC control law is defined by the first part of U^*(x):
  u_0^*(x) = [ I_m  0  ⋯  0 ] U^*(x)
• Define
  K_{MPC} := -[ I_m  0  ⋯  0 ] G^{-1} F
  such that u = K_{MPC} x.
  - This is a time-invariant linear control law
  - It approximates the optimal infinite-horizon control law
6. State-space MPC – 6.1. Unconstrained predictive control
Predictive control law
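Putting the pieces together, the unconstrained MPC gain K_MPC = -[I 0 ... 0] G^{-1} F can be computed numerically. The system, horizon and weights below are illustrative assumptions; the final print checks that the resulting closed loop is stable.

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative system and tuning (assumed): double integrator, N = 10.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
n, m = B.shape
N = 10
Q = np.eye(n); R = np.array([[1.0]]); P = np.eye(n)  # terminal weight P

# Prediction matrices: X = Phi x + Gamma U
Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
Gamma = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

# Cost: V = 1/2 U'GU + U'Fx + const, Omega = blkdiag(Q,...,Q,P), Psi = blkdiag(R,...,R)
Omega = block_diag(*([Q] * (N - 1) + [P]))
Psi = block_diag(*([R] * N))
G = 2 * (Psi + Gamma.T @ Omega @ Gamma)
F = 2 * Gamma.T @ Omega @ Phi

# K_MPC = -[I 0 ... 0] G^{-1} F  (first block row of the optimal sequence)
K_mpc = -np.hstack([np.eye(m), np.zeros((m, (N - 1) * m))]) @ np.linalg.solve(G, F)
print(max(abs(np.linalg.eigvals(A + B @ K_mpc))))  # closed-loop spectral radius
```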
• The solution to the infinite-horizon LQR problem is
  u(k) = K_{LQR} x(k)
  where
  K_{LQR} = -(B^T P B + R)^{-1} B^T P A
  and P is the solution to the Algebraic Riccati Equation
  P = A^T P A - A^T P B (B^T P B + R)^{-1} B^T P A + Q
• If the terminal weight P in the finite-horizon cost function V(·) is a solution of the Algebraic Riccati Equation above, then:
  K_{MPC} = K_{LQR}
6. State-space MPC – 6.1. Unconstrained predictive control
Equivalence between LQR and MPC
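The Riccati solution and K_LQR can be obtained directly with scipy; using that P as the terminal weight is what makes the finite-horizon MPC gain coincide with K_LQR. The system and weights below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative system and weights (assumed): double integrator, Q = I, R = 1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)
K_lqr = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)

# P satisfies the discrete Algebraic Riccati Equation ...
res = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A) + Q - P
print(np.max(np.abs(res)))  # residual, numerically ~0
# ... and the closed loop A + B K_lqr is stable:
print(max(abs(np.linalg.eigvals(A + B @ K_lqr))))
```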
• In practice, system variables are always constrained by:
  - Physical limitations
    · input constraints (actuator limits), typically active during transients
    · state constraints (reservoir capacities), can be active during transients or in steady state
  - Safety considerations (critical temperatures/pressures)
  - Performance specifications (limiting overshoot)
6. State-space MPC – 6.2. Constrained predictive control
Variable constraints
• Example output constraints:
  1 ≤ y_1 ≤ 4,   1.5 ≤ y_2 ≤ 4,   1.5 ≤ y_3 ≤ 3,   2 ≤ y_4 ≤ 3,   2 ≤ y_5 ≤ 3
• Classify constraints as hard or soft:
  - Hard constraints must be satisfied at all times, otherwise the problem is infeasible
  - Soft constraints can be violated to avoid infeasibility
6. State-space MPC – 6.2. Constrained predictive control
Variable constraints
• A common system nonlinearity is input saturation:
  x(k+1) = A x(k) + B \, \mathrm{sat}(u(k))   (nonlinear)
  y(k) = C x(k)
• It is easily transformed into a constraint on a linear system. The i-th component (i ∈ {1, ..., m}) of sat(u) is:
  \mathrm{sat}(u)_i = \underline{u}_i   if u_i < \underline{u}_i
                    = u_i               if \underline{u}_i ≤ u_i ≤ \overline{u}_i
                    = \overline{u}_i    if u_i > \overline{u}_i
6. State-space MPC – 6.2. Constrained predictive control
Systems with saturated input
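The componentwise saturation above is exactly numpy's clip operation; the bounds in the example are assumptions for illustration.

```python
import numpy as np

def sat(u, u_min, u_max):
    """Componentwise saturation sat(u)_i as defined above."""
    return np.clip(u, u_min, u_max)

u_min = np.array([-1.0, -2.0])   # illustrative lower bounds
u_max = np.array([1.0, 2.0])     # illustrative upper bounds
print(sat(np.array([0.5, -3.0]), u_min, u_max))  # [ 0.5 -2. ]
```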
• Sub-optimal methods of handling saturation:
  - Saturate the unconstrained control law
    (the constraints are ignored in the design phase of the controller)
  - Re-design an unconstrained control law
    (increase the weighting factor for u in the performance criterion for optimal control)
  - Anti-windup strategies
    (dynamically limit the controller state → I component)
6. State-space MPC – 6.2. Constrained predictive control
Constraint handling
• Problem: given an initial state x(0) at time k = 0, compute and implement an input sequence
  { u(0), u(1), ... }
  that minimizes the infinite-horizon cost function
  \sum_{k=0}^{\infty} \left( x(k)^T Q x(k) + u(k)^T R u(k) \right)
  while guaranteeing that the constraints are satisfied for all time.
  - it is usually impossible to solve this problem exactly
  - predictive control provides an approximate solution
  - the resulting MPC laws with constraints are nonlinear
6. State-space MPC – 6.2. Constrained predictive control
LQR problem with constraints
• Problem: given an initial state x = x(k), compute a finite-horizon input sequence
  { u_0, u_1, ..., u_{N-1} }
  that minimizes the finite-horizon cost function
  x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
  where
  x_0 = x,   x_{i+1} = A x_i + B u_i,   i = 0, 1, ..., N-1
  while guaranteeing that all constraints are satisfied over the prediction horizon i = 0, 1, ..., N
6. State-space MPC – 6.2. Constrained predictive control
Optimal control with finite horizon and constraints
1. Obtain a measurement of the current output/state.
2. Compute the optimal finite-horizon input sequence subject to constraints.
3. Implement the first part of the optimal input sequence.
4. Return to step 1.
6. State-space MPC – 6.2. Constrained predictive control
Receding horizon principle
• Recall that the sequence of predicted states X was solved in terms of the stacked inputs U:
  \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix} =
  \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix} x_0 +
  \begin{bmatrix} B & 0 & \cdots & 0 \\ AB & B & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A^{N-1}B & A^{N-2}B & \cdots & B \end{bmatrix}
  \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix}
  or, defining x := x_0,
  X = Φ x + Γ U.
• The matrices Φ and Γ are the prediction matrices.
6. State-space MPC – 6.2. Constrained predictive control
Prediction matrices
• Now incorporate a set of linear inequality constraints on the predicted states x_i and inputs u_i:
  M_i x_i + E_i u_i ≤ b_i,   for all i = 0, 1, ..., N-1
  M_N x_N ≤ b_N
• Many constraints take this form:
  - M_i = 0 ⇒ input constraints only
  - E_i = 0 ⇒ state constraints only
  - constraints on outputs or controlled variables can also be included
• For simplicity, assume that E_i = E, M_i = M and b_i = b for all i = 0, 1, ..., N-1
6. State-space MPC – 6.2. Constrained predictive control
Incorporating constraints
• Suppose that there are the following input and output constraints:
  u_{low} ≤ u_i ≤ u_{high},   i = 0, 1, ..., N-1
  y_{low} ≤ y_i ≤ y_{high},   i = 0, 1, ..., N
• Recalling that y_i = C x_i, this is equivalent to:
  \begin{bmatrix} 0 \\ 0 \\ -C \\ C \end{bmatrix} x_i +
  \begin{bmatrix} -I \\ I \\ 0 \\ 0 \end{bmatrix} u_i ≤
  \begin{bmatrix} -u_{low} \\ u_{high} \\ -y_{low} \\ y_{high} \end{bmatrix},   i = 0, 1, ..., N-1
• A similar expression holds for the terminal constraint (in terms of x_N only).
6. State-space MPC – 6.2. Constrained predictive control
Incorporating constraints
• From the previous example, the constraints can be written in the form
  M_i x_i + E_i u_i ≤ b_i,   for all i = 0, 1, ..., N-1
  M_N x_N ≤ b_N
  by defining, for i = 0, 1, ..., N-1:
  M_i := \begin{bmatrix} 0 \\ 0 \\ -C \\ C \end{bmatrix},
  E_i := \begin{bmatrix} -I \\ I \\ 0 \\ 0 \end{bmatrix},
  b_i := \begin{bmatrix} -u_{low} \\ u_{high} \\ -y_{low} \\ y_{high} \end{bmatrix}
  and
  M_N := \begin{bmatrix} -C \\ C \end{bmatrix},
  b_N := \begin{bmatrix} -y_{low} \\ y_{high} \end{bmatrix}
6. State-space MPC – 6.2. Constrained predictive control
Incorporating constraints
• Taking all of the constraints together (rows i = 0, ..., N-1, then the terminal row):
  \begin{bmatrix} M_0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} x_0 +
  \begin{bmatrix} 0 & 0 & \cdots & 0 \\ M_1 & 0 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & M_N \end{bmatrix} X +
  \begin{bmatrix} E_0 & 0 & \cdots & 0 \\ 0 & E_1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & E_{N-1} \\ 0 & 0 & \cdots & 0 \end{bmatrix} U ≤
  \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_N \end{bmatrix}
• Defining Δ, ϒ, ε and c appropriately (with x := x_0) yields:
  Δ x + ϒ X + ε U ≤ c
• Next, X will be eliminated by using the prediction matrices.
6. State-space MPC – 6.2. Constrained predictive control
Incorporating constraints
• Substitute X = Φ x + Γ U in
  Δ x + ϒ X + ε U ≤ c
  and collect terms. The constraints can be rewritten as:
  J U ≤ c + W x
  where
  J := ϒ Γ + ε,   W := -Δ - ϒ Φ
• The constraints are now expressed in terms of the input sequence U and the initial state x = x_0 = x(k).
6. State-space MPC – 6.2. Constrained predictive control
Incorporating constraints
• In summary, the basic procedure is:
  - Define linear inequalities in u_i, x_i, y_i and z_i
  - Write the constraints in the form:
    M_i x_i + E_i u_i ≤ b_i,   for all i = 0, 1, ..., N-1
    M_N x_N ≤ b_N
  - Stack the constraints to get them in the form:
    Δ x + ϒ X + ε U ≤ c
  - Substitute X = Φ x + Γ U and rearrange into the form:
    J U ≤ c + W x
6. State-space MPC – 6.2. Constrained predictive control
Incorporating constraints
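The stacking procedure above can be sketched as a single function. The example at the bottom (box input constraints only, no terminal constraint) is an assumption chosen so the result is easy to check by hand; with M = 0 the stacked system reduces to J = blkdiag(E, ..., E) and W = 0.

```python
import numpy as np
from scipy.linalg import block_diag

def stack_constraints(A, B, M, E, b, M_N, b_N, N):
    """Build J, W, c so that M x_i + E u_i <= b (i < N) and M_N x_N <= b_N
    read as J U <= c + W x0, with J = Upsilon Gamma + Eps, W = -Delta - Upsilon Phi."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    q = M.shape[0]
    # Delta x0 + Upsilon X + Eps U <= c  (row blocks i = 0..N-1, then terminal)
    Delta = np.vstack([M, np.zeros(((N - 1) * q + M_N.shape[0], n))])
    Upsilon = np.vstack([np.zeros((q, N * n)),
                         np.hstack([block_diag(*([M] * (N - 1))),
                                    np.zeros(((N - 1) * q, n))]),
                         np.hstack([np.zeros((M_N.shape[0], (N - 1) * n)), M_N])])
    Eps = np.vstack([block_diag(*([E] * N)), np.zeros((M_N.shape[0], N * m))])
    c = np.concatenate([np.tile(b, N), b_N])
    return Upsilon @ Gamma + Eps, -Delta - Upsilon @ Phi, c

# Box input constraint |u_i| <= 1 on a double integrator (illustrative):
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
M = np.zeros((2, 2)); E = np.array([[1.0], [-1.0]]); b = np.array([1.0, 1.0])
M_N = np.zeros((0, 2)); b_N = np.zeros(0)   # no terminal constraint here
J, W, c = stack_constraints(A, B, M, E, b, M_N, b_N, N=4)
print(J.shape, W.shape, c.shape)
```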
• Recall that the cost function
  V(x, U) := x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
  can be rewritten (with x := x_0) as
  V(x, U) = ½ U^T G U + U^T F x + x^T (Q + Φ^T Ω Φ) x
  for the matrices F, G and Ω defined previously.
• Remember that G ≻ 0 if P ⪰ 0, Q ⪰ 0 and R ≻ 0.
6. State-space MPC – 6.2. Constrained predictive control
Cost function
• Definition (quadratic program – QP): given the matrices Q and A and the vectors c and b, the optimization problem
  \min_θ  ½ θ^T Q θ + c^T θ
  subject to  A θ ≤ b
  is called a quadratic program.
• Proposition: if Q ≻ 0 in the above quadratic program, then:
  1. The optimization problem is strictly convex.
  2. A global minimum can always be found.
  3. The global minimum is unique.
6. State-space MPC – 6.2. Constrained predictive control
Quadratic programs (QP)
6. State-space MPC – 6.2. Constrained predictive control
Solution of quadratic programs
[Figure-only slide illustrating the solution of a QP; the figure is not recoverable from the text.]
• Many problems can be recast as quadratic programs.
• Example – nonlinear constraints:
  \min_θ ½ θ^T Q θ + c^T θ   subject to   \max\{ e^T θ, f^T θ \} ≤ b
  ⇒
  \min_θ ½ θ^T Q θ + c^T θ   subject to   e^T θ ≤ b,  f^T θ ≤ b
• Example – nonlinear cost function:
  \min_θ | c^T θ |   subject to   A θ ≤ b
  ⇒
  \min_{θ, δ} δ   subject to   A θ ≤ b,  c^T θ - δ ≤ 0,  -c^T θ - δ ≤ 0
6. State-space MPC – 6.2. Constrained predictive control
Writing problems as QPs
• The constrained optimal control problem is
  \min_U  ½ U^T G U + U^T F x
  subject to  J U ≤ c + W x
• This is a quadratic program in its standard form, with the correspondence
  θ → U,   Q → G,   c → F x,   A → J,   b → c + W x
• The optimal solution is:
  1. A global minimum (when G ⪰ 0).
  2. Unique (when G ≻ 0).
6. State-space MPC – 6.2. Constrained predictive control
Constrained optimal control as QP
• Some QP parameters depend on the current state x:
  - Without constraints, U^*(x) is a linear function of x.
  - With constraints, U^*(x) is a nonlinear function of x.
• U^*(x) must be calculated by solving an online quadratic program for every x.
• In Matlab, the quadratic program can be solved using
  U = quadprog(G, F*x, J, c + W*x)
6. State-space MPC – 6.2. Constrained predictive control
Solution using QP
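A Python analogue of the `quadprog` call can be sketched with scipy's general-purpose solver (scipy has no dedicated QP routine, so this is a hedged substitute, not the Matlab function). The problem data below are illustrative assumptions: a diagonal G with a box constraint, so the answer is the unconstrained optimum -G^{-1}Fx clipped to the box.

```python
import numpy as np
from scipy.optimize import minimize

# minimize 1/2 U'GU + (Fx)'U  subject to  J U <= c + W x
# Toy data (assumed): G = 2I, box constraint |U_i| <= 0.3.
G = 2 * np.eye(2)                        # G > 0 => strictly convex QP
F = np.array([[1.0], [0.0]])
J = np.vstack([np.eye(2), -np.eye(2)])   # encodes |U_i| <= 0.3
c = 0.3 * np.ones(4)
W = np.zeros((4, 1))
x = np.array([1.0])

def cost(U):
    return 0.5 * U @ G @ U + (F @ x) @ U

# scipy expects inequality constraints in the form g(U) >= 0
cons = {'type': 'ineq', 'fun': lambda U: c + W @ x - J @ U}
res = minimize(cost, x0=np.zeros(2), constraints=[cons])
print(res.x)  # unconstrained optimum -G^{-1}Fx = [-0.5, 0], clipped to [-0.3, 0]
```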
• The MPC law is the first part of the optimal input sequence:
  K_{MPC}(x) := u_0^*(x) = [ I_m  0  ⋯  0 ] U^*(x)
• Since U^*(x) is no longer linear in x, K_{MPC} : ℝ^n → ℝ^m is a nonlinear control law.
• The dynamics of the closed-loop system are nonlinear:
  x(k+1) = A x(k) + B K_{MPC}(x(k))
6. State-space MPC – 6.2. Constrained predictive control
MPC implementation
• Example: double integrator
  x(k+1) = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} x(k) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(k)
  Constraints: |u| ≤ 1,  ‖x‖_∞ ≤ 12
  Prediction horizon: N = 12
  Quadratic cost function: Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},  R = 1,  P = 1
• Solution: an explicit (piecewise affine) controller with 57 regions; in each region i,
  u_0^*(x) = v_i + K_i x
6. State-space MPC – 6.2. Constrained predictive control
Solution complexity
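A closed-loop receding-horizon simulation in the spirit of this example can be sketched as follows. It keeps the slides' horizon and weights (N = 12, Q = I, R = 1) but makes several assumptions: the terminal weight is taken as P = I, the state constraint is omitted (only |u| ≤ 1 is enforced), the initial state is arbitrary, and the QP is solved with a generic bound-constrained solver rather than `quadprog` or an explicit-MPC lookup.

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.optimize import minimize

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator
B = np.array([[0.0], [1.0]])
n, m, N = 2, 1, 12
Q, R, P = np.eye(2), np.array([[1.0]]), np.eye(2)   # P = I is an assumption

# Condensed QP data: V = 1/2 U'GU + U'Fx + const
Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
Gamma = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
Omega = block_diag(*([Q] * (N - 1) + [P]))
Psi = block_diag(*([R] * N))
G = 2 * (Psi + Gamma.T @ Omega @ Gamma)
F = 2 * Gamma.T @ Omega @ Phi

x = np.array([3.0, 0.0])                 # assumed initial state
bounds = [(-1.0, 1.0)] * N               # input constraint |u_i| <= 1
for k in range(30):
    res = minimize(lambda U: 0.5 * U @ G @ U + (F @ x) @ U,
                   np.zeros(N), bounds=bounds)
    u = res.x[0]                         # implement only the first input
    x = A @ x + B.flatten() * u
print(x)                                 # state regulated close to the origin
```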
State-space model predictive control
Lecture 7 – Stability analysis
• Problem: given the initial state x = x(k), compute a finite-horizon input sequence
  { u_0, u_1, ..., u_{N-1} }
  that minimizes the finite-horizon cost function
  V(x, U) := x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
  where
  x_0 = x,   x_{i+1} = A x_i + B u_i,   i = 0, 1, ..., N-1
• Assume that Q, R and P are positive definite.
7. Stability analysis – 7.1. Introduction
Optimal control with finite horizon and constraints
• The value function is defined as
  V^*(x) = \min_U V(x, U)
• The optimal input sequence is defined as
  U^*(x) := \arg\min_U V(x, U) = { u_0^*(x), u_1^*(x), ..., u_{N-1}^*(x) }
• Define the predicted state trajectory { x_0^*(x), x_1^*(x), ..., x_N^*(x) }, where:
  x_0^*(x) = x,   x_{i+1}^*(x) = A x_i^*(x) + B u_i^*(x),   i = 0, ..., N-1
• The first part of U^*(x) defines the MPC law K_{MPC}(x) = u_0^*(x)
7. Stability analysis – 7.1. Introduction
Notations
• Consider a discrete-time system
  x(k+1) = f(x(k))
  with f : ℝ^n → ℝ^n continuous and f(0) = 0.
• Definition (Lyapunov function): a continuous function V : S → ℝ, defined on a region S ⊂ ℝ^n containing the origin in its interior, is called a Lyapunov function if
  1. V(0) = 0
  2. V(x) > 0, ∀ x ∈ S with x ≠ 0
  3. V(f(x)) - V(x) ≤ 0, ∀ x ∈ S
7. Stability analysis – 7.2. Lyapunov functions
…for discrete-time systems
• Proposition (asymptotic stability): if there exists a Lyapunov function such that
  V(f(x)) - V(x) < 0, ∀ x ∈ S with x ≠ 0
  and
  V(x) → ∞ as ‖x‖ → ∞
  then the origin is an asymptotically stable equilibrium point of
  x(k+1) = f(x(k))
  with region of attraction S.
7. Stability analysis – 7.2. Lyapunov functions
…for discrete-time systems
• Consider the linear DT system
  x(k+1) = A x(k)
  and the candidate Lyapunov function V(x) = x^T P x:
  - V(0) = 0
  - if P ≻ 0, then V(x) > 0 for x ≠ 0 and V(x) → ∞ as ‖x‖ → ∞
  - V(Ax) - V(x) = (Ax)^T P (Ax) - x^T P x = x^T (A^T P A - P) x
7. Stability analysis – 7.2. Lyapunov functions
Lyapunov stability for discrete-time linear systems
• Problem: to ensure asymptotic stability of the system
  x(k+1) = A x(k)
  choose P ≻ 0 such that A^T P A - P ≺ 0
• Choosing such a P is possible if, for some Z ≻ 0, the discrete Lyapunov equation
  A^T P A - P = -Z
  has a positive definite solution P ≻ 0
• The system is stable if and only if a solution P ≻ 0 can be found for any Z ≻ 0
7. Stability analysis – 7.2. Lyapunov functions
Lyapunov stability for discrete-time linear systems
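The discrete Lyapunov equation can be solved with scipy; the stable A and the choice Z = I below are illustrative assumptions. Note the transpose: scipy solves A X Aᵀ - X = -Q, so passing Aᵀ yields the form used on the slide.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Schur-stable example matrix (assumed) and Z = I.
A = np.array([[0.5, 0.2], [0.0, 0.8]])
Z = np.eye(2)

# scipy solves a X a' - X + q = 0, so pass a = A.T to get A'PA - P = -Z.
P = solve_discrete_lyapunov(A.T, Z)

print(np.max(np.abs(A.T @ P @ A - P + Z)))  # residual, numerically ~0
print(np.linalg.eigvalsh(P))                # all positive: P is positive definite
```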
• Consider a discrete-time system
  x(k+1) = f(x(k), u(k))
  with f : ℝ^n × ℝ^m → ℝ^n continuous and f(0, 0) = 0.
• Definition (control Lyapunov function): a continuous function V : S → ℝ, defined on a region S ⊂ ℝ^n containing the origin in its interior, is called a control Lyapunov function if
  1. V(0) = 0
  2. V(x) > 0, ∀ x ∈ S with x ≠ 0
  3. there exists a continuous control law u = K(x) such that
     V(f(x, K(x))) - V(x) ≤ 0, ∀ x ∈ S
7. Stability analysis – 7.2. Lyapunov functions
Control Lyapunov functions
• Proposition (asymptotic stability): if there exists a control Lyapunov function and a continuous control law u = K(x) such that
  V(f(x, K(x))) - V(x) < 0, ∀ x ∈ S with x ≠ 0
  and
  V(x) → ∞ as ‖x‖ → ∞
  then the origin is an asymptotically stable equilibrium point of
  x(k+1) = f(x(k), K(x(k)))
  with region of attraction S.
7. Stability analysis – 7.2. Lyapunov functions
Control Lyapunov functions
• Consider the DT linear system
  x(k+1) = A x(k) + B u(k)
  the candidate control Lyapunov function V(x) = x^T P x and the control law u = K x:
  - V(0) = 0
  - if P ≻ 0, then V(x) > 0 for x ≠ 0 and V(x) → ∞ as ‖x‖ → ∞
  - V((A + BK)x) - V(x) = ((A + BK)x)^T P ((A + BK)x) - x^T P x
                        = x^T \left( (A + BK)^T P (A + BK) - P \right) x
7. Stability analysis – 7.2. Lyapunov functions
Lyapunov stability for discrete-time linear systems
• Problem: to ensure asymptotic stability of the system
  x(k+1) = A x(k) + B u(k)
  choose P ≻ 0 such that (A + BK)^T P (A + BK) - P ≺ 0
• Choosing such a P is possible if, for some Z ≻ 0, the discrete Lyapunov equation
  (A + BK)^T P (A + BK) - P = -Z
  has a positive definite solution P ≻ 0
• (A + BK) is stable if and only if a solution P ≻ 0 can be found for any Z ≻ 0
7. Stability analysis – 7.2. Lyapunov functions
Lyapunov stability for discrete-time linear systems
• Theorem (MPC stability): the origin is an asymptotically stable equilibrium point of the closed-loop system
  x(k+1) = A x(k) + B u_0^*(x(k))
  if the following conditions are satisfied:
  1) Q and R are positive definite.
  2) The terminal cost P ≻ 0 is chosen such that
     (A + BK)^T P (A + BK) - P ⪯ -(Q + K^T R K)
     where K is any matrix with ρ(A + BK) < 1.
7. Stability analysis – 7.3. MPC stability
• For the closed-loop MPC system
  x(k+1) = A x(k) + B u_0^*(x(k))
  the value function V^*(·) will be used as a Lyapunov function.
• Since Q and R are positive definite, it follows that:
  1) V^*(0) = 0
  2) V^*(x) ≥ x^T Q x > 0, ∀ x ≠ 0
  3) V^*(x) → ∞ as ‖x‖ → ∞
• It remains to ensure that
  V^*(x(k+1)) - V^*(x(k)) < 0, ∀ x ≠ 0
7. Stability analysis – 7.3. MPC stability
Proof
• Consider the optimal input sequence
  U^*(x) = { u_0^*(x), u_1^*(x), ..., u_{N-1}^*(x) }
  of which the first element, u_0^*(x), is applied.
• At the next time step, consider the shifted input sequence
  Ũ(x) = { u_1^*(x), u_2^*(x), ..., u_{N-1}^*(x), K x_N^*(x) }
  (the old tail plus a new final input), where K is such that ρ(A + BK) < 1.
7. Stability analysis – 7.3. MPC stability
Proof
• Define the stage cost as l(x, u) := x^T Q x + u^T R u.
• Define the terminal cost as V_f(x) := x^T P x.
• Recall x(k+1) = A x(k) + B u_0^*(x(k)).
• The cost associated with the shifted sequence is
  V(x(k+1), Ũ(x(k))) = V^*(x(k))                            (old optimal cost)
                      - l(x(k), u_0^*(x(k)))                 (old first stage cost)
                      - V_f(x_N^*(x(k)))                     (old terminal cost)
                      + l(x_N^*(x(k)), K x_N^*(x(k)))        (new (N-1)-th stage cost)
                      + V_f((A + BK) x_N^*(x(k)))            (new terminal cost)
7. Stability analysis – 7.3. MPC stability
Proof
• Problem: find conditions under which the sum of the last three terms is non-positive:
  -V_f(x_N^*) + l(x_N^*, K x_N^*) + V_f((A + BK) x_N^*) ≤ 0
• Choose the terminal cost V_f to be a control Lyapunov function with the terminal control law u = K x
• Remember that the stage cost l(x, u) > 0 for all (x, u) ≠ 0 when Q and R are positive definite
7. Stability analysis – 7.3. MPC stability
Proof
• For quadratic cost functions, it can be ensured that
  V_f(Ax + BKx) - V_f(x) ≤ -l(x, Kx) < 0, ∀ x ≠ 0
  by requiring that
  (A + BK)^T P (A + BK) - P ⪯ -(Q + K^T R K)
• Such a P ≻ 0 can be chosen if and only if ρ(A + BK) < 1
• Define Z := Q + K^T R K and solve the inequality as a discrete Lyapunov equation
• Note that Q ≻ 0 and R ≻ 0 ⇒ Z ≻ 0
7. Stability analysis – 7.3. MPC stability
Proof
• Conditions were established to ensure that:
  V(x(k+1), Ũ(x(k))) ≤ V^*(x(k)) - l(x(k), u_0^*(x(k)))
• Since the stage cost satisfies l(x, u) > 0 for (x, u) ≠ 0, it follows that
  V(x(k+1), Ũ(x(k))) < V^*(x(k)), ∀ x ≠ 0
• Therefore, since V^*(x(k+1)) ≤ V(x(k+1), Ũ(x(k))),
  V^*(x(k+1)) < V^*(x(k)), ∀ x ≠ 0
• This proves that the value function V^*(·) is a control Lyapunov function for the MPC control law u(k) = u_0^*(x(k))
7. Stability analysis – 7.3. MPC stability
Proof
• Problem: given the initial state x = x(k), compute a finite-horizon input sequence that minimizes the finite-horizon cost function
  x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
  where
  x_0 = x,   x_{i+1} = A x_i + B u_i,   i = 0, 1, ..., N-1
  subject to
  M x_i + E u_i ≤ b,   i = 0, 1, ..., N-1
7. Stability analysis – 7.4. MPC feasibility
Optimal control with finite horizon and constraints
• Definition (invariant set): the set S ⊂ ℝ^n is called an invariant set for the system
  x(k+1) = f(x(k))
  if x(0) ∈ S ⇒ x(k) ∈ S, ∀ k ≥ 0.
• Definition (constraint-admissible set): given a control law u = K(x), a set of states S ⊂ ℝ^n and a set of constraints Z ⊂ ℝ^n × ℝ^m, the set S is constraint admissible if
  (x, K(x)) ∈ Z, ∀ x ∈ S
• For the previously defined problem,
  Z := { (x, u) : M x + E u ≤ b }
7. Stability analysis – 7.4. MPC feasibility
Invariant sets for discrete-time systems
• Given K such that ρ(A + BK) < 1, a matrix M_N and a vector b_N can be chosen such that
  S := { x ∈ ℝ^n : M_N x ≤ b_N }
  is invariant for the closed-loop system
  x(k+1) = (A + BK) x(k),   u = K x
  and constraint admissible for the control law u = Kx and the constraint set Z.
• For every x ∈ S, it is required that:
  M_N (A + BK) x ≤ b_N    (invariance)
  (M + EK) x ≤ b          (constraint admissibility)
7. Stability analysis – 7.4. MPC feasibility
Invariant terminal constraints
• Problem: given the initial state x = x(k), compute a finite-horizon input sequence
  { u_0, u_1, ..., u_{N-1} }
  that minimizes the finite-horizon cost function
  V(x, U) := x_N^T P x_N + \sum_{i=0}^{N-1} \left( x_i^T Q x_i + u_i^T R u_i \right)
  where
  x_0 = x
  x_{i+1} = A x_i + B u_i,   i = 0, 1, ..., N-1
  M_i x_i + E_i u_i ≤ b_i,   i = 0, 1, ..., N-1
  M_N x_N ≤ b_N    (invariant and constraint-admissible terminal set)
7. Stability analysis – 7.4. MPC feasibility
Optimal control with finite horizon and constraints
7. Stability analysis 7.4. MPC feasibility
Proof

■ Recall the shifted input sequence Ũ(x):

    U*(x) = {u*_0(x), u*_1(x), ..., u*_{N−2}(x), u*_{N−1}(x)}     (u*_0 is applied)
    Ũ(x)  = {u*_1(x), u*_2(x), ..., u*_{N−1}(x), K x*_N}          (K x*_N is the new tail)

■ If U*(x) is feasible at time k, then the shifted sequence
  Ũ(x) is feasible at time k+1 if u(k) = u*_0(x) is implemented.

■ Note that, since the terminal set is invariant,

    M_N x*_N ≤ b_N  ⇒  M_N (A x*_N + B K x*_N) = M_N (A + BK) x*_N ≤ b_N
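The shift-and-append argument can be exercised numerically. A sketch with a hypothetical system and a simple input bound: take a feasible sequence at time k, apply its first input, shift, append K x*_N, and confirm the result is still feasible:

```python
import numpy as np

# Hypothetical system with a stabilizing terminal gain K and |u| <= 1.
A = np.array([[1.2, 0.1], [0.0, 0.4]])
B = np.array([[1.0], [0.0]])
K = np.array([[-0.9, 0.0]])       # rho(A + BK) < 1 for these matrices
N = 4
u_max = 1.0

def rollout(x0, U):
    """Simulate the states visited under the input sequence U."""
    xs = [x0]
    for u in U:
        xs.append(A @ xs[-1] + B @ u)
    return xs

# A feasible input sequence at time k (chosen by hand, not optimized).
x = np.array([[0.3], [0.2]])
U = [np.array([[-0.3]])] + [np.array([[0.0]])] * (N - 1)
xs = rollout(x, U)
feasible_k = all(abs(u.item()) <= u_max for u in U)

# Shift: drop the applied u*_0, append the terminal law K x*_N.
U_shift = U[1:] + [K @ xs[-1]]
x_next = xs[1]                    # successor state after applying u*_0
feasible_shift = all(abs(u.item()) <= u_max for u in U_shift)
```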
7. Stability analysis 7.4. MPC feasibility

■ Theorem (MPC feasibility): Consider the system

    x(k+1) = A x(k) + B u(k)

  with state and input constraints M x + E u ≤ b.
  If the terminal constraint M_N x_N ≤ b_N is chosen to be a
  constraint admissible invariant set for the closed loop
  system

    x(k+1) = (A + BK) x(k)

  for some K, and the constrained finite horizon control
  problem is feasible at time k = 0, then it is feasible for all
  k ≥ 1 if the control input is given by the MPC law
  u(k) = u*_0(x(k)).
7. Stability analysis 7.5. Summary

■ In order to guarantee infinite horizon constraint satisfaction
  and stability:
  • Choose Q and R positive definite.
  • Compute a stabilizing terminal control law u = Kx.

■ Stability
  • Compute a positive definite P such that the terminal cost
    V_f = x^T P x is a control Lyapunov function with u = Kx.

■ Feasibility
  • Compute M_N and b_N such that the terminal constraint
    M_N x_N ≤ b_N is constraint admissible and invariant for the
    closed loop system x(k+1) = (A + BK) x(k).
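The "Stability" step above can be sketched numerically: given a stabilizing K, a suitable P solves the discrete Lyapunov equation (A+BK)ᵀP(A+BK) − P = −(Q + KᵀRK). A minimal numpy version using the Kronecker/vec identity (all matrices are a hypothetical example):

```python
import numpy as np

# Hypothetical data; K places the eigenvalues of A + BK at 0.9 and 0.5.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K = np.array([[-5.0, -6.0]])
Ak = A + B @ K

# Solve (A+BK)' P (A+BK) - P = -(Q + K'RK) via vec(P):
# (I - Ak'(x)Ak') vec(P) = vec(Q + K'RK), where (x) is the Kronecker product.
S = Q + K.T @ R @ K
n = A.shape[0]
vecP = np.linalg.solve(np.eye(n * n) - np.kron(Ak.T, Ak.T), S.reshape(-1))
P = vecP.reshape(n, n)

# V_f(x) = x'Px then decreases by exactly x'(Q + K'RK)x along x+ = Ak x,
# i.e. V_f is a control Lyapunov function with u = Kx.
x = np.array([[0.7], [-0.2]])
dV = (x.T @ (Ak.T @ P @ Ak - P) @ x).item()
```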
State-space
model predictive control
Lecture 8 – Robustness analysis
8. Robustness analysis 8.1. Reference tracking

■ So far, only the problem of regulating the states and
  inputs around the origin was considered.
■ In practice, some nonzero, time-varying setpoint or
  reference signal has to be followed:
  • Aircraft landing by autopilot.
  • Robot arm with pre-specified trajectory.
  • Radar tracking problems.
■ In what follows, the tracking of piecewise constant
  reference signals will be considered.
8. Robustness analysis 8.1. Reference tracking
Discrete-time linear systems

■ A linear discrete time state-space system

    x(k+1) = A x(k) + B u(k)
    y(k)   = C x(k) + D u(k)
    z(k)   = H x(k)

■ x ∈ R^n → state vector
■ u ∈ R^m → input vector (manipulated variables)
■ y ∈ R^p → output vector (measured variables)
■ z ∈ R^q → controlled variables
8. Robustness analysis 8.1. Reference tracking
General problem

■ Control the system so that z(k) → r(k) as k → ∞ if the reference
  signal tends to a constant value (assume future values of r are
  known).
■ At each time, given the current reference value r(k):
  • the Target Calculator computes the target input u∞(k) and the
    target state x∞(k)
  • the Regulator controls the system around the target (x∞(k), u∞(k))
8. Robustness analysis 8.1. Reference tracking
General problem

■ Given a linear state feedback control gain K (either from
  unconstrained MPC or LQR) with ρ(A + BK) < 1, implement the
  control law:

    u(k) = u∞(k) + K (x̂(k|k) − x∞(k))

■ The closed loop system is stable if u∞(k) and x∞(k) are constant.
■ We want to design the target calculator such that z(k) → r(k)
  as k → ∞.
8. Robustness analysis 8.1. Reference tracking
Choice of controlled variables

■ All of the outputs are not always controlled variables
  (i.e., H ≠ C so that z(k) ≠ y(k)).
■ It is generally not possible to control all of the outputs
  y(k) or states x(k) to an arbitrary setpoint.
■ Example: it is not possible to maintain a car in a
  fixed position while also maintaining a nonzero
  velocity.
■ Generally, the controlled variables z(k) are a linear
  combination of the states or outputs.
■ Problem: Which choices of H are allowed?
8. Robustness analysis 8.1. Reference tracking
Target equilibrium pairs

○ Definition (Target equilibrium pair): Given a reference
  input r for a linear discrete time system, the pair
  (x∞(k), u∞(k)) is called an (offset free) target equilibrium
  pair if

    x∞ = A x∞ + B u∞
    H x∞ = r

■ Rearranging the above gives

    [ I−A  −B ] [ x∞ ]   [ 0 ]
    [  H    0 ] [ u∞ ] = [ r ]

■ Depending on the values of H and r, a target equilibrium pair
  may or may not exist.
8. Robustness analysis 8.1. Reference tracking
Target equilibrium pairs existence

○ Proposition: A sufficient condition for guaranteeing the
  existence of a target equilibrium pair for any reference r is
  that

    [ I−A  −B ]
    [  H    0 ]

  is full row rank.
■ The above condition implies that:
  • H has to be full row rank
  • the number of inputs must be greater than or equal to the
    number of controlled variables (m ≥ q)
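The rank condition is easy to test numerically. A sketch using the car example from the previous slide, modeled as a hypothetical discrete-time double integrator (position and velocity states, acceleration input):

```python
import numpy as np

# Hypothetical discrete-time double integrator (sample period 0.1 s).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # states: position, velocity
B = np.array([[0.005], [0.1]])           # input: acceleration
n, m = 2, 1

def target_pair_exists_for_all_r(H):
    """Check that [I-A  -B; H  0] is full row rank (n + q rows)."""
    q = H.shape[0]
    M = np.block([[np.eye(n) - A, -B], [H, np.zeros((q, m))]])
    return np.linalg.matrix_rank(M) == n + q

# Controlling position alone is fine (m = q = 1).
ok_position = target_pair_exists_for_all_r(np.array([[1.0, 0.0]]))
# Controlling position AND velocity gives q = 2 > m = 1: the condition
# fails - a car cannot hold a fixed position at nonzero speed.
ok_both = target_pair_exists_for_all_r(np.eye(2))
```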
8. Robustness analysis 8.1. Reference tracking
Target equilibrium pairs computation – without constraints

■ At each time instant, the target calculator solves the
  following set of linear equalities

    [ I−A  −B ] [ x∞ ]   [ 0 ]
    [  H    0 ] [ u∞ ] = [ r ]

■ Note that if a solution exists, it may not be unique.
■ If (x∞(k), u∞(k)) are calculated as above, then it is possible
  to show that z(k) → r(k) as k → ∞ if:
  • the sequence r(·) converges to a constant value
  • the controller is u(k) = u∞(k) + K (x̂(k|k) − x∞(k))
  • the matrix A + BK and the observer are stable
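A minimal closed loop sketch of this scheme, with hypothetical system matrices and full state measurement (so x̂(k|k) = x(k)):

```python
import numpy as np

# Hypothetical double integrator; control the first state (position).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
H = np.array([[1.0, 0.0]])
K = np.array([[-5.0, -6.0]])    # rho(A + BK) < 1 for these matrices
n, m, q = 2, 1, 1

r = 2.0
# Target calculator: solve [I-A  -B; H  0][x_inf; u_inf] = [0; r].
M = np.block([[np.eye(n) - A, -B], [H, np.zeros((q, m))]])
sol = np.linalg.solve(M, np.concatenate([np.zeros(n), [r]]))
x_inf = sol[:n].reshape(n, 1)
u_inf = sol[n:].reshape(m, 1)

# Regulator around the target (state measured directly here).
x = np.zeros((n, 1))
for _ in range(300):
    u = u_inf + K @ (x - x_inf)
    x = A @ x + B @ u
z = (H @ x).item()   # converges to r
```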
8. Robustness analysis 8.2. Disturbance rejection
General problem

○ Problem: Control the system so that z(k) → r(k) as k → ∞ if the
  reference signal r(·) and the disturbance d(·) tend to constant
  values.
■ Future values of r(·) and d(·) are assumed unknown.
■ The observer estimates the current state x̂(k|k) and the current
  disturbance d̂(k|k).
8. Robustness analysis 8.2. Disturbance rejection
Constant disturbance model

■ Consider a linear model with a constant disturbance acting
  on the states and outputs

    x(k+1) = A x(k) + B u(k) + B_d d(k)
    d(k+1) = d(k)
    y(k)   = C x(k) + C_d d(k)
    z(k)   = H x(k) + H_d d(k)

■ Disturbance inputs d(k) ∈ R^l, with B_d ∈ R^{n×l}, C_d ∈ R^{p×l},
  H_d ∈ R^{q×l}.
■ Estimate d(k) by forming an augmented system for the
  states x(k) and disturbances d(k).
8. Robustness analysis 8.2. Disturbance rejection
Extended model with disturbances

■ Augment the state of the system with the disturbance

    [ x(k+1) ]   [ A  B_d ] [ x(k) ]   [ B ]
    [ d(k+1) ] = [ 0   I  ] [ d(k) ] + [ 0 ] u(k)

    y(k) = [ C  C_d ] [ x(k) ]
                      [ d(k) ]

    z(k) = [ H  H_d ] [ x(k) ]
                      [ d(k) ]

■ An observer will be used to provide state x̂(k|k) and
  disturbance d̂(k|k) estimates.
8. Robustness analysis 8.2. Disturbance rejection
Estimating states and disturbances

■ The augmented system is not guaranteed to be detectable
  for every B_d and C_d.
○ Proposition (Detectability): The augmented system is
  detectable if and only if (C, A) is detectable and the matrix

    [ I−A  −B_d ]
    [  C    C_d ]

  is full column rank.
■ The above implies that the number of disturbances has to be
  smaller than or equal to the number of outputs (for
  detectability).
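Both requirements can be checked numerically. A sketch with a hypothetical model in which the disturbance enters through the input channel:

```python
import numpy as np

# Hypothetical model: the disturbance enters through the input channel.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
Bd = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Cd = np.zeros((1, 1))            # no direct disturbance feedthrough to y
n, l = 2, 1

# (C, A) observable (hence detectable): observability matrix has rank n.
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
detectable_CA = np.linalg.matrix_rank(O) == n

# Column-rank condition for the augmented state+disturbance system.
M = np.block([[np.eye(n) - A, -Bd], [C, Cd]])
aug_ok = np.linalg.matrix_rank(M) == n + l
```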
8. Robustness analysis 8.2. Disturbance rejection
Target equilibrium pairs computation – with disturbances

■ At each time instant, the target calculator solves the following
  set of linear equalities

    x∞ = A x∞ + B u∞ + B_d d̂
    r = H x∞ + H_d d̂

  or, in matrix form

    [ I−A  −B ] [ x∞ ]   [ B_d d̂     ]
    [  H    0 ] [ u∞ ] = [ r − H_d d̂ ]

■ As in the undisturbed case, this is solvable given an
  appropriate rank condition.
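A numpy sketch of the disturbed target calculator (hypothetical matrices; note the coefficient matrix is the same as in the undisturbed case, only the right-hand side changes):

```python
import numpy as np

# Hypothetical matrices (illustrative only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Bd = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]])
Hd = np.zeros((1, 1))
n, m, q = 2, 1, 1

r = np.array([1.5])
d_hat = np.array([[0.3]])        # current disturbance estimate

# Same coefficient matrix as the undisturbed target problem.
M = np.block([[np.eye(n) - A, -B], [H, np.zeros((q, m))]])
rhs = np.concatenate([(Bd @ d_hat).ravel(), r - (Hd @ d_hat).ravel()])
sol = np.linalg.solve(M, rhs)
x_inf = sol[:n].reshape(n, 1)
u_inf = sol[n:].reshape(m, 1)

# Verify the defining equalities of the target equilibrium pair.
resid_dyn = x_inf - (A @ x_inf + B @ u_inf + Bd @ d_hat)
resid_ref = H @ x_inf + Hd @ d_hat - r.reshape(q, 1)
```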
8. Robustness analysis 8.2. Disturbance rejection
Results

■ Let the following conditions hold:
  • The sequences r(·) and d(·) tend to constant values.
  • The rank conditions are satisfied.
  • The state/disturbance observer is stable.
  • The control input is given by

      u(k) = u∞(k) + K (x̂(k|k) − x∞(k)),

    where (x∞(k), u∞(k)) are chosen as on the previous slide.
  • The matrix K is such that A + BK is stable.
  • The number of disturbances equals the number of outputs.
■ If all of the above hold, then z(k) → r(k) as k → ∞.
8. Robustness analysis 8.2. Disturbance rejection
Target equilibrium pairs computation – with constraints

■ Suppose there are constraints on the states and inputs

    u_low ≤ u_i ≤ u_high,    i = 0, 1, ..., N−1
    y_low ≤ y_i ≤ y_high,    i = 1, ..., N

■ As before, assume that:
  • The rank conditions are satisfied.
  • The state/disturbance observer is stable.
  • The number of disturbances equals the number of outputs.
■ Additional condition: r(k) and d̂(k|k) are such that target
  equilibrium pairs satisfying the above constraints always
  exist.
8. Robustness analysis 8.2. Disturbance rejection
Target equilibrium pairs computation – with constraints

■ At each time instant, the target calculator is given the
  current reference r and the current disturbance estimate d̂.
■ The target calculator computes the target equilibrium pair
  (x∞, u∞) by solving the following quadratic program:

    min_{x∞, u∞} (1/2) (u∞ − ū)^T (u∞ − ū)

  subject to:

    [ I−A  −B ] [ x∞ ]   [ B_d d̂     ]
    [  H    0 ] [ u∞ ] = [ r − H_d d̂ ]

    u_low ≤ u∞ ≤ u_high
    y_low ≤ C x∞ + C_d d̂ ≤ y_high

■ The ideal steady state for the inputs is ū.
8. Robustness analysis 8.2. Disturbance rejection
Predictive control with disturbance rejection

■ Problem: Given x∞, u∞, x̂, and d̂, compute an optimal finite
  horizon sequence of inputs {u_0, u_1, ..., u_{N−1}} that
  minimizes

    Σ_{i=0}^{N−1} [ (x_i − x∞)^T Q (x_i − x∞) + (u_i − u∞)^T R (u_i − u∞) ]
      + (x_N − x∞)^T P (x_N − x∞)

  subject to

    x_0 = x̂
    x_{i+1} = A x_i + B u_i + B_d d̂,    i = 0, 1, ..., N−1
    u_low ≤ u_i ≤ u_high,               i = 0, 1, ..., N−1
    y_low ≤ C x_i + C_d d̂ ≤ y_high,     i = 0, 1, ..., N

■ This can be written as a quadratic program.
8. Robustness analysis 8.2. Disturbance rejection
Predictive control with disturbance rejection

■ At each time instant k:
  • The observer takes the measurement y(k) and computes the
    estimates x̂(k|k) and d̂(k|k).
  • The target calculator computes x∞(k) and u∞(k).
  • The regulator computes an input by solving the finite horizon
    problem as a quadratic program and implements the first input
    of the sequence, u(k) = u*_0(x(k)).
  • If the quadratic programs are always feasible and the system is
    stable, then z(k) → r(k) as k → ∞.
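The whole loop can be sketched end to end on a hypothetical scalar plant. The regulator below is the simple unconstrained law u = u∞ + K(x̂ − x∞) instead of a full QP, which is enough to show the offset-free behavior z(k) → r(k); the observer and controller gains are hand-picked assumptions, not values from the lecture.

```python
import numpy as np

# Hypothetical scalar plant x+ = 0.9 x + u + d, y = z = x, with an
# unknown constant disturbance d.
a, b, bd, c = 0.9, 1.0, 1.0, 1.0
K = -0.5                          # a + b*K = 0.4 (stable regulator)
L = np.array([0.5, 0.2])          # observer gain for the [x; d] estimates

r, d_true = 2.0, 0.3
x, x_hat, d_hat = 0.0, 0.0, 0.0
for _ in range(300):
    y = c * x
    # Target calculator: (1-a) x_inf - b u_inf = bd d_hat, c x_inf = r.
    x_inf = r / c
    u_inf = ((1 - a) * x_inf - bd * d_hat) / b
    # Regulator around the target, using the estimated state.
    u = u_inf + K * (x_hat - x_inf)
    # Plant update (true disturbance) and predictor-form observer update.
    innov = y - c * x_hat
    x = a * x + b * u + bd * d_true
    x_hat = a * x_hat + b * u + bd * d_hat + L[0] * innov
    d_hat = d_hat + L[1] * innov

z = c * x   # controlled variable; converges to r despite the disturbance
```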