
Linear-quadratic-Gaussian control
From Wikipedia, the free encyclopedia

In control theory, the linear-quadratic-Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns uncertain linear systems disturbed by additive white Gaussian noise, having incomplete state information (i.e. not all the state variables are measured and available for feedback) and undergoing control subject to quadratic costs. Moreover, the solution is unique and constitutes a linear dynamic feedback control law that is easily computed and implemented. Finally, the LQG controller is also fundamental to the optimal control of perturbed non-linear systems.[1]

The LQG controller is simply the combination of a Kalman filter, i.e. a linear-quadratic estimator (LQE), with a linear-quadratic regulator (LQR). The separation principle guarantees that these can be designed and computed independently. LQG control applies to both linear time-invariant systems and linear time-varying systems. The application to linear time-invariant systems is well known. The application to linear time-varying systems enables the design of linear feedback controllers for non-linear uncertain systems.

The LQG controller itself is a dynamic system, like the system it controls. Both systems have the same state dimension. Therefore, implementing the LQG controller may be problematic if the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing a priori the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable. Also, the solution is no longer unique. Despite these facts, numerical algorithms are available[2][3][4][5] to solve the associated optimal projection equations,[6][7] which constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller.[2]

Finally, a word of caution. LQG optimality does not automatically ensure good robustness properties.[8] The robust stability of the closed-loop system must be checked separately after the LQG controller has been designed. To promote robustness, some of the system parameters may be assumed stochastic instead of deterministic. The associated, more difficult control problem leads to a similar optimal controller, of which only the controller parameters are different.[3]

Contents

1 Mathematical description of the problem and solution
  1.1 Continuous time
  1.2 Discrete time
2 See also
3 References

Mathematical description of the problem and solution

Continuous time

Consider the linear dynamic system

$$\dot{x}(t) = A(t)x(t) + B(t)u(t) + v(t),$$

$$y(t) = C(t)x(t) + w(t),$$

where $x$ represents the vector of state variables of the system, $u$ the vector of control inputs and $y$ the vector of measured outputs available for feedback. Both additive white Gaussian system noise $v(t)$ and additive white Gaussian measurement noise $w(t)$ affect the system. Given this system, the objective is to find the control input history $u(t)$ which at every time $t$ may depend only on the past measurements $y(t'),\ 0 \le t' < t$, such that the following cost function is minimized:

$$J = \mathbb{E}\left[ x^{\mathrm{T}}(T) F x(T) + \int_0^T \left( x^{\mathrm{T}}(t) Q(t) x(t) + u^{\mathrm{T}}(t) R(t) u(t) \right) dt \right],$$

$$F \ge 0, \quad Q(t) \ge 0, \quad R(t) > 0,$$

where $\mathbb{E}$ denotes the expected value. The final time (horizon) $T$ may be either finite or infinite. If the horizon tends to infinity, the first term $x^{\mathrm{T}}(T) F x(T)$ of the cost function becomes negligible and irrelevant to the problem. Also, to keep the costs finite, the cost function has to be taken to be $J/T$.

The LQG controller that solves the LQG control problem is specified by the following equations:

$$\dot{\hat{x}}(t) = A(t)\hat{x}(t) + B(t)u(t) + K(t)\left( y(t) - C(t)\hat{x}(t) \right), \quad \hat{x}(0) = \mathbb{E}\left[ x(0) \right],$$

$$u(t) = -L(t)\hat{x}(t).$$

The matrix $K(t)$ is called the Kalman gain of the associated Kalman filter, represented by the first equation. At each time $t$ this filter generates estimates $\hat{x}(t)$ of the state $x(t)$ using the past measurements and inputs. The Kalman gain $K(t)$ is computed from the matrices $A(t), C(t)$, the two intensity matrices $V(t), W(t)$ associated to the white Gaussian noises $v(t)$ and $w(t)$, and finally $\mathbb{E}\left[ x(0)x^{\mathrm{T}}(0) \right]$. These five matrices determine the Kalman gain through the following associated matrix Riccati differential equation:

$$\dot{P}(t) = A(t)P(t) + P(t)A^{\mathrm{T}}(t) - P(t)C^{\mathrm{T}}(t)W^{-1}(t)C(t)P(t) + V(t),$$

$$P(0) = \mathbb{E}\left[ x(0)x^{\mathrm{T}}(0) \right].$$

Given the solution $P(t),\ 0 \le t \le T$, the Kalman gain equals

$$K(t) = P(t)C^{\mathrm{T}}(t)W^{-1}(t).$$

The matrix $L(t)$ is called the feedback gain matrix. It is determined by the matrices $Q(t), R(t), A(t), B(t)$ and $F$ through the following associated matrix Riccati differential equation:

$$-\dot{S}(t) = A^{\mathrm{T}}(t)S(t) + S(t)A(t) - S(t)B(t)R^{-1}(t)B^{\mathrm{T}}(t)S(t) + Q(t),$$

$$S(T) = F.$$

Given the solution $S(t),\ 0 \le t \le T$, the feedback gain equals

$$L(t) = R^{-1}(t)B^{\mathrm{T}}(t)S(t).$$

Observe the similarity of the two matrix Riccati differential equations, the first one running forward in time, the second one running backward in time. This similarity is called duality. The first matrix Riccati differential equation solves the linear-quadratic estimation (LQE) problem. The second matrix Riccati differential equation solves the linear-quadratic regulator (LQR) problem. These problems are dual, and together they solve the linear-quadratic-Gaussian (LQG) control problem. So the LQG problem separates into the LQE and LQR problems, which can be solved independently. Therefore, the LQG problem is called separable.
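
As a concrete illustration, the following minimal sketch integrates the filter Riccati differential equation forward in time and recovers the Kalman gain from its solution; it assumes SciPy, constant system matrices, and small hypothetical values not taken from the article.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical constant system data (for illustration only)
A = np.array([[0.0, 1.0], [-2.0, -1.0]])  # system matrix A
C = np.array([[1.0, 0.0]])                # measurement matrix C
V = 0.1 * np.eye(2)                       # process-noise intensity V
W = np.array([[0.01]])                    # measurement-noise intensity W
P0 = np.eye(2)                            # P(0) = E[x(0) x(0)^T]

def riccati_rhs(t, p_flat):
    # dP/dt = A P + P A^T - P C^T W^{-1} C P + V
    P = p_flat.reshape(2, 2)
    dP = A @ P + P @ A.T - P @ C.T @ np.linalg.solve(W, C @ P) + V
    return dP.ravel()

sol = solve_ivp(riccati_rhs, (0.0, 5.0), P0.ravel())
P_T = sol.y[:, -1].reshape(2, 2)          # P at the final time
K_T = P_T @ C.T @ np.linalg.inv(W)        # Kalman gain K = P C^T W^{-1}

The backward LQR equation can be integrated the same way by reversing time, which is one way to see the duality in practice.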

When $A(t) = A$, $B(t) = B$, $C(t) = C$, $Q(t) = Q$, $R(t) = R$ and the noise intensity matrices $V(t) = V$, $W(t) = W$ do not depend on $t$, and when $T$ tends to infinity, the LQG controller becomes a time-invariant dynamic system. In that case, both matrix Riccati differential equations may be replaced by the two associated algebraic Riccati equations.
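
A minimal sketch of this time-invariant case, assuming SciPy's solve_continuous_are and the same hypothetical matrices as above, computes both gains directly from the algebraic Riccati equations:

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)                  # state weighting Q
R = np.array([[1.0]])          # control weighting R
V = 0.1 * np.eye(2)            # process-noise intensity V
W = np.array([[0.01]])         # measurement-noise intensity W

# LQR: A^T S + S A - S B R^{-1} B^T S + Q = 0, then L = R^{-1} B^T S
S = solve_continuous_are(A, B, Q, R)
L = np.linalg.solve(R, B.T @ S)

# LQE (the dual problem): A P + P A^T - P C^T W^{-1} C P + V = 0,
# then K = P C^T W^{-1}
P = solve_continuous_are(A.T, C.T, V, W)
K = P @ C.T @ np.linalg.inv(W)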

Discrete time

Since the discrete-time LQG control problem is similar to the one in continuous time, the description below focuses on the mathematical equations.

The discrete-time linear system equations are

$$x_{i+1} = A_i x_i + B_i u_i + v_i,$$

$$y_i = C_i x_i + w_i.$$

Here $i$ represents the discrete time index and $v_i, w_i$ represent discrete-time Gaussian white noise processes with covariance matrices $V_i, W_i$, respectively.

The quadratic cost function to be minimized is

$$J = \mathbb{E}\left[ x_N^{\mathrm{T}} F x_N + \sum_{i=0}^{N-1} \left( x_i^{\mathrm{T}} Q_i x_i + u_i^{\mathrm{T}} R_i u_i \right) \right],$$

$$F \ge 0, \quad Q_i \ge 0, \quad R_i > 0.$$

The discrete-time LQG controller is

$$\hat{x}_{i+1} = A_i \hat{x}_i + B_i u_i + K_{i+1} \left( y_{i+1} - C_{i+1} \left( A_i \hat{x}_i + B_i u_i \right) \right), \quad \hat{x}_0 = \mathbb{E}\left[ x_0 \right],$$

$$u_i = -L_i \hat{x}_i.$$

The Kalman gain equals

$$K_i = P_i C_i^{\mathrm{T}} \left( C_i P_i C_i^{\mathrm{T}} + W_i \right)^{-1},$$

where $P_i$ is determined by the following matrix Riccati difference equation that runs forward in time:

$$P_{i+1} = A_i \left( P_i - P_i C_i^{\mathrm{T}} \left( C_i P_i C_i^{\mathrm{T}} + W_i \right)^{-1} C_i P_i \right) A_i^{\mathrm{T}} + V_i, \quad P_0 = \mathbb{E}\left[ x_0 x_0^{\mathrm{T}} \right].$$

The feedback gain matrix equals

$$L_i = \left( B_i^{\mathrm{T}} S_{i+1} B_i + R_i \right)^{-1} B_i^{\mathrm{T}} S_{i+1} A_i,$$

where $S_i$ is determined by the following matrix Riccati difference equation that runs backward in time:

$$S_i = A_i^{\mathrm{T}} \left( S_{i+1} - S_{i+1} B_i \left( B_i^{\mathrm{T}} S_{i+1} B_i + R_i \right)^{-1} B_i^{\mathrm{T}} S_{i+1} \right) A_i + Q_i, \quad S_N = F.$$

If all the matrices in the problem formulation are time-invariant and if the horizon $N$ tends to infinity, the discrete-time LQG controller becomes time-invariant. In that case the matrix Riccati difference equations may be replaced by their associated discrete-time algebraic Riccati equations. These determine the time-invariant linear-quadratic estimator and the time-invariant linear-quadratic regulator in discrete time. To keep the costs finite, one has to consider $J/N$ instead of $J$ in this case.
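
The two difference equations translate directly into code. The following minimal sketch runs the backward LQR recursion and the forward LQE recursion over a finite horizon, assuming NumPy and hypothetical time-invariant matrices for brevity (the recursions themselves allow $A_i$, $B_i$, $C_i$, $Q_i$, $R_i$, $V_i$, $W_i$ to vary with $i$):

import numpy as np

# Hypothetical time-invariant problem data (illustration only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2); R = np.array([[1.0]]); F = np.eye(2)
V = 0.01 * np.eye(2); W = np.array([[0.1]])
N = 50                                    # horizon

# Backward recursion for S_i and the feedback gains L_i, with S_N = F
S = F
L_gains = [None] * N
for i in reversed(range(N)):
    M = B.T @ S @ B + R
    L_gains[i] = np.linalg.solve(M, B.T @ S @ A)
    S = A.T @ (S - S @ B @ np.linalg.solve(M, B.T @ S)) @ A + Q

# Forward recursion for P_i and the Kalman gains K_i, with P_0 = E[x_0 x_0^T]
P = np.eye(2)
K_gains = []
for i in range(N):
    G = C @ P @ C.T + W
    K_gains.append(P @ C.T @ np.linalg.inv(G))
    P = A @ (P - P @ C.T @ np.linalg.solve(G, C @ P)) @ A.T + V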

H-infinity methods in control theory
From Wikipedia, the free encyclopedia

H∞ (i.e. "H-infinity") methods are used in control theory to synthesize controllers achieving stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves it. H∞ techniques have the advantage over classical control techniques in that they are readily applicable to problems involving multivariable systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers, such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well handled. These methods were introduced into control theory in the late 1970s and early 1980s by George Zames (sensitivity minimization),[1] J. William Helton (broadband matching),[2] and Allen Tannenbaum (gain margin optimization).[3]

The phrase H∞ control comes from the name of the mathematical space over which the optimization takes place: H∞ is the space of matrix-valued functions that are analytic and bounded in the open right-half of the complex plane defined by Re(s) > 0; the H∞ norm is the maximum singular value of the function over that space. (This can be interpreted as a maximum gain in any direction and at any frequency; for SISO systems, this is effectively the maximum magnitude of the frequency response.) H∞ techniques can be used to minimize the closed-loop impact of a perturbation: depending on the problem formulation, the impact will either be measured in terms of stabilization or performance.
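
For a SISO system, the norm is simply the peak magnitude of the frequency response, which suggests a straightforward numerical estimate by gridding frequencies. A minimal sketch, assuming SciPy and a hypothetical lightly damped second-order system:

import numpy as np
from scipy import signal

# Hypothetical example: G(s) = 1 / (s^2 + 0.2 s + 1), lightly damped
G = signal.TransferFunction([1.0], [1.0, 0.2, 1.0])
w = np.logspace(-2, 2, 10000)        # frequency grid in rad/s
_, resp = signal.freqresp(G, w)      # complex frequency response G(jw)
hinf_norm = np.max(np.abs(resp))     # sup over frequency of |G(jw)|
print(hinf_norm)                     # about 5.02 for this example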

Simultaneously optimizing robust performance and robust stabilization is difficult. One method that comes close to achieving this is H∞ loop-shaping, which allows the control designer to apply classical loop-shaping concepts to the multivariable frequency response to get good robust performance, and then optimizes the response near the system bandwidth to achieve good robust stabilization.

Commercial software is available to support H∞ controller synthesis.

Contents

1 Problem formulation
2 See also
3 References
4 Bibliography

Problem formulation

First, the process has to be represented according to the following standard configuration:

The plant P has two inputs: the exogenous input w, which includes reference signals and disturbances, and the manipulated variables u. There are two outputs: the error signals z, which we want to minimize, and the measured variables v, which we use to control the system. v is used in K to calculate the manipulated variable u. Notice that all of these are generally vectors, whereas P and K are matrices.

In formulae, the system is

$$\begin{bmatrix} z \\ v \end{bmatrix} = P(s) \begin{bmatrix} w \\ u \end{bmatrix} = \begin{bmatrix} P_{11}(s) & P_{12}(s) \\ P_{21}(s) & P_{22}(s) \end{bmatrix} \begin{bmatrix} w \\ u \end{bmatrix},$$

$$u = K(s)\, v.$$

It is therefore possible to express the dependency of z on w as

$$z = F_\ell(P, K)\, w.$$

Called the lower linear fractional transformation, $F_\ell$ is defined (the subscript comes from lower):

$$F_\ell(P, K) = P_{11} + P_{12} K \left( I - P_{22} K \right)^{-1} P_{21}.$$

Therefore, the objective of $\mathcal{H}_\infty$ control design is to find a controller $K$ such that $F_\ell(P, K)$ is minimised according to the $\mathcal{H}_\infty$ norm. The same definition applies to $\mathcal{H}_2$ control design. The infinity norm of the transfer function matrix $F_\ell(P, K)$ is defined as

$$\left\| F_\ell(P, K) \right\|_\infty = \sup_\omega \bar{\sigma}\left( F_\ell(P, K)(j\omega) \right),$$

where $\bar{\sigma}$ is the maximum singular value of the matrix $F_\ell(P, K)(j\omega)$.
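
For constant (static) blocks, the lower linear fractional transformation is a one-line computation. A minimal sketch, assuming NumPy and hypothetical partition blocks:

import numpy as np

def lower_lft(P11, P12, P21, P22, K):
    # F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

# Hypothetical scalar blocks for illustration
P11 = np.array([[0.5]]); P12 = np.array([[1.0]])
P21 = np.array([[1.0]]); P22 = np.array([[0.2]])
K = np.array([[-0.3]])
print(lower_lft(P11, P12, P21, P22, K))  # closed-loop map from w to z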

The achievable H∞ norm of the closed-loop system is mainly given through the matrix D11 (when the system P is given in the form (A, B1, B2, C1, C2, D11, D12, D22, D21)). There are several ways to arrive at an H∞ controller:

A Youla-Kucera parametrization of the closed loop often leads to a very high-order controller.
Riccati-based approaches solve two Riccati equations to find the controller, but require several simplifying assumptions (a hedged sketch of this route follows the list).
An optimization-based reformulation of the Riccati equation uses linear matrix inequalities and requires fewer assumptions.
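
As an illustration of the Riccati-based route, the following hedged sketch uses the hinfsyn routine from the python-control package (assumed installed together with its slycot backend); the generalized plant is a small hypothetical example, not one from the article:

import numpy as np
import control

# Hypothetical generalized plant P with inputs [w; u] and outputs [z; v]
A = np.array([[-1.0]])
B = np.array([[1.0, 1.0]])               # [B1 B2]
C = np.array([[1.0], [1.0]])             # [C1; C2]
D = np.array([[0.0, 1.0],                # [D11 D12]
              [1.0, 0.0]])               # [D21 D22]
P = control.ss(A, B, C, D)

# nmeas = number of measurements v, ncon = number of controls u
K, CL, gam, rcond = control.hinfsyn(P, 1, 1)
print(gam)  # achieved closed-loop H-infinity norm bound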