
Issues in closed-loop identification



Issues in closed-loop identification
Urban Forssell and Lennart Ljung
Department of Electrical Engineering, Linköping University
S-581 83 Linköping, SWEDEN
E-mail: [email protected], [email protected]
Report: LiTH-ISY-R-1940

Abstract

In this contribution we study the statistical properties of a number of closed-loop identification methods and parameterizations. The focus is on asymptotic variance expressions for these methods. By studying the asymptotic variance of the parameter vector estimates we show that indirect methods fail to give better accuracy than the direct method. Some new results on the bias distribution in closed-loop experiments are also presented. It is also shown how different methods correspond to different parameterizations and how direct and indirect identification can be linked together via the noise model.

1 Introduction

"Identification for control" has drawn significant interest in the past few years (see, e.g., [1, 6, 12]). The objective is to achieve a model that is suited for robust control design. Thus one has to tailor the experiment and the preprocessing of data so that the model is reliable in the regions where the design process does not tolerate significant uncertainties. The use of closed-loop experiments has been a prominent feature of these approaches. Other reasons for performing identification experiments under output feedback (i.e., in closed loop) may be that the plant is unstable, that it has to be controlled for production, economic or safety reasons, or that it contains inherent feedback mechanisms. The task in closed-loop identification is to obtain good models of the open-loop system despite the feedback.

Figure 1: The closed-loop system.

In many cases we will not need to know the feedback

mechanism, but for some of the analytic treatment we shall work with the linear output feedback setup depicted in Figure 1. The true system is

    y(t) = G_0(q)u(t) + v(t) = G_0(q)u(t) + H_0(q)e(t)    (1)

Here {e(t)} is white noise with variance λ_0. The regulator is

    u(t) = r_1(t) + F_y(q)(r_2(t) - y(t))

where r_1(t) may be regarded as an excitation signal and r_2(t) as a set-point signal. For our purposes here it will be sufficient to consider the reference signal

    r(t) = r_1(t) + F_y(q)r_2(t)

which will be done in the sequel. Thus the regulator simplifies to

    u(t) = r(t) - F_y(q)y(t)    (2)

The reference signal {r(t)} is assumed independent of the noise {e(t)}. We also assume that the regulator stabilizes the system and that either G_0(q) or F_y(q) contains a delay, so that the closed-loop system is well defined. The closed-loop system can be written

    y(t) = G_0(q)S_0(q)r(t) + S_0(q)H_0(q)e(t)    (3)

where S_0(q) is the sensitivity function,

    S_0(q) = 1/(1 + F_y(q)G_0(q))

With

    G_{cl,0}(q) = G_0(q)S_0(q),   H_{cl,0}(q) = S_0(q)H_0(q)

we can rewrite (3) as

    y(t) = G_{cl,0}(q)r(t) + v_cl(t) = G_{cl,0}(q)r(t) + H_{cl,0}(q)e(t)
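As a concrete illustration of the setup (1)-(3), the following is a minimal time-domain simulation sketch. The first-order plant G_0(q) = q^{-1}/(1 - 0.5q^{-1}), the proportional regulator F_y = 0.5 and the trivial noise model H_0 = 1 are illustrative assumptions, not taken from the paper.

```python
# Minimal simulation of the loop (1)-(3); plant, regulator and noise model
# are illustrative choices: G0(q) = q^-1 / (1 - 0.5 q^-1), Fy = 0.5, H0 = 1.
def simulate_closed_loop(r, e, a=0.5, b=1.0, fy=0.5):
    """y(t) = a*y(t-1) + b*u(t-1) + e(t), closed by u(t) = r(t) - fy*y(t)."""
    y, u = [], []
    y_prev = u_prev = 0.0
    for rt, et in zip(r, e):
        yt = a * y_prev + b * u_prev + et   # plant with a one-step delay
        ut = rt - fy * yt                   # regulator (2)
        y.append(yt); u.append(ut)
        y_prev, u_prev = yt, ut
    return y, u

# Noise-free step response: the DC gain of G0*S0 is (b/(1-a))/(1 + fy*b/(1-a)) = 1
y, u = simulate_closed_loop([1.0] * 200, [0.0] * 200)
```

With e = 0 the output settles at the closed-loop DC gain, consistent with (3).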

The input can be written as

    u(t) = S_0(q)r(t) - F_y(q)v_cl(t)    (4)

To reduce the notational burden we will from here on suppress the arguments q, e^{iω} and t whenever there is no risk of confusion. The spectrum of the input is

    Φ_u(ω) = |S_0|²Φ_r(ω) + |F_y|²|S_0|²Φ_v(ω)    (5)

where Φ_r(ω) is the spectrum of the reference signal and Φ_v(ω) = |H_0|²λ_0 the spectrum of the noise. We shall denote the two terms

    Φ_u^r(ω) = |S_0|²Φ_r(ω),   Φ_u^e(ω) = |F_y|²|S_0|²Φ_v(ω)

Consider the model set

    M = { M(θ) : y(t) = G(q,θ)u(t) + H(q,θ)e(t), θ ∈ D_M ⊂ R^d }

Here d = dim(θ). We say that the true system is contained in the model set if for some θ_0 ∈ D_M:

    G(q,θ_0) = G_0(q),   H(q,θ_0) = H_0(q)

This will also be written S ∈ M. The case when the noise model cannot be correctly described, but where there is a θ_0 ∈ D_M such that

    G(q,θ_0) = G_0(q)

will be denoted G_0 ∈ G.

2 Approaches to closed-loop identification

It is important to realize that a directly applied prediction error method, applied as if no feedback were present, will work well and give optimal accuracy if the true system can be described within the chosen model structure (i.e., if S ∈ M). Nevertheless, due to the pitfalls in closed-loop identification, several alternative methods have been suggested. One may distinguish between methods that

(a) assume no knowledge about the nature of the feedback mechanism, and do not use r even if known;
(b) assume the regulator and the signal r to be known (and typically of the linear form (2));
(c) assume the regulator to be unknown, but of a certain structure (like (2)).

If the regulator indeed has the form (2), there is no major difference between (a), (b) and (c): the noise-free relation (2) can be exactly determined from a fairly short data record, and then r carries no further information about the system, if u is measured.

The problem in industrial practice is rather that no regulator has this simple, linear form: various delimiters, anti-windup functions and other nonlinearities will make the input deviate from (2), even if the regulator parameters (e.g., PID coefficients) are known. This strongly favors the first approach.

In model-based control one can argue that it is important that the model explains the closed-loop behavior of the plant as well as possible; correct modeling of the open-loop system is less critical, at least in some frequency ranges. This would for instance be the case if the regulator was known to contain an integral action in itself. Then the modeling of the low-frequency behavior of the open-loop system would be less important in the subsequent control design. This motivates the two latter approaches.

The methods for closed-loop identification correspondingly fall into the following main groups (see [3]):

1. The Direct Approach: apply a prediction error method and identify the open-loop system using measurements of the input u and the output y.
2. The Indirect Approach: identify the closed-loop system using measurements of the reference signal r and the output y, and use this estimate to solve for the open-loop system parameters using the knowledge of the controller.
3. The Joint Input-Output Approach: identify the transfer functions from r to y and from r to u and use them to compute an estimate of the open-loop system.

In the following we will analyze several methods for closed-loop identification. In particular we will study several schemes for indirect and joint input-output identification.

2.1 Direct identification

In the direct approach one typically works with models of the form

    y(t) = G(q,θ)u(t) + H(q,θ)e(t)

The prediction errors for this model are given by

    ε(t,θ) = H(q,θ)^{-1}(y(t) - G(q,θ)u(t))

In general, the prediction error estimate is obtained as

    θ_N = arg min_θ (1/N) Σ_{t=1}^{N} (1/2)ε_F²(t,θ)

with ε_F = Lε, where L is some stable prefilter.
We will assume L ≡ 1, since the prefilter can be included in the noise model. The resulting estimates of the dynamics model and the noise model will be denoted G_N and H_N:

    G_N(q) = G(q,θ_N),   H_N(q) = H(q,θ_N)
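For a linear-in-parameters (ARX) model structure the criterion above reduces to least squares. The following sketch applies the direct method to simulated closed-loop data; the first-order plant, the proportional regulator and all numbers are illustrative assumptions. Since S ∈ M here (ARX structure, white noise), the estimate is consistent despite the feedback.

```python
import random

# Direct approach: fit the ARX model y(t) = a*y(t-1) + b*u(t-1) + e(t) by least
# squares, using (u, y) collected under the feedback u(t) = r(t) - fy*y(t).
random.seed(0)
a0, b0, fy = 0.5, 1.0, 0.5          # illustrative true system and regulator
N = 20000
y_prev = u_prev = 0.0
# running sums for the 2x2 normal equations, regressor phi(t) = [y(t-1), u(t-1)]
s11 = s12 = s22 = g1 = g2 = 0.0
for _ in range(N):
    r = random.choice([-1.0, 1.0])  # reference: random binary signal
    e = random.gauss(0.0, 0.1)      # white noise source
    yt = a0 * y_prev + b0 * u_prev + e
    s11 += y_prev * y_prev; s12 += y_prev * u_prev; s22 += u_prev * u_prev
    g1 += y_prev * yt;      g2 += u_prev * yt
    u_prev = r - fy * yt            # regulator closes the loop
    y_prev = yt
det = s11 * s22 - s12 * s12
a_hat = (s22 * g1 - s12 * g2) / det
b_hat = (s11 * g2 - s12 * g1) / det
```

The estimates approach (a0, b0) as N grows, illustrating that the direct method needs no knowledge of the feedback.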

The direct identification approach should be seen as the natural approach to closed-loop data analysis. The main reasons for this are:

- It works regardless of the complexity of the regulator, and requires no knowledge about the character of the feedback.
- No special algorithms or software are required.
- Consistency and optimal accuracy are obtained if the model structure contains the true system (including the noise properties).

There are two drawbacks with the direct approach. One is that we will need good noise models. In open-loop operation we can use output error models (and other models with fixed or independently parameterized noise models) to obtain consistent estimates of G (though not of optimal accuracy) even when the noise model H is insufficient; see Theorem 8.4 in [7]. For closed-loop data this no longer holds: an erroneous noise model will in general bias the G-estimate.

The second drawback is a consequence of this and appears when a simple model is sought that should approximate the system dynamics in a pre-specified frequency norm. In open loop we can do so with the output error method and a fixed prefilter/noise model that matches the specifications. For closed-loop data a prefilter/noise model that deviates from the true noise characteristics will introduce bias (cf. (22) below). The natural solution is to first build a higher-order model using the direct approach, with small bias, and then reduce this model to lower order with the proper frequency weighting.

Another case that shows the necessity of good noise models concerns unstable systems. For closed-loop data, the true system to be identified could very well be unstable, although the closed-loop system naturally is stable. The prediction error methods require the predictor to be stable. This means that any unstable poles of G must be shared by H, as in ARX, ARMAX and state-space models. Output error models cannot be used in this case.
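Returning to the first drawback, a small simulation makes it concrete. Below, a fixed white-noise model (an output-error-style fit) is applied to a plant with colored noise; all systems and numbers are hypothetical examples. In open loop the gain estimate stays consistent, while the same fit on closed-loop data is biased.

```python
import random

# First drawback illustrated: a fixed (white) noise model biases the direct
# G-estimate under feedback when the true noise is colored, but not in open
# loop.  Hypothetical system: y(t) = u(t-1) + v(t), v(t) = e(t) + 0.8 e(t-1).
random.seed(7)

def fit_gain(fy, N=40000):
    """LS fit of y(t) = b*u(t-1), i.e. an output-error fit with H = 1."""
    u_prev = e_prev = 0.0
    num = den = 0.0
    for _ in range(N):
        e = random.gauss(0.0, 0.5)
        v = e + 0.8 * e_prev                # colored noise, so H0 != 1
        y = u_prev + v                      # true plant G0 = q^-1 (b0 = 1)
        num += u_prev * y
        den += u_prev * u_prev
        u_prev = random.choice([-1.0, 1.0]) - fy * y   # feedback if fy != 0
        e_prev = e
    return num / den

b_open = fit_gain(0.0)      # open loop: consistent despite the bad noise model
b_closed = fit_gain(0.5)    # closed loop: biased away from b0 = 1
```

The bias appears because, under feedback, past noise reaches the regressor u(t-1) and correlates with the colored residual.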
Just as in the open-loop case, models with common parameters between G and H require a consistent noise model for the G-estimate to be consistent.

2.2 Indirect identification

If the regulator F_y is known and r is measurable, we can use the indirect identification approach. It consists of two steps:

1. Identify the closed-loop system from the reference signal r to the output y.
2. Determine the open-loop system parameters from the closed-loop model obtained in step 1, using the knowledge of the regulator.

An advantage of the indirect approach is that any identification method can be applied in the first step, since estimating the closed-loop system G_cl from measured y and r is an open-loop problem. Therefore

methods like spectral analysis, instrumental variables, and subspace methods, which may have problems with closed-loop data, can also be applied. One drawback with indirect identification, though, is that it is not clear, in general, how to perform the second step in an optimal way. In principle, we have to solve the equation

    G_cl = G/(1 + F_y G)    (6)

Typically, this gives an over-determined system of equations in the parameters θ which can be solved in many ways (see, e.g., Section 5 below). The exact solution to (6) is of course

    G = G_cl/(1 - F_y G_cl)

but this will lead to a high-order estimate G: typically the order of G will be equal to the order of G_cl plus the order of the regulator F_y. For methods, like the prediction error method, that allow arbitrary parameterizations G_cl(q,θ), it is natural to let the parameters θ relate to properties of the open-loop system G, so that in the first step we use a model

    y(t) = G_cl(q,θ)r(t) + H_cl(q,θ)e(t)    (7)

with

    G_cl(q,θ) = G(q,θ)/(1 + F_y(q)G(q,θ))    (8)

This way the second step becomes superfluous. Since identifying G_cl in (7) is an open-loop problem, consistency will not be lost if we choose a fixed noise model/prefilter H_cl(q,θ) = H_cl,*(q) to shape the bias distribution of G_cl (cf. Section 3 below).

The parameterization can be arbitrary, and we shall comment on it below. It is quite important to realize that as long as the parameterizations describe the same set of models G, the resulting transfer function G(q,θ_N) will be the same, regardless of the parameterization. The choice of parameterization may thus be important for numerical and algebraic issues, but it does not affect the statistical properties of the estimated transfer function.

The dual-Youla method

A nice and interesting idea is to use the so-called dual-Youla parameterization, which parameterizes all systems that are stabilized by a certain regulator F_y (see, e.g., [14]). In the SISO case it works as follows.
Let F_y = X/Y (X, Y stable, coprime) and let G_nom = N/D (N, D stable, coprime) be any system that is stabilized by F_y. Then, as R ranges over all stable transfer functions, the set

    { G : G(q,θ) = (N(q) + Y(q)R(q,θ)) / (D(q) - X(q)R(q,θ)) }

describes all systems that are stabilized by F_y. The unique value of R that corresponds to the true plant G_0 is given by

    R_0 = D(G_0 - G_nom)/(Y(1 + F_y G_0))    (9)

This idea can now be used for identification (see, e.g., [4], [5], [12]): given an estimate R of R_0 we can compute an estimate of the transfer function G as

    G = (N + Y R)/(D - X R)

Note that, using the dual-Youla parameterization, we can write

    G_cl(q,θ) = L(q)Y(q)(N(q) + Y(q)R(q,θ))

where L = 1/(Y D + N X) is stable and inversely stable. With this parameterization the identification problem (7) becomes

    z(t) = R(q,θ)x(t) + H(q,θ)e(t)    (10)

where

    z(t) = y(t) - L(q)N(q)Y(q)r(t),   x(t) = L(q)Y²(q)r(t)

Thus the dual-Youla parameterization is a special parameterization of the general indirect method. Later we will show that this parameterization does not affect the statistical properties of G, as we emphasized above. The main advantage of this method is of course that the obtained estimate G is guaranteed to be stabilized by F_y, which clearly is a nice feature.

2.3 Joint input-output identification

The third main approach to closed-loop identification is the so-called joint input-output approach. Recall that we have the closed-loop relations

    y(t) = G_0(q)S_0(q)r(t) + S_0(q)H_0(q)e(t)
    u(t) = S_0(q)r(t) - F_y(q)S_0(q)H_0(q)e(t)    (11)

The basic principle is that once we have obtained estimates of the closed-loop system G_cl,0 = G_0 S_0 and the sensitivity function S_0, we can compute an estimate of G_0. Note that estimating G_cl,0 and S_0 are open-loop problems, so in principle all open-loop methods can be used (cf. the first step of the indirect approach).

However, just as in the indirect approach, computing an estimate of the open-loop system using estimates of closed-loop transfer functions, e.g., G_cl,0 and S_0, can cause trouble. For instance, the straightforward approach of dividing G_cl by S, i.e.,

    G(q) = G_cl(q)/S(q)

will lead to a high-order estimate of G_0; furthermore, the model order cannot be controlled, and typically the order of G will be the sum of the orders of G_cl and S. To deal with this, researchers have come up with various methods/parameterizations. In this paper we will study two of these: the so-called coprime factor identification scheme and the two-stage method.

Coprime factor identification

The coprime factor identification scheme [13] can be understood as follows. Rewrite (11) using the filtered signal x = Fr:

    y(t) = N_{0,F}(q)x(t) + S_0(q)H_0(q)e(t)
    u(t) = D_{0,F}(q)x(t) - F_y(q)S_0(q)H_0(q)e(t)

Here

    N_{0,F} = G_0 S_0 F^{-1} and D_{0,F} = S_0 F^{-1}

The choice of F is discussed in, e.g., [13]. For our purposes here F can be any stable, linear filter. Identifying N_{0,F} and D_{0,F} using measurements of y, u and x is still an open-loop problem, since x and e are uncorrelated. Typically, a standard prediction error method is used to find the estimates N and D. Then the open-loop model is retrieved by

    G(q) = N(q)/D(q)

A benefit of using prediction error methods is that N and D can be parameterized in a common-denominator form

    N(q) = b(q)/f(q),   D(q) = a(q)/f(q)

This gives us control over the model order of G, since then

    G(q) = b(q)/a(q)

The two-stage method

It can be argued whether the two-stage method [11] is a joint input-output method or not. Below we will give an alternative interpretation of the algorithm that justifies this classification. The two-stage method is usually presented as follows.

1. Identify the sensitivity function S_0 using measurements of u and r.
2. Construct the signal û = Ŝr and identify the open-loop system as the mapping from û to y.

This method deserves a couple of remarks.

- Exact knowledge of the controller F_y is not necessary, since in the first step only the sensitivity function is modeled.

- In the first step a high-order model of S_0 can be used, since in the second step we can control the open-loop model order independently.
- The simulated signal û will be the noise-free part of the input signal in the feedback system; thus û clearly is independent of the noise e.

The simplicity and robustness of the two-stage method make it an attractive alternative for closed-loop identification. For our analysis it will be convenient to study the following reformulation of the two-stage method. Consider the following single input-two output model

    [u(t); y(t)] = [S(q,η); G(q,θ)S(q,η)] r(t) + [H_1(q) 0; 0 H_2(q)] e(t)    (12)

The parameter vector is

    ζ = [θ; η]

Let

    ε_u(t,η) = H_1^{-1}(q)(u(t) - S(q,η)r(t))    (13)

and

    ε_y(t,ζ) = H_2^{-1}(q)(y(t) - G(q,θ)S(q,η)r(t))    (14)

The prediction error for the model (12) equals

    ε(t,ζ) = [ε_u(t,η); ε_y(t,ζ)]

With these definitions we can reformulate the two-stage method as

    min_{θ,η} lim_{w_2→∞} (1/N) Σ_{t=1}^{N} ε^T(t,ζ) [1 0; 0 w_2²] ε(t,ζ)
    = min_{θ,η} lim_{w_2→∞} (1/N) Σ_{t=1}^{N} (ε_u²(t,η) + w_2² ε_y²(t,ζ))    (15)

This follows since, for large w_2, the open-loop parameters θ will minimize

    (1/N) Σ_{t=1}^{N} ε_y²(t,ζ) = (1/N) Σ_{t=1}^{N} (H_2^{-1}(q)(y(t) - G(q,θ)S(q,η)r(t)))²

regardless of the value of η, which then will minimize

    (1/N) Σ_{t=1}^{N} ε_u²(t,η) = (1/N) Σ_{t=1}^{N} (H_1^{-1}(q)(u(t) - S(q,η)r(t)))²

Except for the additional weighting w_2, this is equivalent to the standard joint input-output method, given that we parameterize the model as in (12).
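The two steps can be sketched as follows on simulated data. The first-order plant, the proportional regulator, the FIR model orders and all numbers are illustrative assumptions; with F_y = 0.5 and G_0 = q^{-1}/(1 - 0.5q^{-1}), the sensitivity S_0 = 1 - 0.5q^{-1} happens to be exactly a 2-tap FIR filter.

```python
import random

# Two-stage method on simulated data (illustrative plant and regulator):
#   plant  y(t) = 0.5 y(t-1) + u(t-1) + e(t)   (G0 = q^-1 / (1 - 0.5 q^-1))
#   loop   u(t) = r(t) - 0.5 y(t)              (Fy = 0.5, so S0 = 1 - 0.5 q^-1)
random.seed(1)
N = 20000
r = [random.choice([-1.0, 1.0]) for _ in range(N)]
y = [0.0] * N; u = [0.0] * N
for t in range(N):
    e = random.gauss(0.0, 0.1)
    y[t] = 0.5 * y[t - 1] + u[t - 1] + e if t else e
    u[t] = r[t] - 0.5 * y[t]

def lstsq(phi, target):
    """Least squares via the normal equations and Gaussian elimination."""
    n = len(phi[0])
    A = [[sum(p[i] * p[j] for p in phi) for j in range(n)] for i in range(n)]
    b = [sum(p[i] * yt for p, yt in zip(phi, target)) for i in range(n)]
    for i in range(n):                      # elimination with partial pivoting
        piv = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]; b[i], b[piv] = b[piv], b[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            A[k] = [ak - f * ai for ak, ai in zip(A[k], A[i])]; b[k] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Step 1: fit S0 as a 2-tap FIR filter from r to u (exact structure here).
s = lstsq([[r[t], r[t - 1]] for t in range(1, N)], u[1:])
# Simulated noise-free input u_hat = S_hat r  (u_hat[i] is time i+1).
u_hat = [s[0] * r[t] + s[1] * r[t - 1] for t in range(1, N)]
# Step 2: 10-tap FIR fit from u_hat to y; true impulse response is 1, 0.5, ...
m = 10
phi = [[u_hat[t - k] for k in range(1, m + 1)] for t in range(m, len(u_hat))]
g = lstsq(phi, y[m + 1:])
```

Since û is constructed from r only, it is independent of e, and the second-step fit recovers the leading impulse-response coefficients of G_0.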

2.4 A formal connection between direct and indirect methods

The noise model H in a linear dynamics model structure has often turned out to be a key to the interpretation of different "methods". The distinction between the models/"methods" ARX, ARMAX, output error, Box-Jenkins, etc., is entirely explained by the choice of noise model. Also the practically important feature of prefiltering is equivalent to changing the noise model. Even the choice between minimizing one-step or k-step prediction errors can be seen as a noise model issue. See, e.g., [7], for all this.

Therefore it should not come as a surprise that also the distinction between the fundamental approaches of direct and indirect identification can be seen as a choice of noise model.

One important point of the prediction error approach is that the transfer functions G and H can be arbitrarily parameterized. Suppose that we have a closed-loop system with known regulator F_y as before. We parameterize G as G(q,θ) and H as

    H(q,θ) = H_1(q,θ)(1 + F_y(q)G(q,θ))    (16)

We thus link the noise model to the dynamics model. There is nothing strange about that: so do ARX and ARMAX models. Note that this particular parameterization scales H_1 with the inverse model sensitivity function.

Now, the predictor for

    y(t) = G(q,θ)u(t) + H(q,θ)e(t)

is

    ŷ(t|θ) = H^{-1}(q,θ)G(q,θ)u(t) + (1 - H^{-1}(q,θ))y(t)
           = H_1^{-1}(q,θ)[G(q,θ)/(1 + F_y(q)G(q,θ))](r(t) - F_y(q)y(t))
             + y(t) - H_1^{-1}(q,θ)[1/(1 + F_y(q)G(q,θ))]y(t)
           = H_1^{-1}(q,θ)[G(q,θ)/(1 + F_y(q)G(q,θ))]r(t) + (1 - H_1^{-1}(q,θ))y(t)

Now, this is exactly the predictor for the model of the closed-loop system

    y(t) = G_cl(q,θ)r(t) + H_1(q,θ)e(t)    (17)

with the closed-loop transfer function parameterized in terms of the open-loop one, as in (8). The indirect approach of estimating the system in terms of the closed-loop model (17) is thus identical to the direct approach with the noise model (16), regardless of the parameterization of G and H_1.
Among other things, this shows that we can use any theory developed for the direct approach (allowing for feedback) to evaluate properties of the indirect approach.
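The predictor identity above can be checked numerically, frequency by frequency. In the sketch below the complex numbers are arbitrary frequency-response samples, not a particular system; the scaling factors merely keep the denominators away from zero.

```python
import random

# Check that the direct-method predictor with the linked noise model
# H = H1 (1 + Fy G) (eq. (16)) coincides with the predictor of the
# closed-loop model (17), pointwise in the frequency domain.
random.seed(2)
def rc():
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

errs = []
for _ in range(10):
    G, y, r = rc(), rc(), rc()
    Fy = 0.3 * rc()                     # keep 1 + Fy*G away from zero
    H1 = 1.0 + 0.3 * rc()               # keep H1 away from zero
    u = r - Fy * y                      # regulator (2)
    H = H1 * (1.0 + Fy * G)             # linked noise model (16)
    direct = (G / H) * u + (1.0 - 1.0 / H) * y
    Gcl = G / (1.0 + Fy * G)
    indirect = (Gcl / H1) * r + (1.0 - 1.0 / H1) * y
    errs.append(abs(direct - indirect))
```

The two predictor expressions agree to rounding error for every sample, mirroring the algebraic derivation above.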

3 Bias distribution

We shall now characterize in what sense the model approximates the true system when it cannot be exactly described within the model class. This will be a complement to the open-loop discussion in Section 8.5 of [7]. We start with the direct method.

3.1 The direct method

Consider the model

    y(t) = G(q,θ)u(t) + H(q,θ)e(t)    (18)

where G(q,θ) is such that either F_y(q) or G(q,θ) contains a delay. We have the prediction errors

    ε(t) = (1/H(q,θ))(y(t) - G(q,θ)u(t))
         = (1/H(q,θ)){(G_0(q) - G(q,θ))u(t) + H_0(q)e(t)}
         = (1/H(q,θ))G̃(q)u(t) + (H_0(q)/H(q,θ) - 1)e(t) + e(t)
         = (1/H(q,θ))(G̃(q)u(t) + H̃(q)e(t)) + e(t)

Here

    G̃(q) = G_0(q) - G(q,θ) and H̃(q) = H_0(q) - H(q,θ)

Inserting (4) for u,

    ε(t) = (1/H(q,θ))(G̃(q)(S_0(q)r(t) - F_y(q)S_0(q)H_0(q)e(t)) + H̃(q)e(t)) + e(t)    (19)

Our assumption that the closed-loop system is well defined implies that G̃F_y contains a delay, as does H̃ (since both H and H_0 are monic). Therefore the last term of (19) is independent of the rest. Computing the spectrum of the first term we get ("*" denotes complex conjugate)

    (1/|H|²)[Φ_u|G̃|² - 2Re{H̃*G̃F_yS_0H_0}λ_0 + |H̃|²λ_0]
    = (Φ_u/|H|²)|G̃ - (F_yS_0H_0)*H̃λ_0/Φ_u|² + (λ_0|H̃|²/|H|²)(1 - Φ_u^e/Φ_u)

Let us introduce the notation (B for "bias")

    B = (F_yS_0H_0)* H̃ λ_0 / Φ_u(ω)

Then the spectral density of ε becomes

    Φ_ε(ω) = (Φ_u(ω)/|H|²)|G̃ - B|² + (λ_0|H̃|²/|H|²)(1 - Φ_u^e(ω)/Φ_u(ω)) + λ_0    (20)

Note that

    |B|² = (λ_0/Φ_u(ω)) · (Φ_u^e(ω)/Φ_u(ω)) · |H̃|²    (21)

The limiting model will minimize the integral of Φ_ε(ω), according to standard prediction error identification theory. We see that if F_y = 0 (open-loop operation) we have B = 0 and Φ_u^e(ω) = 0, and we recover expressions that are equivalent to those in Section 8.5 of [7].

Let us now focus on the case with a fixed noise model H(q,θ) = H_*(q). This case can be extended to the case of independently parameterized G and H. Recall that any prefiltering of the data or prediction errors is equivalent to changing the noise model; the expressions below therefore cover the case of arbitrary prefiltering. For a fixed noise model, only the first term of (20) matters in the minimization, and we find that the limiting model is obtained as

    G_opt = arg min_G ∫_{-π}^{π} |G_0 - G - B|² (Φ_u(ω)/|H_*|²) dω    (22)

This is identical to the open-loop expression, except for the bias term B. Within the chosen model class, the model G will approximate the biased transfer function G_0 - B as well as possible in the weighted frequency-domain norm above. The weighting function Φ_u(ω)/|H_*|² is the same as in the open-loop case. The major difference is thus that an erroneous noise model (or unsuitable prefiltering) may cause the model to approximate a biased transfer function.

Let us comment on the bias function B. First, note that while G (in the fixed noise model case) is constrained to be causal and stable, the term B need not be so. Therefore B can be replaced by its stable, causal component (the "Wiener part") without any changes in the discussion. Next, from (21) we see that the bias-inclination will be small in frequency ranges where one (or all) of the following holds:

- The noise model is good (H̃ is small).
- The feedback contribution to the input spectrum (Φ_u^e(ω)/Φ_u(ω)) is small.
- The signal-to-noise ratio is good (λ_0/Φ_u(ω) is small).

In particular, it follows that if a reasonably flexible, independently parameterized noise model is used, then the bias-inclination of the G-estimate can be negligible.

3.2 Indirect methods

For a moment assume that G_cl is estimated using a prediction error method with a fixed noise model/prefilter H_*, and that G_cl is parameterized according to (8). Our model can thus be written

    y(t) = G_cl(q,θ)r(t) + H_*(q)e(t)

We then have the following result for the bias. We know that the limiting estimate G_opt is given by

    G_opt = arg min_G ∫_{-π}^{π} |G_0/(1 + F_yG_0) - G/(1 + F_yG)|² (Φ_r(ω)/|H_*|²) dω
          = arg min_G ∫_{-π}^{π} |(G_0 - G)/(1 + F_yG)|² (|S_0|²Φ_r(ω)/|H_*|²) dω    (23)

Now, this is no clear-cut minimization of the distance G_0 - G. The estimate G_opt will be a compromise between making G close to G_0 and making 1/(1 + F_yG) (the model sensitivity function) small. There will thus be a "bias-pull" towards transfer functions that give a small sensitivity for the given regulator, but unlike (22) it is not easy to quantify this bias component. However, if the true system can be represented within the model set, it will always be the minimizing model, so there is no bias in this case.

It can be questioned whether this "bias-pull" actually is harmful. Indeed, the shaping of the bias distribution by the unknown sensitivity function S_0, which we get automatically in the indirect methods, has been one of the main reasons why authors have recommended indirect methods for closed-loop identification.

The dual-Youla method

Before turning to the joint input-output methods we remark that the dual-Youla method (10), applied with a fixed noise model H_*, gives the following expression for the resulting R estimate:

    R_opt = arg min_R ∫_{-π}^{π} |R_0 - R|² (Φ_x(ω)/|H_*|²) dω

From (9) we get that

    R_0 - R = (D/Y)[(G_0 - G_nom)/(1 + F_yG_0) - (G - G_nom)/(1 + F_yG)]
            = (D(1 + F_yG_nom)/Y) · [(G_0 - G)/(1 + F_yG)] · S_0
            = (1/(LY²)) · [(G_0 - G)/(1 + F_yG)] · S_0

Now, from Φ_x(ω) = |L|²|Y|⁴Φ_r(ω) it follows that expression (23) characterizes the bias distribution for the dual-Youla method also. This is in line with the statement that the statistical properties are independent of the parameterization.

3.3 Joint input-output methods

Here we will only discuss the coprime factor identification scheme and the two-stage method.

Coprime factor identification

The bias distribution for the coprime factor identification scheme, when applied with fixed noise models, will

be characterized by

    min_{N,D} ∫_{-π}^{π} [ |N_{0,F} - N|²/|H_1|² + |D_{0,F} - D|²/|H_2|² ] Φ_x(ω) dω

By carefully choosing the filter F and the noise models H_1 and H_2 we can shape the resulting bias in a control-relevant way, as shown in, e.g., [13].

The two-stage method

For the two-stage method we obtain the following expression for the resulting estimate G_opt from the second step:

    G_opt = arg min_G ∫_{-π}^{π} |G_0S_0 - GS_*|² (Φ_r(ω)/|H_*|²) dω

where S_* is the fixed estimate of the sensitivity function obtained from the first step of the algorithm. Note that

    |G_0S_0 - GS_*|² = |(G_0 - G)S_0 + G(S_0 - S_*)|²    (24)

Thus it is clear that for cases where S_* ≠ S_0 we will have a bias-pull towards models G that minimize (24). On the other hand, if we in the first step have obtained a very accurate (high-order) estimate of the sensitivity function S_0, this effect is negligible, so that

    |G_0S_0 - GS_*|² ≈ |G_0 - G|²|S_0|²

Thus the mismatch G_0 - G will be minimized in a frequency-dependent norm that is shaped by the true sensitivity function. Finally, we remark that it is interesting to compare this result with the corresponding results for the indirect methods, e.g., expression (23).

3.4 Arbitrary shaping of the bias distribution

The "dream result" for closed-loop experiments in connection with identification for control would be a method that allows fitting the model to the data in a fixed, model-independent and user-defined frequency-domain norm. This is possible for open-loop data using prefiltering and an output error model/method (as in (22) with B = 0). For closed-loop data we either get bias as in (22) or model-dependent norms as in (23).

It is not clear whether such a method exists for the general case. However, [8] has pointed out and analyzed such a method for the case of periodic reference signals and time-invariant regulators: for a periodic reference signal, the parts of u and y that originate from r will be periodic after a transient.
Now, average y and u over intervals corresponding to the period of r. These averages will then converge to a correct, noise-free input-output relationship for the system over one period. Then use these averages as input and output in a direct output-error identification scheme, possibly with prefiltering. This gives a method with the desired properties.
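The averaging step can be sketched as follows; the first-order plant, the proportional regulator and all numbers are illustrative assumptions. The noise contribution to the averaged records shrinks like 1/sqrt(M) with M periods, so the averages approach the noise-free response to one period of r.

```python
import random

# Averaging y and u over the periods of a periodic reference signal.
# Illustrative loop: y(t) = 0.5 y(t-1) + u(t-1) + e(t), u(t) = r(t) - 0.5 y(t).
random.seed(3)
P, M = 20, 500                       # period length, number of periods
r_period = [random.choice([-1.0, 1.0]) for _ in range(P)]

def run(noise_std):
    y_prev = u_prev = 0.0
    y, u = [], []
    for t in range(P * M):
        e = random.gauss(0.0, noise_std)
        yt = 0.5 * y_prev + u_prev + e
        ut = r_period[t % P] - 0.5 * yt
        y.append(yt); u.append(ut)
        y_prev, u_prev = yt, ut
    # average over periods, discarding the first period as transient
    ybar = [sum(y[k * P + i] for k in range(1, M)) / (M - 1) for i in range(P)]
    ubar = [sum(u[k * P + i] for k in range(1, M)) / (M - 1) for i in range(P)]
    return ybar, ubar

ybar_noisy, ubar_noisy = run(0.1)
ybar_clean, ubar_clean = run(0.0)    # noise-free reference run
err = max(abs(a - b) for a, b in zip(ybar_noisy, ybar_clean))
```

The averaged pair (ubar, ybar) can then be fed to an output-error fit as if it were noise-free open-loop data.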

4 Asymptotic variance expressions

Let us now consider the asymptotic variance of the estimated transfer function G_N, using the asymptotic black-box theory of Section 9.4 in [7]; see also [2] for related results.

4.1 The direct method

Note that the basic result

    Cov [G_N; H_N] ≈ (n/N) Φ_v(ω) [Φ_u(ω) Φ_ue(ω); Φ_ue*(ω) λ_0]^{-1}    (25)

applies also to the closed-loop case. Here n is the model order, N the number of data, Φ_v the spectrum of v = H_0e, and Φ_ue(ω) the cross spectrum between the input u and the noise source e. From this general expression we can directly solve for the upper left element:

    Cov G_N ≈ (n/N) Φ_v(ω)λ_0 / (λ_0Φ_u(ω) - |Φ_ue(ω)|²)

From (4) we easily find that

    λ_0Φ_u - |Φ_ue|² = λ_0|S_0|²Φ_r = λ_0Φ_u^r

so

    Cov G_N ≈ (n/N) Φ_v(ω)/Φ_u^r(ω)    (26)

The denominator of (26) is the spectrum of that part of the input that originates from the reference signal r. The open-loop expression has the total input spectrum here.

The expression (26), which also is the asymptotic Cramér-Rao lower limit, tells us precisely "the value of information" of closed-loop experiments. It is the noise-to-signal ratio (where "signal" is what derives from the injected reference) that determines how well the open-loop transfer function can be estimated. From this perspective, the part of the input that originates from the feedback has no information value when estimating G. Since this property is, so to speak, inherent in the problem, it should come as no surprise that several newly suggested methods for indirect identification can be shown to give the same asymptotic variance, namely (26) (see, e.g., [2] and Sections 4.2 and 4.3 below). The expression (26) also clearly points to the basic problem in closed-loop identification: the purpose of feedback is to make the sensitivity function small, especially at frequencies with disturbances and poor system knowledge. Feedback will thus worsen the measured data's information about the system at these frequencies.
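The reduction from (25) to (26) can be verified numerically: with Φ_ue = -F_yS_0H_0λ_0 and the input spectrum (5), the (1,1) element of the inverse in (25) collapses to 1/Φ_u^r. In the sketch below, random complex numbers stand in for the frequency-response values at one frequency; they are not a particular system.

```python
import random

# Numerical check that (n/N) Phi_v * [inverse of (25)]_{11} = (n/N) Phi_v/Phi_u^r.
random.seed(4)
errs = []
for _ in range(10):
    G0 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    Fy = 0.3 * complex(random.uniform(-1, 1), random.uniform(-1, 1))
    H0 = 1.0 + 0.3 * complex(random.uniform(-1, 1), random.uniform(-1, 1))
    lam0 = random.uniform(0.5, 2.0)
    phi_r = random.uniform(0.5, 2.0)
    S0 = 1.0 / (1.0 + Fy * G0)
    phi_v = abs(H0) ** 2 * lam0
    phi_u_r = abs(S0) ** 2 * phi_r                   # reference part of (5)
    phi_u = phi_u_r + abs(Fy * S0) ** 2 * phi_v      # total input spectrum (5)
    phi_ue = -Fy * S0 * H0 * lam0                    # cross spectrum (from (4))
    # (1,1) element of the inverse of [[phi_u, phi_ue], [conj(phi_ue), lam0]]
    inv11 = lam0 / (phi_u * lam0 - abs(phi_ue) ** 2)
    errs.append(abs(phi_v * inv11 - phi_v / phi_u_r))
```

The two expressions agree to rounding error, confirming that only the reference part of the input spectrum enters (26).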

Note, though, that the "basic problem" is a practical one and not a fundamental one: there are no difficulties, per se, in the closed-loop data; it is just that in practical use the information content is smaller. We could on purpose make closed-loop experiments with good information content (but poor control performance).

Note that the output spectrum is, according to (3),

    Φ_y(ω) = |G_0|²Φ_u^r(ω) + |S_0|²Φ_v(ω)

The corresponding spectrum in open-loop operation would be

    Φ_y^open(ω) = |G_0|²Φ_u(ω) + Φ_v(ω)

This shows that it may still be desirable to perform a closed-loop experiment: if we have large disturbances at certain frequencies, we can reduce the output spectrum by (1 - |S_0|²)Φ_v(ω) and still get the same variance for G_N according to (26).

Note that the basic result (26) is asymptotic as the orders of both G and H tend to infinity. Let us now turn to the case where the noise model is fixed, H(q,θ) = H_*(q). We will only discuss the simple case where it is fixed to the true value,

    H_*(q) = H_0(q)

and where the bias in G is negligible. In that case the covariance matrix of θ_N is given by the standard result

    Cov θ_N ≈ (λ_0/N) [Ē ψ(t,θ_0)ψ(t,θ_0)^T]^{-1}

where ψ(t,θ_0) is the negative gradient of

    ε(t,θ) = (1/H_*(q))(y(t) - G(q,θ)u(t))

evaluated at θ = θ_0. The covariance matrix is thus determined entirely by the second-order properties (the spectrum) of the input, and it is immaterial whether this spectrum is a result of open-loop or closed-loop operation. In particular, we obtain in the case that the model order tends to infinity

    Cov G_N ≈ (n/N) Φ_v(ω)/Φ_u(ω)

just as in the open-loop case.

4.2 Indirect methods

We will now turn to the variance properties of the indirect approach. According to the open-loop result, the asymptotic variance of G_cl,N will be

    Cov G_cl,N ≈ (n/N) Φ_v,cl(ω)/Φ_r(ω) = (n/N) |S_0|²Φ_v(ω)/Φ_r(ω)

regardless of the noise model H_*. Here Φ_v,cl(ω) is the spectrum of the additive noise v_cl in the closed-loop

system (3), which equals the open-loop additive noise filtered through the true sensitivity function. To transform this result to the variance of the open-loop transfer function, we use Gauss' approximation formula

    Cov G = (dG/dG_cl) Cov G_cl (dG/dG_cl)*

It is easy to verify that

    dG/dG_cl = 1/S_0²

so

    Cov G_N ≈ (n/N) Φ_v(ω)/(|S_0|²Φ_r(ω)) = (n/N) Φ_v(ω)/Φ_u^r(ω)

which, not surprisingly, equals what the direct approach gives, i.e., (26).

The dual-Youla method

For the dual-Youla method we get

    Cov R_N ≈ (n/N) Φ_v,cl(ω)/Φ_x(ω) = (n/N) |S_0|²Φ_v(ω)/Φ_x(ω)

Now, using Gauss' approximation formula and the relation Φ_x(ω) = |L|²|Y|⁴Φ_r(ω), we soon arrive at

    Cov G_N ≈ (n/N) (|S_0|²Φ_v(ω)/(|L|²|Y|⁴Φ_r(ω))) |LY²/S_0²|² = (n/N) Φ_v(ω)/Φ_u^r(ω)

which is (26).

4.3 Joint input-output methods

For the joint input-output methods we note that we may write (11) as

    [y; u](t) = [N_0(q); D_0(q)] r(t) + [1; -F_y(q)] S_0(q)H_0(q)e(t)    (27)

where N_0 = G_0S_0 and D_0 = S_0. Now applying the multivariable version of the standard result (25) (see [15] for details) to the situation (27) gives

    Cov [N_N; D_N] ≈ (n/N) Φ_w(ω)/Φ_r(ω)    (28)

Here we have introduced the signal

    w(t) = [1; -F_y(q)] S_0(q)H_0(q)e(t)

It follows that

    Cov [N_N; D_N] ≈ (n/N) (|S_0|²Φ_v(ω)/Φ_r(ω)) [1 -F_y*; -F_y |F_y|²]

Now, since G_0 = N_0/D_0, we get from Gauss' approximation formula

    Cov G_N ≈ (n/N) (|S_0|²Φ_v(ω)/Φ_r(ω)) (1/|D_0|²) [1  -N_0/D_0] [1 -F_y*; -F_y |F_y|²] [1; -(N_0/D_0)*]

Carrying out the algebra and noting that D_0 = S_0, we see that

    (1/|D_0|²) [1  -N_0/D_0] [1 -F_y*; -F_y |F_y|²] [1; -(N_0/D_0)*]
    = (1/|S_0|²)|1 + F_yG_0|² = 1/|S_0|⁴

Thus

    Cov G_N ≈ (n/N) Φ_v(ω)/(|S_0|²Φ_r(ω)) = (n/N) Φ_v(ω)/Φ_u^r(ω)

just as in the previous cases. Analogous calculations show that this result holds for the coprime factor identification scheme for arbitrary choices of the filter F. More interesting and novel, however, is that this also shows that the two-stage method gives the same asymptotic variance expression as the other closed-loop methods. To see this we employ the alternative interpretation of the two-stage method provided by the criterion in (15). The weight w_2 used there can be seen as a scaling of the noise model, H_2 → H_2/w_2 (cf. (14)). Now, since the result (28) in fact is independent of the noise model (open-loop operation), it follows that the above calculations hold for the two-stage method as well.

5 Covariance of the parameter vector estimate

In this section we derive expressions for the covariance of the parameter vector estimate for the direct method and a number of indirect methods.

5.1 The direct method

Consider again the model (18) and assume that the dynamics model and the noise model are independently parameterized, i.e., that

    G(q,θ) = G(q,ρ) and H(q,θ) = H(q,η)

where ρ and η refer to the following partitioning of the parameter vector θ:

    θ = [ρ; η]

Also assume that S ∈ M (i.e., that the true system is contained in the model set).

It will be convenient to consider the following augmented signal: let

    χ_0 = [u; e]

Then its spectrum is

    Φ_χ0(ω) = [Φ_u(ω) Φ_ue(ω); Φ_ue*(ω) λ_0]

Now, since $\Phi_{ue}(\omega) = -F_yS_0H_0\lambda_0$, we may also write this as
$$ \Phi_{\chi_0}(\omega) = \Phi_{\chi_0}^r(\omega) + \Phi_{\chi_0}^e(\omega) \quad (29) $$
where
$$ \Phi_{\chi_0}^r(\omega) = \begin{bmatrix} \Phi_u^r(\omega) & 0 \\ 0 & 0 \end{bmatrix}, \qquad \Phi_{\chi_0}^e(\omega) = \lambda_0 \begin{bmatrix} -F_yS_0H_0 \\ 1 \end{bmatrix} \begin{bmatrix} -F_yS_0H_0 \\ 1 \end{bmatrix}^* $$
With $\psi$ being the negative gradient of the prediction errors
$$ \varepsilon(t,\theta) = \frac{1}{H(q,\theta)}\bigl(y(t) - G(q,\theta)u(t)\bigr) $$
and with $R_\theta$ defined as
$$ R_\theta = \frac{1}{\lambda_0}\,\bar E\,\psi(t,\theta_0)\psi(t,\theta_0)^T $$
we have
$$ \mathrm{Cov}\,\hat\theta_N \approx \frac{1}{N}(R_\theta)^{-1} \quad (30) $$
Using the frequency-domain results in Section 9.4 in [7] we see that $R_\theta$ can be rewritten as
$$ R_\theta = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{1}{\Phi_v(\omega)}\,T'_\theta(e^{i\omega},\theta_0)\,\Phi_{\chi_0}(\omega)\,T'_\theta(e^{i\omega},\theta_0)^*\,d\omega $$
where $T = [G\ \ H]$ and $T'_\theta = dT/d\theta$. From (29) it follows that
$$ T'_\theta\Phi_{\chi_0}T'^*_\theta = T'_\theta\Phi_{\chi_0}^rT'^*_\theta + T'_\theta\Phi_{\chi_0}^eT'^*_\theta $$
We may thus write
$$ R_\theta = R_\theta^r + R_\theta^e $$
where $R_\theta^r$ is given by
$$ R_\theta^r = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{1}{\Phi_v(\omega)}\,T'_\theta\Phi_{\chi_0}^r(\omega)T'^*_\theta\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G'_\theta(e^{i\omega},\theta_0)G'_\theta(e^{i\omega},\theta_0)^*\,d\omega $$
Note that $R_\theta^r$ only depends on $\Phi_u^r(\omega)$, and not on the total input spectrum $\Phi_u(\omega)$ as in the open-loop case. If we partition $R_\theta^r$ conformably with $\theta$ we see that, due to the chosen parameterization,
$$ R_\theta^r = \begin{bmatrix} R_\rho^r & 0 \\ 0 & 0 \end{bmatrix} $$
where
$$ R_\rho^r = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G'_\rho(e^{i\omega},\rho_0)G'_\rho(e^{i\omega},\rho_0)^*\,d\omega \quad (31) $$
Returning to $R_\theta^e$ we see that
$$ R_\theta^e = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{1}{\Phi_v(\omega)}\,T'_\theta\Phi_{\chi_0}^e(\omega)T'^*_\theta\,d\omega $$

Now, $R_\theta^e$ can be partitioned as
$$ R_\theta^e = \begin{bmatrix} R_\rho^e & R_{\rho\eta}^e \\ (R_{\rho\eta}^e)^T & R_\eta^e \end{bmatrix} $$
and explicit expressions for $R_\rho^e$, $R_{\rho\eta}^e$ and $R_\eta^e$ can be found by using
$$ T'_\theta \begin{bmatrix} -F_yS_0H_0 \\ 1 \end{bmatrix} = -F_yS_0H_0\,G'_\theta + H'_\theta $$
However, we will not carry out the calculations here. Instead we note that an expression for the covariance of $\hat\rho_N$ can be extracted from the top left block of $\mathrm{Cov}\,\hat\theta_N$ (cf. (30)). Using the above expressions we soon arrive at
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{1}{N}(R_\rho^r + \Delta)^{-1} \quad (32) $$
where
$$ \Delta = R_\rho^e - R_{\rho\eta}^e(R_\eta^e)^{-1}(R_{\rho\eta}^e)^T \ge 0 $$
is the Schur complement of $R_\eta^e$ in the matrix $R_\theta^e$. Note that, since $\Delta \ge 0$, this contribution has a positive effect on the accuracy. Here it is important to note that the term $\Delta$ is entirely due to the noise part of the input spectrum. We conclude that in the direct approach the noise in the loop is utilized in reducing the variance.

Further insight into the origin of the term $\Delta$ can be gained through the following thought experiment. If the identification experiment is performed with no external reference signal present ($\Phi_r(\omega) = 0$) we get
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{1}{N}\Delta^{-1} $$
Thus $\Delta$ characterizes the lower limit of achievable accuracy for the direct method.

5.2 Indirect methods

Let us now try to derive similar expressions for indirect identification.

Independently parameterized noise model

Consider the following model
$$ y(t) = G_{cl}(q,\rho)r(t) + H_{cl}(q,\eta)e(t) \quad (33) $$
where, as in (8), $G_{cl}$ is parameterized in terms of the open-loop parameters. Estimating $\rho$ and $\eta$ in (33) is an open-loop problem, so all standard open-loop results can be applied in this case as well. As an example, assume $\mathcal{S} \in \mathcal{M}$; then we can immediately write down the expression for the covariance of $\hat\rho$:
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{1}{N}(R^{cl})^{-1} $$
where
$$ R^{cl} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_r(\omega)}{\Phi_{v,cl}(\omega)}\,G'_{cl,\rho}(e^{i\omega},\rho_0)G'_{cl,\rho}(e^{i\omega},\rho_0)^*\,d\omega $$

Note that
$$ G'_{cl,\rho} = S_0^2\,G'_\rho $$
and, since $\Phi_{v,cl} = |S_0|^2\Phi_v$, we see that
$$ R^{cl} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G'_\rho(e^{i\omega},\rho_0)G'_\rho(e^{i\omega},\rho_0)^*\,d\omega $$
This is in fact identical to the expression (31), hence
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{1}{N}(R_\rho^r)^{-1} \quad (34) $$
and, as we remarked before, this covariance will always be larger than the covariance obtained with the direct method, equation (32), the difference stemming from the term $\Delta$ that is missing in (34). Thus, in terms of accuracy of the parameter estimates, the direct method outperforms the indirect one.

Fixed noise model

Consider now the case where the noise model is fixed, $H(q,\eta) = H_*(q)$. Typically $H_* \ne H_{cl,0}$, so that we are in the situation $\mathcal{S} \notin \mathcal{M}$, and the analysis above will thus not apply here. However, since estimating $\rho$ in (33) with $H(q,\eta) = H_*(q)$ is an open-loop problem, we can use the standard open-loop covariance results for the case of an inconsistent noise model. To this end, assume $G_0 \in \mathcal{G}$. Then we get (cf. expression (9.55) in [7])
$$ \mathrm{Cov}\,\hat\rho \approx \frac{1}{N}R_\rho^{-1}Q_\rho R_\rho^{-1} $$
where
$$ R_\rho = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_r}{|H_*|^2}\,G'_{cl,\rho}(e^{i\omega},\rho_0)G'_{cl,\rho}(e^{i\omega},\rho_0)^*\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{|S_0|^2\Phi_u^r}{|H_*|^2}\,G'_\rho(e^{i\omega},\rho_0)G'_\rho(e^{i\omega},\rho_0)^*\,d\omega $$
and
$$ Q_\rho = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_{v,cl}\Phi_r}{|H_*|^4}\,G'_{cl,\rho}(e^{i\omega},\rho_0)G'_{cl,\rho}(e^{i\omega},\rho_0)^*\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{|S_0|^4\Phi_v\Phi_u^r}{|H_*|^4}\,G'_\rho(e^{i\omega},\rho_0)G'_\rho(e^{i\omega},\rho_0)^*\,d\omega $$
For all $H_*$,
$$ R_\rho^{-1}Q_\rho R_\rho^{-1} \ge (R_\rho^r)^{-1} $$
with equality for $H_* = H_{cl,0} = S_0H_0$. Thus we have the following ranking: the direct method gives better accuracy than the indirect method with an independently parameterized noise model, which in turn gives better accuracy than the indirect method with a fixed noise model.
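In the scalar-parameter case, the inequality $R_\rho^{-1}Q_\rho R_\rho^{-1} \ge (R_\rho^r)^{-1}$ is an instance of the Cauchy-Schwarz inequality, and can be illustrated by discretizing the integrals on a frequency grid. The sensitivity function, spectra and gradient below are arbitrary illustrative choices (assumptions), not quantities from the paper:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 4000, endpoint=False)   # frequency grid

# Arbitrary illustrative quantities for a scalar parameter (assumptions):
S0 = 1.0 / (1.0 + 0.5 * np.exp(-1j * w))               # sensitivity function
Phi_v = 1.0 + 0.8 * np.cos(w) ** 2                     # open-loop noise spectrum
Phi_ru = 2.0 * np.abs(S0) ** 2                         # part of Phi_u due to r
Gp = np.abs(1.0 / (1.0 - 0.7 * np.exp(-1j * w))) ** 2  # |G'_rho|^2

mean = lambda f: np.mean(f)            # (1/2pi) * integral over [-pi, pi)

def var_factor(Habs2):
    """R^-1 Q R^-1 for a fixed noise model with |H_*|^2 = Habs2."""
    R = mean(np.abs(S0) ** 2 * Phi_ru / Habs2 * Gp)
    Q = mean(np.abs(S0) ** 4 * Phi_v * Phi_ru / Habs2 ** 2 * Gp)
    return Q / R ** 2

Rr = mean(Phi_ru / Phi_v * Gp)         # the quantity (31)

mismatched = var_factor(np.ones_like(w))          # H_* = 1, misspecified
matched = var_factor(np.abs(S0) ** 2 * Phi_v)     # |H_*|^2 = |S0|^2 Phi_v

print(mismatched >= 1.0 / Rr, abs(matched - 1.0 / Rr))
```

The misspecified fixed noise model gives a strictly larger variance factor than $(R_\rho^r)^{-1}$, while choosing $|H_*|^2$ proportional to $\Phi_{v,cl} = |S_0|^2\Phi_v$ reproduces it exactly, in line with the equality condition $H_* = S_0H_0$.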

The dual-Youla method

Suppose we use the following parameterization of the dual-Youla method (10):
$$ z(t) = R(q,\rho)x(t) + H(q,\eta)e(t) $$
This is an open-loop problem since $x(t)$ and $e(t)$ are independent. Hence, if $\mathcal{S} \in \mathcal{M}$, then
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{1}{N}(R^{Youla})^{-1} $$
where
$$ R^{Youla} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_x(\omega)}{\Phi_{v,cl}(\omega)}\,R'_\rho(e^{i\omega},\rho_0)R'_\rho(e^{i\omega},\rho_0)^*\,d\omega $$
Now, since
$$ \Phi_x(\omega) = |L|^2|Y|^4\Phi_r(\omega), \qquad \Phi_{v,cl}(\omega) = |S_0|^2\Phi_v(\omega) $$
$$ S_0 = LY(D - XR_0), \qquad R'_\rho = L(D - XR_0)^2\,G'_\rho $$
we get
$$ R^{Youla} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_u^r(\omega)}{\Phi_v(\omega)}\,G'_\rho(e^{i\omega},\rho_0)G'_\rho(e^{i\omega},\rho_0)^*\,d\omega $$
Thus $R^{Youla} = R_\rho^r$, so that
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{1}{N}(R_\rho^r)^{-1} $$
Not surprisingly, the accuracy of this method is identical to the accuracy of the other indirect methods with independently parameterized noise models.

5.3 Indirect identification with optimal accuracy

So far in this section we have seen that the indirect methods we have considered give worse accuracy than the direct method. This is not the case for indirect identification in general, as we will see presently. In the following we review and extend a result that was first derived in [9] (see also [3] and [10] for related results), and we show that indirect identification can give the same level of accuracy as direct identification.

Suppose we identify the closed-loop system using an ARMAX model
$$ A_{cl}(q)y(t) = B_{cl}(q)r(t) + C_{cl}(q)e(t) $$
Thus, with $\xi$ denoting the closed-loop parameter vector, we model the system dynamics as
$$ G(q,\xi) = \frac{B_{cl}(q)}{A_{cl}(q)} $$

while the noise model becomes
$$ H(q,\xi) = \frac{C_{cl}(q)}{A_{cl}(q)} $$
Also assume $\mathcal{S} \in \mathcal{M}$. Next, let the regulator be given by
$$ F_y(q) = \frac{X(q)}{Y(q)} $$
where the polynomials $X$ and $Y$ are assumed coprime. Then, if in the second step of the indirect scheme we model the open-loop system as
$$ A(q)y(t) = B(q)r(t) + C(q)e(t) $$
we get the following. From
$$ G_{cl} = \frac{G}{1 + F_yG} \quad\text{and}\quad H_{cl} = \frac{H}{1 + F_yG} $$
it follows that (cf. (6))
$$ \begin{cases} A_{cl} = AY + BX \\ B_{cl} = BY \\ C_{cl} = CY \end{cases} \quad (35) $$
Equation (35) may be interpreted as a system of linear equations in the open-loop parameters $\theta$:
$$ \Gamma\theta = \zeta \quad (36) $$
where $\Gamma$ is completely determined by $X$ and $Y$, and where $\zeta$ depends on the estimated closed-loop parameters $\xi$ and on $Y$. The exact definitions of $\Gamma$ and $\zeta$ are not important at this stage, but explicit expressions can be found in Appendix A. The best unbiased estimate of $\theta$ is the Markov estimate
$$ \hat\theta = [\Gamma^T(\mathrm{Cov}\,\zeta)^{-1}\Gamma]^{-1}\Gamma^T(\mathrm{Cov}\,\zeta)^{-1}\zeta $$
which gives
$$ \mathrm{Cov}\,\hat\theta = [\Gamma^T(\mathrm{Cov}\,\zeta)^{-1}\Gamma]^{-1} \quad (37) $$
Now, by explicitly forming (36) and carrying out the algebra it can be shown that (the proof is given in Appendix A)
$$ \mathrm{Cov}\,\hat\theta \approx \frac{\lambda_0}{N}\,[\bar E\,\psi(t,\theta_0)\psi(t,\theta_0)^T]^{-1} \quad (38) $$
But (38) is equivalent to the open-loop expression (30). Thus, under certain circumstances, indirect identification gives the same accuracy as direct identification. As will become more apparent below, the fact that the dynamics model and the noise model share the same poles seems crucial for obtaining the same accuracy as with the direct approach.
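The Markov estimate above is ordinary generalized least squares. As a minimal numerical sketch (with a made-up $\Gamma$, covariance and dimensions chosen purely for illustration), the estimator recovers $\theta$ exactly from noise-free closed-loop parameters, and its covariance is the matrix in (37):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up illustrative dimensions (assumptions): 9 closed-loop parameters
# mapped linearly to 6 open-loop parameters.
Gamma = rng.normal(size=(9, 6))
theta_true = rng.normal(size=6)

# An arbitrary positive definite covariance for the closed-loop estimate.
A = rng.normal(size=(9, 9))
Sigma = A @ A.T + 9.0 * np.eye(9)
Sinv = np.linalg.inv(Sigma)

# Markov (generalized least squares) estimate; P is Cov(theta_hat), eq. (37).
P = np.linalg.inv(Gamma.T @ Sinv @ Gamma)

def markov(zeta):
    return P @ Gamma.T @ Sinv @ zeta

# With exact (noise-free) closed-loop parameters, theta is recovered exactly:
theta_hat = markov(Gamma @ theta_true)
recovery_err = np.max(np.abs(theta_hat - theta_true))
print(recovery_err)
```

The point of the derivation in the text is that, for the ARMAX parameterization, this weighted least-squares covariance collapses to the direct-method expression (30).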

Independently parameterized noise model

Suppose that in the first step of the indirect method we use the following model with an independently parameterized noise model:
$$ y(t) = G_{cl}(q,\bar\rho)r(t) + H_{cl}(q,\bar\eta)e(t) = \frac{B_{cl}(q)}{F_{cl}(q)}r(t) + \frac{C_{cl}(q)}{D_{cl}(q)}e(t) $$
This is also known as a Box-Jenkins model. We assume also in this case that $\mathcal{S} \in \mathcal{M}$. Let the open-loop model be
$$ y(t) = G(q,\rho)r(t) + H(q,\eta)e(t) = \frac{B(q)}{F(q)}r(t) + \frac{C(q)}{D(q)}e(t) $$
To find the indirect estimate of $G_0$ we should thus solve for $B$ and $F$ in (cf. (35))
$$ \begin{cases} B_{cl} = BY \\ F_{cl} = FY + BX \end{cases} \quad (39) $$
This is a system of equations which may be written in a form similar to (36):
$$ \tilde\Gamma\rho = \bar\zeta \quad (40) $$
We may compute the Markov estimate
$$ \hat\rho = [\tilde\Gamma^T(\mathrm{Cov}\,\bar\zeta)^{-1}\tilde\Gamma]^{-1}\tilde\Gamma^T(\mathrm{Cov}\,\bar\zeta)^{-1}\bar\zeta $$
This time, however, we do not obtain the open-loop expression for the variance. Instead, as shown in Appendix B, we get
$$ \mathrm{Cov}\,\hat\rho \approx \frac{1}{N}(R_\rho^r)^{-1} \quad (41) $$
where $R_\rho^r$ is given by (31). We once again conclude that, with an independently parameterized noise model, this indirect method gives worse accuracy than the direct method. The difference is quantified by the term $\Delta$ (cf. (32)) which is missing in (41).

5.4 Joint input-output methods

We will now derive the corresponding covariance expressions for the joint input-output methods. We only give the results for the two-stage method.

The two-stage method

Consider the following formulation of the closed-loop system:
$$ \begin{bmatrix} u(t) \\ y(t) \end{bmatrix} = \begin{bmatrix} S_0(q) \\ G_0(q)S_0(q) \end{bmatrix} r(t) + \begin{bmatrix} -F_y(q)S_0(q)H_0(q) & 0 \\ 0 & S_0(q)H_0(q) \end{bmatrix} e(t) \quad (42) $$

Let
$$ \Lambda = \begin{bmatrix} \lambda_1 & \lambda_{12} \\ \lambda_{12} & \lambda_2 \end{bmatrix} = E\,e(t)e^T(t) $$
Our single-input, two-output model (12) is
$$ \begin{bmatrix} u(t) \\ y(t) \end{bmatrix} = \begin{bmatrix} S(q,\beta) \\ G(q,\rho)S(q,\beta) \end{bmatrix} r(t) + \begin{bmatrix} H_1(q) & 0 \\ 0 & H_2(q) \end{bmatrix} e(t) $$
where the parameter vector $\theta$ collects the dynamics parameters $\rho$ and the sensitivity parameters $\beta$. For convenience we introduce
$$ V_N(\theta,w_2) = \frac{1}{N}\sum_{t=1}^N \frac{1}{2}\varepsilon^T(t,\theta)W\varepsilon(t,\theta), \qquad W = \begin{bmatrix} 1 & 0 \\ 0 & w_2^2 \end{bmatrix} $$
and
$$ \bar V_N(\theta) = \lim_{w_2\to\infty} V_N(\theta,w_2) $$
The resulting prediction error estimate for the two-stage method is given by
$$ \hat\theta_N = \arg\min_\theta \bar V_N(\theta) $$
Our goal is to derive an expression for the covariance of the estimate $\hat\theta_N$. Using the standard results in Chapter 9 in [7] we can immediately write down the following expression for the covariance of $\hat\theta_N$:
$$ \mathrm{Cov}\,\hat\theta_N \approx \frac{1}{N}R^{-1}QR^{-1} $$
where
$$ R = \bar E\,\bar V''_N(\theta^*), \qquad Q = \lim_{N\to\infty} N\cdot\bar E\{[\bar V'_N(\theta^*)][\bar V'_N(\theta^*)]^T\} $$
Alternatively we can write
$$ \mathrm{Cov}\,\hat\theta_N \approx \frac{1}{N}\lim_{w_2\to\infty} R_{w_2}^{-1}Q_{w_2}R_{w_2}^{-1} $$
where now
$$ R_{w_2} = \bar E\,V''_N(\theta^*,w_2), \qquad Q_{w_2} = \lim_{N\to\infty} N\cdot\bar E\{[V'_N(\theta^*,w_2)][V'_N(\theta^*,w_2)]^T\} $$
Let
$$ \psi(t,\theta) = -\frac{d}{d\theta}\varepsilon^T(t,\theta) = \begin{bmatrix} -\frac{d}{d\theta}\varepsilon_u(t,\theta) & -\frac{d}{d\theta}\varepsilon_y(t,\theta) \end{bmatrix} = \begin{bmatrix} \psi_u(t,\theta) & \psi_y(t,\theta) \end{bmatrix} $$

Then
$$ R_{w_2} = \bar E\,\psi(t,\theta)W\psi^T(t,\theta) = \bar E\{\psi_u(t,\theta)\psi_u^T(t,\theta) + w_2^2\,\psi_y(t,\theta)\psi_y^T(t,\theta)\} $$
Let
$$ H_u(q) = \frac{-F_y(q)S_0(q)H_0(q)}{H_1(q)} = \sum_{i=0}^{\infty} h_u(i)q^{-i} $$
and similarly
$$ H_y(q) = \frac{S_0(q)H_0(q)}{H_2(q)} = \sum_{i=0}^{\infty} h_y(i)q^{-i} $$
Now define
$$ \tilde\psi(t,\theta) = \begin{bmatrix} \tilde\psi_u(t,\theta) & \tilde\psi_y(t,\theta) \end{bmatrix} $$
where
$$ \tilde\psi_u(t,\theta) = \sum_{i=0}^{\infty} h_u(i)\psi_u(t+i,\theta), \qquad \tilde\psi_y(t,\theta) = \sum_{i=0}^{\infty} h_y(i)\psi_y(t+i,\theta) $$
Then
$$ Q_{w_2} = \bar E\,\tilde\psi(t,\theta)W\Lambda W\tilde\psi^T(t,\theta) = \bar E\{\lambda_1\tilde\psi_u\tilde\psi_u^T + w_2^2\lambda_{12}(\tilde\psi_u\tilde\psi_y^T + \tilde\psi_y\tilde\psi_u^T) + w_2^4\lambda_2\tilde\psi_y\tilde\psi_y^T\} $$
It follows that
$$ \mathrm{Cov}\,\hat\theta_N \approx \frac{1}{N}\lim_{w_2\to\infty} R_{w_2}^{-1}Q_{w_2}R_{w_2}^{-1} = \frac{\lambda_2}{N}\bigl[\bar E\,\psi_y\psi_y^T\bigr]^{-1}\bigl[\bar E\,\tilde\psi_y\tilde\psi_y^T\bigr]\bigl[\bar E\,\psi_y\psi_y^T\bigr]^{-1} $$
The covariance matrix for $\hat\rho_N$ will be the top left block of this quantity. An explicit expression for this covariance matrix can be obtained as follows. Let
$$ R_y = \bar E\,\psi_y(t,\theta)\psi_y^T(t,\theta) = \begin{bmatrix} R_\rho & R_{\rho\beta} \\ R_{\beta\rho} & R_\beta \end{bmatrix} $$
Then
$$ R_y^{-1} = \begin{bmatrix} \Delta_\rho^{-1} & -\Delta_\rho^{-1}R_{\rho\beta}R_\beta^{-1} \\ -R_\beta^{-1}R_{\beta\rho}\Delta_\rho^{-1} & \Delta_\beta^{-1} \end{bmatrix} $$
where
$$ \Delta_\rho = R_\rho - R_{\rho\beta}R_\beta^{-1}R_{\beta\rho}, \qquad \Delta_\beta = R_\beta - R_{\beta\rho}R_\rho^{-1}R_{\rho\beta} $$
If we now introduce
$$ Q_y = \bar E\,\tilde\psi_y(t,\theta)\tilde\psi_y^T(t,\theta) = \begin{bmatrix} Q_\rho & Q_{\rho\beta} \\ Q_{\beta\rho} & Q_\beta \end{bmatrix} $$

we get
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{\lambda_2}{N}\Delta_\rho^{-1}\bigl[Q_\rho - R_{\rho\beta}R_\beta^{-1}Q_{\beta\rho} - Q_{\rho\beta}R_\beta^{-1}R_{\beta\rho} + R_{\rho\beta}R_\beta^{-1}Q_\beta R_\beta^{-1}R_{\beta\rho}\bigr]\Delta_\rho^{-1} $$
A special case is when the noise models are correct, i.e., when $H_1(q) = -F_y(q)S_0(q)H_0(q)$ and $H_2(q) = S_0(q)H_0(q)$. Then $Q_y = R_y$, so that
$$ \mathrm{Cov}\,\hat\theta_N \approx \frac{\lambda_2}{N}R_y^{-1} $$
and
$$ \mathrm{Cov}\,\hat\rho_N \approx \frac{\lambda_2}{N}\Delta_\rho^{-1} = \frac{\lambda_2}{N}\bigl[R_\rho - R_{\rho\beta}R_\beta^{-1}R_{\beta\rho}\bigr]^{-1} $$
We conclude that $\frac{\lambda_2}{N}R_\rho^{-1}$ is a lower bound on $\mathrm{Cov}\,\hat\rho_N$. In the frequency domain $R_\rho$ can be written (given that $H_2(q) = S_0(q)H_0(q)$)
$$ R_\rho = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\Phi_r(\omega)}{|H_0|^2}\,G'_\rho(e^{i\omega},\rho_0)G'_\rho(e^{i\omega},\rho_0)^*\,d\omega $$
This is the (inverse of the) covariance matrix that would have resulted had we identified the system in open loop, i.e., with $u \equiv r$. Thus it follows that the two-stage method gives worse accuracy than identification in open loop. The relation to the previous results for the closed-loop situation is not obvious from the above expressions.

6 Summarizing remarks

We may summarize the basic issues in closed-loop identification as follows:

- The basic problem with closed-loop data is that it typically contains less information about the open-loop system: an important purpose of feedback is to make the closed-loop system insensitive to changes in the open-loop system.
- Prediction error methods, applied in a direct fashion, with a noise model that can describe the true noise properties, still give consistent estimates and optimal accuracy. No knowledge of the feedback is required. This should be regarded as the prime choice of methods.
- Several methods that give consistent estimates for open-loop data may fail when applied in a direct way to closed-loop identification. This includes spectral and correlation analysis, the instrumental variable method, the subspace methods, and output error methods with incorrect noise models.
- If the regulator mechanism is correctly known, indirect identification can be applied. Its basic advantage is that the dynamics model G can be correctly estimated even without estimating any noise model. A drawback is that indirect methods typically give worse accuracy than the direct method.

- Joint input-output methods can be applied whenever the reference signal is measurable; knowledge of the regulator is not necessary. The joint input-output methods provide consistent estimates of G even with fixed prefilters/noise models, just as the indirect methods, but give worse accuracy than the direct method.

Acknowledgments

The authors wish to thank Dr. Paul Van den Hof for inspiring discussions and his contributions in the derivation of the asymptotic variance results for the two-stage method.

A Proof of (38)

Let
$$ A(q) = 1 + a_1q^{-1} + \dots + a_{n_a}q^{-n_a} $$
$$ B(q) = b_1q^{-1} + \dots + b_{n_b}q^{-n_b} $$
$$ C(q) = 1 + c_1q^{-1} + \dots + c_{n_c}q^{-n_c} $$
and similarly for the closed-loop polynomials. We can, without loss of generality, take the regulator polynomials to be
$$ X(q) = x_0 + x_1q^{-1} + \dots + x_{n_x}q^{-n_x} $$
$$ Y(q) = 1 + y_1q^{-1} + \dots + y_{n_y}q^{-n_y} $$
It follows that $\Gamma$ in (36) can be written (the partitioning refers to the parameters in the $A$, $B$ and $C$ polynomials)
$$ \Gamma = \begin{bmatrix} \Gamma_Y & \Gamma_X & 0 \\ 0 & \Gamma_Y & 0 \\ 0 & 0 & \Gamma_Y \end{bmatrix} $$
where $\Gamma_Y$ and $\Gamma_X$ are the lower triangular banded Toeplitz matrices
$$ \Gamma_Y = \begin{bmatrix} 1 & & \\ y_1 & \ddots & \\ \vdots & \ddots & 1 \\ & & y_1 \\ & & \vdots \end{bmatrix}, \qquad \Gamma_X = \begin{bmatrix} x_0 & & \\ x_1 & \ddots & \\ \vdots & \ddots & x_0 \\ & & x_1 \\ & & \vdots \end{bmatrix} $$
while
$$ \zeta = \begin{bmatrix} a_{cl,1} - y_1 \\ a_{cl,2} - y_2 \\ \vdots \\ b_{cl,1} \\ b_{cl,2} \\ \vdots \\ c_{cl,1} - y_1 \\ c_{cl,2} - y_2 \\ \vdots \end{bmatrix} $$
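The structure of $\Gamma$ and $\zeta$ can be checked numerically on a small example. The orders below ($n_a = n_b = n_c = 2$, $n_x = n_y = 1$) and coefficient values are arbitrary illustrative choices; the closed-loop polynomials are formed by coefficient convolution according to (35):

```python
import numpy as np

# Illustrative orders and coefficients: na = nb = nc = 2, nx = ny = 1.
a = np.array([0.5, 0.1])          # A = 1 + a1 q^-1 + a2 q^-2
b = np.array([1.2, -0.4])         # B = b1 q^-1 + b2 q^-2
c = np.array([0.3, 0.2])          # C = 1 + c1 q^-1 + c2 q^-2
x = np.array([0.7, 0.25])         # X = x0 + x1 q^-1
y = np.array([0.6])               # Y = 1 + y1 q^-1

A = np.concatenate(([1.0], a))
B = np.concatenate(([0.0], b))    # B starts at q^-1
C = np.concatenate(([1.0], c))
Y = np.concatenate(([1.0], y))

# Closed-loop polynomials according to (35): Acl = AY + BX, Bcl = BY, Ccl = CY.
Acl = np.convolve(A, Y) + np.convolve(B, x)
Bcl = np.convolve(B, Y)
Ccl = np.convolve(C, Y)

def toeplitz_cols(col, ncols):
    """Lower triangular banded Toeplitz matrix with 'col' down each column."""
    nrows = len(col) + ncols - 1
    M = np.zeros((nrows, ncols))
    for j in range(ncols):
        M[j:j + len(col), j] = col
    return M

GY = toeplitz_cols(Y, 2)          # acts on the 2 coefficients of A (or B, C)
GX = toeplitz_cols(x, 2)
Z = np.zeros_like(GY)

Gamma = np.block([[GY, GX, Z], [Z, GY, Z], [Z, Z, GY]])
theta = np.concatenate([a, b, c])

ypad = np.concatenate([y, [0.0, 0.0]])            # y1, with y2 = y3 = 0
zeta = np.concatenate([Acl[1:] - ypad, Bcl[1:], Ccl[1:] - ypad])

print(np.allclose(Gamma @ theta, zeta))
```

The check confirms that the linear system $\Gamma\theta = \zeta$ reproduces the polynomial relations (35) coefficient by coefficient.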

We have (as in (37))
$$ \mathrm{Cov}\,\hat\theta = [\Gamma^T(\mathrm{Cov}\,\zeta)^{-1}\Gamma]^{-1} $$
However,
$$ \mathrm{Cov}\,\zeta \approx \frac{\lambda_0}{N}\bigl[\bar E\,\psi_{cl}(t,\xi_0)\psi_{cl}(t,\xi_0)^T\bigr]^{-1} $$
where $\psi_{cl}$ is the negative gradient of
$$ \varepsilon_{cl}(t,\xi) = \frac{A_{cl}(q)}{C_{cl}(q)}\Bigl(y(t) - \frac{B_{cl}(q)}{A_{cl}(q)}r(t)\Bigr) $$
which, when evaluated at $\xi = \xi_0$, becomes
$$ \psi_{cl}(t,\xi_0) = \frac{1}{C_{cl,0}(q)}\bigl[-y(t-1),\dots,-y(t-n_{a_{cl}}),\ r(t-1),\dots,r(t-n_{b_{cl}}),\ e(t-1),\dots,e(t-n_{c_{cl}})\bigr]^T \quad (43) $$
Thus
$$ \mathrm{Cov}\,\hat\theta \approx \frac{\lambda_0}{N}\bigl[\bar E\,\Gamma^T\psi_{cl}(t,\xi_0)\psi_{cl}(t,\xi_0)^T\Gamma\bigr]^{-1} $$
But since
$$ \frac{Y(q)}{C_{cl,0}(q)} = \frac{1}{C_0(q)} \quad\text{and}\quad \frac{X(q)y(t) - Y(q)r(t)}{C_{cl,0}(q)} = -\frac{1}{C_0(q)}u(t) $$
we have
$$ \Gamma^T\psi_{cl}(t,\xi_0) = \frac{1}{C_0(q)}\bigl[-y(t-1),\dots,-y(t-n_a),\ u(t-1),\dots,u(t-n_b),\ e(t-1),\dots,e(t-n_c)\bigr]^T \quad (44) $$
It follows that
$$ \Gamma^T\psi_{cl}(t,\xi_0) = -\frac{d}{d\theta}\Bigl[\frac{A(q)}{C(q)}\Bigl(y(t) - \frac{B(q)}{A(q)}u(t)\Bigr)\Bigr]\Bigr|_{\theta=\theta_0} = \psi(t,\theta_0) $$
Hence
$$ \mathrm{Cov}\,\hat\theta \approx \frac{\lambda_0}{N}\bigl[\bar E\,\psi(t,\theta_0)\psi(t,\theta_0)^T\bigr]^{-1} $$
which is (38). $\Box$

B Proof of (41)

Let
$$ B(q) = b_1q^{-1} + \dots + b_{n_b}q^{-n_b} $$
$$ F(q) = 1 + f_1q^{-1} + \dots + f_{n_f}q^{-n_f} $$
$$ C(q) = 1 + c_1q^{-1} + \dots + c_{n_c}q^{-n_c} $$
$$ D(q) = 1 + d_1q^{-1} + \dots + d_{n_d}q^{-n_d} $$

and similarly for the closed-loop polynomials, and let $X$ and $Y$ be as in Appendix A. $\tilde\Gamma$ and $\bar\zeta$ in (40) become
$$ \tilde\Gamma = \begin{bmatrix} \Gamma_Y & 0 \\ \Gamma_X & \Gamma_Y \end{bmatrix}, \qquad \bar\zeta = \begin{bmatrix} b_{cl,1} \\ b_{cl,2} \\ \vdots \\ f_{cl,1} - y_1 \\ f_{cl,2} - y_2 \\ \vdots \end{bmatrix} $$
Similarly to the derivation in Appendix A we then get
$$ \mathrm{Cov}\,\hat\rho = [\tilde\Gamma^T(\mathrm{Cov}\,\bar\zeta)^{-1}\tilde\Gamma]^{-1} $$
while
$$ \mathrm{Cov}\,\bar\zeta \approx \frac{\lambda_0}{N}\bigl[\bar E\,\psi_{cl,\bar\rho}(t,\xi_0)\psi_{cl,\bar\rho}(t,\xi_0)^T\bigr]^{-1} $$
Here $\psi_{cl,\bar\rho}$ is the negative gradient of the closed-loop prediction errors
$$ \varepsilon_{cl}(t,\xi) = \frac{1}{H_{cl}(q,\bar\eta)}\bigl(y(t) - G_{cl}(q,\bar\rho)r(t)\bigr) = \frac{1}{H_{cl}(q,\bar\eta)}\Bigl(y(t) - \frac{B_{cl}(q)}{F_{cl}(q)}r(t)\Bigr) $$
taken w.r.t. $\bar\rho$, giving
$$ \psi_{cl,\bar\rho}(t,\xi_0) = \frac{1}{H_{cl,0}(q)F_{cl,0}(q)}\Bigl[r(t-1),\dots,r(t-n_{b_{cl}}),\ -\frac{B_{cl,0}(q)}{F_{cl,0}(q)}\bigl(r(t-1),\dots,r(t-n_{f_{cl}})\bigr)\Bigr]^T $$
where $H_{cl,0} = S_0H_0$. Now, since
$$ \frac{F_{cl,0}(q)Y(q) - B_{cl,0}(q)X(q)}{F_{cl,0}^2(q)} = \frac{S_0^2(q)}{F_0(q)}, \qquad \frac{B_{cl,0}(q)Y(q)}{F_{cl,0}^2(q)} = \frac{S_0^2(q)}{F_0(q)}\cdot\frac{B_0(q)}{F_0(q)} $$
we get (after some calculations)
$$ \tilde\Gamma^T\psi_{cl,\bar\rho}(t,\xi_0) = \frac{S_0(q)}{H_0(q)F_0(q)}\Bigl[r(t-1),\dots,r(t-n_b),\ -\frac{B_0(q)}{F_0(q)}\bigl(r(t-1),\dots,r(t-n_f)\bigr)\Bigr]^T $$
so
$$ \tilde\Gamma^T\psi_{cl,\bar\rho}(t,\xi_0) = -\frac{S_0(q)}{H_0(q)}\cdot\frac{d}{d\rho}\Bigl(y(t) - \frac{B(q)}{F(q)}r(t)\Bigr)\Bigr|_{\rho=\rho_0} = \frac{S_0(q)}{H_0(q)}\cdot\frac{d}{d\rho}G(q,\rho)r(t)\Bigr|_{\rho=\rho_0} $$
and (41) follows immediately after switching to the frequency domain. $\Box$
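The key polynomial identity used above, $F_{cl,0}Y - B_{cl,0}X = F_0Y^2$, follows directly from (39) and can be confirmed numerically by coefficient convolution. The first-order polynomials and coefficient values below are arbitrary illustrative choices:

```python
import numpy as np

# First-order illustrative polynomials, stored as coefficient arrays in q^-1.
F0 = np.array([1.0, 0.4])        # F0 = 1 + 0.4 q^-1
B0 = np.array([0.0, 1.5])        # B0 = 1.5 q^-1
X  = np.array([0.7, 0.2])        # X  = x0 + x1 q^-1
Y  = np.array([1.0, 0.6])        # Y  = 1 + y1 q^-1

# Closed-loop polynomials from (39): Fcl = F0*Y + B0*X, Bcl = B0*Y.
Fcl = np.convolve(F0, Y) + np.convolve(B0, X)
Bcl = np.convolve(B0, Y)

# Identity used in Appendix B: Fcl*Y - Bcl*X = F0*Y^2.
lhs = np.convolve(Fcl, Y) - np.convolve(Bcl, X)
rhs = np.convolve(F0, np.convolve(Y, Y))
print(np.allclose(lhs, rhs))
```

Dividing both sides of the identity by $F_{cl,0}^2$ and using $S_0 = F_0Y/F_{cl,0}$ gives the quotient $S_0^2/F_0$ quoted in the proof.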

References

[1] M. Gevers. Towards a Joint Design of Identification and Control. In H. L. Trentelman and J. C. Willems, editors, Essays on Control: Perspectives in the Theory and its Applications, pages 111-151. Birkhäuser, 1993.
[2] M. Gevers, L. Ljung, and P. Van den Hof. Asymptotic variance expressions for closed-loop identification and their relevance in identification for control. 1997. To be presented at SYSID'97, Fukuoka, Japan.
[3] I. Gustavsson, L. Ljung, and T. Söderström. Identification of Processes in Closed Loop - Identifiability and Accuracy Aspects. Automatica, 13:59-75, 1977.
[4] F. R. Hansen. A fractional representation approach to closed-loop system identification and experiment design. PhD thesis, Stanford University, Stanford, CA, USA, 1989.
[5] F. R. Hansen, G. F. Franklin, and R. Kosut. Closed-loop identification via the fractional representation: experiment design. In Proceedings of the American Control Conference, pages 1422-1427, Pittsburgh, PA, 1989.
[6] W. S. Lee, B. D. O. Anderson, I. M. Y. Mareels, and R. L. Kosut. On some key issues in the windsurfer approach to adaptive robust control. Automatica, 31(11):1619-1636, 1995.
[7] L. Ljung. System Identification: Theory for the User. Prentice-Hall, 1987.
[8] T. McKelvey. Periodic excitation for identification of dynamic errors-in-variables systems operating in closed loop. In Proc. 13th IFAC World Congress, volume J, pages 155-160, San Francisco, CA, 1996.
[9] T. Söderström, L. Ljung, and I. Gustavsson. On the Accuracy of Identification and the Design of Identification Experiments. Technical Report 7428, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, 1974.
[10] T. Söderström, P. Stoica, and B. Friedlander. An Indirect Prediction Error Method for System Identification. Automatica, 27:183-188, 1991.
[11] P. M. J. Van den Hof and R. J. P. Schrama. An Indirect Method for Transfer Function Estimation from Closed Loop Data. Automatica, 29(6):1523-1527, 1993.
[12] P. M. J. Van den Hof and R. J. P. Schrama. Identification and Control - Closed-loop Issues. Automatica, 31(12):1751-1770, 1995.
[13] P. M. J. Van den Hof, R. J. P. Schrama, R. A. de Callafon, and O. H. Bosgra. Identification of normalized coprime factors from closed-loop experimental data. European Journal of Control, 1(1):62-74, 1995.
[14] M. Vidyasagar. Control System Synthesis: A Factorization Approach. MIT Press, Cambridge, Mass., 1985.
[15] Y.-C. Zhu. Black-box identification of MIMO transfer functions: asymptotic properties of prediction error models. Int. Journal of Adaptive Control and Signal Processing, 3:357-373, 1989.