
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 46, NO. 9, SEPTEMBER 1998 2413

Fast Recursive Subspace Adaptive ESPRIT Algorithms

Peter Strobach, Senior Member, IEEE

Abstract—A class of fast recursive ESPRIT algorithms for adaptive (on-line) source localization based on subspace tracking and adaptive rank reduction is introduced. These adaptive ESPRIT algorithms can be used for on-line tracking of r maneuvering sources in space using the output of an N-element sensor array, where N > 2r. The fastest of our algorithms requires only O(Nr) + O(r³) complex arithmetic operations to update the estimated directions-of-arrival (DOA's) at each time instant. These highly efficient algorithms are more than only a concatenation of a subspace tracker and a conventional batch ESPRIT algorithm. A special QR-reduction to standard form is the key to the fast recursive algorithms. Detailed computer experiments substantiate the theoretical results.

I. INTRODUCTION

ESPRIT [1]–[3] is a method for determining the directions-of-arrival (DOA's) of a set of narrowband signals impinging on an array of sensors. Ever since its inception, ESPRIT has attracted a great deal of attention. The attraction stems mainly from the fact that the technique does not require any knowledge of the array steering matrix to determine the desired DOA's. Ouibrahim [4], [5] and Hua [6] observed that ESPRIT can be conveniently formulated as a complex generalized eigenvalue problem, where the DOA's appear as the angles of the complex generalized eigenvalues (GEV's) of a rectangular or square matrix pencil. Hence, ESPRIT is a root technique by definition.

In this paper, we develop a class of fast recursive ESPRIT algorithms that update the DOA information at each time instant, depending on the new (incoming) data snapshot vector from a sensor array. These fully adaptive algorithms can be used for on-line tracking of emitting objects maneuvering in space.

A key to the development of computationally efficient and reliable recursive ESPRIT algorithms is a clever reduction of the ESPRIT subspace pencil to the form of a standard complex eigenvalue problem. We use a special QR-reduction that meets the requirements of the recursive concept. A total least squares (TLS) smoothing can also be incorporated in this QR-reduction. We observed, however, that TLS complicates the recursive algorithms and even decreases the overall performance when the overmodeling factor is high. On the other hand, the algorithms based on a plain QR-reduction are extremely fast, well-structured, reliable, and unconditionally stable. The

Manuscript received August 7, 1997; revised March 6, 1998. The associate editor coordinating the review of this paper and approving it for publication was Prof. Barry D. Van Veen.

The author is with Fachhochschule Furtwangen, Röhrnbach, Germany. Publisher Item Identifier S 1053-587X(98)06113-3.

fastest of these algorithms requires only O(Nr) + O(r³) complex arithmetic operations at each time step. Such methods are well-suited for highly overdetermined problems, where the number of sensors N is much larger than the number of sources r. Our algorithms of this class also provide an adaptive (on-line) determination of the number of sources (ongoing and vanishing sources) automatically, without interrupting the fast recursions. This greatly facilitates a real-time implementation of the described algorithms.

This paper is organized as follows. In Section II, we review the basic relationships of the ESPRIT subspace pencil and the low-rank ESPRIT GEV problem. We then develop the necessary relationships required for QR-reduction of the low-rank ESPRIT GEV problem to standard form. On this basis, the fast recursive ESPRIT algorithms are developed in Section III. At each instant of time, these algorithms produce a matrix of dimension r × r with complex elements. The desired DOA's arise as the angles of the complex eigenvalues of this matrix. These complex eigenvalues are all located on the unit circle by definition. Moreover, the complex eigenvalues of these matrices in subsequent time steps are just slightly rotated versions of each other. Thus, we have here a special problem of eigenvalue tracking. A detailed discussion of this problem and a class of algorithms that accomplishes a computationally efficient tracking of complex eigenvalues on the unit circle is available in [25]. A quasicode listing of the complex eigenvalue tracker that we used in this paper is provided in the Appendix. Section IV presents some instructive computer experiments that lend support to the theoretical findings. Section V summarizes the results.

II. THE ESPRIT SUBSPACE PENCIL AND ITS REDUCTION TO STANDARD FORM

In this section, we review the data model underlying the ESPRIT method and the basic relationships leading to the ESPRIT subspace pencil and the ESPRIT GEV problem. We then develop the necessary relationships for a QR-reduction of the ESPRIT GEV problem to standard form. We show how a TLS smoothing can be incorporated in this concept.

A. The ESPRIT Data Model and the ESPRIT Generalized Eigenproblem

A basic assumption in the ESPRIT method is that the sensors of an array can be grouped in two subarrays of equal size so that the subarrays are simply geometrically displaced versions of each other. Let x(t) and y(t)



TABLE I
FAST O(Nr²) ALGORITHM FOR COMPUTING THE DOA-REVEALING MATRIX W(t)

Initialize: Q_x(t=0) = [I_r; 0];  R_x(t=0) = I_r;  H_w(t=0) = 0

for t = 1, 2, 3, ... (for each time step) do
  Input from subspace tracker:  Q(t-1), Θ(t), z̄⊥(t), f(t)
  Partitions:  Q(t-1) = [V_x(t-1); V_y(t-1)],  z̄⊥(t) = [z_x(t); z_y(t)]                      (26), (29)
  h_x(t) = Q_x^H(t-1) z_x(t)                                                                  (32)
  z_x⊥(t) = z_x(t) - Q_x(t-1) h_x(t)                                                          (33)
  z̄_x⊥(t) = z_x⊥(t) / ||z_x⊥(t)||₂                                                           (34)
  [R_x(t); 0 ⋯ 0] = G_x(t) [R_x(t-1) Θ(t) + h_x(t) f^H(t); ||z_x⊥(t)||₂ f^H(t)]               (37a)
  G_x^H(t) = [Θ_x(t), Q_x^H(t-1) q_x(t); f_x^H(t), z̄_x⊥(t)^H q_x(t)]  →  Θ_x(t), f_x(t)       (38)
  Q_x(t) = Q_x(t-1) Θ_x(t) + z̄_x⊥(t) f_x^H(t)                                                 (39)
  h_vx(t) = V_y^H(t-1) z̄_x⊥(t)                                                                (42a)
  h_qy(t) = Q_x^H(t-1) z_y(t)                                                                  (42b)
  γ_xy(t) = z̄_x⊥(t)^H z_y(t)                                                                  (42c)
  H_w(t) = [Θ_x^H(t) H_w(t-1) + f_x(t) h_vx^H(t)] Θ(t) + [Θ_x^H(t) h_qy(t) + γ_xy(t) f_x(t)] f^H(t)   (43)
  W(t) R_x(t) = H_w(t)  →  back substitution  →  W(t)                                          (45)

denote the snapshot vectors obtained from the two subarrays at time instant t. Further, assume that a number of r independent narrowband signals plus statistically independent white noise impinge on the sensor array. In this case, it can be shown that the snapshot vectors satisfy the data model

x(t) = A s(t) + n_x(t)    (1a)
y(t) = A Φ s(t) + n_y(t)    (1b)

where A is the array steering matrix, s(t) is the signal vector, and n_x(t), n_y(t) represent the statistically independent noise components. Φ is a diagonal matrix of unknown phase delays

Φ = diag{exp(jφ_1), exp(jφ_2), ..., exp(jφ_r)}.    (2)

The goal is the estimation of the unknown phase parameters φ_1, ..., φ_r from a sufficient set of observed snapshot vectors. The desired angles of arrival of the r independent signals can be computed directly from the estimated phase parameters only if the signal frequencies are known and fixed [1]–[3].
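For illustration, the following numpy sketch generates snapshot pairs according to (1a) and (1b). The subarray size, source count, phase values, and the random steering matrix are purely illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
m, r = 8, 3                                  # sensors per subarray, number of sources (assumed)
phi = np.array([0.4, 1.1, 2.0])              # phase delays to be estimated (assumed)
A = rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r))   # unknown steering matrix
Phi = np.diag(np.exp(1j * phi))              # diagonal phase-delay matrix, eq. (2)

def snapshot(sigma=0.1):
    """One snapshot pair (x, y) following the data model (1a), (1b)."""
    s = rng.standard_normal(r) + 1j * rng.standard_normal(r)                  # signal vector
    nx = sigma * (rng.standard_normal(m) + 1j * rng.standard_normal(m))       # subarray-1 noise
    ny = sigma * (rng.standard_normal(m) + 1j * rng.standard_normal(m))       # subarray-2 noise
    x = A @ s + nx                           # first subarray
    y = A @ Phi @ s + ny                     # second subarray: displaced version of the first
    return x, y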

Combine the snapshot vectors into a data vector z(t) as

z(t) = [x(t); y(t)].    (3)

Further, define

Γ = [A; A Φ].    (4)

Assuming noise-free data for the moment, the following observations can be made:

1) There exists a matrix V whose column vectors span a subspace in which all data vectors z(t) can be represented perfectly. V can be estimated directly from a sufficient set of data vectors using appropriate subspace estimation techniques.

2) The columns of V and Γ span the same subspace. Hence, Γ can be posed as a rotated version of V as

Γ = V T    (5)

where T is a subspace rotor.

3) Divide V into the "split subspace" matrices V_x and V_y to find that

A = V_x T    (6a)
A Φ = V_y T.    (6b)

Most importantly, the array steering matrix A cancels out if we substitute (6a) into (6b) to obtain

V_x T Φ = V_y T.    (7)

It becomes evident that (7) can be interpreted as a generalized eigenproblem. The phase delay matrix Φ is obtained as the matrix of generalized eigenvalues of the unsymmetric matrix pencil formed by the split subspace matrices V_x and V_y. This is the basic result of ESPRIT.

B. QR-Reduction to Standard Form

To aid in practical computations, it is necessary that (7) be reduced to the form of a standard complex eigenvalue problem. A basic discussion about GEV reductions to standard form is given in [7]. More recent results are presented in [8]. Detailed theoretical investigations have revealed that the following QR-reduction to standard form provides a convenient basis for the development of fast recursive ESPRIT algorithms.

Premultiply both sides of (7) by the pseudoinverse (V_x^H V_x)^{-1} V_x^H to obtain

T Φ = (V_x^H V_x)^{-1} V_x^H V_y T.    (8)

The subspace rotor T is nonsingular by construction. Then, it follows immediately that Φ = T^{-1} (V_x^H V_x)^{-1} V_x^H V_y T; that is, the phase delays appear as the eigenvalues of (V_x^H V_x)^{-1} V_x^H V_y. Define V_x = Q_x R_x, where Q_x is a matrix with orthonormal columns, and R_x is an upper-right triangular matrix. Therefore, (V_x^H V_x)^{-1} V_x^H = R_x^{-1} Q_x^H.

Define a matrix W as

W = Q_x^H V_y R_x^{-1}.    (9)

Verify that R_x^{-1} Q_x^H V_y = R_x^{-1} W R_x, which is similar to W. Substitute V_x = Q_x R_x into (7) and apply (9) to obtain

R_x T Φ = Q_x^H V_y T = W R_x T.    (10)

Here, (7) is reduced into a standard eigenproblem of a complex r × r matrix W as

W (R_x T) = (R_x T) Φ    (11a)
W = Q_x^H V_y R_x^{-1}.    (11b)

The angles of the complex eigenvalues of W are the desired phase parameters (2). Relations (11a) and (11b), together with the QR-factorization V_x = Q_x R_x, constitute a basis for the fast recursive ESPRIT algorithms of the following section.
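A minimal numpy sketch of this batch route (27), (28), and (11a)/(11b) is given below: QR-factorize the split-subspace matrix V_x, solve W R_x = Q_x^H V_y by a triangular solve, and read the phase estimates off the eigenvalue angles of W. The function name and the way the subspace estimate is supplied are assumptions for illustration only.

import numpy as np
from scipy.linalg import solve_triangular

def doa_from_subspace(V):
    """Batch QR-reduction to standard form.

    V : (N, r) estimated signal-subspace basis; its top and bottom halves
        are taken as the split-subspace matrices V_x and V_y.
    Returns the estimated phase parameters (angles of the eigenvalues of W).
    """
    m = V.shape[0] // 2
    Vx, Vy = V[:m], V[m:]
    Qx, Rx = np.linalg.qr(Vx)                       # V_x = Q_x R_x
    Hw = Qx.conj().T @ Vy                           # matrix inner product Q_x^H V_y
    # Solve W R_x = H_w, i.e. R_x^H W^H = H_w^H (R_x^H is lower triangular).
    W = solve_triangular(Rx.conj().T, Hw.conj().T, lower=True).conj().T
    eigvals = np.linalg.eigvals(W)                  # complex eigenvalues, ideally on the unit circle
    return np.angle(eigvals)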


TABLE II
FAST O(Nr) ALGORITHM FOR COMPUTING THE DOA-REVEALING MATRIX W(t)

Initialize: Q_x(t=0) = [I_r; 0];  R_x(t=0) = I_r;  H_w(t=0) = 0

for t = 1, 2, 3, ... (for each time step) do
  Input from subspace tracker:  Q(t-1), G(t), z̄⊥(t)
  Partitions:  Q(t-1) = [V_x(t-1); V_y(t-1)],  z̄⊥(t) = [z_x(t); z_y(t)]                      (26), (29)
  h_x(t) = Q_x^H(t-1) z_x(t)                                                                  (32)
  z_x⊥(t) = z_x(t) - Q_x(t-1) h_x(t)                                                          (33)
  z̄_x⊥(t) = z_x⊥(t) / ||z_x⊥(t)||₂                                                           (34)
  [R_x(t), r_x(t); 0 ⋯ 0, ρ_x(t)] = G_x(t) [R_x(t-1), h_x(t); 0 ⋯ 0, ||z_x⊥(t)||₂] G^H(t)     (49a)
  [Q_x(t), q_x(t)] = [Q_x(t-1), z̄_x⊥(t)] G_x^H(t)                                             (49b)
  h_vx(t) = V_y^H(t-1) z̄_x⊥(t)                                                                (42a)
  h_qy(t) = Q_x^H(t-1) z_y(t)                                                                  (42b)
  γ_xy(t) = z̄_x⊥(t)^H z_y(t)                                                                  (42c)
  [H_w(t), Q_x^H(t) v_y(t); q_x^H(t) V_y(t), q_x^H(t) v_y(t)] = G_x(t) [H_w(t-1), h_qy(t); h_vx^H(t), γ_xy(t)] G^H(t)   (51)
  W(t) R_x(t) = H_w(t)  →  back substitution  →  W(t)                                          (45)

C. TLS Smoothing

The QR-reduction to standard form, as described above, is also amenable to an additional subspace smoothing using a total least squares (TLS) concept [9]. The following basic facts are exploited in the TLS approach:

1) In the noise-free data case, the columns of V_x and V_y span the same subspace.

2) In reality, both V_x and V_y are equally noisy.

We introduce the SVD of the column-concatenated matrix [V_x  V_y] as

[V_x  V_y] = [U_s  U_n] Σ P^H    (12)

where U_s holds the r dominant left singular vectors. The columns of the submatrix U_s span a subspace in which the data components of both V_x and V_y can be represented. U_n represents the noise. Hence, "noise-reduced" split subspace matrices V̂_x and V̂_y are obtained via orthogonal projection of V_x and V_y onto the subspace spanned by U_s as

V̂_x = U_s U_s^H V_x    (13a)
V̂_y = U_s U_s^H V_y.    (13b)

The ESPRIT GEV problem (7) can now be expressed in terms of the noise-reduced split subspace matrices

V̂_x T Φ = V̂_y T    (14)

or equivalently

(U_s^H V_x) T Φ = (U_s^H V_y) T    (15)

which constitutes a modified GEV problem that can be reduced to standard form using the QR-reduction in the same way as described in the previous subsection.
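A minimal numpy sketch of this batch TLS smoothing, assuming the split-subspace estimates V_x, V_y and the source count r are given, is shown below; the function name is illustrative.

import numpy as np

def tls_smooth_split_subspaces(Vx, Vy, r):
    """TLS smoothing in the spirit of (12)-(13): project both split-subspace
    matrices onto the r dominant left singular vectors of [V_x  V_y]."""
    U, s, Ph = np.linalg.svd(np.hstack([Vx, Vy]), full_matrices=False)
    Us = U[:, :r]                            # dominant left singular vectors, "signal" subspace
    Vx_hat = Us @ (Us.conj().T @ Vx)         # noise-reduced split subspace, eq. (13a)
    Vy_hat = Us @ (Us.conj().T @ Vy)         # noise-reduced split subspace, eq. (13b)
    return Vx_hat, Vy_hat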

III. FAST RECURSIVE ESPRIT ALGORITHMS

In this section, we develop the necessary recursions for a class of fast recursive ESPRIT algorithms. A basic processing step in these algorithms is the estimation of the subspace basis vector matrix at each time step. For this purpose, we employ concepts of fast subspace tracking and review the basic updating structures used in fast subspace tracking algorithms. On the basis of the QR-reduction to standard form, we develop a class of highly efficient and stable recursions for updating the matrix W(t) that carries the desired phase parameters in its eigenvalues. We further discuss concepts of rank adaptivity to handle a temporally variable number of active sources. These concepts are also closely related to the structural properties of the underlying subspace trackers. Finally, we show how the TLS smoothing can be integrated in the time recursive algorithms.

A. Principles of Fast Subspace Tracking

In this section, we review the most important structural elements that are common to the recently introduced fast subspace tracking algorithms [8], [10]–[17]. These structural elements of fast subspace trackers are then exploited in the development of the fast ESPRIT algorithms. The subspace trackers in [8] and [13]–[17] were originally introduced for the real data case. This does not cause any difficulty here because the algorithms are readily extended to the complex data case by replacing real multiplications and additions by complex multiplications and additions, transposition by Hermitian transposition, and real Givens plane rotations by complex Givens plane rotations. No structural changes need to be made. The following review assumes complex data only.

A basic processing step used in all fast subspace trackers is the so-called "initial data compaction" step. Let Q(t-1) be an estimate of the "old" (one time step delayed) basis matrix V(t-1). Further, suppose that the data characteristics or the source localizations change smoothly with time. Then, it is justified to apply Q^H(t-1) as a "data compressor" on the actual input data vector z(t) as

h(t) = Q^H(t-1) z(t).    (16)

All the signal-relevant information in z(t) is ideally mapped into the usually much shorter vector h(t), provided only that the number of independently active sources is less than or


Fig. 1. Reduction of a rank-one updated triangular matrix to triangular form using two sequences of Givens plane rotations G_1(t) and G_2(t). Example of r = 5.

equal the rank or the subspace dimension parameter r. Basis matrix updating is then carried out in the signal-relevant subspace of dimension r. This requires a quantification of the innovation in z(t). In fast subspace tracking algorithms, this innovation is expressed in terms of the complement z⊥(t) of the orthogonal projection of z(t) onto the "old" subspace spanned by Q(t-1). This complement vector is, hence, defined as

z⊥(t) = z(t) - Q(t-1) h(t).    (17)

The estimate Q(t) of the actual basis matrix V(t) is then updated via the subspace rotation

[Q(t)  q(t)] = [Q(t-1)  z̄⊥(t)] G^H(t)    (18)

where z̄⊥(t) = z⊥(t)/||z⊥(t)||₂ is the normalized complement vector, and G(t) is a subspace rotor. Note from (18) that the subspace rotor may be posed in a partitioned form as

G^H(t) = [Θ(t)  ·; f^H(t)  ·]    (19)

where "·" stands for uninteresting quantities. Now, it is seen that the subspace update (18) can be expressed equivalently as

Q(t) = Q(t-1) Θ(t) + z̄⊥(t) f^H(t).    (20)
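The following sketch shows one such generic basis update, (16)–(20), under the assumption that the subspace tracker has already delivered the rotor partitions Θ(t) and f(t); the function and variable names are illustrative.

import numpy as np

def subspace_update_step(Q_old, z, Theta, f):
    """One generic basis update in the spirit of (16)-(20).

    Q_old : (N, r) old orthonormal basis estimate
    z     : (N,) new data snapshot
    Theta : (r, r) and f : (r,) rotor partitions supplied by the tracker (assumed given)
    """
    h = Q_old.conj().T @ z                         # data compaction, eq. (16)
    z_perp = z - Q_old @ h                         # orthogonal complement, eq. (17)
    norm = np.linalg.norm(z_perp)
    z_bar = z_perp / norm if norm > 0 else z_perp  # normalized complement vector
    Q_new = Q_old @ Theta + np.outer(z_bar, f.conj())   # rotational update, eq. (20)
    return Q_new, h, z_bar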

The various algorithms are mainly different in the way the subspace rotor G(t) is determined. In all cases, G(t) represents basically a sequence of unitary plane rotations that triangularize a "full" matrix X(t) as

G(t) X(t) = [R(t); 0 ⋯ 0]    (21)

where R(t) is usually defined as an upper-right triangular matrix. The structure of X(t) depends on the particular type of algorithm chosen. For instance, the subspace tracker in the low-rank adaptive filter LORAF 2 [13, Table 2, p. 2932] and the subspace trackers ST 2 [16, Table 2, p. 78], FGSET 1 [8, Table 2], and RIVST 2 [17, Table 2] use an X(t) of the form (22), in which the exponentially weighted old triangular factor αR(t-1) appears rotated by the Θ-factor of the preceding update and augmented by a rank-one term built from the compressed data vector h(t), where α is a positive exponential forgetting factor less than 1. As these algorithms are based on eigenvalue decomposition (EVD) and orthogonal iteration concepts, the diagonal elements of R(t) converge to the dominant eigenvalues of the underlying signal covariance matrix.

In the case of SVD subspace tracking, we find that the associated X(t) has practically the same structure. See, for instance, the algorithm Bi-SVD 1 [14, Table 2, p. 1225] and the TQR-SVD subspace tracker (reviewed in [14, Table 5, p. 1228]). An important difference in SVD subspace tracking is that the diagonal elements of R(t) converge toward the dominant singular values of the underlying data matrix, and hence, the triangular matrix elements cover a much smaller dynamic range. The SVD subspace trackers can be interpreted as square-root forms of the eigenvalue based (EVD) subspace trackers. In both cases, however, an X(t), as shown in (22), represents a "full" matrix and, hence, requires a full set of O(r²) Givens plane rotations in the reduction to triangular form (21). The algorithms of this category have a principal complexity of O(Nr²) complex arithmetic operations per time update.

A simple premultiplication of (20) by Q^H(t-1) reveals that Θ(t) = Q^H(t-1) Q(t) can be interpreted as a matrix of cosines of angles between the basis vectors of subsequent subspaces. The angles between the associated basis vectors in two successive subspaces are usually very small in practice, where we use exponential decay factors close to a value of 1. Therefore, it is possible to replace the Θ-factor in (22) by the identity matrix with little or no performance loss. This yields a class of very fast subspace trackers with an X(t) of the form (23), namely, the exponentially weighted old triangular factor αR(t-1) plus the same rank-one term.


Fig. 2. Tracking result for the O(Nr²) recursive ESPRIT algorithm with Bi-SVD 1 subspace tracker. Dashed lines indicate true phase parameters. Solid lines indicate estimated phase parameters. SNR = 5.7 dB.

Clearly, this X(t) has a structure of the "triangular plus rank one" type. This simplifies the reduction step significantly because in this case, a reduction to triangular form is possible using a sequence of only O(r) unitary Givens plane rotations. The overall rotor sequence consists of the two subsequences G_1(t) and G_2(t). Fig. 1 illustrates the structure of the subsequences and how they are applied to X(t), where the symbol × denotes a nonzero matrix element. Refer to [13] for details.

All elementary rotations used in this reduction are of the "annihilate bottom component by complex circular plane rotation" type. Each of these elementary plane rotations is, hence, defined as

[ c   s* ] [ a ]   [ a' ]
[ -s  c  ] [ b ] = [ 0  ]    (24)

where a and b are complex numbers. In the rotor, c is a real, and s is a complex variable. s* denotes the complex conjugate of s. The bottom component is rotated to zero if the rotor variables are determined as

ρ = (|a|² + |b|²)^{1/2}    (25a)
c = |a| / ρ    (25b)
s = c b / a.    (25c)
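A sketch of this elementary rotation is given below. It follows one consistent choice of the rotor variables in the spirit of (24)–(25) (real c, complex s, bottom component annihilated) and is not necessarily the exact parameterization used in [13].

import numpy as np

def givens_annihilate_bottom(a, b):
    """Return (c, s) with real c, complex s, c**2 + |s|**2 = 1, such that
    [[c, conj(s)], [-s, c]] @ [a, b] has a zero bottom component."""
    if b == 0:
        return 1.0, 0.0 + 0.0j
    if a == 0:
        return 0.0, b / abs(b)                 # degenerate case: pure swap-type rotation
    rho = np.hypot(abs(a), abs(b))
    c = abs(a) / rho
    s = c * b / a
    return c, s

# quick self-check with arbitrary complex values
a, b = 1.0 + 2.0j, -0.5 + 0.3j
c, s = givens_annihilate_bottom(a, b)
G = np.array([[c, np.conj(s)], [-s, c]])
out = G @ np.array([a, b])
assert abs(out[1]) < 1e-12                     # bottom component rotated to zero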

The subspace tracking algorithms of this class have a principal complexity of O(Nr) complex arithmetic operations per time update. The following algorithms belong to this category:

• LORAF 3 [13, Table 3, p. 2939];
• ST 3 [16, Table 3, p. 80];
• UFGSET [8, Table 4];
• RIVST 3 [17, Table 3];
• Bi-SVD 3 [14, Table 3, p. 1226].

B. Recursive ESPRIT Algorithms with O(Nr²) Complexity

Fast ESPRIT time recursions are next developed on the basis of the QR-reduction to standard form and the recursions of the subspace trackers of the O(Nr²) category. For this purpose, recall that Q(t) represents the actual estimate of the basis matrix V(t). Hence, the split subspace basis matrices V_x(t) and V_y(t) are directly obtained from the partition

Q(t) = [V_x(t); V_y(t)].    (26)

Now, we could directly compute the QR-factorization of V_x(t)

V_x(t) = Q_x(t) R_x(t):  QR-factorization    (27)

and, according to (11b), compute the DOA-revealing matrix as

W(t) = Q_x^H(t) V_y(t) R_x^{-1}(t).    (28)


Fig. 3. Estimated singular values and adaptive threshold for the tracking experiment shown in Fig. 2.

This method is redundant in complexity as it requires O(Nr²) computations in the explicit QR-factorization of V_x(t) and another O(Nr²) operations in the computation of the matrix inner product Q_x^H(t) V_y(t).

We therefore develop a much faster algorithmic realization of (28). This fast algorithm is based on a direct time updating of the QR-factors of V_x(t) and exploits the fact that the matrix inner product Q_x^H(t) V_y(t) can also be updated directly in time.

An initial step in the derivation of this fast recursive ESPRIT algorithm is the segmentation of z̄⊥(t) into subvectors z_x(t) and z_y(t) of equal length

z̄⊥(t) = [z_x(t); z_y(t)].    (29)

These subvectors can be used together with the subspace update (20) to establish the time-update recursions for the split-subspace basis matrices as

V_x(t) = V_x(t-1) Θ(t) + z_x(t) f^H(t)    (30a)
V_y(t) = V_y(t-1) Θ(t) + z_y(t) f^H(t).    (30b)

The goal now is the derivation of direct time-updating recursions for the QR-factors of V_x(t). Substitute the QR-factors into (30a) to obtain

Q_x(t) R_x(t) = Q_x(t-1) R_x(t-1) Θ(t) + z_x(t) f^H(t).    (31)

Define

h_x(t) = Q_x^H(t-1) z_x(t)    (32)
z_x⊥(t) = z_x(t) - Q_x(t-1) h_x(t)    (33)
z̄_x⊥(t) = z_x⊥(t) / ||z_x⊥(t)||₂.    (34)

The subvector z_x(t) can now be expressed in terms of its "in-space" component that represents the portion of z_x(t) that can be represented in the "old" split subspace spanned by Q_x(t-1) and a normalized orthogonal innovation as

z_x(t) = Q_x(t-1) h_x(t) + z̄_x⊥(t) ||z_x⊥(t)||₂.    (35)

Substitute this representation of z_x(t) into (31), and verify that hereby, the matrix product Q_x(t) R_x(t) can be expressed in terms of its augmented predecessor factors as

Q_x(t) R_x(t) = [Q_x(t-1)  z̄_x⊥(t)] [R_x(t-1) Θ(t) + h_x(t) f^H(t); ||z_x⊥(t)||₂ f^H(t)].    (36)

Introduce a complex multiple Givens plane rotation matrix G_x(t) that transforms the augmented and updated "old" triangular matrix in (36) into a "new" triangular matrix R_x(t).


Fig. 4. Five independent trial runs of the tracking experiment shown in Fig. 2 plotted in one diagram.

Simultaneously, apply the same rotor sequence to the augmented old basis matrix in (36) to obtain

[R_x(t); 0 ⋯ 0] = G_x(t) [R_x(t-1) Θ(t) + h_x(t) f^H(t); ||z_x⊥(t)||₂ f^H(t)]    (37a)
[Q_x(t)  q_x(t)] = [Q_x(t-1)  z̄_x⊥(t)] G_x^H(t).    (37b)

The underlying tactical concepts in this derivation must be closely related to the concepts used in the derivation of the fast subspace trackers, as reviewed in the previous subsection. Thus, we may also establish an alternative to the pure rotational update (37b) in that we extract from G_x^H(t) the elements Θ_x(t) and f_x(t)

G_x^H(t) = [Θ_x(t), Q_x^H(t-1) q_x(t); f_x^H(t), z̄_x⊥(t)^H q_x(t)]  →  Θ_x(t), f_x(t)    (38)

and express the split subspace basis update as

Q_x(t) = Q_x(t-1) Θ_x(t) + z̄_x⊥(t) f_x^H(t).    (39)

In the computation of W(t), according to (28), Q_x(t) acts as a data compressor for V_y(t). The resulting "compressed" V_y(t), or matrix inner product

H_w(t) = Q_x^H(t) V_y(t)    (40)

can also be updated directly in time. The necessary recursions are developed next. Use (39) and (30b) to establish the expression for H_w(t)

H_w(t) = [Q_x(t-1) Θ_x(t) + z̄_x⊥(t) f_x^H(t)]^H [V_y(t-1) Θ(t) + z_y(t) f^H(t)].    (41)

Define

h_vx(t) = V_y^H(t-1) z̄_x⊥(t)    (42a)
h_qy(t) = Q_x^H(t-1) z_y(t)    (42b)
γ_xy(t) = z̄_x⊥(t)^H z_y(t)    (42c)

to obtain the recursion

H_w(t) = [Θ_x^H(t) H_w(t-1) + f_x(t) h_vx^H(t)] Θ(t) + [Θ_x^H(t) h_qy(t) + γ_xy(t) f_x(t)] f^H(t).    (43)

A final step is the solution of the system

W(t) R_x(t) = H_w(t)    (44)

for the desired W(t). This requires just an r-fold back substitution of dimension r as

W(t) R_x(t) = H_w(t)  →  back substitution  →  W(t).    (45)
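The r-fold substitution (45) can be sketched as follows; the explicit double loop mirrors the row-by-row solution against the upper triangular R_x(t), and in practice a library triangular solver would be used instead.

import numpy as np

def solve_W(Hw, Rx):
    """Solve W R_x = H_w for W, exploiting that R_x is upper triangular
    (one substitution per row of W)."""
    r = Rx.shape[0]
    W = np.zeros_like(Hw)
    for i in range(r):                                  # one substitution per row of W
        for j in range(r):                              # columns resolved left to right
            W[i, j] = (Hw[i, j] - W[i, :j] @ Rx[:j, j]) / Rx[j, j]
    return W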

Table I is a quasicode listing of this fast algorithm for computing the DOA-revealing matrix W(t). It can be seen that the algorithm in Table I has an overall complexity of O(Nr²) complex arithmetic operations per time update, as opposed to the nonrecursive scheme constituted by (27) and (28), which requires a complete QR-factorization and a complete matrix inner product at each time step. In a typical highly overdetermined configuration, as discussed in the experimental section, the algorithm in Table I speeds up the computation of W(t) by a considerable constant factor.

Fig. 5. Tracking result for the O(Nr) recursive ESPRIT algorithm with LORAF 3 subspace tracker. Five independent trial runs plotted in one diagram. SNR = 5.7 dB.

C. Recursive ESPRIT Algorithms with O(Nr) Complexity

We proceed with a maximally fast updating algorithm for W(t). This algorithm is related to the subspace trackers of the "triangular plus rank one" category (23). Similar to (30a) and (30b), the rotational basis update (18) can be decomposed in two rotational basis updating recursions for the split subspace basis matrices

[V_x(t)  v_x(t)] = [V_x(t-1)  z_x(t)] G^H(t)    (46a)
[V_y(t)  v_y(t)] = [V_y(t-1)  z_y(t)] G^H(t).    (46b)

We next derive the necessary recursions for a direct time updating of the QR-factors of V_x(t) with O(Nr) operations. For this purpose, express V_x(t) and V_x(t-1) in (46a) by their corresponding QR factors. Additionally, decompose z_x(t) into its in-space and orthonormal components according to (35). This gives

[Q_x(t) R_x(t)  v_x(t)] = [Q_x(t-1) R_x(t-1)  Q_x(t-1) h_x(t) + z̄_x⊥(t) ||z_x⊥(t)||₂] G^H(t).    (47)

Note that the expressions on both sides of (47) can be posed as products of partitioned matrices as

[Q_x(t)  q_x(t)] [R_x(t)  r_x(t); 0 ⋯ 0  ρ_x(t)] = [Q_x(t-1)  z̄_x⊥(t)] [R_x(t-1)  h_x(t); 0 ⋯ 0  ||z_x⊥(t)||₂] G^H(t)    (48)

where [Q_x(t)  q_x(t)] has orthonormal columns and the triangular factor on the left-hand side is augmented by the column [r_x(t); ρ_x(t)]. From (48), the recursions

[R_x(t)  r_x(t); 0 ⋯ 0  ρ_x(t)] = G_x(t) [R_x(t-1)  h_x(t); 0 ⋯ 0  ||z_x⊥(t)||₂] G^H(t)    (49a)
[Q_x(t)  q_x(t)] = [Q_x(t-1)  z̄_x⊥(t)] G_x^H(t)    (49b)

are obtained. Consider the case that we work with a subspace tracker of the O(Nr) category. The rotations in G(t) are then structured as shown in Fig. 1. In (49a), these rotations are applied to a triangular matrix. Hereby, the triangular structure is destroyed, and we obtain a "full" matrix. The rotations in G_x(t) must now be determined so that this full matrix is again reduced to triangular form. With some experience, it can be seen that this reduction can be achieved with a matrix G_x(t) that is structured precisely like the two-subsequence rotor of Fig. 1. This property is the key to fast O(Nr) sequential ESPRIT algorithms. In this context, a set of fast and purely rotational-based updating


Fig. 6. Tracking result for the O(Nr²) recursive ESPRIT algorithm with Bi-SVD 1 subspace tracker. Five independent trial runs plotted in one diagram. SNR = 0.5 dB.

recursions for the matrix H_w(t) = Q_x^H(t) V_y(t), as defined in (40), remain to be derived. For this purpose, consider the conjugate-transposed form of (49b)

[Q_x(t)  q_x(t)]^H = G_x(t) [Q_x(t-1)  z̄_x⊥(t)]^H.    (50)

Use this update together with (46b) and (42a)–(42c) to establish a recursion for an augmented form of H_w(t) as

[H_w(t)  Q_x^H(t) v_y(t); q_x^H(t) V_y(t)  q_x^H(t) v_y(t)] = G_x(t) [H_w(t-1)  h_qy(t); h_vx^H(t)  γ_xy(t)] G^H(t).    (51)

Table II is a quasicode listing of a closed-form algorithm for computing the DOA-revealing matrix W(t) with only O(Nr) operations using this rotational-based recursion. Note again that the rotors in G(t) must be structured as shown in Fig. 1 for a fast reduction to the triangular form in (49a). This is always guaranteed with our previously reviewed subspace trackers of the O(Nr) category.

D. Recursive Algorithms with TLS Smoothing

TLS smoothing of split subspaces, as reviewed in Section II, is often used in batch ESPRIT algorithms [2]. Sequential TLS smoothing requires an updating of the r leading singular values and vectors of the column-concatenated split subspace basis matrix [V_x(t)  V_y(t)]. These SVD components can be updated in time using a bi-iteration of the form (see [14, (3)])

In the time-recursive loop, compute
  B(t) = [V_x(t)  V_y(t)]^H Q_A(t-1)
  B(t) = Q_B(t) R_B(t):  QR-factorization
  A(t) = [V_x(t)  V_y(t)] Q_B(t)
  A(t) = Q_A(t) R_A(t):  QR-factorization.    (52)

Here, A(t), B(t), R_A(t), and R_B(t) represent auxiliary recursion matrices. The orthonormal matrices Q_A(t) and Q_B(t) converge to the r-dominant left and right singular vectors of [V_x(t)  V_y(t)], respectively. See also [14] for details. Hence, at each time step, we may approximate the rank-r SVD of the column-concatenated matrix as

[V_x(t)  V_y(t)] ≈ Q_A(t) R_A(t) Q_B^H(t).    (53)


Fig. 7. Tracking result for the O(Nr) recursive ESPRIT algorithm with LORAF 3 subspace tracker. Five independent trial runs plotted in one diagram. SNR = 0.5 dB.

The desired estimates of the compressed split subspace matrices U_s^H(t) V_x(t) and U_s^H(t) V_y(t) of (15), with U_s(t) ≈ Q_A(t), appear directly as submatrices of

R_A(t) Q_B^H(t) = [U_s^H(t) V_x(t)   U_s^H(t) V_y(t)].    (54)

Recalling (15), the associated ESPRIT GEV problem in the case of TLS smoothing becomes

[U_s^H(t) V_x(t)] T Φ = [U_s^H(t) V_y(t)] T.    (55)

The QR-reduction to standard form is now applied to (55). This yields the final recursions for a DOA-revealing matrix W_TLS(t) as

U_s^H(t) V_x(t) = Q̃(t) R̃(t):  QR-factorization
W_TLS(t) R̃(t) = Q̃^H(t) U_s^H(t) V_y(t)  →  back substitution  →  W_TLS(t).    (56)

Finally, we should note that the need for an additional smoothing of the split subspaces using the TLS method is less pressing in sequential processing, particularly when the overmodeling factor is large. In these cases, we can expect that a "good" subspace tracker attenuates the noise components in the split subspace estimates considerably, and usually, little or no gain in performance is achieved by an additional application of the TLS methods.
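A minimal numpy sketch of one bi-iteration update in the spirit of (52), assuming the column-concatenated split-subspace matrix and the previous left-basis estimate are available, is given below; names are illustrative.

import numpy as np

def bi_iteration_step(Gxy, QA_prev):
    """One bi-iteration update for the r dominant singular subspaces of
    Gxy = [V_x  V_y]; QA_prev holds the previous left-basis estimate (n x r)."""
    B = Gxy.conj().T @ QA_prev              # project onto the previous left basis
    QB, RB = np.linalg.qr(B)                # QR-factorization -> right singular basis estimate
    A = Gxy @ QB
    QA, RA = np.linalg.qr(A)                # QR-factorization -> left singular basis estimate
    return QA, QB, RA                       # diag(RA) approximates the dominant singular values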

E. Rank Adaptivity

An important parameter in the operation of the described recursive ESPRIT algorithms is the actual number of active sources. This number can be a function of time. Sources may vanish, and new sources may appear in a scenario. In classical block-processing ESPRIT, concepts like the Akaike information criterion (AIC) and the minimum description length (MDL) criterion [18], or parametric methods as described in [19], could be used to determine the number of active sources at each instant of time. In sequential ESPRIT, we may use concepts of rank estimation, as have already been developed in the context of low-rank or subspace adaptive filtering [13]. The estimated eigenvalues, as obtained from a subspace tracker, together with the signal power, can be used to estimate the noise power. At each time step, a test is then applied that compares the estimated eigenvalues with the estimated noise floor level. The number of active sources r(t) is then determined as the number of estimated eigenvalues that exceed the estimated noise floor level by a certain threshold factor. This method can be extended to SVD subspace tracking. See [14] for details.
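A sketch of this thresholding test, with an illustrative threshold factor, is given below.

import numpy as np

def estimate_num_sources(sigma, noise_floor, alpha=2.0):
    """Count the estimated eigenvalues or singular values that exceed the
    estimated noise floor by the threshold factor alpha (illustrative value)."""
    return int(np.sum(np.asarray(sigma) > alpha * noise_floor))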

In any case, the estimated eigenvalues or singular values appear in descending order of magnitude. Hence, the associated estimated eigenvector matrix Q(t) of fixed column dimension r can be partitioned as

Q(t) = [Q^(r)(t)  ·].    (57)


Fig. 8. Tracking result for the O(Nr²) recursive ESPRIT algorithm with Bi-SVD 1 subspace tracker. Five independent trial runs plotted in one diagram. SNR = -2.0 dB.

The submatrix Q^(r)(t) with time-varying column dimension r(t) carries the relevant information about the active sources of interest. Recall (26) to see that now, both V_x(t) and V_y(t) obey the same time-varying column partition as shown in (57). Consequently, the QR-factorization of V_x(t), as shown in (27), becomes

V_x(t) = [Q_x^(r)(t)  ·] [R_x^(r)(t)  ·; 0  ·]    (58)

where the symbol "·" represents uninteresting submatrices. We find

V_x^(r)(t) = Q_x^(r)(t) R_x^(r)(t).    (59)

We next examine the definition of H_w(t) according to (40). Incorporate the time-varying partitioning schemes to obtain

H_w(t) = [Q_x^(r)(t)  ·]^H [V_y^(r)(t)  ·] = [H_w^(r)(t)  ·; ·  ·],   H_w^(r)(t) = Q_x^(r)H(t) V_y^(r)(t).    (60)

Finally, consider the backsubstitution process (45). Incorporate the partitioning schemes of R_x(t) according to (58) and H_w(t) according to (60) to find that the resulting W(t) can be partitioned into an upper-left submatrix W^(r)(t) of dimension r(t) × r(t) that carries the desired DOA information of the r(t) actually impinging independent sources in its eigenvalues

W(t) = [W^(r)(t)  ·; ·  ·].    (61)

Thus, in practice, the method to handle a time-varying number of sources is to operate both the subspace tracker and the associated fast ESPRIT recursions of Tables I or II with a sufficiently large and fixed subspace dimension r. We then determine the actual number of active sources r(t) using the eigenvalue or singular value estimates, as we have discussed. Finally, R_x(t) and H_w(t) are partitioned accordingly, as shown in (58) and (60). Relation (61) then reveals that the desired DOA-revealing submatrix is conveniently obtained via backsubstitution of the reduced system

W^(r)(t) R_x^(r)(t) = H_w^(r)(t)  →  back substitution  →  W^(r)(t).    (62)

A final issue is the computation of the complex eigenvalues of the DOA-revealing matrix W^(r)(t) at each time instant. The angles of these complex eigenvalues are the desired estimated phase parameters. The eigenvalues could be computed using any standard method like the QR algorithm with shifts [9]. This requires, at each time step, the computation of a few or many iterations of a QR algorithm because a batch method like the QR algorithm computes the eigenvalues completely anew from scratch at each time step. In this particular application, however, the point is that the eigenvalues of W^(r)(t) in subsequent time steps are simply


Fig. 9. Tracking result for the O(Nr) recursive ESPRIT algorithm with LORAF 3 subspace tracker. Five independent trial runs plotted in one diagram. SNR = -2.0 dB.

gradually displaced versions of each other on the unit circle. Thus, we need a truly sequential method for tracking the complex eigenvalues on the unit circle that takes advantage of the information from the previous time step. Such specific complex eigenvalue trackers have been developed recently [25]. A quasicode listing of a complex eigenvalue tracker of this class is provided in the Appendix. This algorithm has been used to update the eigenvalue or angle information in time throughout the experiments shown in the following section. Details about the algorithm are provided in [25]. It is emphasized that sequential complex eigenvalue estimation is a pertinent problem in all root-based sequential frequency estimation and direction-finding algorithms and is not specific to the ESPRIT algorithm.

IV. EXPERIMENTAL RESULTS

The rank-adaptive sequential ESPRIT algorithms of this paper have been tested extensively. The raw data for the experiments has been created using the ESPRIT data model (1a) and (1b) with time-varying phase parameters in Φ. Two kinds of nonstationary scenarios were employed. First, we have a "jump" scenario, where a fixed number of independent sources are used. The locations of these sources change abruptly at different time instants. Second, a "continuous movement" scenario is used, where the phase parameters of a fixed number of sources are varied smoothly in time according to a cosine law. A final "variable-rank" scenario is deduced from the continuous movement scenario by switching sources off and on. The DOA-revealing matrices are estimated from the data using first the O(Nr²) recursive ESPRIT algorithm of Table I combined with a complex version of the Bi-SVD 1 subspace tracker. Second, the O(Nr) recursive ESPRIT algorithm of Table II is used in combination with a complex version of the LORAF 3 subspace tracker. In both cases, the phase parameters are finally estimated using the complex root tracker, as listed in the Appendix.

Fig. 2 shows a tracking result for the jump scenario when the fast O(Nr²) algorithm of Table I in combination with the Bi-SVD 1 subspace tracker [14, Table II] was used. Dotted lines indicate the true phase parameters, whereas the solid lines indicate the estimated phase parameters. The algorithm was operated in a fully rank-adaptive mode with a fixed singular value threshold factor. The logarithmic signal-to-noise ratio (SNR) was 5.7 dB. Fig. 3 shows the corresponding singular value estimates as obtained from the subspace tracker. The dashed line is the adaptive threshold. Four singular values exceed the threshold and are counted as valid sources. The last singular value stays clearly below the threshold, but we can see that this singular value bumps up when the sources change their location abruptly. This characterizes the adaptation process in the fast subspace tracker. Although the scenario is quite complex, the algorithm


Fig. 10. Tracking result for the O(Nr²) recursive ESPRIT algorithm with Bi-SVD 1 subspace tracker. Dashed lines indicate true phase parameters. Solid lines indicate estimated phase parameters. Five independent trial runs plotted in one diagram. SNR = 4.0 dB.

apparently operates at complete stability. Most importantly, the estimated phase parameter curves are free of any oscillations, even in the areas where phase parameter curves intersect. Fig. 4 shows five independent trial runs of the same scenario when different noise realizations were used. The five independent tracking curves were plotted into one diagram to illustrate the variations caused by different noise realizations. This test again confirms the excellent performance of the algorithm.

Now, we change the algorithm and use the "ultra-fast" O(Nr) recursions of Table II in combination with the LORAF 3 subspace tracker of [13, Table III]. Fig. 5 shows the tracking result. This algorithm provides stable estimates as well. A comparison with Fig. 4 reveals, however, that the complexity reduction from O(Nr²) to O(Nr) causes a loss in performance. Particularly, the estimation variance in response to different noise realizations increases.

Next, we wish to study the performance degradation of the algorithms when the noise level increases. Fig. 6 shows the tracking result of the O(Nr²) algorithm (Bi-SVD 1 and Table I) at an SNR of 0.5 dB. Fig. 7 is the corresponding result for the O(Nr) algorithm (LORAF 3 and Table II). It is seen that the estimation variance increases as the noise level increases. Now, the noise level is increased further. Fig. 8 shows the results for the O(Nr²) algorithm at an SNR of -2.0 dB. Fig. 9 is the corresponding result for the O(Nr) algorithm. It is seen that for relatively high values of the SNR, the O(Nr²) concept works more accurately and produces estimates at a lower estimation variance. In very low SNR scenarios, however, the difference between the two concepts is much less drastic. This coincides with our observation reported in [8] that the "ultra-fast" orthogonal iteration and bi-iteration subspace trackers perform very well in low-SNR scenarios.

Now, the scenario is changed. We study the tracking performance of the algorithms for continuously moving sources. Fig. 10 shows such a "continuous movement" scenario with four sources and the corresponding tracking result when the O(Nr²) algorithm, as described above, was used. The SNR was 4.0 dB. We can see that this scenario is completely uncritical. No oscillations appear at the points where the phase parameter curves intersect. Of course, we must accept a certain delay between the true and the estimated phase parameter curves. The estimator behaves almost precisely like a first-order recursive loop system with a feedback parameter equal to the forgetting factor. Hence, the delay is deterministic and is easily compensated if the estimator is used in a larger state-model based navigation system, just to mention a potential application. Again, the algorithm has been operated in a fully (subspace and rank) adaptive mode, and hence, the algorithm should be capable of handling a variable number of sources. This is demonstrated in Fig. 11, where one of the sources is switched off at time t = 800. We see that the algorithm manages this situation without any problems. We must accept very short transients at the places where the


Fig. 11. Same experiment as in Fig. 10 with one source absent from t = 800 to t = 2400.

Fig. 12. Estimated singular value trajectories and adaptive threshold for the last trial run of the experiment shown in Fig. 11.


Fig. 13. Same experiment as in Fig. 10 with two sources absent. First source absent from t = 800 to t = 2400. Second source absent from t = 1600 to t = 3200.

Fig. 14. Estimated singular value trajectories and adaptive threshold for the last trial run of the experiment shown in Fig. 13.


Fig. 15. Same experiment as in Fig. 13 repeated using the O(Nr) recursive ESPRIT algorithm with LORAF 3 subspace tracker.

Fig. 16. Estimated singular value trajectories and adaptive threshold for the last trial run of the experiment shown in Fig. 15.


TABLE III
ALGORITHM FOR TRACKING THE COSINES AND SINES OF THE COMPLEX EIGENVALUES OF A TIME-VARYING MATRIX W^(r)(t) WHOSE EIGENVALUES ARE LOCATED ON THE UNIT CIRCLE BY DEFINITION

Cosine tracker:
Initialize: Q_c(0) = I;  S_c(0) = 0
for t = 1, 2, 3, ... update cosine tracker:
  Input: W^(r)(t)
  A_c(t) = W^(r)(t) Q_c(t-1) - j Q_c(t-1) S_c(t-1)
  A_c(t) = Q_c(t) R_c(t):  QR-factorization
  Θ_c(t) = Q_c^H(t-1) Q_c(t)
  T_cs(t) = [R_c(t) + j Θ_c^H(t) S_c(t-1)] Θ_c(t)
  C_c(t) = re{diag(T_cs(t))}
  S_c(t) = [I - C_c²(t)]^{1/2}

Sine tracker:
Initialize: Q_s(0) = I;  C_s(0) = 0
for t = 1, 2, 3, ... update sine tracker:
  Input: W^(r)(t)
  A_s(t) = W^(r)(t) Q_s(t-1) - Q_s(t-1) C_s(t-1)
  A_s(t) = Q_s(t) R_s(t):  QR-factorization
  Θ_s(t) = Q_s^H(t-1) Q_s(t)
  T_sc(t) = [R_s(t) + Θ_s^H(t) C_s(t-1)] Θ_s(t)
  S_s(t) = im{diag(T_sc(t))}
  C_s(t) = [I - S_s²(t)]^{1/2}

rank-change occurs. Fig. 12 shows the singular value estimates depicted from the last trial run in this experiment. The dashed line again indicates the adaptive threshold. It is seen that we can safely determine the active sources. The corresponding estimated singular value curve drops down to the noise floor level as soon as the source disappears. Fig. 13 now shows an even more complex scenario of this kind, where two of the sources are switched off and on at different times. A first source is switched off at t = 800 and reappears at t = 2400, as before. A second source is switched off at t = 1600 and reappears at t = 3200. Therefore, there is a time slot of 800 samples from t = 1600 to t = 2400, where two sources are absent. Fig. 14 shows the singular value trajectories for the last trial run in this experiment. We also repeated this experiment with the ultra-fast O(Nr) configuration, as described above. With this algorithm, we observed that the exponential forgetting factor should be slightly increased to reduce the subspace jitter. We then obtained the result shown in Fig. 15. Fig. 16 is the corresponding eigenvalue trajectory, as produced by the underlying LORAF 3 subspace tracker. In summary, it is seen that both algorithms can handle even difficult nonstationary scenarios without any difficulties. A comparison of Figs. 13 and 15 reveals that in smooth scenarios with a forgetting factor close to a value of 1, we can expect that both the O(Nr²) and the "ultra-fast" O(Nr) algorithms produce almost identical results. In addition, we observed that an additional TLS smoothing does not improve the estimates in scenarios of this kind, as the overmodeling factor is already considerably high. Thus, our methods can be recommended in applications where the scenarios are highly nonstationary and the number of sensors is much larger than the number of sources.

V. CONCLUSION

We have presented a class of fast rank and subspace adaptive ESPRIT algorithms. The idea of adaptive ESPRIT

is not new. Liu et al. [23] have already presented an adaptive ESPRIT algorithm based on the URV decomposition [24]. The complexity of this method, however, is proportional to N² arithmetic operations for each time update, in contrast to our fast O(Nr²) and O(Nr) algorithms that consequently exploit the latest results in fast subspace tracking. Additionally, we have solved the rank adaptivity problem in closed form, as we need only information that is provided in a natural way by the subspace trackers to determine the number of active sources. Updating and downdating the number of sources is then accomplished without any exception processing and without any need to break up the fast recursion loops. In a companion paper [25], we developed a fully sequential method for tracking the complex eigenvalues that determine the desired DOA's. Extensive computer experiments have shown that the proposed algorithms are robust and stable and are capable of handling nonstationary scenarios of any kind.

APPENDIX

Table III in this Appendix provides a set of recursions for tracking the sines and cosines of the complex eigenvalues of W^(r)(t) on the unit circle. Details about these cosine and sine trackers can be found in [25]. The angles associated with the cosines and sines are the desired phase parameters or DOA's. In the cosine tracker, C_c(t) and S_c(t) denote real diagonal matrices of cosine-tracker estimated cosines and sines, respectively. In the sine tracker, C_s(t) and S_s(t) denote real diagonal matrices of sine-tracker estimated cosines and sines, respectively. It is recommended that the desired phase parameter or angle information is estimated using both sine and cosine trackers operating in parallel for sensitivity reasons (see [25]). The overall angle information is estimated from the output of both cosine and sine tracker, combined according to [25] into the diagonal matrix of estimated phase parameters

diag{φ_1(t), φ_2(t), ..., φ_r(t)}    (63)

where, in the combination rule of [25], I and J denote identity and exchange matrices of dimension r(t) × r(t), respectively.
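For illustration, the cosine-tracker half of Table III translates directly into numpy as sketched below (the sine tracker is analogous); this is a transcription of the listing as reconstructed above, not verified against [25].

import numpy as np

def cosine_tracker_step(W, Qc_prev, Sc_prev):
    """One update of the cosine tracker of Table III (sketch)."""
    Ac = W @ Qc_prev - 1j * (Qc_prev @ Sc_prev)
    Qc, Rc = np.linalg.qr(Ac)                               # QR-factorization
    Theta = Qc_prev.conj().T @ Qc
    Tcs = (Rc + 1j * (Theta.conj().T @ Sc_prev)) @ Theta
    Cc = np.diag(np.real(np.diag(Tcs)))                     # estimated cosines (diagonal)
    Sc = np.sqrt(np.clip(np.eye(Cc.shape[0]) - Cc @ Cc, 0.0, None))   # matching sines
    return Qc, Cc, Sc

# illustrative initialization for an r x r matrix W(t):
#   Qc = np.eye(r, dtype=complex); Sc = np.zeros((r, r))
#   Qc, Cc, Sc = cosine_tracker_step(W, Qc, Sc)   # repeat at each time step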

REFERENCES

[1] R. H. Roy, "ESPRIT—Estimation of signal parameters via rotational invariance techniques," Ph.D. dissertation, Stanford Univ., Stanford, CA, 1987.

[2] R. Roy and T. Kailath, "ESPRIT—Estimation of signal parameters via rotational invariance techniques," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 984–995, July 1989.

[3] ——, "ESPRIT—Estimation of signal parameters via rotational invariance techniques," in Signal Processing Part II: Control Theory and Applications, F. Auslander, F. A. Grünbaum, J. W. Helton, T. Kailath, P. Khargonekar, and S. Mitter, Eds. New York: Springer-Verlag, 1990, pp. 369–411.

[4] H. Ouibrahim, D. D. Weiner, and T. K. Sarkar, "Matrix pencil approach to direction-of-arrival estimation," in Proc. 20th Annu. Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 1986, pp. 203–206.

[5] H. Ouibrahim, "A generalized approach to direction finding," Ph.D. dissertation, Syracuse Univ., Syracuse, NY, 1987.

[6] Y. Hua and T. K. Sarkar, "On SVD for estimating generalized eigenvalues of singular matrix pencil in noise," IEEE Trans. Signal Processing, vol. 39, pp. 892–899, Apr. 1991.

[7] R. S. Martin and J. H. Wilkinson, "Reduction of the symmetric eigenproblem Ax = λBx and related problems to standard form," Numerische Mathematik, vol. 11, pp. 99–110, 1968.

[8] P. Strobach, "Fast orthogonal iteration adaptive algorithms for the generalized symmetric eigenproblem," to be published.

[9] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed. Baltimore, MD: Johns Hopkins Univ. Press, 1989.

[10] P. Comon and G. H. Golub, "Tracking a few extreme singular values and vectors in signal processing," Proc. IEEE, vol. 78, pp. 1327–1343, Aug. 1990.

[11] I. Karasalo, "Estimating the covariance matrix by signal subspace averaging," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 8–12, Feb. 1986.

[12] E. M. Dowling, L. P. Ammann, and R. D. DeGroat, "A TQR-iteration based adaptive SVD for real-time angle and frequency tracking," IEEE Trans. Signal Processing, vol. 42, pp. 914–926, Apr. 1994.

[13] P. Strobach, "Low rank adaptive filters," IEEE Trans. Signal Processing, vol. 44, pp. 2932–2947, Dec. 1996.

[14] ——, "Bi-iteration SVD subspace tracking algorithms," IEEE Trans. Signal Processing, vol. 45, pp. 1222–1240, May 1997.

[15] ——, "Square Hankel SVD subspace tracking algorithms," Signal Process., vol. 57, no. 1, pp. 1–18, Feb. 1997.

[16] ——, "Fast recursive orthogonal iteration subspace tracking algorithms and applications," Signal Process., vol. 59, no. 1, pp. 73–100, May 1997.

[17] ——, "Bi-iteration instrumental variable subspace tracking and adaptive filtering," IEEE Trans. Signal Processing, to be published.

[18] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 1991.

[19] Q. Wu and D. Fuhrman, "A parametric method for determining the number of signals in narrow-band direction finding," IEEE Trans. Signal Processing, vol. 39, pp. 1848–1857, 1991.

[20] G. W. Stewart, Introduction to Matrix Computations. New York: Academic, 1974.

[21] J. H. Wilkinson, The Algebraic Eigenvalue Problem. Oxford, U.K.: Clarendon, 1965.

[22] B. N. Parlett and W. G. Poole, "A geometric theory for the QR, LU and power iterations," SIAM J. Numer. Anal., vol. 10, no. 2, pp. 389–412, Apr. 1973.

[23] K. J. R. Liu, D. P. O'Leary, G. W. Stewart, and Y.-J. J. Wu, "URV ESPRIT for tracking time-varying signals," IEEE Trans. Signal Processing, vol. 42, pp. 3443–3448, Dec. 1994.

[24] G. W. Stewart, "An updating algorithm for subspace tracking," IEEE Trans. Signal Processing, vol. 40, pp. 1535–1541, June 1992.

[25] P. Strobach, "A split-Schur theorem for tracking complex eigenvalues on the unit circle," submitted for publication.

Peter Strobach (M'86–SM'91) received the Engineer's degree in electrical engineering from Fachhochschule Regensburg, Regensburg, Germany, in 1978, the Dipl.-Ing. degree from Technical University Munich, Munich, Germany, in 1983, and the Dr.-Ing. (Ph.D.) degree from Bundeswehr University, Munich, in 1985.

From 1976 to 1977, he was with CERN Nuclear Research, Geneva, Switzerland. From 1978 to 1982, he was with Messerschmidt–Boelkow–Blohm GmbH, Munich. From May 1986 to December 1992, he was with Siemens AG, Zentralabteilung Forschung und Entwicklung (ZFE), Munich. Currently, he is with Fachhochschule Furtwangen, Röhrnbach, Black Forest, Germany.

Dr. Strobach is a member of the IEEE Signal Processing Society and an Editorial Board member of Signal Processing. He is a Member of the New York Academy of Sciences and is listed in Who's Who in the World.