7/29/2019 DeGreve AES 2007
1/16
Framework and Taxonomy for Radar Space-Time Adaptive Processing (STAP) Methods
The goal of radar space-time adaptive processing (STAP)
is to detect slow moving targets from a moving platform,
typically airborne or spaceborne. STAP generally requires
the estimation and the inversion of an interference-plus-noise
(I+N) covariance matrix. To reduce both the number of samples
involved in the estimation and the computational cost inherent
to the matrix inversion, many suboptimum STAP methods
have been proposed. We propose a new canonical framework
that encompasses all suboptimum STAP methods we are aware of. The framework allows for both covariance-matrix (CM)
estimation and range-dependence compensation (RDC); it also
applies to monostatic and bistatic configurations. Finally, we
discuss a taxonomy for classifying the methods described by the
framework.
NOMENCLATURE
†  Complex conjugate transpose
⊗  Kronecker product
R  Matrix R
y  Scalar y
y  Vector y
1D  One dimension
2D  Two dimensions
3D  Three dimensions
Σ-Δ STAP  Sum and difference STAP
AADC  Adaptive angle-Doppler compensation
ACP  Auxiliary channel processor
ADC  Angle-Doppler compensation
ADPCA  Adaptive displaced phase center antenna
AEP  Auxiliary eigenvector processor
AR  Autoregressive
ASEP  Auxiliary sensor/echo processor
ASFF  Auxiliary sensor FIR filter processor
Manuscript received February 10, 2005; revised March 10, 2006;
released for publication January 9, 2007.
IEEE Log No. T-AES/43/3/908413.
Refereeing of this contribution was handled by P. Lombardo.
0018-9251/07/$25.00 © 2007 IEEE
CFAR  Constant false alarm rate
CM  Covariance matrix
CPI  Coherent processing interval
CSM  Cross-spectral metric
DBU  Derivative-based updating
DFT  Discrete Fourier transform
DOF  Degrees of freedom
DPCA  Displaced phase center antenna
DT-SAP  Doppler transform-space adaptive processing
DTFT  Discrete-time Fourier transform
DW  Doppler warping
E{·}  Expectation operator
EFA  Extended factored approach
F$A  Filter-then-adapt
FA  Factored approach
FDFF  Frequency-domain space-time FIR filter
FIR  Finite impulse response
FT  Fourier transform
FTS  Factorized time-space
GSC  Generalized sidelobe canceller
HSTAP  Hybrid STAP
I+N  Interference plus noise
IID  Independently and identically distributed
JDL-GLR  Joint domain localized-generalized likelihood ratio
mDT-SAP  m-bins Doppler transform-space adaptive processing
NHD  Nonhomogeneity detector
OP  Optimum processor
OSP  Overlapping subarray processor
PAMF  Parametric adaptive matched filter
PC  Principal components
P_d  Probability of detection
P_fa  Probability of false alarm
PS  Projection statistics
RBC  Registration-based compensation
RD  Range dependence
RDC  Range-dependence compensation
ROM  Rank-ordering metric
SAS  Symmetric auxiliary sensor
SINR  Signal-to-interference-plus-noise ratio
SIRP  Spherically invariant random process
SOM  Suboptimum method
STAP  Space-time adaptive processing
ULA  Uniform linear array
I. INTRODUCTION
Space-time adaptive processing (STAP)1 is an increasingly popular radar signal processing technique
1All symbols, acronyms, and abbreviations used are defined in the
Nomenclature.
1084 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 43, NO. 3 JULY 2007
Authorized licensed use limited to: Jacques Verly. Downloaded on October 28, 2008 at 06:32 from IEEE Xplore. Restrictions apply.
for detecting slow-moving targets [1-3]. The space dimension arises from the use of an array of Ns antenna elements and the time dimension arises from the use of a coherent train of Nt pulses. The power of STAP comes from the joint processing along the space and time dimensions. A recent overview of STAP appears in [4].

The data collected by STAP radars can be viewed as a sequence, in range, of Ns × Nt (2D) arrays, which can be viewed as matrices, but are generally treated as NsNt × 1 (1D) vectors. These arrays, matrices, or vectors are called snapshots. Each snapshot corresponds to a specific range. The optimum STAP processor computes the optimum weighted linear combination of the snapshot elements to determine if a hypothetical target is present or not. This calculation generally involves the estimation and the subsequent inversion of the NsNt × NsNt covariance matrix (CM) of interference-plus-noise (I+N) snapshots. The estimation of the CM at any given range is typically performed using snapshots at neighboring ranges.
However, there are two major reasons why the optimum processor (OP) cannot be used in practice. First, the inversion of the I+N CM requires on the order of (NsNt)^3 operations, which can be prohibitive for real-time applications. Second, the number of training snapshots needed to estimate the CM is between 2NsNt and 5NsNt [5]. For typical values of Ns and Nt, this amount of data is most probably not available.
These two problems have motivated the design
of many suboptimum methods (SOMs) that reduce the size of the CM. Such methods lead to a drastic reduction of the computational cost and of the number of training snapshots required.
The most popular SOMs are the following. Ward [2] proposed a taxonomy of SOMs using combinations of beamforming and overlapping sample selections. Wang and Cai [6] developed the joint domain localized-generalized likelihood ratio (JDL-GLR) algorithm, which uses a distinct processor for each nonoverlapping angle-Doppler selection. Klemm [1] proposed many SOMs working in either the space-time domain, the space domain only, the time domain only, or the space-time frequency domain. Goldstein [7] developed an SOM based on a multistage Wiener filter. Haimovich [8] extended the generalized sidelobe canceller (GSC) structure by using an eigenanalysis of the I+N CM. Other SOMs based on such eigenanalysis are those using principal components (PC) [9] and those using the cross-spectral metric (CSM) [10].
Some authors proposed various schemes for unifying SOMs [11-14]. However, each of these schemes only unifies a small subset of the available methods. In [15] and [16], we presented preliminary looks at a new canonical framework for describing, in a structured way, all SOMs we are aware of. In the work presented here, we not only present a detailed description of this new framework, but we also extend it to the case where the I+N CM is estimated using model-based approximations and to the case where an appropriate criterion is used to reject nonhomogeneous snapshots from the estimation of the I+N CM to enhance performance.

The framework essentially consists in a sequence of canonical steps, or operations, that can be tailored to explain existing SOMs and even to create new ones. The sequence of operations and the detailed nature of each was determined based upon a detailed examination of all SOMs we could get a hold of. The focus here is thus on the understanding of the structure (or architecture) of existing (and possibly future) SOMs. Specifically, we do not attempt to get into issues of performance, computational cost, training data support, and robustness to parameter mismatches. These issues are important, but they are beyond the scope of this paper, which focuses on the structure of methods. In fact, when forcing an existing method to fit the framework, we may get a better understanding of this method and be better able to compare it with others also placed in the same framework, but there is no guarantee that the new structure will be better with regard to the above issues of performance, computational cost, etc.
In Section II, we review the principles of optimum STAP. In Section III, we review the most common SOMs. In Section IV, we review the existing unifying frameworks for SOMs. In Section V, we discuss the structure of the new canonical framework for the case where the I+N CM is known. In Section VI, we extend this framework to the case where the I+N CM must be estimated. In Section VII, we take into account the nonstationarity of snapshot statistics by augmenting the framework with provisions for range-dependence compensation (RDC). In Section VIII, we discuss a new taxonomy for classifying SOMs. In Section IX, we give insights into how the new framework can help in the design of new SOMs. Conclusions are found in Section X.
II. REVIEW OF PRINCIPLES OF OPTIMUM STAP
Below, we briefly discuss the nature of the data used in STAP, related mathematical models, and the processing performed on this data. More details can be found, e.g., in [17] and [18].
A. Data Collection
During each coherent processing interval (CPI), the radar transmits a train of Nt coherent pulses.
Fig. 1. Representations of snapshot for given range gate index l: (2D) array or matrix y_l and (1D) vector y_l.
The returns are collected at each of the Ns elements of the antenna array. The Ns antenna elements and the Nt waveform pulses correspond to the first two dimensions of STAP, i.e., space and time. The third dimension, range, arises as follows. Consider some range R0, where we want to test for the presence of potential targets. This translates into a delay of t0 = 2R0/c seconds (for a monostatic configuration). The returns at each of the Ns antenna elements are then sampled t0 seconds after the transmission of each of the Nt pulses. The resulting data can be viewed as an Ns × Nt (2D) array, or matrix, y(ns, nt) of complex values. The above sampling is then repeated at successive time increments Δt related to the range resolution. This results in Nr samples per element per pulse. The entire data for each CPI can thus be viewed as an Ns × Nt × Nr (3D) array y(ns, nt, l) of complex values, where l is the range gate index, with l ∈ L = {0, ..., Nr − 1}. In STAP, it is customary to view the slice yl(ns, nt) of y(ns, nt, l) at any given l not only as a matrix y_l, but also as a vector y_l of size NsNt × 1 obtained by scanning y_l column by column (Fig. 1); the array yl(ns, nt), the matrix y_l, and the vector y_l are all referred to as the snapshot at l.
The snapshot y_l can be expressed as the sum of a potential target component y_{t,l} and an I+N component y_{q,l}:

    y_l = y_{t,l} + y_{q,l}.

The target snapshot y_{t,l} is commonly expressed as

    y_{t,l} = α_{t,l} v(ϑ_{ts}, ν_{td}) = α_{t,l} b(ν_{td}) ⊗ a(ϑ_{ts})

where the magnitude of the complex value α_{t,l} comes from the radar equation [2], v(ϑ_{ts}, ν_{td}) is the space-time steering vector evaluated at the target spatial and Doppler frequencies ϑ_{ts} and ν_{td}, respectively, ⊗ is the Kronecker product [19], and a(ϑ_{ts}) and b(ν_{td}) are the space and time steering vectors, respectively, given by

    a(ϑ_{ts}) = (1  e^{j2πϑ_{ts}}  ...  e^{j2πϑ_{ts}(Ns−1)})^T

and

    b(ν_{td}) = (1  e^{j2πν_{td}}  ...  e^{j2πν_{td}(Nt−1)})^T

in the case of an antenna array that is a uniform linear array (ULA). The I+N snapshot y_{q,l} can be expressed as the sum of an interference snapshot y_{i,l} and a noise snapshot y_{n,l}. Here, we assume that the interference consists of only clutter, so that

    y_{q,l} = y_{c,l} + y_{n,l}

where y_{c,l} is the clutter snapshot; y_{n,l} is commonly assumed to be spatially and temporally white.

Fig. 2. Structure of detector. (a) Arbitrary processor followed by decision device. Thr is a specified threshold. (b) Structure of OP.
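The space-time steering vector above can be assembled directly as a Kronecker product. The following sketch (NumPy; the array sizes and frequency values are illustrative assumptions, not values from the paper) builds a(ϑ_s), b(ν_d), and v = b ⊗ a for a ULA:

```python
import numpy as np

def spatial_steering(theta_s, n_s):
    """Space steering vector a(theta_s) for an Ns-element ULA."""
    return np.exp(2j * np.pi * theta_s * np.arange(n_s))

def temporal_steering(nu_d, n_t):
    """Time steering vector b(nu_d) for a train of Nt coherent pulses."""
    return np.exp(2j * np.pi * nu_d * np.arange(n_t))

def space_time_steering(theta_s, nu_d, n_s, n_t):
    """Space-time steering vector v = b(nu_d) kron a(theta_s), of length Ns*Nt."""
    return np.kron(temporal_steering(nu_d, n_t), spatial_steering(theta_s, n_s))

# Illustrative values: Ns = 8 elements, Nt = 16 pulses.
Ns, Nt = 8, 16
v = space_time_steering(0.1, 0.25, Ns, Nt)
assert v.shape == (Ns * Nt,)
```

The `b ⊗ a` ordering matches the column-by-column vectorization of the Ns × Nt snapshot matrix, in which the spatial index varies fastest.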
B. Detection and Optimum Processor
Detection is performed at each range gate l individually. The structure of the detector is depicted in Fig. 2(a). The inputs are y_l and v(ϑ_s, ν_d). The processor produces a scalar z_l for the given y_l and v(ϑ_s, ν_d). Depending upon the value of |z_l| with respect to a threshold, the target is declared to be either absent or present. This decision must be made for each target-hypothesis triplet (ϑ_s, ν_d, l).
The optimum implementation of the processor in Fig. 2(a) is shown in Fig. 2(b). The scalar output z_l of the OP is given by [5]

    z_l = w_l†(ϑ_s, ν_d) y_l      (1)

where † denotes the complex conjugate transpose operation. In the case of the OP, the weight w_{o,l} is given by [1, 2, 3]

    w_{o,l}(ϑ_s, ν_d) = β R_{q,l}^{−1} v(ϑ_s, ν_d)      (2)

where o stands throughout for optimum, β is an arbitrary constant (which we assume equal to 1 in the sequel), and R_{q,l} is the (theoretical) space-time I+N CM of y_{q,l},

    R_{q,l} = E{ y_{q,l} y_{q,l}† }      (3)
where E{·} denotes the expectation operation.

In practice, R_{q,l} must be estimated for each l. If the snapshots used for the estimation are independent
and identically distributed (IID), the maximum likelihood estimate R̂_{q,l} of R_{q,l} is obtained by the averaging [5]

    R̂_{q,l} = (1/N_l) Σ_{k∈S_l} R_k      (4)

where R_k = y_k y_k† is the single-sample CM for range gate k (with y_k being the corresponding I+N snapshot for range gate k), N_l is the number of snapshots used for estimation, and S_l is a set of N_l surrounding snapshots. Strictly speaking, y_k should only contain interference and noise and thus no moving scatterer, so that y_k = y_{q,k}. In addition, the secondary data is assumed to be target free. However, in practice, we cannot be sure that all y_k's, k ∈ S_l, are devoid of moving scatterers. Hence, in equations such as (4), we prefer to write y_k rather than y_{q,k}. Furthermore, when a moving scatterer is known to be present in the secondary data, it can be removed using, e.g., a blocking matrix [20].

Since R̂_{q,l} is used to test for the presence of a target at l, the set S_l should include neither l nor several adjacent range gates called guard cells [4]. Advanced strategies such as the nonhomogeneity detector (NHD) [21] can be used to determine which range gates to include in the estimation.
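The estimation of (4) with guard-cell exclusion, followed by the weight computation of (2), can be sketched as follows (NumPy; all sizes, the guard width, the training-set size, and the small diagonal-loading constant are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 24    # snapshot length Ns*Nt (illustrative)
Nr = 64   # number of range gates (illustrative)
# Synthetic stand-in snapshots y_k, one column per range gate.
Y = rng.standard_normal((N, Nr)) + 1j * rng.standard_normal((N, Nr))

def estimate_cm(Y, l, n_train=48, n_guard=2):
    """Sample CM (4): average of y_k y_k^dagger over S_l,
    excluding gate l itself and n_guard guard cells on each side."""
    Nr = Y.shape[1]
    S_l = [k for k in range(Nr) if abs(k - l) > n_guard][:n_train]
    return sum(np.outer(Y[:, k], Y[:, k].conj()) for k in S_l) / len(S_l)

l = 30
R_hat = estimate_cm(Y, l)
v = np.ones(N, dtype=complex)                       # stand-in steering vector
w = np.linalg.solve(R_hat + 1e-3 * np.eye(N), v)    # weight (2), lightly loaded
z = w.conj() @ Y[:, l]                              # scalar detector output (1)
```

The diagonal loading is not part of (2); it is a common practical safeguard when the sample support is limited.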
C. Optimum Processor in Practice
It is most likely that the OP cannot be used in real-time applications. There are two major reasons for this [1, 2]. First, the inversion of the I+N CM involves a number of operations proportional to (NsNt)^3, and the computation of w_{o,l} must be repeated for each target hypothesis (ϑ_s, ν_d, l). Second, the number N_l of training snapshots required for estimating the I+N CM is on the order of 2NsNt to 5NsNt [5]. In practice, this amount of training data is unattainable due to technical constraints, and furthermore the computational cost associated with the inversion of the I+N CM at each range is too high for real-time operation. The main goal of SOMs is to reduce the size of the I+N CM and/or the size of the training sample support. While methods that reduce the size of the I+N CM generally lead to a reduction of the required size of the training set, a reduction of the required size of the training set is not automatically associated with a reduction in size of the I+N CM.
III. REVIEW OF MOST COMMON SUBOPTIMUM METHODS
Many SOMs for STAP have been proposed in the last decade or so. Historically, the displaced phase center antenna (DPCA) [22] was one of the first techniques developed to address the issue of clutter mitigation in space-time radar. This method performs nonadaptive clutter suppression and implements an echo-subtraction scheme. Specifically, the I+N CM does not appear in the expression for the weight vector. Adaptive DPCA (ADPCA), which is closely related to DPCA, involves the estimation and application of the I+N CM [23, 24] to determine if a target is present or absent.

Klemm [1] introduced SOMs based on 1) space-time transforms, such as the auxiliary channel processor (ACP), 2) space-only transforms, such as the overlapping subarray processor (OSP), 3) transforms using finite impulse response (FIR) filters, such as the symmetric auxiliary sensor/echo processor (SAS) with space-time FIR filter (ASFF), and 4) space-time frequency transforms, such as the 2D SAS/echo processor (ASEP).
Gabriel [25] introduced the factored approach (FA), where the weights are applied to a vector of length Ns obtained as follows. Its ns-th element is the k-th discrete Fourier transform (DFT) coefficient of the DFT of length Nt of the Nt temporal returns for the ns-th antenna element, with k being some appropriately-chosen integer. The motivation for performing (single-output) temporal DFTs prior to filtering is the resulting reduction in correlation in the frequency domain [26]. (The number of outputs refers to the number of DFT coefficients that are computed for the sequence of interest.) The FA was extended to several-output DFTs in the extended FA (EFA) [26], the correlation between adjacent Doppler bins being thereby taken into account. Bao [27] also developed SOMs based on the same principles as those of the FA; they are the Doppler transform-space adaptive processing (DT-SAP) and the factorized time-space (FTS). The m-bins Doppler transform-space adaptive processing (mDT-SAP) [28] is equivalent to the EFA.
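The FA preprocessing just described can be sketched as follows (NumPy; the sizes and the chosen Doppler bin k are illustrative assumptions): a length-Nt DFT is applied to the temporal returns of each antenna element, and only bin k is kept, leaving a length-Ns spatial vector to adapt on:

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, Nt = 8, 16   # elements and pulses (illustrative)
# Stand-in snapshot matrix: rows are antenna elements, columns are pulses.
Y = rng.standard_normal((Ns, Nt)) + 1j * rng.standard_normal((Ns, Nt))

k = 5  # Doppler bin under test (illustrative)
# One length-Nt DFT per antenna element, along the pulse (time) axis...
Y_dopp = np.fft.fft(Y, axis=1)
# ...then keep only coefficient k: the length-Ns vector the FA adapts on.
y_fa = Y_dopp[:, k]
assert y_fa.shape == (Ns,)
```

The EFA/mDT-SAP variants would instead keep a few adjacent Doppler bins (e.g. `Y_dopp[:, k-1:k+2]`) and adapt jointly on them.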
The Σ-Δ STAP approach of [29], [30] reduces the spatial degrees of freedom (DOF) to sum (Σ) and difference (Δ) beams only. Σ-Δ STAP is generally used together with a reduction in the temporal DOFs.
The idea of simultaneous reduction of DOFs in both space and time lies at the core of joint-domain localized (JDL) processing. Based on this idea, Wang and Cai [6] designed the JDL-GLR SOM. Similarly, Bao [28] proposed a hybrid low-dimensional STAP approach (HSTAP) combining mDT-SAP, Σ-Δ STAP, and ACP.
Ward [2] proposed a taxonomy of SOMs in STAP, which is organized according to the type of preprocessor applied before the adaptive weight computation. These SOMs are all based on subselections. A subselection is the subset of a snapshot corresponding to a specific subset of antenna elements and to a specific subset of transmitted pulses. Several subselections may be
considered simultaneously and these may overlap. Ward distinguishes between pre-Doppler and post-Doppler methods depending upon whether the temporal DFT is applied after or before the weight application, respectively. He also distinguishes between element-space and beamspace methods depending upon whether the weight is applied directly to the array outputs or to the spatial beams obtained via a spatial DFT.
An important set of SOMs uses the low-rank property of the I+N CM [1]. The PC approach [9] selects the most significant I+N eigenvectors using a rank-ordering metric (ROM) given by the expected energies of the I+N eigenvectors, which are also equal to the corresponding eigenvalues.
To enhance the performance of the eigenvector-based (or low-rank) SOMs, a dependency of the ROM on the (desired) steering vector was introduced to select an appropriate I+N subspace [10]. The GSC filter that is used in [8, 10] provides a useful insight into rank-reduced methods. The first step of this filter is to project the snapshot on two orthogonal subspaces. The first subspace is spanned by the desired steering vector v(ϑ_s, ν_d), and the second subspace is the nullspace of v(ϑ_s, ν_d). We thus have B v(ϑ_s, ν_d) = 0 when the rows of B are made of any orthogonal basis set for that nullspace. The projection of the snapshot on the first subspace leads to a mainbeam d0, and the projection of the snapshot on the second subspace is called noise-subspace data or auxiliary data x0. Examples of SOMs involving a GSC are those using the multistage Wiener filter, which generates the noise-subspace through a decomposition of the GSC auxiliary channels into a sequence of orthogonal projections [31], and those using the CSM for choosing the eigenvectors that yield the largest output signal-to-interference-plus-noise ratio (SINR). The CSM keeps the eigenvectors corresponding to the k largest among the values |φ_i† r_{x0d0}| / √λ_i, i = 0, ..., NsNt − 1, where φ_i and λ_i are, respectively, the eigenvectors and the corresponding eigenvalues of R_{x0} = B R B†, and r_{x0d0} = B R v(ϑ_s, ν_d) is a cross-correlation vector.

For the sake of completeness, we should mention that there are SOMs based on a model of the I+N CM or on a model of its structure. An example of such a method is the parametric adaptive matched filter (PAMF) [32], where the coefficients of an autoregressive (AR) filter of order p are used to represent an estimate of R [33].
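The GSC split described above can be sketched numerically (NumPy; the sizes and the stand-in steering vector are illustrative assumptions). A blocking matrix B whose rows form an orthonormal basis of the nullspace of v can be obtained from the full SVD of v viewed as an N × 1 matrix, so that B v = 0:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12                                          # snapshot length Ns*Nt (illustrative)
v = np.exp(2j * np.pi * 0.2 * np.arange(N))     # stand-in steering vector

# Full SVD of v as an N x 1 matrix: columns 1..N-1 of U are an
# orthonormal basis of the subspace orthogonal to v.
U, _, _ = np.linalg.svd(v.reshape(-1, 1), full_matrices=True)
B = U[:, 1:].conj().T                           # (N-1) x N blocking matrix, B v = 0

y = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # a snapshot
d0 = (v.conj() @ y) / np.linalg.norm(v)         # mainbeam output
x0 = B @ y                                      # auxiliary (noise-subspace) data
```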
The NHD [21, 34] can enhance performance and can reduce the number of auxiliary snapshots finally used in (4) to obtain a good estimate of the I+N CM. This is achieved by detecting and rejecting outlier snapshots from a large training set. These outliers are auxiliary snapshots that are statistically different, e.g., due to the presence of targets or jammers at these training ranges. When outlier snapshots are present in the training data, the computation of the estimate of the I+N CM requires a larger training set. Rejecting (or replacing) secondary snapshots that are not sufficiently homogeneous should improve the estimation. Projection statistics (PS) [35] are based on the same idea.
Performance Trade-Offs: The efficiency of an SOM must be analyzed not only in terms of detection performance but also in terms of the reduction of the computational cost and of the amount of training data required. Indeed, reducing the computational cost and the amount of training data are the main motivations for conceiving SOMs (Section I). The many criteria for judging the efficiency of an SOM and the great variety of them make it difficult to compare SOMs. Many of the above-cited papers devoted to SOMs discuss trade-offs between detection performance, computational cost, and amount of training data, but only for one or for a small number of SOMs. For example, in [1, p. 26], Klemm proposes some ad hoc rules for choosing the parameters of SOMs based on linear subspace transforms in order to obtain a good trade-off between detection performance and computational cost.
Distinct SOMs must be compared under the same operational conditions, whether it is for detection performance, computational cost, or amount of training data. In other words, SOMs cannot be compared in absolute terms. As an illustration, we focus on the comparison of the computational cost for JDL and PC methods for (highly-) directional antennas and for omnidirectional antennas. For both types of antennas, JDL requires the computation and inversion of one reduced-dimension I+N CM for each direction under test, and PC requires the same operations but for a single I+N CM that corresponds to all directions under test, and that is substantially larger than each I+N CM computed for JDL. So, in the case of a directive antenna, JDL seems to be more efficient than PC since, for JDL, only a limited number of small CMs need to be computed and inverted due to the limited number of directions under test resulting from the directivity of the antenna. On the contrary, in the case of an omnidirectional antenna, JDL seems to be less efficient than PC since, for JDL, a large number of small CMs need to be computed and inverted due to the large number of directions under test resulting from the omnidirectionality of the antenna. In other words, in the case of an omnidirectional antenna, computing and inverting a single large CM corresponding to all directions under test may be more efficient than computing and inverting a large number of small CMs corresponding to each direction under test. Of course, the comparison in terms of computational cost is meaningful only if
the detection performance and the required amount of training data are the same in each case.
Recent research [36] has shown that there is no single best approach, but that the situation-specific advantages of the different SOMs should be exploited dynamically in a knowledge-based approach. Identifying the situation-specific advantages and disadvantages of all the SOMs is beyond the scope of this paper. The framework we propose is intended to give conceptual insight into the structure of existing and, hopefully, yet-to-be-found SOMs.
IV. REVIEW OF EXISTING UNIFYING FRAMEWORKS FOR SUBOPTIMUM METHODS
A plethora of SOMs were introduced over the last decade or so to cope with the issues of computational tractability and of availability of training data. At first sight, it is often difficult to understand the relationship between the various methods. In this section, we briefly review unifying frameworks developed by others; in Section V, we discuss ours.

A first attempt at unification was provided by
Ward in [2], where the SOMs are presented in terms of the dichotomies pre-Doppler versus post-Doppler and element-space versus beamspace. In [11] and [12], this scheme was extended to JDL-GLR, EFA, and ADPCA. Rangaswamy [13] proposed a canonical framework for PC, PAMF, and CSM. Guerci [3, 14] proposed a partial classification scheme, which brings to light the constitutive elements of the expression for the inverse I+N CM and the inner structure of the weight vector for several SOMs. However, this scheme does not consider subselection methods and does not identify a common representation for all SOMs.
These various attempts only unify a limited subset of the existing SOMs. Below, we propose a novel canonical framework for describing SOMs. This framework is the result of a study of all SOMs we could find in the literature. It is designed to encompass all of these SOMs and it should also provide room for accommodating SOMs yet to be derived.
V. NEW CANONICAL FRAMEWORK FOR KNOWN STATISTICS
In spite of the wide variety of SOMs that exist, we have succeeded in designing a canonical framework that allows us to break down all SOMs we are aware of into a sequence of five processing steps (Fig. 3). This sequence of steps is applied to the snapshot y_l at each range l of interest.

Each of the first four processing steps can be expressed in terms of a multiplication by a matrix: the successive matrices are denoted by T_pre, S, W_l,
Fig. 3. Two different views of the five processing steps of the new canonical framework: vectors (or 1D arrays) on the left and matrices (or 2D arrays) on the right. Each of the first four steps is entirely described by a matrix, and the last one by an operator. Here we assume that the I+N CM is known.
and T_post. The last step is a nonlinear thresholding operation, denoted by D{·}. The input is the snapshot vector y_l (left part of Fig. 3), which can also be viewed as a space-time matrix or 2D array (right part of Fig. 3). The output is the vector Z_{b,l} (or, equivalently, the matrix Z_{b,l}) of binary values: 0 for target absent and 1 for target present.

Instead of using a simple weight vector as in the customary implementation of the OP, we use several weight matrices W_l(i), placed along the diagonal of a block-diagonal weight matrix W_l. This results in an output vector y_{w,l}, instead of an output scalar z_l (see (1)) in the case of the OP. Similarly, the final output is a corresponding vector of binary values, instead of a binary scalar in the case of the OP. These generalizations are introduced to handle the fact that some SOMs produce several binary outputs simultaneously, instead of just one in the case of the OP. Each binary value Z_{b,l}(i) in Z_{b,l} thus corresponds to the response of the processor to the triplet (ϑ_s(i), ν_d(i), l). (The Z_{b,l}(i)'s all correspond to the same range gate l.)
Our canonical 5-step processing chain can thus be represented by the compact expression

    Z_{b,l} = D{ T_post W_l S T_pre y_l }.      (5)
The five processing steps are now described in detail, in the order shown in Fig. 3.
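Expression (5) can be sketched end-to-end as follows (NumPy; all matrix sizes, the matrices themselves, and the threshold are illustrative placeholders, not any particular SOM; a single weight block stands in for the block-diagonal W_l):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16               # snapshot length Ns*Nt (illustrative)
Kpre, Ksub = 8, 6    # output sizes of T_pre and S (illustrative)

y_l = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # snapshot

T_pre = rng.standard_normal((Kpre, N))   # Step 1: preprocessor
S = np.eye(Ksub, Kpre)                   # Step 2: subselector (keeps first Ksub entries)
W_l = (rng.standard_normal((3, Ksub))
       + 1j * rng.standard_normal((3, Ksub)))  # Step 3: weight application (3 hypotheses)
T_post = np.eye(3)                       # Step 4: postprocessor
thr = 1.0                                # Step 5: threshold of D{.}

y_w = T_post @ (W_l @ (S @ (T_pre @ y_l)))   # the four linear steps
Z_b = (np.abs(y_w) > thr).astype(int)        # nonlinear thresholding D{.}
```

Each of the SOMs discussed below corresponds to a particular choice of these four matrices.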
Step 1 Preprocessor, implemented by matrix T_pre: The input to the preprocessor is y_l. The output is y_{pre,l} = T_pre y_l.
The preprocessor corresponds either to a projection onto a particular subspace or to a Fourier transform (FT) like operation (either in 1D or 2D). These operations are further discussed below. They can be performed with, or without, dimensionality
reduction. Of course, reducing the size of y_l and, thus, the size of R_{q,l} is useful since it leads to a lower computational cost. We now consider three possible choices for T_pre.
1) Projection-based preprocessor P: T_pre can be set to some projection matrix P. The output y_{pre,l} is then the projection of y_l on the corresponding subspace. The auxiliary eigenvector processor (AEP) [1] is an SOM performing a projection. In AEP, T_pre consists of a matrix where the columns are a partial set of the eigenvectors of R_{q,l} corresponding to the largest eigenvalues, plus the steering vector v(ϑ_s, ν_d) corresponding to the current target hypothesis. This type of preprocessor leads both to a significant reduction in the size of y_{pre,l} and to near-optimal performance.

2) Fourier-transform-based preprocessor F_pre: In the Appendix, we discuss the type of FT-like operation one might like to perform with T_pre. The most general form of this operation is given by (25). When dealing with the preprocessor, we append "pre" to F and any related matrix. We thus rewrite (25) as
    Y_l = F_pre y_l.      (6)
The motivation for considering FT-like operations comes from examining the CM of the output F_pre y_l of the preprocessor, i.e.,

    R_{T,q,l} = E{ F_pre y_l y_l† F_pre† }
              = F_pre E{ y_l y_l† } F_pre† = F_pre R_{q,l} F_pre†.
Results (described in [26], giving both time-domain and frequency-domain correlation characteristics for one set of clutter data obtained from flight tests) show that, whether the DFT implemented by F_pre is 1D or 2D, R_{T,q,l} is diagonally dominant. This is actually a well-known property [37]. This means that correlation values can be significant along and near the main diagonal and very much reduced elsewhere. Therefore, correlation values are very much reduced for elements corresponding to distinct (spatial or temporal) frequencies (1D DFT) or pairs of (spatial and temporal) frequencies (2D DFT). And so, due to the structure of R_{T,q,l} and due to the structure of the transformed steering vector v_{T,s,l}(i) (which will first be encountered in (19)), the significant values of the computed weights are related to bins corresponding to (spatial or temporal) frequencies (1D DFT) or (spatial and temporal) frequency pairs (2D DFT) surrounding the frequency pair (ϑ_s, ν_d) under test [26]. We can thus limit ourselves to computing a reduced R_{T,q,l} for a limited number of frequencies surrounding the tested frequencies. This leads to a significant reduction in the computational cost since the size of the I+N CM is significantly reduced. Note that, in methods using (partial) FT-like operations leaving some bins in the space-time domain (such as SAS [1]), one should be cautious in using a reduced R_{T,q,l} since the reduction in correlation is not true for these bins.

FA and EFA are SOMs using F^t_{r,pre} = F^t_{pre} ⊗ I. Beamspace pre-Doppler is an SOM using F^s_{c,pre} = I ⊗ F^s_{pre}. JDL-GLR is an SOM using F_pre = F^t_{pre} ⊗ F^s_{pre}.
The equation

    F_pre = [ e^{−2πj·ν_d^0·0}  e^{−2πj·ν_d^0·1}  e^{−2πj·ν_d^0·2}  e^{−2πj·ν_d^0·3} ]
            [ e^{−2πj·ν_d^1·0}  e^{−2πj·ν_d^1·1}  e^{−2πj·ν_d^1·2}  e^{−2πj·ν_d^1·3} ]
          ⊗ [ e^{−2πj·ϑ_s^0·0}  e^{−2πj·ϑ_s^0·1}  e^{−2πj·ϑ_s^0·2}  e^{−2πj·ϑ_s^0·3} ]
            [ e^{−2πj·ϑ_s^1·0}  e^{−2πj·ϑ_s^1·1}  e^{−2πj·ϑ_s^1·2}  e^{−2πj·ϑ_s^1·3} ]      (7)

shows the expression for F_pre for JDL-GLR for the simple case where Ns = Nt = 4 (the numbers of antenna elements and transmitted pulses, respectively) and Ks = Kt = 2 (the numbers of spatial and temporal frequency bins, respectively); ϑ_s^0, ϑ_s^1, ν_d^0, and ν_d^1 are the chosen spatial and temporal frequencies of the DFTs, and F_pre has size 4 × 16.
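A separable JDL-GLR-style preprocessor such as (7) can be sketched as the Kronecker product of two small DFT-like matrices (NumPy; the bin frequencies chosen below are illustrative assumptions):

```python
import numpy as np

Ns = Nt = 4                           # elements and pulses, as in the example above
theta_bins = np.array([0.0, 0.25])    # Ks = 2 spatial frequency bins (illustrative)
nu_bins = np.array([0.0, 0.25])       # Kt = 2 temporal frequency bins (illustrative)

# 2 x 4 spatial and temporal DFT-like factor matrices.
F_s = np.exp(-2j * np.pi * np.outer(theta_bins, np.arange(Ns)))
F_t = np.exp(-2j * np.pi * np.outer(nu_bins, np.arange(Nt)))

# Separable preprocessor F_pre = F_t kron F_s, of size (Kt*Ks) x (Nt*Ns) = 4 x 16.
F_pre = np.kron(F_t, F_s)
assert F_pre.shape == (4, 16)
```

Applying `F_pre` to a length-16 snapshot vector returns the 4 angle-Doppler bins on which JDL-GLR adapts.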
For the more general cases where F_pre is not separable into F^s_{c,pre} and F^t_{r,pre} (as discussed in the Appendix), we propose the following general expression for the (p,q)th element of F_pre,

    F_pre(p,q) = (1/a) A(p,q) e^{−j2π( L_s(p,q) U_s(p,q) + L_t(p,q) U_t(p,q) )}      (8)

where the new quantities a, A, L_s, U_s, L_t, and U_t are as follows. Matrix A is binary. If A(p,q) = 0, the (n_{s,q}, n_{t,q})th² element of the 2D snapshot y_l is ignored in the computation of the pth element y_{pre,l}(p) of y_{pre,l}. If A(p,q) = 1, this element is processed. L_s and L_t contain integer coefficients related to the elements of y_l to be processed by the FT-like operation. U_s and U_t contain the spatial and temporal frequencies where the FT-like operation is to be computed. The constant a is chosen so that F_pre is unitary,

    F_pre F_pre† = I.
Providing a more detailed explanation of this secondtype of preprocessor is beyond the scope of this paper.
3) Projection/transform-based preprocessor T_{pre}: Of course, T_{pre} could also be set to

T_{pre} = F_{pre} P.        (9)
We are not aware of any existing SOM using such apreprocessor. This offers the possibility of developingnew SOMs and is an example of how the frameworkcan be used in a systematic way to create new SOMs.
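Since no published SOM of this type exists, the following is a purely hypothetical sketch (Python/NumPy) of what a T_{pre} = F_{pre} P preprocessor of (9) might look like; the eigenvector-based projection P is borrowed from PC-style reduced-rank processing, and all sizes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r = 8, 2                                   # snapshot size, retained rank

# Toy I+N CM with an r-dimensional dominant (clutter) subspace.
U = np.linalg.qr(rng.standard_normal((N, r)))[0]
R = np.eye(N) + 100.0 * U @ U.T

# P projects onto the r dominant eigenvectors (eigh sorts eigenvalues ascending).
eigvecs = np.linalg.eigh(R)[1]
P = eigvecs[:, -r:].conj().T                  # r x N projection
F_pre = np.fft.fft(np.eye(r))                 # small DFT matrix
T_pre = F_pre @ P                             # combined preprocessor, as in (9)

y = rng.standard_normal(N)
print((T_pre @ y).shape)                      # (2,)
```

The output snapshot has length r = 2 rather than N = 8, which is the dimensionality reduction that motivates such preprocessors.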
²n_{s,q} = mod(q, N_s) and n_{t,q} = ⌊q/N_s⌋, where mod(x,y) is the remainder of the division of x by y and ⌊x⌋ is the largest integer less than or equal to x.
1090 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 43, NO. 3 JULY 2007
As indicated earlier, the output of the preprocessor is

y_{pre,l} = T_{pre} y_l.        (10)

Step 2: Subselector, implemented by matrix S
The input to the subselector is y_{pre,l}. The output is y_{s,l} = S y_{pre,l}. The subselector matrix S performs one or more spatial, temporal, and/or spatio-temporal subselections, as explained in Section III.
Each subselection results in a new snapshot ofreduced dimension. Since the computational cost isproportional to the 3rd power of the snapshot size,using subselections can result in a significant savingin subsequent computational cost.
Here, we assume that N_{sub} subselections are created. Each subselection y_{s,l}(i), i = 0, …, N_{sub} − 1, is created by applying a binary subselector matrix S_b(i), creating a temporary output

y′_{s,l}(i) = S_b(i) y_{pre,l}.

An optional transformation T_s(i), discussed below, can be applied to y′_{s,l}(i), resulting in the final subselection y_{s,l}(i),

y_{s,l}(i) = T_s(i) S_b(i) y_{pre,l}.        (11)
The successive y_{s,l}(i), i = 0, …, N_{sub} − 1, are then stacked into a single vector

y_{s,l} = (y^T_{s,l}(0), …, y^T_{s,l}(N_{sub} − 1))^T.

Each y_{s,l}(i) is a reduced-dimension snapshot, and y_{s,l} is a collection of such snapshots. We refer to y_{s,l} as a super-snapshot. The form of y_{s,l} is illustrated in the left part of Fig. 4. The right part of this figure illustrates how the super-snapshot y_{s,l} can be constructed from the preprocessed input snapshot y_{pre,l} via the equation

y_{s,l} = S y_{pre,l}        (12)

where

S = T_s S_b.        (13)

By construction, T_s (but not S_b) is a block-diagonal matrix. Its blocks are generally rectangular.
When any particular subselection y_{s,l}(i) just needs to be a simple selection of the appropriate bins of y_{pre,l}, T_s(i) simply needs to be set to the appropriately sized identity matrix I. However, one of the main reasons for providing an additional transformation T_s(i) is to perform an FT-like operation, either 1D or 2D. Of course, the use of FT-like operations in T_{pre} and T_s must be coordinated. Indeed, there is generally no point in having an FT in both T_{pre} and T_s. In practice, all subselections y_{s,l}(i) are processed similarly, and the T_s(i) are, in fact, generally identical.

The methods of Ward [2] are SOMs performing simple subselections, i.e., with T_s = I. Filter-then-adapt (F&A), described by Brennan [38], is an SOM with rectangular matrices T_s(i) that implement 1D FT-like operations.

Fig. 4. Graphical representation of input-output relation of subselector of Step 2. Structures of various vectors and matrices involved are shown.

The stacking of the y_{s,l}(i) is performed only for mathematical convenience. It allows us to express the complex input-output relation of the current processing step compactly via (12),

y_{s,l} = S y_{pre,l}.        (14)

Given that T_s is block-diagonal, it is clear that S is also block-diagonal. Its blocks are generally rectangular.
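A minimal numerical sketch of Step 2 (Python/NumPy, with illustrative sizes and with T_s(i) = I, i.e., simple bin selection):

```python
import numpy as np

def selector(rows, n):
    # Binary subselector S_b(i): each row of S_b picks one bin of y_pre.
    Sb = np.zeros((len(rows), n))
    Sb[np.arange(len(rows)), rows] = 1.0
    return Sb

n = 8
y_pre = np.arange(n, dtype=complex)       # toy preprocessed snapshot
bins = [[0, 1, 2], [3, 4, 5]]             # bins kept by each of Nsub = 2 subselections

# With T_s(i) = I, each subselection is S_b(i) y_pre; stacking gives the super-snapshot.
y_s = np.concatenate([selector(b, n) @ y_pre for b in bins])
print(y_s.real)                           # [0. 1. 2. 3. 4. 5.]
```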
Step 3: Filtering processor, implemented by matrix W_l
The input to the filtering processor is the super-snapshot y_{s,l}. The output is y_{w,l} = W†_l y_{s,l}.

Each subselection y_{s,l}(i) is processed independently. Note that, even though the weight implemented in the standard OP is a vector, we provide here for a matrix W_{s,l}(i). Indeed, at a minimum, we want to process several target hypotheses simultaneously. So, for each y_{s,l}(i), i = 0, …, N_{sub} − 1, we compute an optimum weight matrix W_{s,l}(i) where each column is related to a particular target hypothesis. This could, of course, be done for the OP as well. Note that the subselections y_{s,l}(i) do not necessarily correspond to the space-time domain, as is the case in the standard OP. Therefore, each optimum weight matrix W_{s,l}(i) must be computed by taking into account the nature of the domain, possibly mixed, of y_{s,l}(i). It is even possible for each subselection to have its own particular domain, e.g., space-time, space-Doppler, or angle-Doppler.

The output corresponding to each subselection y_{s,l}(i) is

y_{w,l}(i) = W†_{s,l}(i) y_{s,l}(i)

where the optimum weight matrix is given by

W_{s,l}(i) = R^{-1}_{T,s,l}(i) V_{T,s,l}(ν_s(i), ν_d(i)).

The subscript T is added as an indication that the weights must be computed in the domain
corresponding to the transformation T(i) = S(i) T_{pre}, as discussed below.

Fig. 5. Graphical representation of input-output relation of filtering processor of Step 3. Structures of various vectors and matrices involved are shown.

The successive y_{w,l}(i), i = 0, …, N_{sub} − 1, are then stacked into a single vector

y_{w,l} = (y^T_{w,l}(0), …, y^T_{w,l}(N_{sub} − 1))^T.

The construction of y_{w,l} is illustrated in Fig. 5. This figure clearly indicates that y_{w,l} is related to the super-snapshot y_{s,l} by

y_{w,l} = W†_l y_{s,l}        (15)

where W_l is block-diagonal,

W_l = diag{W_{s,l}(0), …, W_{s,l}(N_{sub} − 1)}.        (16)
We now provide more details regarding the individual weight matrices W_{s,l}(i). These can be of two types. First, if we test for a single target hypothesis (ν_s(i), ν_d(i), l), W_{s,l}(i) becomes a weight vector, as in the OP,

w_{s,l}(i) = R^{-1}_{T,s,l}(i) v_{T,s,l}(ν_s(i), ν_d(i))        (17)

where we have assumed that β = 1 (see Section II) and

R_{T,s,l}(i) = T(i) R_l T†(i)        (18)

and

v_{T,s,l}(ν_s(i), ν_d(i)) = T(i) v(ν_s(i), ν_d(i))        (19)

with

T(i) = S(i) T_{pre}.

Second, if we test simultaneously N(i) target hypotheses, we must use a weight matrix,

W_{s,l}(i) = R^{-1}_{T,s,l}(i) V_{T,s,l}(ν_s(i), ν_d(i))        (20)

with

V_{T,s,l}(ν_s(i), ν_d(i)) = (v_{T,s,l}(ν_s^0(i), ν_d^0(i)) ⋯ v_{T,s,l}(ν_s^{N(i)-1}(i), ν_d^{N(i)-1}(i)))

where (ν_s^n(i), ν_d^n(i)) is the nth tested frequency pair among the N(i) frequency pairs that are simultaneously tested for subselection i.

The stacking of the y_{w,l}(i) and the consideration of several simultaneous target hypotheses are performed only for mathematical convenience. This allows us to express the complex input-output relation of the current processing step compactly via (15),

y_{w,l} = W†_l y_{s,l}        (21)

where, again, W_l is block-diagonal, as illustrated in Fig. 5.
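The weight computation of Step 3 can be sketched as follows (Python/NumPy; the covariance model, sizes, and frequencies are illustrative assumptions, and a temporal-only steering vector stands in for the full space-time one):

```python
import numpy as np

def steer(nu, n):
    # Steering vector at normalized frequency nu (temporal-only, for illustration).
    return np.exp(2j * np.pi * nu * np.arange(n))

N = 6
v0 = steer(0.1, N)                    # target hypothesis under test
jam = steer(0.3, N)                   # interference direction
R = np.eye(N) + 10 * np.outer(jam, jam.conj())   # known I+N CM (toy model)

w = np.linalg.solve(R, v0)            # w = R^{-1} v, as in (17) with beta = 1
# The adapted weight passes the tested hypothesis while nulling the interference:
print(abs(w.conj() @ jam) < abs(w.conj() @ v0))  # True
```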
Step 4: Postprocessor, implemented by matrix T_{post}
The input to the postprocessor is y_{w,l}. The output is y_{post,l} = T_{post} y_{w,l}.

Further processing can be applied to the y_{w,l}(i), as in the pre-Doppler methods of [2], where a DFT is applied to the y_{w,l}(i). The postprocessing of the output y_{w,l} is performed by T_{post}. The role and structure of T_{post} are similar to those of T_{pre}. However, since the inversion of the covariance matrix performed in Step 3 is no longer an issue, there is no need for further dimensionality reduction. In other words, T_{post} does not need to include a projection operation. As a result, we write

T_{post} = F_{post}

where F_{post} is similar to F_{pre}.
Step 5: Thresholding, implemented by D{·}
The input to the thresholding operator is y_{post,l}. The output is Z_{b,l} = D{y_{post,l}}. This is also the final output, providing the binary detection results.

Each scalar element of y_{post,l} is compared with some threshold η(j), further discussed below, to establish the presence or the absence of a target for the tested hypotheses (ν_s^n(i), ν_d^n(i), l) corresponding to each of the N(i) tested frequency pairs of each of the N_{sub} subselections. Two possible strategies can be envisioned. We can use either a single threshold η for all scalar elements y_{post,l}(j) of y_{post,l} or, more generally, distinct thresholds η(j) for each y_{post,l}(j). Focusing on the general case, the jth element of Z_{b,l} is set to 0 (target absent) if y_{post,l}(j) is below η(j) and to 1 (target present) if y_{post,l}(j) is above η(j). In case of equality, one can proceed as recommended in [39].

Since the main goal of this paper is to provide a way of understanding the structure of SOMs, discussing the details of the detection operator D{·} is beyond the scope of this paper. The choice of this operator depends on the operational requirements of the radar system. Detection leading to a constant false alarm rate (CFAR) is often highly desirable in operational settings, but the design of a robust CFAR detector remains a challenge [3, sect. 6.2]. Papers such as [40]-[44] discuss the problem of maximizing P_d while keeping P_fa constant. In particular, [43] discusses CFAR detection for STAP. The difficulty lies in achieving a P_fa independent of the level and structure of the I+N CM. More information on the structure of CFAR algorithms is found in [45].
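Step 5 amounts to an element-wise comparison, which can be sketched as follows (Python/NumPy; the statistics and thresholds below are made-up illustrative numbers, not the output of any CFAR rule):

```python
import numpy as np

y_post = np.array([0.2, 3.1, 0.9, 4.5])  # detection statistics, one per tested hypothesis
eta = np.array([1.0, 1.0, 1.0, 4.0])     # distinct per-element thresholds eta(j)
Z_b = (y_post > eta).astype(int)         # binary detection map Z_b,l
print(Z_b)                               # [0 1 0 1]
```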
Fig. 6. Canonical framework for suboptimum STAP methods, including the estimation of the I+N CM and compensation for range
dependence. Detailed views of gray part are shown in Figs. 7 and 8.
VI. DEALING WITH UNKNOWN STATISTICS

So far, we have assumed that the I+N CM is known. Here, we consider the case where it is not known. Therefore, we must adapt the framework to allow for the estimation of the R_{T,s,l}(i). This estimation can be done by applying to the datacube a sequence of five new processing steps, related to some of those described in Section V. The final outputs are the estimated transformed CMs R̂_{T,s,l}(i), i = 0, …, N_{sub} − 1. Throughout, we use subscript l to refer to the range of interest and subscript k to refer to the ranges associated with the surrounding snapshots used for the estimation of the I+N CM at l. The prime ′ continues to indicate expressions related to an intermediate computation domain; the carat ˆ indicates estimates obtained by straight averaging. The following description of the five new processing steps used for estimation is rather concise since these new steps are closely related to the first three processing steps used for detection in Section V. The discussion can be followed on Fig. 6, which illustrates the general architecture of the novel canonical framework. While its left part shows the same sequence of detection steps as in Fig. 3, its right part shows the estimation steps. At the top, we see the N_s × N_t × N_r datacube of N_r snapshots, each of size N_s × N_t. The diagram clearly shows the operations that are in correspondence, i.e., 1) T_{pre} and T′_{pre}, 2) S_b and S_b(i), and 3) T_s and T′_s(i). Operations T′_{pre}, S_b(i), and T′_s(i) are described below. This conceptual correspondence stops at T_s, i.e., prior to the calculation of the weights in the main path of calculation applied to each y_l. Notice the presence of the matrices T_{k→l}, which are described in Section VII.
Step 1: The output is y′_{pre,k} = T′_{pre} y_k for all k ∈ S_l. T′_{pre} is a preprocessor like T_{pre}. However, T′_{pre} could be different from T_{pre}. This is the case in the frequency-domain space-time FIR filter (FDFF) [1]. But then the output must be transformed again via matrices T_{R,pre}(i) for compatibility with the filtering domain, as explained further below in Step 4. In other words, the transformations applied to snapshots during the estimation of the I+N CM can be separated into two steps (T′_{pre} and T_{R,pre}(i)) according to the needs, as long as all inputs to the filtering processor (where the weights are applied) are expressed in exactly the same, appropriate domain.
Fig. 7. Detailed view of gray part of Fig. 6 for situations where the I+N CMs are estimated using (22), as is generally the case.
Fig. 8. Detailed view of gray part of Fig. 6 for situations where I+N CMs are estimated using (23), as in the case of sub-CPI
smoothing.
Step 2: The output is y′_{s,k} = S′ y′_{pre,k} for all k ∈ S_l. We set S′ = T′_s S_b, where S_b is the selection matrix of Section V. We allow for the possibility of T′_s being different from T_s, even though we are not aware of any method for which T′_s is different from T_s. (Once again, such observations could lead to new SOMs.) The output corresponding to the ith subselection is denoted by y′_{s,k}(i) for range k.
Step 3: The outputs are the I+N CM estimates R̂′_{s,l}(i). The computation of these matrices can involve an NHD [34] to determine which auxiliary snapshots are homogeneous and, thus, which auxiliary snapshots to integrate in the final estimation of R̂′_{s,l}(i). The use of an NHD generally involves an iterative computation of the estimates R̂′_{s,l}(i) [34]. These iterations are symbolized in Fig. 6 by double-headed arrows between the NHD block and the I+N CM estimation block.

The most common approach for computing the estimate R̂′_{s,l}(i) (for the ith subselection) is the straight averaging

R̂′_{s,l}(i) = (1/N_l) Σ_{k∈S_l} R′_{s,k}(i)        (22)

where the R′_{s,k}(i) = y′_{s,k}(i) y′†_{s,k}(i) are the single-sample I+N CMs. The estimate R̂′_{s,l}(i) for the ith subselection is obtained by averaging the single-sample CMs R′_{s,k}(i) for all k ∈ S_l. This is repeated for each subselection (i = 0, …, N_{sub} − 1). This approach is illustrated in Fig. 7.
However, the above approach is not the only one that can be used. In methods such as sub-CPI smoothing [46], the estimation of R̂′_{s,l} is based on several sub-CPIs obtained from each auxiliary snapshot,

R̂′_{s,l} = (1/N_l) Σ_{k∈S_l} [ (1/N_{sub}) Σ_{i=0}^{N_{sub}−1} R′_{s,k}(i) ].        (23)

This approach is illustrated in Fig. 8.

In methods such as PAMF, the coefficients of an AR filter of order p are used to derive the R̂′_{s,l}(i). Furthermore, the above average of the sample I+N CMs becomes a weighted sum in the case where the model used for clutter is a spherically invariant random process (SIRP), as described in [47]. (This approach is not illustrated here.)
Step 4: The output is

R̂_{T,s,l}(i) = T_{R,pre}(i) R̂′_{s,l}(i) T†_{R,pre}(i)
where T_{R,pre}(i) is a transformation that ensures that R̂_{T,s,l}(i) and y_{s,l}(i) are expressed in the same domain.
Step 5: The outputs are the matrices W_{s,l}(i) of Step 3 of Section V. Equation (20) is used for the calculation of W_{s,l}(i) after replacement of R_{T,s,l}(i) by its estimate R̂_{T,s,l}(i).

The estimate for the ith subselection of the I+N CM at l can thus be expressed as

R̂_{T,s,l}(i) = T_{R,pre}(i) [ (1/N_l) Σ_{k∈S_l} y′_{s,k}(i) y′†_{s,k}(i) ] T†_{R,pre}(i)

where

y′_{s,k}(i) = S′(i) T′_{pre} y_k.
The matrices T′_{pre}, T′_s, and T_{R,pre} used to produce R̂_{T,s,l}(i) in any particular method are designed to increase performance and to minimize computational cost. The only constraint is that the domains of R̂_{T,s,l}(i) and y_{s,l}(i) be identical. Any domain mismatch will obviously lead to an incorrect computation of W_{s,l}(i) using (20).
VII. DEALING WITH RANGE DEPENDENCE

In most radar configurations, the statistics of the snapshots y_k are nonstationary with respect to range. Compensation for this range dependence (RD) can be achieved by applying to each y_k a transformation that depends upon k.

Existing RD compensation methods are Doppler warping (DW) [48], angle-Doppler compensation (ADC) [49], adaptive angle-Doppler compensation (AADC) [50], derivative-based updating (DBU) [51], and the registration-based compensation (RBC) methods [18].

In [18], it was shown that each of these methods can be represented by an RD transformation T_{k→l} applied to each snapshot before computing the weight vector or performing subselections. As a consequence, to allow for the use of RDC methods in our framework, we need to add an RD transformation T_{k→l} to T′_{pre}, so that the preprocessing transformation becomes

T′_{pre} T_{k→l}.

If we include the capability of RDC, the y′_{s,k}(i) used to compute R̂′_{s,l}(i) become

y′_{s,k}(i) = S′(i) T′_{pre} T_{k→l} y_k

where the T_{k→l} matrices are discussed in detail in [18].
VIII. EXISTING AND NEW TAXONOMIES OFSUBOPTIMUM METHODS
References [2], [4], and [3] propose taxonomiesfor classifying SOMs. However, none of these
classifications is complete enough to represent thediversity of SOMs that can be represented within thenew canonical framework.
As mentioned in Section III, Ward [2] introduced a useful taxonomy, where he classified SOMs as pre-Doppler versus post-Doppler and as element-space versus beamspace. However, this classification is not general enough to include the more exotic cases characterized by the presence of both T_{pre} and T_{post} in the new framework. We can indeed imagine a method where T_{pre} and T_{post} both implement temporal DFTs. Such methods could not be characterized in terms of Ward's taxonomy. Furthermore, PC [9] cannot be characterized by Ward's taxonomy since this method involves a projection on I+N eigenvectors, as explained in Section III.
Melvin [4] distinguishes between reduced-rank SOMs and reduced-dimension SOMs. Reduced-rank methods reduce the size of the snapshot by projecting it on a subspace constructed using selected eigenvectors of the I+N CM, as in PC [9] and CSM [10]. Melvin also distinguishes between signal-dependent SOMs and signal-independent SOMs. Signal-dependent methods have a T_{pre} and/or an S that depend on the target hypothesis (ν_s, ν_d, l), generally via the use of v(ν_s, ν_d) in the computation of these matrices.
Guerci [3] distinguishes between data-dependent SOMs and data-independent SOMs. In data-dependent SOMs, the filtering blocks (T_{pre} and S) are built according to the knowledge of the I+N CM. This distinction is more general than the reduced-rank versus reduced-dimension distinction proposed by Melvin. Combining the data-dependence and signal-dependence dichotomies creates the four-way classification shown in Fig. 9.

CSM [10] is thus a data-dependent and signal-dependent SOM, since it involves the I+N CM and the target hypothesis in the computation of T_{pre}. PC [9] is also data dependent, but it is signal independent since the use of the eigenvectors to construct T_{pre} is not related to the target hypothesis, contrary to what is done in CSM. The element-space and beamspace SOMs of Ward [2] are all data independent since T_{pre} consists of fixed Fourier coefficients and S is fixed.

It also appears useful to be able to characterize SOMs as performing subselections (as in JDL-GLR [6]) or not, and as testing multiple hypotheses (as in SAS [1]) or not. Then, the beamspace post-Doppler SOM of Ward [2] can be classified as signal independent, data independent, performing subselections, and testing single hypotheses.
The above considerations lead to a generalclassification structure primarily based on thefour combinations of Fig. 9, i.e., on the notion ofdata-(in)dependence and signal-(in)dependence, butaugmented by whether or not the method performs
Fig. 9. A taxonomy for SOMs. Each of the four main flavors
shown can be optionally characterized as performing
subselections and/or testing multiple hypotheses.
subselections and whether or not it tests multiplehypotheses. The important point is that all SOMsthat can be represented within the new framework
of Section V can be classified in terms of this newtaxonomy.
IX. USE OF THE NEW FRAMEWORK
The framework should be useful to radar system designers interested in creating methods tailored to specific operational settings. The framework indeed provides a clear sequence of building blocks that are well defined and quite general. As an illustration, the use of a directional antenna will motivate the designer to use a T_{pre} matrix that converts signals to beams steered in the look directions and to do the processing only for these directions. The availability of parallel processors will naturally lead to the consideration of subselections. Subselections can fit a hardware environment where processors are organized to treat small (or partial) snapshots independently and in parallel. Problems with clutter snapshots that are not IID with respect to range will naturally lead to the consideration of RDC, which is provided for in the framework. In our case, the framework inspired us to improve an SOM that combines JDL-GLR with an RDC algorithm with reduced sample support [52].
X. CONCLUSION
We have designed a framework that encompassesall suboptimum STAP methods we are aware of.It applies to all monostatic and bistatic STAPconfigurations. The framework includes the
mechanisms required for estimating the requiredcovariance matrices and for applying existing RDCmethods. The framework also appears to be flexibleenough to accommodate many of the new methodsof suboptimum processing, CM estimation, and RDCthat might be proposed in the future. Furthermore,the modularity of the framework should make itpossible to modify it or to extend it, if this becamenecessary. We also proposed a new taxonomy forSOMs which makes it possible to unambiguouslyclassify all SOMs that can be described by the newcanonical framework. Finally, we have provided a firstlook at how the framework could assist a radar system
designer in creating new SOMs for STAP.

This paper focused on finding a common language for understanding the structure of existing SOMs and for envisioning new SOMs. Specifically, we have not addressed the issues of computational cost and performance. These may or may not be relevant. For example, we could envision proposing a new architecture using the framework, but then implementing it in a way that would be more efficient but, perhaps, more obscure.
APPENDIX. FOURIER-TRANSFORM-LIKEOPERATIONS COMMONLY USED IN STAP
Consider the snapshot matrix y_l illustrated in Fig. 1. Its n_s th row r_s^T corresponds to the N_t echoes corresponding to the N_t transmitted pulses and received on the n_s th antenna element. Similarly, its n_t th column c_t corresponds to the N_s echoes received at the N_s antenna elements and corresponding to the n_t th transmitted pulse. Each row can be viewed as a finite-support 1D time sequence and each column as a finite-support 1D space sequence; matrix y_l can be viewed as a finite-support 2D space-time sequence.

Assuming we want to compute the 2D (space-time) discrete FT (DFT) of y_l, we can first compute the 1D temporal DFT of each row and then the 1D spatial DFT of each column of the resulting matrix. Recall that the 1D DFT X of some vector x is simply given by X = F x, where F is the appropriate Fourier-coefficients matrix [53]. The DFT X of x is typically defined to have the same length as x [53]. However, some STAP methods use DFT-like transforms for which the input x and the output X do not have the same lengths. Therefore, DFT should be understood here as meaning the discrete-time FT (DTFT) [54] sampled at an arbitrary number of frequencies. F can thus be rectangular. Here, we call such a transform an FT-like operation.

Since F has a different form for space and time, we use F^s and F^t in these respective cases. Therefore, the FT-like matrix Y_l of the matrix y_l is given by

Y_l = F^s y_l (F^t)^T        (24)

where (F^t)^T applies a temporal DFT to each row of y_l and F^s applies a spatial DFT to each column of the resulting matrix. To deal with the vector forms y_l and Y_l of the matrices y_l and Y_l, respectively, one can easily show that (24) must be replaced by

Y_l = F y_l        (25)

with

F = F^s_c F^t_r        (26)

where

F^s_c = [ F^s        0 ]
        [      ⋱       ] = I ⊗ F^s        (27)
        [ 0        F^s ]

computes the spatial DFTs of the columns and

F^t_r = [ F^t(1,1) I    ⋯    F^t(1,N_t) I   ]
        [      ⋮         ⋱         ⋮        ] = F^t ⊗ I        (28)
        [ F^t(N_t,1) I   ⋯    F^t(N_t,N_t) I ]

computes the temporal DFTs of the rows.

Using the well-known relation [19]

(A ⊗ B)(C ⊗ D) = AC ⊗ BD        (29)

for the Kronecker product, we have

F = F^t ⊗ F^s.        (30)
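The identities (24)-(30) are easy to verify numerically. The following sketch (Python/NumPy, true square DFTs, illustrative sizes) checks that F = F^s_c F^t_r = F^t ⊗ F^s and that applying F to the column-stacked y_l reproduces the 2D DFT of (24):

```python
import numpy as np

Ns, Nt = 3, 4
Fs = np.fft.fft(np.eye(Ns))        # spatial DFT matrix F^s
Ft = np.fft.fft(np.eye(Nt))        # temporal DFT matrix F^t

Fs_c = np.kron(np.eye(Nt), Fs)     # I (x) F^s, as in (27)
Ft_r = np.kron(Ft, np.eye(Ns))     # F^t (x) I, as in (28)

# (A (x) B)(C (x) D) = AC (x) BD  =>  F = F^s_c F^t_r = F^t (x) F^s, as in (30).
F = Fs_c @ Ft_r
print(np.allclose(F, np.kron(Ft, Fs)))          # True

# Applying F to the column-stacked snapshot matches the 2D DFT of (24).
y = np.random.default_rng(2).standard_normal((Ns, Nt))
Y = Fs @ y @ Ft.T                               # (24)
print(np.allclose(F @ y.flatten(order="F"), Y.flatten(order="F")))  # True
```

Column-major ("F"-order) stacking is used because, per the footnote to (8), the spatial index varies fastest in the vectorized snapshot.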
As indicated earlier, the FT-based operation used in STAP is either the true DFT (where the input and output vectors have the same lengths) or an FT-like version (where the lengths are different). Furthermore, while some methods use the FT-like operation in 2D, others use it only in 1D, either only along the space dimension or only along the time dimension. The transform can also be applied to only a subset of (possibly noncontiguous) elements of y_l. More generally, in some cases, the operation performed may even result in an F that is not separable into space and time components, in contrast with the separable form (30).
SEBASTIEN DE GREVE
PHILIPPE RIES
Dept. of Electrical Engineering and Computer Science
University of Liege
Sart-Tilman, Bldg. B28
B-4000 Liege
Belgium
E-mail: ([email protected], [email protected])
FABIAN D. LAPIERRE
Dept. of Electrical Engineering
Royal Military Academy
Avenue de la Renaissance 30
B-1000 Brussels
Belgium
JACQUES G. VERLY
Dept. of Electrical Engineering and Computer Science
University of Liege
Sart-Tilman, Bldg. B28
B-4000 Liege
Belgium
REFERENCES
[1] Klemm, R.
Principles of Space-Time Adaptive Processing.
IEE Radar, Sonar, Navigation and Avionics 9, 2002.
[2] Ward, J.
Space-time adaptive processing for airborne radar.
MIT Lincoln Laboratory, Lexington, MA, Technical
Report 1015, 1994.
[3] Guerci, J. R.
Space-Time Adaptive Processing for Radar.
Norwood, MA: Artech House, 2003.
[4] Melvin, W. L.
A STAP overview.
IEEE Aerospace and Electronic Systems Magazine, Part 2: Tutorials, 19, 1 (Jan. 2004), 19-35.
[5] Reed, I. S., Mallett, J. D., and Brennan, L. E.
Rapid convergence rate in adaptive arrays.
IEEE Transactions on Aerospace and Electronic Systems, AES-10, 6 (1974), 853-863.
[6] Wang, H., and Cai, L.
On adaptive spatial-temporal processing for airborne surveillance radar systems.
IEEE Transactions on Aerospace and Electronic Systems, 30, 3 (1994), 660-669.
[7] Goldstein, J. S., Reed, I. S., and Zulch, P. A.
Multistage partially adaptive STAP CFAR detection algorithm.
IEEE Transactions on Aerospace and Electronic Systems, 35, 2 (1999), 645-661.
[8] Haimovich, A. M., and Bar-Ness, Y.
An eigenanalysis interference canceler.
IEEE Transactions on Signal Processing, 39, 1 (1991), 76-84.
[9] Kirsteins, I. P., and Tufts, D. W.
Adaptive detection using low rank approximation to a data matrix.
IEEE Transactions on Aerospace and Electronic Systems, 30 (1994), 55-67.
[10] Goldstein, J. S., and Reed, I. S.
Theory of partially adaptive radar.
IEEE Transactions on Aerospace and Electronic Systems, 33, 4 (1997), 1309-1325.
[11] Peckham, C. D., Haimovich, A. M., Ayoub, T. F., Goldstein, J. S., and Reed, I. S.
Reduced-rank STAP performance analysis.
IEEE Transactions on Aerospace and Electronic Systems, 36, 2 (2000), 664-676.
[12] Lin, X., and Blum, R. S.
Robust STAP algorithms using prior knowledge for airborne radar applications.
Signal Processing (Elsevier), 79 (1999), 273-287.
CORRESPONDENCE 1097
Authorized licensed use limited to: Jacques Verly. Downloaded on October 28, 2008 at 06:32 from IEEE Xplore. Restrictions apply.
7/29/2019 DeGreve AES 2007
15/16
[13] Rangaswamy, M.
A unified framework for space-time adaptive processing.
In Proceedings of the Ninth IEEE SP Workshop on Statistical Signal and Array Processing, Portland, OR, Sept. 14-16, 1998, 360-363.
[14] Guerci, J. R., Goldstein, J. S., and Reed, I. S.
Optimal adaptive reduced-rank STAP.
IEEE Transactions on Aerospace and Electronic Systems, 36, 2 (2000), 647-663.
[15] De Greve, S., Lapierre, F. D., and Verly, J. G.
Canonical framework for describing suboptimum radar space-time adaptive processing (STAP) techniques.
In Proceedings of the 2004 IEEE Radar Conference, Philadelphia, PA, Apr. 26-29, 2004, 474-479.
[16] De Greve, S., Lapierre, F. D., and Verly, J. G.
A canonical framework for suboptimum space-time adaptive processing (STAP) including covariance-matrix estimation and range dependence compensation.
In Proceedings of the Radar 2004 International Conference, Toulouse, France, Oct. 18-22, 2004.
[17] Lapierre, F. D., and Verly, J. G.
Registration-based solutions to the range-dependence problem in STAP radars.
In Proceedings of the Adaptive Sensor Array Processing (ASAP) Workshop, MIT Lincoln Laboratory, Lexington, MA, Mar. 11-13, 2003.
[18] Lapierre, F. D.
Registration-based range-dependence compensation in airborne bistatic radar STAP.
Ph.D. dissertation, University of Liege, Liege, Belgium, Nov. 2004.
[19] Graham, A.
Kronecker Products and Matrix Calculus With Applications.
West Sussex, UK: Horwood Publishing Ltd., 1981.
[20] Klemm, R.
Space-Time Adaptive Processing: Principles and Applications.
IEE Radar, Sonar, Navigation and Avionics 9, 2000.
[21] Rangaswamy, M., Michels, J. H., and Himed, B.
Statistical analysis of the nonhomogeneity detector for non-Gaussian interference backgrounds.
In Proceedings of the IEEE Radar Conference, Long Beach, CA, Apr. 22-25, 2002, 304-310.
[22] Skolnik, M. I.
Radar Handbook (2nd ed.).
New York: McGraw-Hill, 1990.
[23] Richardson, P. G.
Relationship between DPCA and adaptive space time processing techniques for clutter suppression.
In Proceedings of the International Conference on Radar, Paris, May 3-6, 1994, 295-300.
[24] Blum, R. S., Melvin, W. L., and Wicks, M. C.
An analysis of adaptive DPCA.
In Proceedings of the IEEE National Radar Conference, Ann Arbor, MI, May 13-16, 1996, 303-308.
[25] Gabriel, W. F.
Adaptive digital processing investigation of DFT subbanding vs transversal filter canceler.
Naval Research Laboratory, Report 8981, 1986.
[26] DiPietro, R. C.
Extended factored space-time processing for airborne radar systems.
In Conference Record of the Twenty-Sixth Asilomar Conference on Signals, Systems and Computers, vol. 1, Pacific Grove, CA, Oct. 26-28, 1992, 425-430.
[27] Bao, Z., Liao, G., Wu, R., Zhang, Y., and Wang, Y.
Adaptive spatial-temporal processing for airborne radars.
Chinese Journal of Electronics, 2, 1 (1993), 27.
[28] Bao, Z., Wu, S., Liao, G., and Xu, Z.
Review of reduced rank space-time adaptive processing
for airborne radars.
In Proceedings of the International Conference on Radar
(ICR), Beijing, China, Oct. 810, 1996, 766769.
[29] Brown, R. D., Schneible, R. A., Wicks, M. X., Wang, H.,
and Zhang, Y.
STAP for clutter suppression with sum and difference
beams.
IEEE Transactions on Aerospace and Electronic Systems,
36, 2 (2000), 634646.
[30] Wang, H., Zhang, Y., and Zhang, Q.An improved and affordable space-time adaptive
processing approach.
In Proceedings of the International Conference on Radar
(ICR), Beijing, China, Oct. 810, 1996, 7277.
[31] Goldstein, J. S., Reed, I. S., and Scharf, L. L.
A multistage representation of the Wiener filter based on
orthogonal projections.
IEEE Transactions on Information Theory, 44, 7 (1998),
29432959.
[32] Roman, J. R., Rangaswamy, M., Davis, D. W., Zhang, Q.,
Himed, B., and Michels, J. H.
Parametric adaptive matched filter for airborne radar
applications.
IEEE Transactions on Aerospace and Electronic Systems,
36, 2 (2000), 677692.[33] Michels, J. H., Roman, J. R., and Himed, B.
Beam control using the parametric adaptive matched filter
STAP approach.
In Proceedings of the IEEE National Radar Conference,
Huntsville, AL, May 58, 2003, 405412.
[34] Melvin, W. L., and Wicks, M. C.
Improving practical space-time adaptive radar.
In Proceedings of the IEEE Radar Conference, Syracuse,
NY, May 13–15, 1997, 48–53.
[35] des Rosiers, A. P., Schoenig, G. N., and Mili, L.
Robust space-time adaptive processing using projection
statistics.
In Proceedings of the Radar 2004 International
Conference, Toulouse, France, Oct. 18–22, 2004.
[36] Wicks, M. C., Rangaswamy, M., Adve, R., and Hale, T. B.
Space-time adaptive processing.
IEEE Signal Processing Magazine, 23, 1 (2006), 51–65.
[37] Compton, R. T.
The relationship between tapped delay-line and FFT
processing in adaptive arrays.
IEEE Transactions on Antennas and Propagation, 36, 1 (1988),
15–26.
[38] Brennan, L. E., Piwinsky, D. J., and Staudaher, F. M.
Comparison of space-time adaptive processing approaches
using experimental airborne radar data.
In Proceedings of the IEEE National Radar Conference,
Boston, MA, Apr. 1993, 176–181.
[39] McDonough, R. N., and Whalen, A. D.
Detection of Signals in Noise.
New York: Academic Press, 1995.
[40] Kelly, E. J.
An adaptive detection algorithm.
IEEE Transactions on Aerospace and Electronic Systems,
22, 1 (Mar. 1986).
[41] Chen, W.-S., and Reed, I. S.
A new CFAR detection test for radar.
Digital Signal Processing, 1, 4 (Oct. 1991).
[42] Robey, F. C., Fuhrmann, D. R., Kelly, E. J., and Nitzberg, R.
A CFAR adaptive matched filter.
IEEE Transactions on Aerospace and Electronic Systems,
28, 1 (Jan. 1992).
1098 IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 43, NO. 3 JULY 2007
[43] Reed, I. S., Gau, Y. L., and Truong, T. K.
CFAR detection and estimation for STAP radar.
IEEE Transactions on Aerospace and Electronic Systems,
34, 3 (July 1998).
[44] Kraut, S., and Scharf, L. L.
The CFAR adaptive subspace detector is a scale-invariant
GLRT.
IEEE Transactions on Signal Processing, 47, 9 (1999),
2538–2541.
[45] Nitzberg, R.
Adaptive Signal Processing for Radar.
Norwood, MA: Artech House, 1992.
[46] Schuman, H. K., and Li, P.
Space-time adaptive processing (STAP) for low sample
support applications.
Air Force Research Laboratory, Rome, NY, Technical
Report AFRL-SN-RS-TR-2004-124, May 2004.
[47] Rangaswamy, M.
Parametric and model based adaptive detection algorithms
for non-Gaussian interference backgrounds.
Air Force Research Laboratory, Rome, NY, Technical
Report AFRL-SN-RS-TR-1999-185, Aug. 1999.
[48] Borsari, G. K.
Mitigating effects on STAP processing caused by an
inclined array.
In Proceedings of the IEEE National Radar Conference,
Dallas, TX, May 12–13, 1998, 135–140.
[49] Himed, B., Zhang, Y., and Hajjari, A.
STAP with angle-Doppler compensation for bistatic
airborne radars.
In Proceedings of the IEEE National Radar Conference,
Long Beach, CA, May 2002, 311–317.
[50] Melvin, W. L., Himed, B., and Davis, M. E.
Doubly adaptive bistatic clutter filtering.
In Proceedings of the IEEE National Radar Conference,
Huntsville, AL, May 5–8, 2003, 171–178.
[51] Kogon, S. M., and Zatman, M. A.
Bistatic STAP for airborne radar systems.
In Proceedings of the ASAP Conference, MIT Lincoln
Laboratory, Lexington, MA, Mar. 13–14, 2000.
[52] Ries, P., De Greve, S., Lapierre, F. D., and Verly, J. G.
Design of a new adaptive heterogeneity-compensation
algorithm based on the JDL algorithm and applied to
bistatic radar STAP using conformal arrays.
In Proceedings of the 2006 International Radar Symposium
(IRS), Kraków, Poland, May 24–26, 2006, submitted for
review.
[53] Oppenheim, A. V., and Willsky, A. S.
Signals and Systems.
Englewood Cliffs, NJ: Prentice-Hall, 1997.
[54] Oppenheim, A. V., and Schafer, R. W.
Discrete-Time Signal Processing.
Englewood Cliffs, NJ: Prentice-Hall, 1999.
Application of the Kalman-Levy Filter for Tracking Maneuvering Targets
Among target tracking algorithms using Kalman filtering-like
approaches, the standard assumptions are Gaussian process and
measurement noise models. Based on these assumptions, the
Kalman filter is widely used in single or multiple filter versions (e.g., in an interacting multiple model (IMM) estimator). The
oversimplification resulting from the above assumptions can cause
degradation in tracking performance. In this paper we explore
the application of the Kalman-Levy filter to handle maneuvering
targets. This filter assumes a heavy-tailed noise distribution
known as the Levy distribution. Due to the heavy-tailed nature of
the assumed distribution, the Kalman-Levy filter is more effective
in the presence of large errors that can occur, for example,
due to the onset of acceleration or deceleration. However, for
the same reason, the performance of the Kalman-Levy filter
in the nonmaneuvering portion of track is worse than that of
a Kalman filter. For this reason, an IMM with one Kalman
and one Kalman-Levy module is developed here. Also, the
superiority of the IMM with Kalman-Levy module over only
Kalman-filter-based IMM for realistic maneuvers is shown by
simulation results.
I. INTRODUCTION
In the target tracking literature [1, 3] the measurement and process noise sequences are considered to be Gaussian. This, in addition to the
linearity assumption of the update and measurement processes, simplifies the tracker to the form of the well-known Kalman filter. However, the Gaussian assumption on the process noise is a questionable approximation, as an aircraft's motion can be described by the combination of small perturbations due to air turbulence and occasional pilot-induced changes to the speed and course. In addition, most measurement processes related to target tracking can generate outlying measurements, which adversely affect any tracker that assumes Gaussian measurement noise.
To track different phases of aircraft motion, a
number of multiple-model algorithms have been
Manuscript received October 19, 2004; revised December 15, 2005
and December 19, 2006; released for publication February 28,
2007.
IEEE Log No. T-AES/43/3/908414.
Refereeing of this contribution was handled by D. J. Salmond.
0018-9251/07/$25.00 c 2007 IEEE