
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 56, NO. 8, AUGUST 2008 3385

Error Exponents for the Detection of Gauss–Markov Signals Using Randomly Spaced Sensors

Saswat Misra, Member, IEEE, and Lang Tong, Fellow, IEEE

Abstract—We derive the Neyman–Pearson error exponent for the detection of Gauss–Markov signals using randomly spaced sensors. We assume that the sensor spacings d_1, d_2, … are drawn independently from a common density f(d), and we treat both stationary and nonstationary Markov models. Error exponents are evaluated using specialized forms of the Strong Law of Large Numbers, and are seen to take on algebraically simple forms involving the parameters of the Markov processes and expectations over f(d) of certain functions of d_1. These expressions are evaluated explicitly when f(d) corresponds to i) exponentially distributed sensors with placement density λ; ii) equally spaced sensors; and iii) the preceding cases when sensors fail (or, equivalently, are asleep) with probability q. Many insights follow. For example, we determine the optimal λ as a function of q in the nonstationary case. Numerical simulations show that the error exponent predicts trends of the simulated error rate accurately even for small data sizes.

Index Terms—Error exponent, Gauss–Markov, Neyman–Pearson detection, optimal placement density, sensors.

I. INTRODUCTION

WE study the detection of a correlated signal field by a set of sensors and a fusion center (FC), as depicted in

Fig. 1. We assume that the signal is present under both hypotheses, H_0 and H_1, and that it has a Gauss–Markov correlation structure and power level that are hypothesis-dependent. The signal field is sampled by N sensors, and these samples are collected by the FC. The FC then makes a global decision as to the true hypothesis using Neyman–Pearson (NP) hypothesis testing [21]. We assume that the sensors are randomly located along a straight line. The assumption that sensors are on a straight line models scenarios such as when the fusion center is a mobile collection agent (e.g., an unmanned rover) that traverses the network to collect data, and/or when the sensor network is in the far field of the signal source [19]. The assumption of randomness models the fact that sensors are often deployed without precise control (e.g., they are air-dropped in military applications). Even if sensors are deterministically spaced upon deployment, mechanical failures and/or sleep cycles introduce randomness into the spacing between the operational sensors.

Manuscript received March 20, 2007; revised January 7, 2008. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Cedric Richard.

S. Misra is with the Army Research Laboratory, Adelphi, MD 20783 USA (e-mail: [email protected]).

L. Tong is with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853 USA (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSP.2008.919106

Fig. 1. The collection of N signal samples from randomly placed sensors. {s_i} denotes the collected samples, and {d_i} denotes the (random) sensor spacings. The data {s_i} and {d_i} are collected by a FC, which makes a global decision as to the true hypothesis using the Neyman–Pearson framework.

We study the theoretical detection performance once the samples arrive at the FC.1

As an example in which this model is relevant, consider sensors deployed ad hoc in a hostile environment and tasked with classifying a passing tank as either friendly (hypothesis H_0) or enemy (H_1), based on the acoustic wavefront that the tank produces. This wavefront is a signal field that can be sampled by acoustic sensors. The power and correlation structure of these samples would depend on the tank's class as well as the (random) locations of the sensors.

A. Background

Consider a general binary hypothesis test between H_0 and H_1. Let y be the vector of N observed data samples, and assume that the NP framework is used. Let P_F(N) and P_M(N) denote the probability of false alarm and the probability of miss as functions of N, respectively. The NP error exponent K is defined, for a fixed constraint P_F(N) ≤ α ∈ (0, 1), as the exponential rate of decay in P_M(N) as the number of data samples N approaches infinity, i.e.,

K = −lim_{N→∞} (1/N) log P_M(N),     (1)

provided that the limit exists. K is a useful metric. It provides an estimate on the number of observations needed to attain a given level of detection performance, and is often parameterized by physical and design parameters [e.g., the signal-to-noise ratio (SNR) and sensor spacing] that can be optimized to improve the

1The communication protocols used to initiate the detection process and todeliver the samples to the FC are not considered in this work.

1053-587X/$25.00 © 2008 IEEE


detection performance. However, K is currently in an implicit form that is not amenable to analysis.

Let p_j(y) denote the probability density of y when H_j is true. If the likelihood ratio test (LRT) is used at the FC, it can be shown through a generalization of Stein's lemma that [27]

K = lim_{N→∞} (1/N) log [p_0(y)/p_1(y)]   (a.s. in H_0),     (2)

provided that the limit exists, where the notation (a.s. in H_0) means that the limit is to be taken in the almost sure sense under H_0. Note that (2) is independent of α.
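To make the limit in (2) concrete, the sketch below estimates the normalized log-likelihood ratio by Monte Carlo for the simplest i.i.d. special case, a Gaussian variance test, where the exponent reduces to the Kullback–Leibler divergence between the two marginals. The model, variances, and function names are illustrative assumptions, not the paper's correlated signal model:

```python
import math
import random

def kl_rate(var0, var1):
    # Closed-form exponent for i.i.d. zero-mean Gaussians with variances
    # var0 (under H0) and var1 (under H1):
    # K = D(p0 || p1) = 0.5 * (var0/var1 - 1 - ln(var0/var1)).
    r = var0 / var1
    return 0.5 * (r - 1.0 - math.log(r))

def normalized_llr(var0, var1, n, seed=0):
    # Draw n samples under H0 and average the per-sample log-likelihood
    # ratio log p0(x) - log p1(x); by the SLLN this converges to kl_rate.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, math.sqrt(var0))
        total += 0.5 * math.log(var1 / var0) + x * x * (0.5 / var1 - 0.5 / var0)
    return total / n

estimate = normalized_llr(2.0, 1.0, 200_000)  # Monte Carlo estimate of K
exact = kl_rate(2.0, 1.0)
```

For the correlated Gauss–Markov models of Sections II and III, the limit instead involves expectations over the spacing density f(d), as in (4) and (13).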

B. Related Work

For a general overview of multisensor detection, see [28]. For a discussion of issues related to multisensor detection with correlated sensor observations, see [1], [4], [7], [9], [16], and the references therein. This paper addresses the multisensor detection of correlated signals using randomly placed sensors and the NP error exponent (1).

Most related to this paper are [24], [25], [3, pp. 138–139], [18], and [19], each of which considers correlated signals and evaluates the criterion (1) for deterministically spaced sensors. In [24], the authors derive the error exponent when the signal is a stationary Gauss–Markov signal under one hypothesis, and independent and identically distributed (i.i.d.) noise under the other. In [25], this work is extended to analyze the optimal arrangement of sensors along a spatial line. The error exponent is found using implicit solutions of certain matrix equations. An extensive numerical analysis is used to characterize the optimal sensor spacing as a function of the SNR and field correlation. It is observed that either a uniform or a clustered approach to sensor spacing appears to be optimal, depending on the field correlation and SNR. When clustering is optimal, there appears to be an optimal cluster size. In [3, pp. 138–139] and [18], error exponents are derived assuming Gauss–Markov signals under both hypotheses. In [19], we previously derived properties of the error exponent for Gauss–Markov signals under both hypotheses, using a physical model which linked the correlation parameter to network design parameters.

In the Bayesian setup, error exponents based on the Chernoff information or the similar Bhattacharya bound can be found in [5], [6], [15], and [23], among others. In [15], the Chernoff and Bhattacharya bounds are used to determine the optimal number of sensors for communications to a FC under power and bandwidth constraints. In [5] and [6], the error exponent is derived when sensors have dependent observations. The optimal sensor density is studied when sensors are equally spaced, using both numerical techniques and closed-form expressions (depending on the assumptions). In [23], a routing scheme is designed based on the Chernoff information for a deterministic sensor placement.

However, none of these works provide insights for the casewhen sensors are randomly located, motivating the need for theanalysis presented in this paper.

C. Organization and Main Results

In Sections II and III, we derive K for Gauss–Markov signal models assuming that the sensor spacings d_1, d_2, … are drawn i.i.d. from an arbitrary density function f(d). Error exponents are seen to take on algebraically simple forms involving the parameters of the Markov processes and expectations over f(d) of certain functions of d_1. In Section II, we consider the nonstationary case. We show that the error exponent simplifies to closed-form expressions under the following (physically motivated) special cases of f(d): (i) exponentially distributed sensors with placement density λ; (ii) equally spaced sensors, and (iii) the preceding cases when sensors fail with probability q. For exponentially distributed sensors with failures, the optimal sensor placement density is found in closed form. In Section III, we consider the stationary case. We evaluate the error exponent in closed form, in terms of the Psi function, when f(d) corresponds to exponentially distributed sensors with failures. This expression is seen to simplify in the limit of sparsely and densely placed sensors. Numerical simulations are used throughout to show that the error exponent predicts trends of the simulated error rate accurately, even for small data sizes. Thus, the analytic framework presented here allows for an accurate and efficient optimization of system resources that would not be possible otherwise. Finally, as a matter of organizational style, we defer all proofs to the Appendices.

D. Notation

We use the following additional notation and definitions: (a) E[·] denotes expectation. When there is potential ambiguity, E_x[·] denotes expectation with respect to a random variable x, and E_j[·] denotes expectation with respect to the hypothesis H_j; (b) x ~ f(x) denotes that the random variable x is distributed according to the density function f(x); (c) N(0, σ²) denotes a zero-mean Gaussian random variable with variance σ²; (d) if f(x) = O(g(x)), then |f(x)| ≤ c·g(x) for some c > 0 and all sufficiently small x; and (e) boldface lowercase letters, e.g., y, denote vectors.

II. NONSTATIONARY GAUSS–MARKOV MODEL

Nonstationary Gauss–Markov models have been used to describe communications signals in a wide variety of contexts (e.g., see [12]–[14], and the references therein). Here, we model the signal under each hypothesis as a Gaussian signal that evolves with a Markov correlation structure along any straight line. Consider the observations s_1, …, s_N taken by the N sensors. We assume that the statistics of {s_i} under H_j are described by

s_{i+1} = a_i s_i + u_{i+1},

where a_i describes the correlation strength between the ith and (i+1)th sensors, and u_{i+1} is impulsive (or innovations) noise, independent of s_1, …, s_i for each i. Let {d_i} be the i.i.d. sequence of sensor spacings. We assume that d_i ~ f(d), where f(d) is either a continuous or discrete probability density function which is independent of i, and that a_i is a function of d_i, i.e.

Page 3: IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 56, NO. 8, AUGUST 2008 3385 Error ...acsp.ece.cornell.edu/papers/MisraTong08SP.pdf · 2008-08-07 · 3386 IEEE TRANSACTIONS ON SIGNAL

MISRA AND TONG: ERROR EXPONENTS FOR THE DETECTION OF GAUSS–MARKOV SIGNALS 3387

a_i = a_j(d_i), where a_j(·) is a hypothesis-dependent deterministic function (for an example, see (5)). We assume that {s_i} and {d_i} are obtained by the FC.2 Finally, it will be convenient to define d ≜ d_1 and a ≜ a_1 for use in expressions where the index is irrelevant.

Usually in sensor array processing applications, the data received across sensors is a function of: (a) the signal emitted by the source, (b) the propagation characteristics of the environment, (c) the sensor locations, and (d) the signal direction of arrival. In the model above, a_j(·), f(d), and u_i capture the effects of (a)–(c) under H_j (illustrative examples will be given in the sequel). Although we do not specifically incorporate (d) into our model, an example of how to do so is given in [19].
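As a quick illustration of the sampling model, the sketch below generates exponentially spaced sensor locations and a Gauss–Markov sequence whose neighbor correlation decays with the spacing. The specific form a(d) = exp(−λd) and all parameter values are assumptions chosen for illustration; the paper's correlation-decay function is hypothesis-dependent and specified only through (5):

```python
import math
import random

def corr(d, lam=1.0):
    # Assumed correlation-decay function a(d) = exp(-lam * d); the paper's
    # a_j(d) is hypothesis-dependent, so this single-hypothesis form is
    # purely illustrative.
    return math.exp(-lam * d)

def simulate_field(n, rate=1.5, sigma=0.5, seed=1):
    # Spacings d_i ~ Exp(rate); the signal evolves as the Gauss-Markov
    # recursion s_{i+1} = a(d_i) s_i + u_{i+1}, with u ~ N(0, sigma^2).
    rng = random.Random(seed)
    spacings = [rng.expovariate(rate) for _ in range(n - 1)]
    samples = [rng.gauss(0.0, sigma)]
    for d in spacings:
        samples.append(corr(d) * samples[-1] + rng.gauss(0.0, sigma))
    return spacings, samples

spacings, samples = simulate_field(1000)
```

Closely spaced sensors (small d) yield strongly correlated neighboring samples, while widely spaced sensors yield nearly independent ones, which is the tradeoff the error exponent quantifies.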

A. Derivation of the Error Exponent

Notice that (s, d) summarizes the data obtained by the FC, where s collects the samples and d the spacings. Evaluating the log-likelihood ratio in (2), we get

(3)

where (a) follows since d is independent of the hypothesis, (b) follows since s is a Markov process given d, and (c) follows from the form of the conditional Gaussian distribution. In Appendix I, we reference three useful forms of the Strong Law of Large Numbers (SLLN). Using these results, we take the almost sure limit of (3) in Appendix II. We arrive at the following theorem.

Theorem 2.1: Suppose that the spacings are drawn i.i.d. from f(d). Then the NP error exponent for the detection of nonstationary Gauss–Markov signals with randomly placed sensors is

(4)

where the expectations are taken over f(d).

The proof relies on bounding the correlation function of certain sequences appearing in (3), so as to apply SLLN-3 (Appendix I), which is applicable to certain nonstationary processes. We emphasize that f(d) can be either a continuous or a discrete density (examples of each are given in Section II-B). If the sensor spacing is deterministic, the expectations above disappear, and it can be verified that (4) reduces to the error exponent given previously in [18], as it must.
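The Markov factorization used in deriving (3) can be checked numerically: the product of one-step conditional Gaussian densities equals the joint Gaussian density of the samples. The sketch below verifies this for two samples, using an assumed decay function and assumed variances (all names and values here are illustrative, not the paper's):

```python
import math

def log_norm(x, mean, var):
    # Log density of N(mean, var) evaluated at x.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def markov_loglik(samples, spacings, a_fn, var1, sig2):
    # Factorized Gauss-Markov log-likelihood:
    #   log p(s) = log p(s_1) + sum_i log p(s_{i+1} | s_i),
    # with s_{i+1} | s_i ~ N(a(d_i) s_i, sig2).
    ll = log_norm(samples[0], 0.0, var1)
    for d, s_prev, s_next in zip(spacings, samples, samples[1:]):
        ll += log_norm(s_next, a_fn(d) * s_prev, sig2)
    return ll

def bivariate_loglik(s1, s2, a, var1, sig2):
    # Direct joint-Gaussian evaluation for two samples, used as a check:
    # cov = [[var1, a*var1], [a*var1, a^2*var1 + sig2]], det = var1*sig2.
    quad = s1 * s1 / var1 + (s2 - a * s1) ** 2 / sig2
    return -0.5 * (2 * math.log(2 * math.pi) + math.log(var1 * sig2) + quad)

a_fn = lambda d: math.exp(-d)   # assumed decay function
fact = markov_loglik([0.7, -0.3], [0.5], a_fn, 2.0, 1.0)
joint = bivariate_loglik(0.7, -0.3, math.exp(-0.5), 2.0, 1.0)
```

The two evaluations agree exactly, which is the structural fact that lets the log-likelihood ratio in (3) split into a sum of per-sensor terms amenable to the SLLN.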

2 If the FC is a mobile collection agent, it can measure {d_i} directly. Otherwise, sensors can send {d_i} to the FC, if they are equipped with GPS.

B. Example

We evaluate (4) explicitly for two models of the sensor spacing. In each, we assume that a_j(d) decays exponentially in d at a rate proportional to a constant B_j, i.e.,

(5)

for d ≥ 0, where B_j > 0 is known at the FC. We assume that sensors are in "failure" with probability q, independently from sensor to sensor. Specifically, sensor i is said to be in failure if its data is not received at the FC. Typical reasons for failure include mechanical malfunction or battery depletion at the sensor, and lost transmissions due to interference at the FC. In addition, by choosing q appropriately, the analysis below incorporates probabilistic transmission schemes (in which a node transmits its data only with some probability; such a scheme was shown to provide the optimal tradeoff between error exponents and energy consumption in [26]) and schemes in which sensors enter cycled "sleep" states.

1) Exponentially Spaced Sensors With Failures: Let the sensor spacings be exponentially distributed with placement density (or "arrival rate") λ and failure rate q. We have

(6)

where x ≥ 0 denotes the spacing, and where (6) is valid for q ∈ [0, 1) by the "thinning" property of the Poisson process [10, p. 287].3 Evaluating (4) with (5) and (6), we get

(7)

where λ̄ ≜ λ(1 − q). The proof follows from algebraic manipulation and is omitted for brevity.

It may be suspected that there exists a maximizing λ in (7). To understand why, fix q. As λ → ∞, the sensors will be placed arbitrarily close together, and their observations will cover an arbitrarily small evolution of the (random) signal. Conversely, as λ → 0, the sensors will be placed arbitrarily far apart, and their observations will approach an i.i.d. set. Thus, the correlation in the signal field cannot be exploited in the detection process. Indeed, it can be verified that (7) is unimodal in λ and is maximized when

(8)

i.e., there exists an optimal placement density.

Assume that time and energy are required for sensors to transmit their data to the FC. It then makes sense for the FC to limit the number of sensors that are polled during each "detection process." Given that N sensors are to be polled, (8) provides guidance, increasingly accurate as N increases, on which sensors should be polled.

3 More formally, we have that d_i = Σ_{m=1}^{M} d̃_m, where {d̃_m} is an i.i.d. sequence with common probability density given by the right-hand side (RHS) of (6) with q = 0, and where P[M = k] = q^{k−1}(1 − q) for k ∈ {1, 2, …}. It can be verified that the probability density of d_i is f(x) = λ(1 − q)e^{−λ(1−q)x}.
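The thinning argument in the footnote is easy to verify by simulation: summing a Geometric(1 − q) number of Exp(λ) spacings yields an Exp(λ(1 − q)) spacing. A minimal sketch, with illustrative parameter values:

```python
import random

def thinned_spacing(lam, q, rng):
    # Distance to the next *operational* sensor: sum a Geometric(1-q)
    # number of Exp(lam) deployment spacings (each sensor independently
    # fails with probability q).
    d = rng.expovariate(lam)
    while rng.random() < q:      # this sensor failed; keep going
        d += rng.expovariate(lam)
    return d

rng = random.Random(7)
lam, q, n = 2.0, 0.5, 100_000
mean_spacing = sum(thinned_spacing(lam, q, rng) for _ in range(n)) / n
effective_rate = lam * (1.0 - q)  # thinning: spacing is Exp(lam*(1-q))
```

The empirical mean spacing matches 1/(λ(1 − q)), consistent with the density stated at the end of the footnote.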


Fig. 2. NP detection performance versus placement density for exponentially distributed sensors. Plot of K and K(N) (for N ∈ {10, 20, 40, 70}) versus λ for A_0 = 2, A_1 = 0.1, B_0 = 3/2, B_1 = 1, σ_0 = σ_1 = 0.5, q = 0, and α = 0.01.

To study the usefulness of K in predicting the behavior of the error rate for finite N, we use numerical simulations. Define the finite-N exponential error rate to be

K(N) ≜ −(1/N) log P_M(N)

and note that K(N) → K as N → ∞. In Fig. 2, we plot K, determined from (7), and K(N) for N ∈ {10, 20, 40, 70}, determined numerically, versus λ (parameters are given in the caption). It is seen that the behavior predicted by K holds for finite N: K(N) is seen to be unimodal in λ for each N, and the maximizing λ for K(N) closely matches the λ that maximizes K. Based on similar simulations over a wide variety of parameters, we conclude that the error exponent accurately predicts features of the error rate even for finite sample sizes.

2) Equispaced Sensors With Failures: Let sensors be equally spaced with common spacing Δ upon deployment, and let q be the failure rate. Then

(9)

for k ∈ {1, 2, …}, is the probability density of the spacing between operational sensors. The error exponent is evaluated using (4), (5), and (9). We get

(10)

The proof follows from algebraic manipulation, including the use of the geometric series, and is omitted for brevity. A general closed-form expression for the optimal Δ as a function of q is not available. However, an asymptotic analysis of (10) is given

Fig. 3. NP detection performance versus sensor spacing for geometrically distributed sensors with q ∈ {0.0, 0.1, 0.3, 0.5, 0.7}, for (a) the error exponent K and (b) the finite-N exponential error rate K(N) with N = 15. Parameters: σ_0 = σ_1 = 0.5, B_0 = 1, B_1 = 5, A_0 = 4, A_1 = 0.1, and α = 0.01.

in Appendix III and reveals the following: As Δ → 0, K is increasing in both Δ and q, and as Δ → ∞, K is decreasing in Δ and q. Theoretical curves generated from (10) are shown in Fig. 3(a), where we plot K versus Δ for q ∈ {0.0, 0.1, 0.3, 0.5, 0.7} (other parameters are given in the caption). It is seen that, for each value of q, the error exponent is unimodal in Δ with an optimal Δ that decreases in q.

In Fig. 3(b), we plot K(N) when N = 15 for the same parameters. These numerically determined curves are seen to coincide with theoretical predictions based on K: K(N) is seen to be unimodal in Δ with an optimal spacing that decreases with q. When Δ → 0, K(N) increases in both Δ and q, and when Δ → ∞, K(N) decreases in Δ and q. The magnitudes of K and K(N) are in relatively close agreement. Based on our simulations across a wide range of parameters, we conclude that K predicts the behavior of K(N) with respect to Δ and q. In Fig. 4, we examine the curve corresponding to q = 0.5 in more detail. We plot K and K(N) for N ∈ {10, 20} (other parameters are as before). We see that K(N) increases towards K with increasing N.


Fig. 4. NP detection performance versus sensor spacing for geometrically distributed sensors with q = 0.5. We plot K and K(N) for N ∈ {10, 20}. Parameters: σ_0 = σ_1 = 0.5, B_0 = 1, B_1 = 5, A_0 = 4, A_1 = 0.1, and α = 0.01.

III. STATIONARY GAUSS–MARKOV MODEL

Stationary Gauss–Markov models have also been used to describe communications signals in several related applications (e.g., see [2], [8], and [17]). The model is defined as in the first paragraph of Section II, but with two changes. First, we redefine the impulsive noise term to be

(11)

and second, we take the special case in which the signal power is constant under each hypothesis. Note that {u_i} is indeed i.i.d. since {d_i} is an i.i.d. sequence. It is easy to verify that the variance of s_i is the same for all i under H_j, and that this Gauss–Markov model is stationary. We emphasize that this model is not a special case of the nonstationary model of Section II. For example, (11) introduces dependency of the random spacing into the innovations noise.
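The stationarity claim, that tying the innovation variance to the spacing keeps the marginal signal power constant, can be checked by simulation. The sketch below uses the assumed forms a(d) = e^{−d} and var(u) = A(1 − a(d)²), which are illustrative stand-ins consistent with the description of (11), not the paper's exact expressions:

```python
import math
import random

def simulate_stationary(n, big_a=1.0, rate=1.0, seed=3):
    # Stationary variant: the innovation variance is tied to the spacing,
    # var(u_{i+1}) = A * (1 - a(d_i)^2), so that var(s_i) = A for all i.
    rng = random.Random(seed)
    s = rng.gauss(0.0, math.sqrt(big_a))
    out = [s]
    for _ in range(n - 1):
        a = math.exp(-rng.expovariate(rate))  # a(d) = e^{-d}, d ~ Exp(rate)
        s = a * s + rng.gauss(0.0, math.sqrt(big_a * (1.0 - a * a)))
        out.append(s)
    return out

samples = simulate_stationary(200_000)
sample_var = sum(x * x for x in samples) / len(samples)
```

The empirical variance stays at A regardless of the spacing realizations, in contrast to the nonstationary model of Section II, where the marginal power drifts with the spacings.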

A. Derivation of the Error Exponent

We start by evaluating the log-likelihood ratio. Following a procedure similar to the nonstationary case, we get

(12)

In Appendix IV, we take the almost sure limit of (12). Doing so, we arrive at the following theorem.

Theorem 3.1: Suppose that certain expectations of functions of a_0(d) and a_1(d) (four moment conditions) are each bounded from above. Then the NP error exponent for the detection of stationary Gauss–Markov signals with randomly placed sensors is given by

(13)

The proof makes use of SLLN-2 (Appendix I). Again, we emphasize that f(d) may be either a continuous or a discrete distribution. In the special case that the spacing is deterministic, (13) matches the NP error exponent that we previously derived in [19], as it must. It is often the case that either a_0(d) ≤ a_1(d) for all d, or a_1(d) ≤ a_0(d) for all d, i.e., the signal often decays "uniformly faster" with respect to distance under one of the hypotheses. When either relation holds, the technical conditions required by Theorem 3.1 simplify, as detailed in the following corollary.

Corollary 3.2: Suppose the first ordering holds. Then a necessary and sufficient condition for Theorem 3.1 to hold is boundedness of the last of its expectations. Alternatively, suppose the reverse ordering holds. Then sufficient conditions for Theorem 3.1 to hold are boundedness of the corresponding expectations.

The proof of the first claim of the corollary follows by upper bounding the argument in each of the first three expectations (in the conditions for Theorem 3.1 to hold) by 1. The proof of the second claim follows by upper bounding the argument in each of the first three expectations analogously.

B. Example—Exponentially Spaced Sensors With Failures

We evaluate (13) for exponentially distributed sensors with independent failures. The probability density of the spacing between two consecutive operational sensors is given in (6), and we again use the correlation model (5). In Appendix V, it is shown that the technical conditions of Theorem 3.1 hold, and that substituting (5) and (6) into (13) yields

(14)

where again λ̄ = λ(1 − q), and where ψ(·) is the Psi function [11, p. 943].

When sensors are sparsely placed (λ → 0), consecutive signal samples approach statistical independence under both hypotheses. Therefore, we expect K to depend only on the signal powers A_0 and A_1. Using standard properties of the Psi function [11, p. 943], it can be shown that

(15)

When A_0 = A_1, the error exponent is 0, as expected. When sensors are densely placed (λ → ∞), we use the asymptotic expansion of the Psi function, ψ(x) ≈ ln x for large x [11, p. 943], to find that

(16)


Fig. 5. NP detection performance versus placement density for exponentially distributed sensors. Plot of K and K(N) (for N ∈ {10, 30, 50}, determined numerically) versus λ for A_0 = 2, A_1 = 0.2, B_0 = 1/2, B_1 = 1, q = 0, and α = 0.01.

The minimum with respect to the correlation parameters occurs at the point where the error exponent equals zero. This reflects the intuitive fact that detection is harder when the hypothesis with the more strongly correlated signal is also the hypothesis for which the signal variance is greater.
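The Psi-function limits (15) and (16) rest on two standard facts, ψ(1) = −γ and ψ(x) ≈ ln x − 1/(2x) for large x. Both can be checked with a short digamma implementation (recurrence plus asymptotic series); this is a generic numerical routine, not code from the paper:

```python
import math

def digamma(x):
    # Psi (digamma) function: use the recurrence psi(x) = psi(x+1) - 1/x
    # to push the argument above 10, then the asymptotic expansion
    # psi(x) ~ ln x - 1/(2x) - 1/(12 x^2) + 1/(120 x^4),
    # the same large-argument behavior invoked for the dense-sensor limit.
    acc = 0.0
    while x < 10.0:
        acc -= 1.0 / x
        x += 1.0
    return (acc + math.log(x) - 1.0 / (2.0 * x)
            - 1.0 / (12.0 * x * x) + 1.0 / (120.0 * x ** 4))

euler_gamma = 0.5772156649015329  # Euler-Mascheroni constant
```

For large arguments the ln x term dominates, which is what collapses (14) into the simple dense-placement form (16).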

In Fig. 5 we plot K and K(N) (for N ∈ {10, 30, 50}) versus λ (other parameters are given in the caption). It is seen that K(N) approaches K as N increases. Next, we determine if the behavior given in (15) and (16) holds for finite λ as well. In Fig. 6(a) and (b), we plot the error exponent versus sensor spacing for progressively smaller values of λ, λ ∈ {0.2, 0.1, 0.0001}, and progressively larger values of λ, λ ∈ {2, 5, 100} (other parameters are given in the caption). In the case of small λ, note that the theory holds: as λ decreases, the error exponent decreases, approaching a minimum of zero as predicted by (15). For large λ, Fig. 6(b) shows that the error exponent is minimized as λ increases, approaching zero as predicted by (16). We conclude that the asymptotic analysis given above is useful in predicting trends even for finite λ.

IV. SUMMARY, DISCUSSION, AND FUTURE WORK

In this paper, we have derived error exponents for the detection of Gauss–Markov signals with random (ad hoc) sensor spacing. We assumed that spacings were drawn i.i.d. from an arbitrary distribution function f(d), and assumed arbitrary correlation-decay functions a_j(d). We provided exact error exponents for both the nonstationary and stationary cases in (4) and (13), respectively. The error exponents were evaluated using specialized forms of the SLLN [20], [22], and were seen to take on algebraically simple forms involving the parameters of the Markov process and expectations over f(d) of polynomial functions of the correlation-decay values.

In the nonstationary case, we evaluated the error exponent in closed form for two special cases of f(d): i) exponentially distributed sensors with placement density λ and failure rate q (7), and ii) equally spaced sensors with spacing Δ and failure

Fig. 6. NP detection performance versus sensor spacing for exponentially distributed sensors for (a) progressively smaller values of λ, λ ∈ {0.2, 0.1, 0.0001}, and (b) progressively larger values of λ, λ ∈ {2, 5, 100}. Parameters: σ = 1, A_0 = 2, A_1 = 0.2, q = 0, and α = 0.01.

rate q (10). In the first case, the optimal placement density was found in closed form (8). In the second case, properties of the error exponent for small and large density were provable (Section II-B). In the stationary case, we evaluated the error exponent in closed form, in terms of the Psi function, when f(d) corresponds to exponentially distributed sensors with failures (14). This expression was seen to simplify in the limits of sparsely and densely placed sensors [(15) and (16), respectively]. Numerical simulations were used throughout to show that the error exponent predicts trends of the simulated error rate accurately for even small data sizes. Thus, the analysis presented here provides many nonobvious and key insights that would not be readily available otherwise.

We discuss assumptions made in this work, and detail avenues of further research. First, we assumed that samples are collected along a straight line. If sensors are not located on a straight line, one way to apply the results of this paper is as follows: Generalize


to specify the correlation as a function of the Euclidean dis-tance separating two consecutively sampled sensors, and letbe the distribution on this Euclidean distance (this would dependon order in which sensors are sampled, and thus, would be a func-tion of the sampling algorithm). The results (4) and (13) still holdwith this new interpretation of (provided that the signal re-tains the Markov property, either directly or for some permuta-tion of the order in which samples are taken), and it would be in-teresting to see if there exist special cases for which these equa-tions simplify sufficiently to permit a careful analysis. Next, weassumed that sensor observations are free of thermal noise. Thisassumption is most readily justified when sensors have a high ob-servational SNR. However, the error exponent with noisy obser-vations is clearly of interest. Unfortunately, we are not aware ofany techniques which will permit an analysis of (2) in this case.For example, the SLLN techniques in presented in this paper, thespectral factorization method of [24], and the use of certain prop-erties of estimation theory as in [3] all fail to provide ready in-sights. Finally, we assumed that the distances are knownor learnt, and that the correlation-decay functions areperfectly known. It would be of interest to study the case where

is unknown and/or where must be estimated.
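The Euclidean-distance generalization discussed above can be made concrete with a small sketch. The positions and sampling order below are hypothetical: given 2-D sensor locations and the order in which a collection agent visits them, the relevant "spacings" become the Euclidean distances between consecutively sampled sensors, and their empirical distribution plays the role of the spacing density.

```python
import numpy as np

# Hypothetical example: 2-D sensor positions and a sampling order chosen
# by some (unspecified) sampling algorithm. The spacings fed to the error
# exponent are then the distances between consecutively sampled sensors.
positions = np.array([[0.0, 0.0], [3.0, 4.0], [3.0, 0.0], [0.0, 4.0]])
order = [0, 2, 1, 3]  # order in which the sensors are sampled

path = positions[order]
spacings = np.linalg.norm(np.diff(path, axis=0), axis=1)
# spacings between consecutive samples: 0->2 is 3.0, 2->1 is 4.0, 1->3 is 3.0
```

A different sampling order yields a different spacing distribution, which is exactly why the generalized density would be a function of the sampling algorithm.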

APPENDIX I
USEFUL FORMS OF THE STRONG LAW OF LARGE NUMBERS

We reference three forms of the SLLN that will be used in Appendices II and IV. The first version is commonly known and is included for completeness.

Theorem SLLN-1 [22, p. 4] (SLLN for i.i.d. Sequences): Let $\{x_i\}_{i=1}^{\infty}$ be a sequence of i.i.d. random variables with $\mathbb{E}|x_1| < \infty$ and $\mathbb{E}[x_1] = \mu$. Then

$$\frac{1}{n}\sum_{i=1}^{n} x_i \longrightarrow \mu \quad \text{a.s.}$$
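A quick numerical illustration of this first form (our own example, with unit-mean exponential variables standing in for the generic i.i.d. sequence):

```python
import numpy as np

# SLLN for i.i.d. sequences: the running sample mean of i.i.d. Exp(1)
# variables (common mean 1) settles onto the mean as n grows.
rng = np.random.default_rng(1)
x = rng.exponential(1.0, 1_000_000)
running = np.cumsum(x) / np.arange(1, x.size + 1)
# running[-1] is very close to 1 after a million samples
```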

Theorem SLLN-2 [22, p. 206] (SLLN for Weakly Stationary Sequences): Let $\{x_i\}$ be a weakly stationary sequence of zero-mean random variables, i.e., $\mathbb{E}[x_i] = 0$, $\mathbb{E}[x_i^2] = \sigma^2 < \infty$, and $\mathbb{E}[x_i x_{i+k}] = R(k)$ for all $i$. If $R(k)$ satisfies the decay condition given in [22, p. 206], then

$$\frac{1}{n}\sum_{i=1}^{n} x_i \longrightarrow 0 \quad \text{a.s.}$$
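As an illustration of the weakly stationary case, consider the standard MA(1) construction below (our own example): the sequence is zero mean with a covariance that depends only on the lag and vanishes beyond lag one, yet its terms are correlated, and the sample mean still converges to zero.

```python
import numpy as np

# Weakly stationary illustration: x_k = z_k + 0.5 z_{k-1} with i.i.d.
# standard normal z. Then E[x_k] = 0, E[x_k^2] = 1.25, and the lag-k
# covariance is 0.5 for k = 1 and 0 for k >= 2, so the sequence is
# weakly stationary (but not independent). Its sample mean still -> 0.
rng = np.random.default_rng(2)
z = rng.normal(0.0, 1.0, 1_000_001)
x = z[1:] + 0.5 * z[:-1]
mean_n = x.mean()
# mean_n is close to 0 despite the lag-1 correlation
```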

Theorem SLLN-3 [20] (SLLN for General Sequences): Suppose $\{x_i\}$ is a sequence of random variables, not necessarily zero mean, and with arbitrary correlation structure (not necessarily stationary) that is characterized by the existence of a $C < \infty$ and a $\gamma \in [0, 1)$ such that

$$\left|\operatorname{Cov}(x_i, x_j)\right| \le C\,\gamma^{|i-j|}$$

for all $i \ne j$. Then

$$\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \mathbb{E}[x_i]\right) \longrightarrow 0 \quad \text{a.s.}$$
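A concrete instance of the kind of sequence this third form covers (our own example): an AR(1) noise, whose covariance decays geometrically in the lag, shifted by a nonconstant deterministic mean. The resulting sequence is neither zero mean nor stationary, but the centered sample average still vanishes.

```python
import numpy as np

# Nonstationary illustration for the general SLLN: y_k = 0.8 y_{k-1} + z_k
# is AR(1), with covariance bounded by C * 0.8**|i-j| (geometric decay).
# Adding a slowly varying deterministic mean m_k makes x_k = m_k + y_k
# nonstationary with nonzero mean, yet the centered average still -> 0.
rng = np.random.default_rng(3)
n, gamma = 200_000, 0.8
z = rng.normal(0.0, 1.0, n)
y = np.empty(n)
y[0] = z[0]
for k in range(1, n):
    y[k] = gamma * y[k - 1] + z[k]
m = np.sin(np.arange(n) / 1000.0)   # deterministic, nonconstant mean
x = m + y
centered_avg = (x - m).mean()
# centered_avg is near 0: the correlated, nonstationary fluctuations average out
```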

APPENDIX II
PROOF OF (4)

We define the following notation for use in Appendices II and IV: i) if is a function of , then the operator is shorthand for (a.s. in ); ii) define ; iii) all expectations are taken under hypothesis .

Before proceeding, we will find it useful to state the following facts. Proofs are omitted, as each can be verified through algebraic manipulation. Let . First, we have that under

from which it can be verified that

(17)

and

(18)

Now, we start from (3) and prove (4). We have

(19)

where follows since we can ignore deterministic terms that do not scale with in evaluating the limit, and follow



from the fact that under , and follows from the observation that is an i.i.d. sequence with expected value 1, and applying the SLLN-1. Next, we evaluate , , and .

Consider . Note that is an i.i.d. sequence with . Thus, by the SLLN-1.

Consider . Define . We show that SLLN-3 can be applied to . We get

and for , that

and

where follows from (18). Now, we apply SLLN-3 to . We have

where is a constant which can clearly be chosen to satisfy for all , since by assumption of the theorem. Fixing and taking in the conditions for SLLN-3, we conclude that .

Next, we evaluate . Define and . We have

(20)

where follows from (18), and follows by applying SLLN-3 to the zero-mean sequence , where . The proof that the SLLN-3 can indeed be applied to the (nonstationary) sequence is nontrivial and lengthy. Therefore, we pause momentarily to remark that if we accept (20) without further proof, then by substituting our results for , , and

into (19), we have arrived at (4), as desired. Now, we use the remainder of this appendix to show that the

SLLN-3 can be applied to . Before proceeding, we state a second set of facts. The first fact is that, for

(21)

This follows from . The second is that

(22)

for some . The proof is lengthy, and thus omitted. However, the technique relies on expanding according to (17) and bounding each term in the expansion. The third fact is that, for

(23)

for some . To see this, we first establish that using the law of iterated expectation and the expression for the fourth moment of a Gaussian random variable, and then apply (22).

We are now ready to show that the SLLN-3 applies to . Define

Our approach is to upper bound by lower bounding and upper bounding , and then showing that this upper bound satisfies the condition of SLLN-3. Hence, satisfies SLLN-3.

We lower bound for . We have (24), shown at the bottom of the next page, where follows from substitution of (18), and follows from neglecting the two positive terms in the preceding equation, and since .

Similarly, we upper bound for . First, we have



where we have substituted (21) and then (18) in . Bounding, we have

(25)

where the constants (with respect to the subscripts and ) clearly exist after using (22) in .

Combining (24) and (25), we have

(26)

where is a constant w.r.t. and which clearly exists.

Now we apply the SLLN-3 to the sequence . Note

where and are constants with respect to and . The term within square brackets in follows from direct substitution of (26), and exists as a consequence of (23). Thus, SLLN-3 with and can be applied to , concluding the proof.

APPENDIX III
TAYLOR SERIES ANALYSIS OF (10)

From (10), define

(27)

As , we can ignore the higher-order terms in a Taylor series expansion of about . We get

from which it is clear that is increasing in and . As , we can ignore terms dominated by , for some , in . From (27), we get

Clearly, is decreasing in . Taking the derivative with respect to , we get

(24)



which is negative. Thus, is decreasing in .

APPENDIX IV
PROOF OF (13)

We use the notation defined in the first paragraph of Appendix II. Starting from (12), we prove (13). From (12), we have

(28)

where follows since we can ignore deterministic terms that do not scale with in evaluating the limit, the appearance of

in follows by definition, the expectation in follows from SLLN-1 since is an i.i.d. sequence and since by assumption of the theorem, the first three terms in follow from the fact that under , and the last term in from the observation that is an i.i.d. sequence with expected value 1.

Next, we evaluate , and . To evaluate , define and note that is an i.i.d. sequence. By the SLLN-1, we get

provided that , which is true by assumption of the theorem.

To evaluate , define . Then note⁴ that

where follows from assumption of the theorem, and that for

where the argument in has been omitted for brevity. Applying the SLLN-2, we find that .

To evaluate , note that

with and . We will show that the SLLN-2 can be applied to the sequence , where . Note that , and

where follows since given , and by assumption of the theorem. Now, note

(29)

⁴The law of iterated expectation is necessary since z and a are not independent without conditioning on d.



for . Using the fact that , it can be verified that

Substituting into (29), using the fact that is an i.i.d. sequence, and summing over the resulting geometric series, we get

Finally, we have for ,

(30)

Clearly, , where and are obvious from (30). Using this result in the SLLN-2, we find that obeys the SLLN. Thus, . Finally, substituting the results for , , and into (28) and simplifying, we get (13), as desired.
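The geometric-series step used in this proof works because the spacings are i.i.d.: the expectation of a product of per-spacing factors is a power of a single expectation, and for exponentially distributed spacings that expectation has a simple closed form. A numerical sanity check of these standard facts (the symbols and parameter values below are our own):

```python
import numpy as np

# For i.i.d. spacings d_1, ..., d_k, independence factors the expectation:
#   E[exp(-c*(d_1+...+d_k))] = (E[exp(-c*d)])**k,
# and for d ~ Exp(lam), E[exp(-c*d)] = lam / (lam + c) in closed form.
rng = np.random.default_rng(4)
lam, c, k, trials = 2.0, 1.0, 3, 1_000_000

d = rng.exponential(1.0 / lam, (trials, k))     # scale = 1/lam
mc = np.exp(-c * d.sum(axis=1)).mean()          # Monte Carlo estimate
closed = (lam / (lam + c)) ** k                 # (2/3)**3, about 0.296
```

The agreement of the two values is exactly the factorization that turns the sum over lags into a convergent geometric series.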

APPENDIX V
EVALUATION OF THE ERROR EXPONENT: FROM (13) TO (14)

The following three facts will be useful (see [11] and [11, p. 305]):

(31)

(32)

and

(33)

for , , and . Also, we make use of the following: Let be constants. Then

(34)

A proof follows:

where follows from a Taylor series expansion of about , from performing the integration, and from (32).

First, assume that the technical conditions of Theorem 3.1 hold. We evaluate the two terms containing expectations in (13). Recall . For the first term, we have

(35)

where follows by applying (34) and then (31). For the second term, we have

(36)

where follows from writing the expectation in terms of an integral and decomposing the resultant fraction into two terms, and follows from applying (33) and then (31). Substituting (35) and (36) into (13) and simplifying yields (14).

We now verify that the technical conditions of Theorem 3.1 hold. It is either the case that , implying from (5) that , or that , implying that . Thus, we can use Corollary 3.2 to verify the technical conditions. Suppose that . That is evident from (35).



Suppose . That is also evident from (35). Additionally, note that in this case, which implies

Thus, the technical conditions of Theorem 3.1 hold.

REFERENCES

[1] V. Aalo and R. Viswanathan, "On distributed detection with correlated sensors: Two examples," IEEE Trans. Aerosp. Electron. Syst., vol. 25, no. 3, pp. 414–421, May 1989.

[2] I. Abou-Faycal, M. Medard, and U. Madhow, "Binary adaptive coded pilot symbol assisted modulation over Rayleigh fading channels without feedback," IEEE Trans. Commun., vol. 53, no. 6, pp. 1036–1046, Jun. 2005.

[3] I. V. Basawa and D. J. Scott, Asymptotic Optimal Inference for Non-Ergodic Models, ser. Lecture Notes in Statist. New York: Springer, 1983, vol. 17.

[4] R. S. Blum and S. A. Kassam, "Optimum distributed detection of weak signals in dependent sensors," IEEE Trans. Inf. Theory, vol. 38, no. 3, pp. 1066–1079, May 1992.

[5] J.-F. Chamberland and V. V. Veeravalli, "Decentralized detection in wireless sensor systems with dependent observations," presented at the 2nd Int. Conf. Computing, Commun., Control Technol., Austin, TX, Aug. 2004.

[6] J.-F. Chamberland and V. V. Veeravalli, "How dense should a sensor network be for detection with correlated observations?," IEEE Trans. Inf. Theory, vol. 52, no. 11, pp. 5099–5106, Nov. 2006.

[7] J.-G. Chen and N. Ansari, "Adaptive fusion of correlated local decisions," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 28, no. 2, pp. 276–281, May 1998.

[8] M. Dong, L. Tong, and B. M. Sadler, "Optimal insertion of pilot symbols for transmissions over time-varying flat fading channels," IEEE Trans. Signal Process., vol. 52, no. 5, pp. 1403–1418, May 2004.

[9] E. Drakopoulos and C.-C. Lee, "Optimum multisensor fusion of correlated local decisions," IEEE Trans. Aerosp. Electron. Syst., vol. 27, no. 4, pp. 5–14, Jul. 1991.

[10] W. Feller, An Introduction to Probability Theory and Its Applications, 3rd ed. New York: Wiley, 1968.

[11] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 4th ed. New York: Academic, 1983.

[12] R. Gray and L. Davisson, An Introduction to Statistical Signal Processing. Cambridge, U.K.: Cambridge Univ. Press, 2004.

[13] Y. Grenier, "Time-dependent ARMA modeling of nonstationary signals," IEEE Trans. Signal Process., vol. 31, no. 4, pp. 899–911, Aug. 1983.

[14] M. Jachan and G. Matz, "Nonstationary vector AR modeling of wireless channels," presented at the 2005 IEEE Signal Processing Advances in Wireless Communications (SPAWC), New York, Jun. 2005.

[15] S. K. Jayaweera, "Sensor system optimization for Bayesian fusion of distributed stochastic signals under resource constraints," presented at the 2006 Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Toulouse, France, May 2006.

[16] M. Kam, Q. Zhu, and W. S. Gray, "Optimal data fusion of correlated local decisions in multiple sensor detection systems," IEEE Trans. Aerosp. Electron. Syst., vol. 28, no. 3, pp. 916–920, Jul. 1992.

[17] B. Liang and Z. J. Haas, "Predictive distance-based mobility management for multidimensional PCS networks," IEEE/ACM Trans. Netw., vol. 11, no. 5, pp. 718–732, Oct. 2003.

[18] H. Luschgy, A. Rukhin, and I. Vajda, "Adaptive tests for stochastic processes in the ergodic case," Stoch. Process. Appl., pp. 45–59, 1993.

[19] S. Misra and L. Tong, "Error exponents for target-class detection with nuisance parameters," presented at the 2007 Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Honolulu, HI, Apr. 2007.

[20] B. Ninness, "Strong laws of large numbers under weak assumptions with applications," IEEE Trans. Autom. Control, vol. 45, no. 11, pp. 2117–2122, Nov. 2000.

[21] H. V. Poor, An Introduction to Signal Detection and Estimation. New York: Springer-Verlag, 1994.

[22] W. Stout, Almost Sure Convergence. New York: Academic, 1974.

[23] Y. Sung, S. Misra, L. Tong, and A. Ephremides, "Cooperative routing for distributed detection in large sensor networks," IEEE J. Sel. Areas Commun., vol. 25, no. 2, pp. 471–483, Feb. 2007.

[24] Y. Sung, L. Tong, and H. V. Poor, "Neyman–Pearson detection of Gauss–Markov signals in noise: Closed-form error exponent and properties," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1354–1365, Apr. 2006.

[25] Y. Sung, X. Zhang, L. Tong, and H. V. Poor, "Sensor configuration and activation for field detection in large sensor arrays," IEEE Trans. Signal Process., vol. 56, no. 2, pp. 447–463, Feb. 2008.

[26] E. Tuncel, "Extensions of error exponent analysis in hypothesis testing," presented at the 2005 Int. Symp. Inf. Theory (ISIT), Adelaide, Australia, Sep. 2005.

[27] I. Vajda, Theory of Statistical Inference and Information. Dordrecht, The Netherlands: Kluwer Academic, 1989.

[28] P. K. Varshney, Distributed Detection and Data Fusion. New York: Springer-Verlag, 1997.

Saswat Misra (M'02) received the B.S. degree in electrical engineering from the University of Maryland, College Park, in 2000 and the M.S. degree in electrical engineering from the University of Illinois at Urbana-Champaign in 2002. He is currently working towards the Ph.D. degree at Cornell University, Ithaca, NY.

He has been a Research Scientist with the Adelphi Laboratory Center of the Army Research Laboratory since 2001. He is currently studying dynamic spectrum access for cognitive radio systems. He previously studied routing and security in sensor networks, and pilot placement for cellular communications. He has published several technical papers and coinvented two patents (pending) in these areas.

Lang Tong (F'05) received the B.E. degree from Tsinghua University, Beijing, China, in 1985, and the Ph.D. degree in electrical engineering from the University of Notre Dame, Notre Dame, IN, in 1991.

He joined Cornell University, Ithaca, NY, in 1998, where he is now the Irwin and Joan Jacobs Professor in Engineering. Prior to joining Cornell University, he was with West Virginia University and the University of Connecticut. He was also the 2001 Cor Wit Visiting Professor at the Delft University of Technology, The Netherlands. He was a Postdoctoral Research Affiliate with the Information Systems Laboratory, Stanford University, Stanford, CA, in 1991. He is a coauthor of five student paper awards. His research is in the general area of statistical signal processing, communication systems, and networks.

Dr. Tong received the Outstanding Young Author Award from the IEEE Circuits and Systems Society, the 2004 Best Paper Award (with M. Dong) from the IEEE Signal Processing Society, and the 2004 Leonard G. Abraham Prize Paper Award from the IEEE Communications Society (with P. Venkitasubramaniam and S. Adireddy). He received the Young Investigator Award from the Office of Naval Research.