
Published in IET Radar, Sonar and Navigation
Received on 25th February 2011
Revised on 25th May 2012
Accepted on 2nd August 2012
doi: 10.1049/iet-rsn.2012.0034


ISSN 1751-8784

Compressive feature and kernel sparse coding-based radar target recognition

Shuyuan Yang1, Yonggang Ma1, Min Wang2, Dongmei Xie1, Yun Wu1, Licheng Jiao1

1 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Department of Electrical Engineering, Xidian University, Xi'an 710071, People's Republic of China
2 National Key Lab of Radar Signal Processing, Department of Electrical Engineering, Xidian University, Xi'an 710071, People's Republic of China

E-mail: [email protected]

Abstract: In this study, the authors exploit the sparse nature of radar targets and propose a universal, target-oriented 'compressive feature' and kernel sparse coding-based radar target recognition approach built on the recently developed compressive sensing theory. Inspired by the visual attention mechanism, a pulse contourlet transform is proposed to derive the target-oriented compressive features, and a kernel sparse coding classifier is advanced, motivated by the fact that the kernel trick makes the features more clustered in a higher-dimensional space, resulting in accurate and robust recognition of targets. Experiments on recognising three types of ground vehicles in the moving and stationary target acquisition and recognition public release database compare the performance of the proposed scheme with its counterparts, and the results prove its efficiency.

1 Introduction

With the increasing demands for target identification in radar applications, automatic target recognition (ATR) technology via radar images has become an active research area in recent years [1, 2]. Modern radar imagery, such as synthetic aperture radar (SAR) imagery and inverse SAR (ISAR) imagery, has become increasingly popular over the last decades and is widely exploited for target recognition [3-6]. In the ATR of SAR/ISAR images, feature extraction and classifier design are two important issues. Although SAR/ISAR imagery can separate much of the ground clutter from the target using the relative motion of the radar platform with respect to the target, there are some difficulties in using it for target recognition. Firstly, the imagery is often characterised by a large amount of data and the targets are often relatively sparse in the imagery, which makes the commonly used feature extraction approaches inefficient. Secondly, although some machine learning technologies have been introduced to construct the classifier, including the support vector machine (SVM) [4], Adaboost [5] and manifold learning [6], they usually require a long training process and their model parameters are difficult to choose. Thirdly, the SAR/ISAR imagery is prone to distortion because of the irregular motion of a manoeuvring target or platform, and the amount of distortion is cumulative over the duration of the SAR imaging time [3], which requires more robust recognition schemes.

Radar data have proved to be compressible with no significant losses in their applications [7, 8]. In this paper, we propose a universal and information-preserving feature extraction approach inspired by the newly developed concept of compressive sensing (CS) [9, 10]. The basic idea of CS is that a signal, unknown but supposed to be compressible, can be subjected to fewer measurements than the nominal number of samples and yet be accurately reconstructed. The compressive sampling of the imagery data can therefore serve as a universal target feature for recognition, which is called the 'compressive feature' in this paper. Additionally, in order to make the features target-oriented, we design a pulse contourlet transform (PCT) to adaptively calculate the saliency map of the imagery, inspired by the visual attention mechanism, so making the 'compressive feature' more representative of the target's shape and thus helpful for recognition. Moreover, we also indicate a possible way of combining this feature extraction scheme with CS-based SAR imaging [11-21], which is detailed in Section 2.1.

When the features are extracted from the imagery data, we extend the sparse coding-based classifier (SCC) [8, 22] to the empirical kernel projection space, inspired by the fact that the kernel trick can make the features more clustered. A kernel sparse coding classifier (KSCC) is proposed to identify the label of unknown test targets, which can not only classify the targets correctly but also reduce the noise existing in the images. Experiments are taken on recognising three types of ground vehicles in the moving and stationary target acquisition and recognition (MSTAR) public release database to investigate the performance of the proposed method. In summary, the contributions of the paper are



1. We exploit the sparse nature of radar targets and the visual attention mechanism to propose a universal and target-oriented 'compressive feature'. A new PCT is proposed to adaptively extract more contour information of targets in radar images when determining the saliency map. Moreover, we also indicate a possible way of combining this feature extraction scheme with CS-based SAR imaging.
2. A KSCC is proposed to identify the label of unknown test targets, by extending the SCC [22] to the empirical kernel projection space, inspired by the fact that the kernel trick can capture the nonlinear similarity of features. Using kernel sparse coding, this classifier is characterised by accurate recognition and automatic reduction of the noise existing in the imagery data.

The rest of this paper is organised as follows. Section 2 describes the derivation of the 'compressive feature'. In Section 3, the KSCC is depicted. In Section 4, some experiments are taken to investigate the performance of our proposed method. The conclusions are finally summarised in Section 5.

Fig. 1 Targets to be identified
a BMP2 tank imagery data
b BTR70 personnel carrier imagery data
c T72 tank imagery data
d BMP2 optical image
e BTR70 optical image
f T72 optical image

2 Compressive feature derived by PCT

Radar data prove to be compressible with no significant losses for most of the applications in which they are used [7]; for example, many SAR/ISAR images exhibit a sparsity property that can be exploited in target recognition. Figs. 1a-c show the SAR images of a BMP2 tank (Fig. 1d), a BTR70 personnel carrier (Fig. 1e) and a T72 tank (Fig. 1f), respectively, which come from the MSTAR public release database. From them we can see that, compared with the optical images of the three targets, the SAR images are severely polluted by speckle noise, and the targets are centred in the imagery and relatively sparse with respect to the whole imagery. If traditional feature extraction approaches are applied to the images, the features are not representative of the targets.



2.1 ‘Compressive feature’ extraction

The basic idea behind CS theory is that a signal, unknown but supposed to be compressible, can be subjected to fewer measurements than the nominal number of samples, and yet be accurately reconstructed [9, 10]. Assume that one can reduce a signal f ∈ R^n to a shorter one v ∈ R^m (m ≪ n) by using a compressive sampling matrix Φ ∈ R^{m×n}

$$ v = \Phi f \qquad (1) $$

Obviously, determining f from the observed v is an ill-posed inversion problem whose solution is not unique. However, CS theory proves to be capable of recovering f under the sparsity prior and the incoherence assumption, that is, f is sparse in a certain basis set (or dictionary) A, f = Aα, where A is incoherent with the sensing matrix Φ and α is sparse enough. To recover the sparse coefficient α, one should solve the following l0-norm optimisation problem

$$ (l_0):\quad \min_{\alpha} \|\alpha\|_{0} \quad \text{subject to} \quad v = \Phi A \alpha \qquad (2) $$
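As a concrete illustration of (1) and (2), the following sketch simulates compressive sampling of a synthetic sparse signal and recovers it with orthogonal matching pursuit, a common greedy surrogate for the l0 problem. It is illustrative only; the recovery algorithm actually used later in the paper (Algorithm 1) is accelerated iterative hard thresholding, and all parameter values below are our own choices.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8                               # signal length, measurements, sparsity
A = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthonormal sparsifying basis (dictionary)
alpha = np.zeros(n)
alpha[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse coefficients
f = A @ alpha                                      # compressible signal, f = A alpha

Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian compressive sampling matrix
v = Phi @ f                                        # compressive measurements, eq. (1)

# Greedy approximation of the l0 problem of eq. (2): min ||alpha||_0 s.t. v = Phi A alpha
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ A, v)
f_hat = A @ omp.coef_                              # reconstructed signal

print("relative reconstruction error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))
```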

This idea has been applied in many compressive radar imaging approaches to recover f using (2) [11-21]. As v is representative of the signal f, we propose the idea of directly taking the compressive sampling as the feature of the target. In our proposed method, we use the compressive measurement v of the SAR imagery data as the feature, which is universal and can preserve the information of f. Fig. 2a plots a 'compressive feature' extraction scheme on the SAR imagery data. Classical SAR imaging is adopted to obtain an imagery, from which we can calculate a saliency map. As soon as the saliency map is determined, a 'compressive feature' extraction approach is applied that discriminates 'most salient' and 'non-salient' regions in the imagery, which will be depicted in detail in Section 2.3.

Fig. 2 Flowcharts of 'compressive feature' extraction
a Compressive feature extraction on the imagery data
b Compressive feature extraction on the raw data

Additionally, the proposed method can also be used in CS-based SAR imaging. CS can be used either to reduce the sampling rate of the classical analogue-to-digital converter (ADC), or to reduce the number of pulses in SAR imaging. Some researchers define these as 'analogue compressive sensing' and 'discrete compressive sensing', respectively. In 'analogue compressive sensing', the compressive sampling is performed on the signals and the classical ADC is replaced by new compressive sampling devices, such as the low-rate analogue-to-information converter [21, 23], the random sampling converter [24] and so on. In recent years, some representative work on 'discrete compressive sensing'-based SAR/ISAR imaging has also been done [18, 19]. In this paper, we also present a possible realisation of 'compressive feature' extraction in 'discrete compressive sensing'-based SAR imaging, as shown in Fig. 2b.

In this scheme, the feature extraction is a two-stage sampling process, which includes a saliency generation process and a sampling ratio adjusting process. Low-rate compressive sampling that can work in two modes is employed. In mode 1, the low-rate sampling is used to obtain the raw data of different targets, followed by conventional range compression and azimuth focusing. In this saliency generation process, as soon as the SAR imagery is obtained, we derive the saliency map from it. In the sampling ratio adjusting process, the sampling controlling signals produced from the saliency map assign more sampling resources to the more salient regions; this is mode 2, named saliency-based adaptive sampling. This adaptive sampling produces the desired 'compressive features'.



2.2 PCT-based saliency map

As mentioned above, the targets in SAR images are usually sparse; if random sampling is performed on the radar imagery in Figs. 1a-c, many observations do not carry the target information, which will degrade the subsequent recognition. So in the 'compressive feature' extraction, we take the human visual attention mechanism into consideration, because human vision pays more attention to salient regions. Yu et al. [25] proposed a two-dimensional (2D) discrete cosine transform (DCT)-based saliency map to generate compressive sampling data. However, the multi-scale and multi-directional characteristics of human visual sensing are not considered.

The contourlet transform is one of the most popular multi-scale geometric analyses developed in recent years [26], and the contourlet coefficients of radar imagery exhibit a distribution of the imagery at different scales and directions. Considering the differences of targets' shapes, a new PCT is proposed to exploit the saliency information of images. The saliency map of an imagery X based on PCT is calculated using the following equation

$$ P = \mathrm{sign}(\mathrm{CT}(X)), \qquad F = \left| \mathrm{CT}^{-1}(P) \right|, \qquad \text{Saliency map: } I = G * F^{2} \qquad (3) $$

where CT and CT^{-1} represent the contourlet transform and inverse contourlet transform, respectively, G is a 2D Gaussian low-pass filter, * is the convolution operation, and the notation sign(·) is the signum function

$$ \mathrm{sign}(t) = \begin{cases} +1, & t \ge 0 \\ -1, & t < 0 \end{cases} \qquad (4) $$

sign(CT(·)) is called the PCT because it only retains the signs of the contourlet coefficients. The output P is binary (excitation or suppression), which mimics the neuronal pulses in the human brain. The PCT can accomplish the computation of saliency in the contourlet domain. Compared with the DCT, the PCT can extract edge and contour information of targets in radar images, and the salient locations are highlighted by normalising the contourlet coefficients. To recover the saliency information in the visual space, we conduct an inverse contourlet transform on the binary matrix P.

Fig. 3 Saliency map and the 'most salient' blocks determined by PCT
a Original image
b Saliency map
c 'Most salient' blocks

Fig. 3a shows a SAR imagery, Fig. 3b the saliency map of Fig. 3a extracted by PCT, and Fig. 3c the 'most salient' regions (white blocks) determined by PCT. From them we can see that the road and the buildings in the SAR image attract more saliency attention than the farmland, so a high sampling ratio is required for these regions.
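The saliency computation of (3) and (4) can be sketched as follows. Since no contourlet implementation ships with the common scientific Python stack, the sketch assumes a user-supplied pair of functions, contourlet_transform and inverse_contourlet_transform (hypothetical names, here assumed to exchange a list of per-subband coefficient arrays); only the Gaussian smoothing comes from SciPy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pct_saliency(image, contourlet_transform, inverse_contourlet_transform, sigma=3.0):
    """PCT saliency map of eqs. (3)-(4): keep only the signs of the contourlet
    coefficients, reconstruct, and low-pass filter the squared magnitude."""
    coeffs = contourlet_transform(image)                    # CT(X), one array per subband (assumed interface)
    pulses = [np.where(c >= 0, 1.0, -1.0) for c in coeffs]  # P = sign(CT(X)), eq. (4)
    F = np.abs(inverse_contourlet_transform(pulses))        # F = |CT^-1(P)|
    return gaussian_filter(F ** 2, sigma=sigma)             # I = G * F^2, eq. (3)
```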

2.3 Saliency map-based compressive features

In our scheme, we calculate the saliency map I of the imagery X using (3), and divide I into small b × b blocks. The blocks are processed in raster-scan order in the image, from left to right and top to bottom. For the mth b × b block B_m (m = 1, …, N), where N is the number of blocks in the imagery, the salient level I_m is defined to evaluate the importance of B_m

$$ I_m = \sum_{(i,j) \in B_m} I(i, j) \qquad (5) $$

Among the N blocks, the k 'most salient' regions (or blocks) {B_s}, s = 1, …, k, are selected according to the following rule

$$ B_s = \left\{ B_j \;\middle|\; I_j \ge \beta \times \sum_{m=1}^{N} I_m \right\} \qquad (6) $$

where β is a positive threshold parameter. Here {B_s} denotes the set of salient blocks. Similarly, the (N − k) non-salient blocks are denoted as {B_ns}, ns = 1, …, N − k. As soon as these regions are determined, more sampling data are allocated to the salient regions. That is, we use different sampling matrices Φ1 and Φ2 to sample {B_s} and {B_ns}, respectively. There are more rows in Φ1 than in Φ2, which means assigning more sampling data to {B_s} than to {B_ns}. The 'compressive feature' v of the radar imagery can be written as

$$ v = \Big[ \underbrace{\Phi_1 B_1, \ldots, \Phi_1 B_k}_{\{B_s\}},\ \underbrace{\Phi_2 B_{k+1}, \ldots, \Phi_2 B_N}_{\{B_{ns}\}} \Big] \in R^{d} \qquad (7) $$

According to Candès and Wakin [10], random (Gaussian or Bernoulli distributed) sampling matrices Φ1 and Φ2 are often used, because they prove to have low incoherence with almost all orthogonal matrices. Therefore, in this paper the Gaussian random matrix is adopted.
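A minimal sketch of the block-wise feature extraction of (5)-(7) is given below, assuming a precomputed saliency map (e.g. from the pct_saliency sketch above). The most salient blocks are sampled with a taller Gaussian matrix Φ1 and the remaining blocks with a shorter Φ2. Two simplifications are ours, not the authors': a fixed number k of salient blocks is taken (rather than the β-threshold of (6)) so that the feature dimension is the same for every image, and the blocks are kept in raster-scan order rather than grouped salient-first as written in (7). All parameter values are illustrative.

```python
import numpy as np

def compressive_feature(image, saliency, b=8, k=40, m_salient=16, m_other=4, seed=0):
    """Blockwise 'compressive feature' of eqs. (5)-(7): the k most salient
    b x b blocks are sampled with Phi1, all other blocks with the shorter Phi2."""
    rng = np.random.default_rng(seed)                # fixed seed: same Phi1/Phi2 for every image
    Phi1 = rng.standard_normal((m_salient, b * b))   # for 'most salient' blocks
    Phi2 = rng.standard_normal((m_other, b * b))     # for 'non-salient' blocks

    h, w = image.shape
    blocks, levels = [], []
    for i in range(0, h - b + 1, b):                 # raster-scan order
        for j in range(0, w - b + 1, b):
            blocks.append(image[i:i + b, j:j + b].ravel())
            levels.append(saliency[i:i + b, j:j + b].sum())   # salient level I_m, eq. (5)

    salient = set(np.argsort(levels)[-k:])           # indices of the k most salient blocks
    feature = [(Phi1 if m in salient else Phi2) @ blk for m, blk in enumerate(blocks)]
    return np.concatenate(feature)                   # 'compressive feature' v, eq. (7)
```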



3 Kernel sparse coding classifier

In this section, we first discuss the SCC and then describe the KSCC for target recognition in detail.

3.1 Sparse coding classifier

The SCC employs the principle that a test sample can be represented as a linear combination of some training samples. That is, given sufficient training samples of a certain object class, any new sample from the same class can be approximately presented as a linear combination of those training samples [22]. The linear combination coefficients are significant for the class the test sample belongs to, whereas for the other classes the coefficients are nearly zero. Assume a training matrix (or dictionary) Â for the entire training set of T = Σ_{i=1}^{K} t_i training samples from all K object classes

$$ \hat{A} = [A_1, A_2, \ldots, A_K] = [v_{1,1}, v_{1,2}, \ldots, v_{K,t_K}] \qquad (8) $$

where v_{i,j} is the 'compressive feature' of the jth sample in the ith object class and t_i is the number of samples in the ith object class. The linear representation of a sample v, which is a new (test) sample from the ith object class, can be written as

$$ v = \hat{A}\alpha \qquad (9) $$

where α = [0, …, 0, α_{i,1}, α_{i,2}, …, α_{i,t_i}, 0, …, 0]^T is a sparse coefficient vector whose entries are zero except those associated with the ith object class. The estimate of α can be obtained by solving the l0 optimisation problem shown in (10)

$$ \hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_{0} \quad \text{subject to} \quad v = \hat{A}\alpha \qquad (10) $$

When the coefficient vector α has been estimated, one can approximate the test sample for each class i (i = 1, …, K) as

$$ \hat{v}_i = \hat{A}\,\delta_i(\alpha) \qquad (11) $$

by using only the coefficients with respect to the ith class. Here δ_i(α) is a new vector whose only non-zero entries are the entries in α that are associated with class i. Then the residual r_i(v̂_i) between v and v̂_i can be calculated. We can classify v based on these residuals by assigning it to the object class that has the minimal residual

$$ \min_i r_i(\hat{v}_i) = \| v - \hat{v}_i \|_{2} = \| v - \hat{A}\,\delta_i(\alpha) \|_{2} \qquad (12) $$

This means that the best estimate of the class of the test sample is the one that yields the minimal residual.
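The SCC decision rule of (8)-(12) can be sketched compactly as below. The sparse code is again obtained with orthogonal matching pursuit as a stand-in for the l0 solver, and the dictionary layout, label encoding and function name are our own illustrative choices.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def scc_classify(A_hat, labels, v, n_nonzero_coefs=10):
    """Sparse coding classifier, eqs. (9)-(12).

    A_hat  : (d, T) dictionary of training 'compressive features', column-wise
    labels : (T,) class index of each training column
    v      : (d,) test feature
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs, fit_intercept=False)
    omp.fit(A_hat, v)                                        # approximate solution of eq. (10)
    alpha = omp.coef_

    residuals = {}
    for cls in np.unique(labels):
        delta = np.where(labels == cls, alpha, 0.0)          # delta_i(alpha): keep class-i coefficients
        residuals[cls] = np.linalg.norm(v - A_hat @ delta)   # r_i of eq. (12)
    return min(residuals, key=residuals.get)                 # class with the minimal residual
```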

3.2 Kernel sparse coding classifier

In order to further improve the recognition accuracy of the SCC, in this paper a Gaussian kernel projection trick is used to implement a high-dimensional projection of the samples, and a new classifier, called the empirical KSCC, is proposed. It captures the nonlinear similarity of samples and increases the differences between samples of different categories, so boosting the performance of sparse coding on the original samples.

Suppose there is a feature mapping function R^d → R^T (T is the total number of training samples) that maps the feature vector to a high-dimensional space, v → φ(v); then

$$ \varphi(\hat{A}) = [\varphi(v_{1,1}), \varphi(v_{1,2}), \ldots, \varphi(v_{K,t_K})] \in R^{T \times T} \qquad (13) $$

We substitute the mapped samples (including the training samples and the test sample) into the formulation of sparse coding, and arrive at kernel sparse coding

$$ \varphi(v) = \varphi(\hat{A})\,\alpha \qquad (14) $$

where α ∈ R^T. Multiplying both sides of the above formula by φ(Â)^T, (14) can be rewritten as [27]

$$ \varphi(\hat{A})^{\mathrm{T}} \varphi(v) = \varphi(\hat{A})^{\mathrm{T}} \varphi(\hat{A})\,\alpha \qquad (15) $$

Reformulating (15) gives

$$ \langle \varphi(\hat{A}), \varphi(v) \rangle = \langle \varphi(\hat{A}), \varphi(\hat{A}) \rangle\,\alpha \qquad (16) $$

Note that φ(v1)^T φ(v2) = K(v1, v2); here we use the Gaussian kernel of the form

$$ K(v_1, v_2) = \exp\!\left( -\frac{\|v_1 - v_2\|^{2}}{2\sigma^{2}} \right) \qquad (17) $$

So formula (16) becomes

$$ K(\hat{A}, v) = K(\hat{A}, \hat{A})\,\alpha \qquad (18) $$

We can see that (18) has a form similar to (9) when v and Â are replaced by their empirical kernel projections K(Â, v) and K(Â, Â), respectively. The identification of the label can then be reduced to

$$ \min_i r_i(\hat{v}_i) = \left\| K(\hat{A}, v) - K(\hat{A}, \hat{A})\,\delta_i(\alpha) \right\|_{2} \qquad (19) $$
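A sketch of the KSCC follows directly from (17)-(19): compute the empirical kernel projections, solve the sparse coding problem in the projected space, and pick the class with the smallest kernel-domain residual. The solver shown is again OMP rather than the accelerated iterative hard thresholding used in Algorithm 1, and σ, like the other parameters, is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.metrics.pairwise import rbf_kernel

def kscc_classify(A_hat, labels, v, sigma=1.0, n_nonzero_coefs=10):
    """Kernel sparse coding classifier, eqs. (17)-(19).

    A_hat  : (d, T) dictionary of training features, column-wise
    labels : (T,) class index of each training column
    v      : (d,) test feature
    """
    gamma = 1.0 / (2.0 * sigma ** 2)                                     # Gaussian kernel, eq. (17)
    K_AA = rbf_kernel(A_hat.T, A_hat.T, gamma=gamma)                     # K(A_hat, A_hat), (T, T)
    K_Av = rbf_kernel(A_hat.T, v.reshape(1, -1), gamma=gamma).ravel()    # K(A_hat, v), (T,)

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs, fit_intercept=False)
    omp.fit(K_AA, K_Av)                                                  # sparse code alpha of eq. (18)
    alpha = omp.coef_

    residuals = {}
    for cls in np.unique(labels):
        delta = np.where(labels == cls, alpha, 0.0)                      # delta_i(alpha)
        residuals[cls] = np.linalg.norm(K_Av - K_AA @ delta)             # r_i of eq. (19)
    return min(residuals, key=residuals.get)
```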

3.3 ATR of SAR imagery based on the 'compressive feature' and KSCC

Based on the above discussion, we can see that the compressive measurements can serve as the 'compressive feature' of radar targets. Different sampling matrices are applied to the 'most salient' and 'non-salient' regions, which are determined by the saliency map of the imagery, and this consequently makes the feature vector helpful for recognition. The flowchart of the proposed method is shown in Fig. 4. The procedure of our proposed SAR ATR algorithm is described below.



Algorithm 1: Procedure of our proposed SAR ATR scheme

Input: Some training SAR imageries and the test imagery
Output: The label of the test sample
Step 1: Calculate the saliency maps of the training SAR imageries and the test imagery via the contourlet transform.
Step 2: Determine the 'most salient' and 'non-salient' regions of each training and test imagery, and formulate the 'compressive feature' v by using Φ1 and Φ2, respectively.
Step 3: Normalise the features to obtain the training samples A_i ∈ R^{d×t_i} (i = 1, …, K) and construct the dictionary Â = [A_1, A_2, …, A_K] ∈ R^{d×T} for classification.
Step 4: Calculate the 'compressive feature' of the unknown target by using the same Φ1 and Φ2 as in Step 2, and normalise it to obtain the test sample v.
Step 5: Calculate the empirical kernel projections K(Â, v) and K(Â, Â), and solve the sparse coefficients α in (18) using the accelerated iterative hard thresholding algorithm.
Step 6: Calculate δ_i(α) for each class.
Step 7: Identify the label of the test imagery using (19).
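Read end-to-end, Algorithm 1 composes the pieces sketched earlier. A hypothetical driver built from the illustrative functions above (pct_saliency, compressive_feature, kscc_classify) and a user-supplied contourlet pair might look as follows; it is not the authors' implementation.

```python
import numpy as np

def train_and_classify(train_images, train_labels, test_image,
                       contourlet_transform, inverse_contourlet_transform):
    """Illustrative composition of Algorithm 1 from the sketches above."""
    def feature(img):                                      # Steps 1-2 / 4: saliency, then blockwise sampling
        sal = pct_saliency(img, contourlet_transform, inverse_contourlet_transform)
        f = compressive_feature(img, sal)                  # fixed seed keeps Phi1/Phi2 identical across images
        return f / (np.linalg.norm(f) + 1e-12)             # Steps 3-4: normalisation

    A_hat = np.stack([feature(img) for img in train_images], axis=1)   # dictionary, column-wise
    v = feature(test_image)
    return kscc_classify(A_hat, np.asarray(train_labels), v)           # Steps 5-7
```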

4 Experimental results

In this section, we investigate the performance of our proposed method. The publicly available portion of the MSTAR database [28] is used in our experiments. The log-intensity images collected at a 17° depression angle were used for training, whereas the images collected at 15° were used for testing. Each imagery is of size 128 × 128, the total number of training data is 1161 and the total number of testing data is 978. All the simulations are conducted in MATLAB 7.1 on a PC with Intel Pentium 4/3 GHz/1 GB.

4.1 Experiment 1: Comparison of random sampling with the saliency-based sampling

Fig. 4 Flowchart of our proposed target recognition scheme

In this test, we compare the proposed saliency-based sampling scheme with random sampling. Considering the random initialisation of the recovery algorithm in Algorithm 1, 20 independent experiments are taken and the average results are calculated. Compressive features with dimensionalities 1638, 2043, 4096 and 8172 (10, 12, 25 and 50% of the original dimensionality, respectively) are considered. Supposing that the ratio of the number of rows of Φ1 to that of Φ2 takes the values 2:1, 3:1 and 4:1, we compare random sampling and saliency-based sampling. The saliency maps are calculated by PDCT [25] and our proposed PCT, respectively. In PDCT and PCT, b = 8, β = 5 and N = 1280. In this experiment a simple classifier, KNN (K = 5) [29], is used to classify the 'compressive feature'. The recognition results of random sampling (denoted as RS), PDCT and PCT are shown in Table 1. From the table we can see that when the number of measurements increases, the three methods obtain higher recognition ratios. At different dimensionalities, the saliency-based compressive sampling schemes outperform random sampling, and our proposed PCT method shows some improvement over PDCT. As to the sampling ratio of 'most salient' to 'non-salient' regions, the result at the 3:1 ratio is better than that at 2:1 and 4:1.
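Experiment 1 classifies the compressive features with a plain K-nearest-neighbour rule; the equivalent scikit-learn call with K = 5, as used here, is a one-liner. The feature and label arrays below are random placeholders standing in for the MSTAR training (17°) and test (15°) features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: in the experiment these would be the 1161 training and
# 978 test compressive features of dimensionality 1638 (random here).
rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((1161, 1638)), rng.integers(0, 3, 1161)
X_test, y_test = rng.standard_normal((978, 1638)), rng.integers(0, 3, 978)

knn = KNeighborsClassifier(n_neighbors=5)     # K = 5, as in Experiment 1
knn.fit(X_train, y_train)
print("recognition ratio:", knn.score(X_test, y_test))
```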

4.2 Experiment 2: Comparison of saliency-based sampling with other related methods

In this experiment, we investigate the performance of the saliency maps determined by different methods. We compare the proposed PCT method with other related methods, including (i) a target/clutter segmentation scheme using thresholding, with the threshold set as the mean of the imagery data; (ii) a pulse Fourier transform (PFT) based on Fourier components, where the Fourier transform is used to replace the DCT in PDCT; and (iii) PDCT. Fig. 5 compares the saliency regions determined by the different methods when the dimensionality of the data is reduced to 1638, where Figs. 5a, f and k plot the radar image data of the three targets (BMP2, BTR70 and T72), respectively. Using the scheme described in Section 2, we can discriminate the salient and non-salient regions of the imagery.


Table 1 Recognition ratio (%) of different methods

Dimensionality  Targets    RS    PDCT [25] (ratio)      PCT (ratio)
                                 2:1    3:1    4:1      2:1    3:1    4:1
1638            T72 tank   70    72     75     71       77     78     75
                BTR70      63    65     70     68       71     74     71
                BMP2       69    70     72     72       75     77     73
2043            T72 tank   77    78     78     76       82     83     83
                BTR70      67    69     73     71       71     74     70
                BMP2       75    78     78     79       83     84     83
4096            T72 tank   85    88     88     88       92     95     94
                BTR70      74    75     76     74       79     79     81
                BMP2       82    83     82     85       87     89     89
8172            T72 tank   89    89     91     90       94     96     95
                BTR70      76    77     77     80       81     84     87
                BMP2       88    90     92     92       93     96     95


Figs. 5b, g and l plot the salient blocks determined by the thresholding scheme; Figs. 5c, h and m plot the salient blocks determined by PFT; Figs. 5d, i and n plot the salient blocks determined by PDCT; and Figs. 5e, j and o plot the salient blocks determined by PCT.

Fig. 5 Saliency regions determined by different methods
a BMP2, b Threshold, c PFT, d PDCT, e PCT
f BTR70, g Threshold, h PFT, i PDCT, j PCT
k T72, l Threshold, m PFT, n PDCT, o PCT

From them we can see that the thresholding method can locate the target, but the location is not very accurate, whereas PFT, using Fourier coefficients, overemphasises the saliency regions. There are more salient regions in and around the targets in our proposed PCT method, which is better than the result of [25]. It should be noted that the larger β is, the fewer salient blocks will be selected. In the figure, because of shadow and shelter, not all of the targets are regarded as salient regions; however, if we lower the threshold β, more blocks will be included in the salient regions, and more regions without targets will also be selected.

Fig. 6 Recognition ratio (%) and consumed time (s) of different classifiers
a-f Recognition ratio for d = 3, 10, 20, 30, 40, 50
g-l Consumed time for d = 3, 10, 20, 30, 40, 50

4.3 Experiment 3: Comparison of KSCC with SCC

In this test, we reduce the imagery to very low dimensionality and investigate the performance of our proposed KSCC classifier. The reduced dimensionality of the samples varies from 3 to 60, and three classifiers, KSCC, SCC and SVM, are considered. The dimensionalities are reduced by a random Gaussian matrix and by our proposed PCT method. In PCT, b = 8, β = 5, N = 1280 and r = 1. Figs. 6a-f give the overall accuracy (%) of the three methods when the dimensionality is reduced to d = 3, 10, 20, 30, 40 and 50, respectively, where both the PCT and PDCT-based saliency maps are used to extract features; Figs. 6g-l give the consumed time (s). From Fig. 6 we can see that PCT outperforms random sampling with a small increase in the consumed time, and KSCC outperforms SCC at different dimensionalities. SVM gives the highest classification accuracy among the methods; however, compared with SVM, the consumed time of KSCC and SCC is remarkably reduced.

5 Conclusions

In this paper, under the framework of the recently proposed CS theory, a compressive feature and kernel sparse coding-based automatic radar target recognition method is proposed. A PCT is developed to calculate the saliency map of images, from which we derive an adaptive compressive feature extraction scheme by imitating the human attention mechanism and exploiting the sparse nature of radar targets. Moreover, we also indicate a possible way of combining this feature extraction scheme with CS-based SAR imaging. Finally, a KSCC is advanced to classify the 'compressive features'. Experiments are taken on recognising three types of ground vehicles in the MSTAR public release database, to compare the performance of our proposed scheme with the case when the saliency information and the kernel trick are not used. The experimental results show that (i) accurate recognition can be achieved using a universal feature derived from compressive sampling, requiring neither any preprocessing nor explicit pose estimation of the radar data; (ii) the 'compressive feature' is more representative than its counterpart when PCT is used to generate the saliency map; and (iii) KSCC outperforms SCC in recognising targets in SAR imagery, and gives results comparable with SVM with a remarkable reduction in time complexity.

6 Acknowledgment

The authors would like to thank the anonymous reviewers for their constructive comments, which resulted in much improvement of the clarity of this paper. This work was supported by the Foreign Scholars in University Research and Teaching Programmes (no. B07048), the National Science Foundation of China under grant numbers 61072108, 60971112 and 61173090, and China programme NCET-10-0668.

7 References

1 Aldhubaib, F., Shuley, N.V.: 'Radar target recognition based on modified characteristic polarization states', IEEE Trans. Aerosp. Electron. Syst., 2010, 46, (4), pp. 1921–1933
2 Eryildirim, A., Onaran, I.: 'Pulse Doppler radar target recognition using a two-stage SVM procedure', IEEE Trans. Aerosp. Electron. Syst., 2011, 47, (2), pp. 1450–1457
3 Zhao, Q., Jose, C.P., Victor, B., Xu, D.X., Wang, Z.: 'Synthetic aperture radar automatic target recognition with three strategies of learning and representation', Opt. Eng., 2000, 39, (5), pp. 1230–1244
4 Zhao, Q., Jose, C.P.: 'Support vector machines for SAR automatic target recognition', IEEE Trans. Aerosp. Electron. Syst., 2001, 37, (2), pp. 643–653
5 Sun, Y., Liu, Z., Todorovic, S., Li, J.: 'Adaptive boosting for SAR automatic target recognition', IEEE Trans. Aerosp. Electron. Syst., 2007, 43, (1), pp. 112–125
6 Juan, W., Lijie, S.: 'Research on supervised manifold learning for SAR target classification'. IEEE Int. Conf. Computational Intelligence for Measurement Systems and Applications (CIMSA'09), 2009, pp. 140–142
7 Baraniuk, R., Steeghs, P.: 'Compressive radar imaging'. Proc. IEEE Radar Conf., Waltham, MA, April 2007, pp. 128–133
8 Thiagarajan, J.J., Ramamurthy, K.N., Knee, P., Spanias, A., Berisha, V.: 'Sparse representation for automatic target classification in SAR images'. Fourth Int. Symp. ISCCSP, May 2010

9 Donoho, D.: ‘Compressed sensing’, IEEE Trans. Inf. Theory, 2006, 52,(4), pp. 5406–5425

10 Candès, E.J., Wakin, M.B.: 'An introduction to compressive sampling', IEEE Signal Process. Mag., 2008, 25, (2), pp. 21–30
11 Yoon, Y.-S., Amin, M.G.: 'Through-the-wall radar imaging using compressive sensing along temporal frequency domain'. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 14–19 March 2010, pp. 2806–2809
12 Xiaochun, X., Yunhua, Z.: 'Fast compressive sensing radar imaging based on smoothed L0 norm'. Second Asian-Pacific Conf. Synthetic Aperture Radar (APSAR 2009), 26–30 October 2009, pp. 443–446
13 Shastry, M.C., Narayanan, R.M., Rangaswamy, M.: 'Compressive radar imaging using white stochastic waveforms'. Int. Waveform Diversity and Design Conf. (WDD), 8–13 August 2010, pp. 000090–000094
14 Tello Alonso, M., López-Dekker, P., Mallorquí, J.J.: 'A novel strategy for radar imaging based on compressive sensing', IEEE Trans. Geosci. Remote Sens., 2010, 48, (12), pp. 4285–4295
15 Wang, M., Yang, S., Wan, Y., Wang, J.: 'High resolution radar imaging based on compressed sensing and fast Bayesian matching pursuit'. Int. Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM), 10–12 January 2011, pp. 1–5
16 Anitori, L., Otten, M., Hoogeboom, P.: 'Compressive sensing for high resolution radar imaging'. Proc. Asia-Pacific Microwave Conf. (APMC), 7–10 December 2010, pp. 1809–1812
17 Hao, X., Xuezhi, H., Zhiping, Y., Dongjin, W., Weidong, C.: 'Compressive sensing MIMO radar imaging based on inverse scattering model'. IEEE 10th Int. Conf. Signal Processing (ICSP), 24–28 October 2010, pp. 1999–2002
18 Tello, M., Lopez-Dekker, P., Mallorqui, J.J.: 'A novel strategy for radar imaging based on compressive sensing'. IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS 2008), 7–11 July 2008, pp. II-213–II-216
19 Zhang, L., Xing, M., Qiu, C.-W., et al.: 'Resolution enhancement for inversed synthetic aperture radar imaging under low SNR via improved compressive sensing', IEEE Trans. Geosci. Remote Sens., 2010, 48, (10), pp. 3824–3838
20 Michael, L., Christian, D., Zoubir, A.M.: 'Compressive sensing in through-the-wall radar imaging'. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 22–27 May 2011, pp. 4008–4011
21 Xie, X.-C.: 'Real-time measurement in compressive radar imaging based on AIC'. IEEE 10th Int. Conf. Signal Processing (ICSP), 24–28 October 2010, pp. 2113–2116
22 Wright, J., Ganesh, A., Yang, A.Y., Ma, Y.: 'Robust face recognition via sparse representation', IEEE Trans. Pattern Anal. Mach. Intell., 2008, 31, (2), pp. 210–227
23 Smith, G.E., Diethe, T., Hussain, Z., Shawe-Taylor, J., Hardoon, D.R.: 'Compressed sampling for pulse Doppler radar'. IEEE Radar Conf., 10–14 May 2010, pp. 887–892
24 Tropp, J.A., Wakin, M.B., Duarte, M.F., Baron, D., Baraniuk, R.G.: 'Random filters for compressive sampling and reconstruction'. IEEE Int. Conf. Acoustics, Speech and Signal Processing, 14–19 May 2006, vol. 3, pp. III
25 Yu, Y., Wang, B., Zhang, L.: 'Saliency-based compressive sampling for image signals', IEEE Signal Process. Lett., 2010, 17, (11), pp. 973–976
26 Po, D.D.-Y., Do, M.N.: 'Directional multiscale modeling of images using the contourlet transform', IEEE Trans. Image Process., 2006, 15, (6), pp. 1610–1620
27 Cristianini, N., Shawe-Taylor, J.: 'An introduction to support vector machines and other kernel-based learning methods' (Cambridge University Press, 2000)
28 Keydel, E.R.: 'MSTAR extended operating conditions', Proc. SPIE – Int. Soc. Opt. Eng., 1996, 2757, pp. 228–242
29 Cover, T.M., Hart, P.E.: 'Nearest neighbor pattern classification', IEEE Trans. Inf. Theory, 1967, 13, (1), pp. 21–27
