Research Article
Moving Object Detection Using Dynamic Motion Modelling from UAV Aerial Images

A. F. M. Saifuddin Saif,1 Anton Satria Prabuwono,1,2 and Zainal Rasyid Mahayuddin1

1 Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor Darul Ehsan, Malaysia

2 Faculty of Computing and Information Technology, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia

Correspondence should be addressed to A. F. M. Saifuddin Saif; rashedcse25@yahoo.com

Received 5 January 2014; Accepted 31 March 2014; Published 29 April 2014

Academic Editors: F. Di Martino and F. Yu

Copyright © 2014 A. F. M. Saifuddin Saif et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Motion analysis based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation is not considered. Existing moving object detection approaches for UAV aerial images do not use motion based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either the frame difference or the segmentation approach separately. There are two main purposes of this research: firstly, to develop a new motion model called DMM (dynamic motion model) and, secondly, to apply the proposed segmentation approach SUED (segmentation using edge based dilation), based on frame difference, embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only a specific area for the moving object rather than searching the whole area of the frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED produces faithfully extracted moving objects. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.

1. Introduction

Extraction of moving objects observed in a video sequence and estimation of the corresponding motion trajectories for each frame are among the typical problems of interest in computer vision. However, in real environments moving object extraction becomes challenging due to unconstrained factors, that is, rural or cluttered environments, brightness or illumination, and static or dynamic object types, which, together with motion degeneracy, may render moving object extraction worthless [1–35]. Besides, due to rapid platform motion, image instability, and the relatively small size of the object signatures within the resulting imagery, depending on the flight altitude and camera orientation, the appearance of the objects within the observed environment changes dramatically, making moving object detection a challenging task [27–31, 36–60].

Motion carries a lot of information about moving object pixels and plays an important role in moving object detection as an image descriptor that provides a rich description of the object in different environments [11, 61–65]. The concept of moments originated in physics and found its way into probability theory; since computer vision leverages probability theory, this research finds it only natural to use moments to develop a dynamic motion model that can be applied to limit segmentation tasks in computer vision and so reduce computational complexity.

This paper presents a moments based motion parameter estimation called the dynamic motion model (DMM) to limit the scope of the segmentation approach called SUED, which influences overall detection performance. Based on the analysis of previous moving object detection frameworks, this paper uses the DMM model embedded under a frame difference based segmentation approach, which can handle robustness for optimum detection performance.

Figure 1: Existing approaches for motion based object detection: illumination compensation, parallax filtering, contextual information, and long-term motion pattern analysis.

Table 1: Comparison of parameters for various motion analysis approaches.

Parameters                  Illumination compensation   Parallax filtering   Contextual information   Long term motion analysis
Strong parallax situation   No                          No                   No                       Yes
Level of motion detection   Low                         Low                  Low                      Low and high
Environmental condition     No                          No                   Yes                      Yes
Lot of parameters           Yes                         Yes                  Yes                      No
Computational complexity    High                        High                 High                     High

2. Background Study

For accurate detection, motion must be accurately detected using suitable methods, which are affected by a number of practical problems such as motion change over time and the unfixed direction of the moving object. Motion pattern analysis before detecting each moving object has started to receive attention in recent years, especially for crowded scenarios where detecting each individual is very difficult [46]. Through the modelling of object motion, the detection task becomes easier and noise can also be handled.

As a scene may contain different motion patterns at one location within a period of time, for example at a road intersection, averaging or filtering before knowing the local structure of the motion patterns may destroy that structure. This paper proposes the use of effective motion analysis for moving object extraction. Figure 1 shows four existing approaches for motion analysis.

The global illumination compensation approach works on brightness or illumination changes; because of its dependency on brightness in the real world, this line of research has not progressed far. For the parallax filtering approach, a scene that contains strong parallax is still difficult for existing methods to segment well [46]. In the contextual information approach, contextual information has been applied to improve the detection of moving objects from UAV aerial images [46, 48]; this method assumes low level motion detection, and the errors in low level motion segmentation under strong parallax are not considered. For the long-term motion pattern analysis approach [46], there is scope to use context information to improve both low level motion segmentation and high level reacquisition, even when there is only a single object in the scene [46]. However, this approach still needs further research to detect moving objects robustly.

Table 1 compares the compatibility of the different approaches for motion based moving object detection from UAV aerial images. In Table 1, a low level of motion indicates motion without parallax, and a high level of motion indicates motion with parallax. Among these four types, the first three do not provide a distinctive property for the moving object, because of environmental conditions for the first type and the large number of parameter calculations for the second and third types, while all four types have high computational complexity.

Indeed, detection of motion and detection of the object are coupled: if proper motion detection is done, detection of the moving object from UAV aerial images becomes easier. Very little research concentrates on adaptive, robust handling of noise, unfixed motion change, and unfixed moving object direction. For that reason, an adaptive and dynamic motion analysis framework is needed for better detection of moving objects from UAV aerial images, where the overall motion analysis reduces the dependency on parameters. In other words, detection of motion means detection of motion pixels in frames, which can be described as some function of the image pixel intensity; pixel intensity is nothing but the pixel colour value. Moments are described with respect to their power, as in raised-to-the-power in mathematics. Very little previous research has used image moments for motion analysis. Thus, this paper proposes to use image moments before segmenting individual objects and to use the resulting motion pattern, in turn, to facilitate the detection in each frame.

Moving object detection from UAV aerial images involves dealing with proper motion analysis. Previously, very few researchers used methods that involve effective motion analysis. In [1, 16] the authors proposed a Bayesian network method, which depends on a fixed object shape constraint and does not overcome the problem of the aspect ratio constraint. Image registration methods do not suit the task because the detection rate decreases as the number of motion blocks increases. Clustering based approaches do not suit the complexity of the shortening environment and inconspicuous object features [10, 17]. The scale invariant feature transform (SIFT) uses only key points of the object and does not suit noisy environments well [6]. Cascade classifier based approaches use gray scale input images, which is unrealistic for real-time object detection [9]. Symmetric property based approaches give good results only for structural object shapes [5, 11]. Background subtraction approaches do not overcome the blob-related problems mentioned above for the registration method [8, 13, 38]. Shadow based approaches depend on lighting conditions [2, 7, 59]. Region based appearance matching does not give optimum results for crowded scenarios [44, 54]. The histogram of oriented gradients (HOG) approach rejects object backgrounds [36].

Figure 2: Detection rate [1, 15, 36, 40] and percentage of use of the frame difference and segmentation approaches in previous research.

Among these methods, only the frame difference approach involves motion analysis, although most previous research does not provide proper motion estimation to handle the six uncertainty constraint factors (UCF) [29]. Frame difference causes the moving object to break into pieces due to the object's colour homogeneity, although it does capture the motion information. Frame difference based approaches register two consecutive frames first and then apply frame differencing to find moving objects. These approaches are faster but usually cause a moving object to break into pieces, especially when the object's colour distribution is homogeneous. Frame difference detects the pixels with motion but cannot obtain a complete object [14, 37]. Segmentation based approaches can overcome these shortcomings. Image segmentation is used to determine the candidates of moving objects in each video frame [42]. Image segmentation extracts a more complete shape of the objects and reduces the computation cost of moving object detection from UAV aerial images [39, 47]. However, it does not have the ability to distinguish moving regions from the static background. Using frame difference and segmentation together can achieve optimum detection performance, but the research in [42, 44] did not achieve reliable performance because it approximates a pure plane and its complexity needs to be low. Low detection rate and high computation time are the current research problems when applying frame difference and segmentation together [1, 15, 36, 40], as shown in Figure 2. Applying frame difference or segmentation separately does not include the motion analysis that would increase the detection rate with low computation complexity.

This paper argues that, since frame difference alone cannot obtain the motion of the complete object and segmentation does not have the ability to differentiate moving regions from the static background, applying frame difference and segmentation together is expected to give the optimum detection result with high detection speed for moving object detection from UAV aerial images, instead of applying frame difference or segmentation separately. For that reason, this paper proposes moments based motion analysis applied under a frame difference based segmentation approach (SUED), which ensures the robustness of the proposed methodology.

3. Research Methodology

The proposed moments based motion modelling is described in Section 3.1, and the frame difference based segmentation approach, segmentation using edge based dilation, is described in Section 3.2. Each part of the methodology is proposed with a new approach to ensure the robustness and accuracy of the detection approach.

3.1. Proposed Dynamic Motion Model (DMM). In computer vision and information theory, moments are the uncertainty measurement of the object pixels. Besides, an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful to describe objects after segmentation. Properties of the image found via image moments are the centroid, the area of intensity, and the object orientation, as shown in Figure 3.

Each of these properties needs to be invariant in the following senses: translation invariant, scale invariant, and finally rotation invariant. A feature is called translation invariant if it does not change when the object is translated to different points in space; in other words, a particular translation does not change the object. Scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables are multiplied by a common factor; the technical term for this process is dilation. A feature is said to be rotation invariant if its value does not change when arbitrary rotations are applied to its argument.

Figure 3: Properties of image moments: centroid, area of intensity, and object orientation.

Image moments can simply be described as some function of the image pixel intensity; pixel intensity is nothing but the pixel colour value. Moments are described with respect to their power, as in raised-to-the-power in mathematics. This research calculates the zeroth moment, first moments, second moments, and so forth from the raw moments, and later transforms these moments to be translation, scale, and rotation invariant. The organized structure used to calculate the moments is shown in Figure 4.

Figure 4: Flow of moments calculation in the proposed research: raw moments → central moments → translation invariant → scale invariant → rotation invariant, with zeroth moment M_{00}; first moments M_{10}, M_{01}, M_{11}; second moments M_{20}, M_{02}; and centroid coordinates \bar{m} = M_{10}/M_{00} and \bar{n} = M_{01}/M_{00}.

3.1.1. Raw Moments. Before finding the central moments, it is necessary to find the raw moments of FD(m, n) in the frame at time t of the video sequence. If \bar{m} and \bar{n} are the components of the centroid, the raw moments of FD(m, n) of order (p + q) can be defined as

M_{pq} = \sum_{m} \sum_{n} m^{p} n^{q} \, FD(m, n).   (1)

When FD(m, n) is considered as a 2D continuous function, (1) can be expressed as

M_{pq} = \iint m^{p} n^{q} \, FD(m, n) \, dm \, dn,   (2)

where the centroid coordinates are

\bar{m} = M_{10} / M_{00},   \bar{n} = M_{01} / M_{00},

and

M_{00} = zeroth moment = \sum_{m} \sum_{n} m^{0} n^{0} \, FD(m, n) = \sum_{m} \sum_{n} FD(m, n),

M_{10} = first moment X = \sum_{m} \sum_{n} m^{1} n^{0} \, FD(m, n) = \sum_{m} \sum_{n} m \, FD(m, n),

M_{01} = first moment Y = \sum_{m} \sum_{n} m^{0} n^{1} \, FD(m, n) = \sum_{m} \sum_{n} n \, FD(m, n),

M_{11} = first moment XY = \sum_{m} \sum_{n} m^{1} n^{1} \, FD(m, n) = \sum_{m} \sum_{n} m n \, FD(m, n),

M_{20} = second moment X = \sum_{m} \sum_{n} m^{2} n^{0} \, FD(m, n) = \sum_{m} \sum_{n} m^{2} \, FD(m, n),

M_{02} = second moment Y = \sum_{m} \sum_{n} m^{0} n^{2} \, FD(m, n) = \sum_{m} \sum_{n} n^{2} \, FD(m, n).   (3)
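As a concrete illustration of (1)–(3), the following C# sketch (C# being the implementation language reported in Section 4) computes the raw moments and the centroid from a two-dimensional array standing in for FD(m, n). The class and method names are illustrative assumptions, not part of the paper.

```csharp
using System;

// Minimal sketch of the raw moments in (1)-(3); "frame" stands in for FD(m, n).
public static class RawMoments
{
    // M_pq = sum over m and n of m^p * n^q * FD(m, n).
    public static double Moment(double[,] frame, int p, int q)
    {
        double sum = 0.0;
        for (int m = 0; m < frame.GetLength(0); m++)
            for (int n = 0; n < frame.GetLength(1); n++)
                sum += Math.Pow(m, p) * Math.Pow(n, q) * frame[m, n];
        return sum;
    }

    // Centroid (mBar, nBar) = (M10 / M00, M01 / M00), as defined in (3).
    public static (double mBar, double nBar) Centroid(double[,] frame)
    {
        double m00 = Moment(frame, 0, 0);  // zeroth moment
        double m10 = Moment(frame, 1, 0);  // first moment X
        double m01 = Moment(frame, 0, 1);  // first moment Y
        return (m10 / m00, m01 / m00);
    }
}
```

M_{11}, M_{20}, and M_{02} are obtained in the same way as Moment(frame, 1, 1), Moment(frame, 2, 0), and Moment(frame, 0, 2).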

3.1.2. Central Moments. Using the centroid coordinates, the central moments of FD_{d}(m, n) can be defined as

\mu_{pq} = \sum_{m} \sum_{n} (m - \bar{m})^{p} (n - \bar{n})^{q} \, FD_{d}(m, n).   (4)


(1) Translation Invariant. To make FD_{d}(m, n) translation invariant, \mu_{pq} can be written as

(i) \mu_{pq} = \iint (m - \bar{m})^{p} (n - \bar{n})^{q} \, FD_{d}(m, n) \, dm \, dn,

(ii) \mu_{pq} = \iint \Big[ \sum_{r=0}^{p} \binom{p}{r} m^{r} (-\bar{m})^{(p-r)} \Big] \Big[ \sum_{s=0}^{q} \binom{q}{s} n^{s} (-\bar{n})^{(q-s)} \Big] FD_{d}(m, n) \, dm \, dn
(by using the binomial formula (a + b)^{k} = \sum_{r=0}^{k} \binom{k}{r} a^{r} b^{(k-r)}),

(iii) \mu_{pq} = \sum_{r=0}^{p} \sum_{s=0}^{q} \binom{p}{r} \binom{q}{s} (-\bar{m})^{(p-r)} (-\bar{n})^{(q-s)} \iint m^{r} n^{s} \, FD_{d}(m, n) \, dm \, dn   [rearranging],

(iv) \mu_{pq} = \sum_{r=0}^{p} \sum_{s=0}^{q} \binom{p}{r} \binom{q}{s} (-\bar{m})^{(p-r)} (-\bar{n})^{(q-s)} \, M_{rs}   [using (2)].   (5)

Equation (5) gives the central moments of FD_{d}(m, n), which are translation invariant and derived from the raw moments.
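A small sketch of (4) follows, computing the translation invariant central moments directly from the centroid-shifted definition; by (5) the same values could also be assembled from the raw moments M_{rs}. The sketch reuses the illustrative RawMoments helper from the previous sketch.

```csharp
using System;

// Sketch of (4): mu_pq = sum over m, n of (m - mBar)^p * (n - nBar)^q * FD_d(m, n).
public static class CentralMoments
{
    public static double Mu(double[,] frame, int p, int q)
    {
        var (mBar, nBar) = RawMoments.Centroid(frame);   // centroid from the previous sketch
        double sum = 0.0;
        for (int m = 0; m < frame.GetLength(0); m++)
            for (int n = 0; n < frame.GetLength(1); n++)
                sum += Math.Pow(m - mBar, p) * Math.Pow(n - nBar, q) * frame[m, n];
        return sum;   // invariant under translation of the object, as derived in (5)
    }
}
```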

(2) Scale Invariant. Let FD'(m, n) be the scaled image of FD(m, n), scaled by \lambda, so that

FD'(m, n) = FD(m / \lambda, n / \lambda),   (6)

where

m' = m / \lambda,  m = \lambda m',  dm = \lambda \, dm',
n' = n / \lambda,  n = \lambda n',  dn = \lambda \, dn'.   (7)

From (2) the following can be written:

(i) \mu'_{pq} = \iint m^{p} n^{q} \, FD(m / \lambda, n / \lambda) \, dm \, dn,

(ii) \mu'_{pq} = \iint (\lambda m')^{p} (\lambda n')^{q} \, FD(m', n') \cdot \lambda \, dm' \cdot \lambda \, dn',

(iii) \mu'_{pq} = \lambda^{p} \lambda^{q} \lambda^{2} \iint (m')^{p} (n')^{q} \, FD(m', n') \, dm' \, dn',

(iv) \mu'_{pq} = \lambda^{p+q+2} \, \mu_{pq}   [using (2)].   (8)

For p = 0, q = 0, and assuming the total area to be 1, (8) becomes

(i) \mu'_{00} = \lambda^{2} \mu_{00},
(ii) \lambda^{2} \mu_{00} = 1,
(iii) \lambda^{2} = 1 / \mu_{00},
(iv) \lambda = \mu_{00}^{-(1/2)}.   (9)

Putting \lambda into (8) gives \eta_{pq} = (\mu_{00})^{-(1/2)(p+q+2)} \cdot \mu_{pq}, that is,

\eta_{pq} = \mu_{pq} / (\mu_{00})^{(p+q+2)/2}.   (10)

Equation (10) gives the scale invariant central moments of FD(m, n).
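Continuing the same illustrative helpers, the scale invariant (normalized) moments of (10) follow directly; this sketch reuses CentralMoments.Mu from the previous sketch.

```csharp
// Sketch of (10): eta_pq = mu_pq / mu_00^((p + q + 2) / 2).
public static class NormalizedMoments
{
    public static double Eta(double[,] frame, int p, int q)
    {
        double mu00 = CentralMoments.Mu(frame, 0, 0);
        double muPq = CentralMoments.Mu(frame, p, q);
        return muPq / System.Math.Pow(mu00, (p + q + 2) / 2.0);
    }
}
```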

(3) Rotation Invariant. Let FD'(m, n) be the new image of FD(m, n) rotated by \theta, so that

FD'(m, n) = FD(m \cos\theta + n \sin\theta, \; -m \sin\theta + n \cos\theta).   (11)

Using the variable transformation

m' = m \cos\theta + n \sin\theta,   (12)
m = m' \cos\theta - n' \sin\theta,   (13)
dm = \cos\theta \, dm',   (14)
n' = -m \sin\theta + n \cos\theta,   (15)
n = m' \sin\theta + n' \cos\theta,   (16)
dn = \cos\theta \, dn',   (17)

and using (2),

(i) \mu'_{pq} = \iint m^{p} n^{q} \, FD'(m, n) \, dm \, dn,

(ii) \mu'_{pq} = \iint m^{p} n^{q} \, FD(m \cos\theta + n \sin\theta, \; -m \sin\theta + n \cos\theta) \, dm \, dn   [using (11)],

(iii) \mu'_{pq} = \iint (m' \cos\theta - n' \sin\theta)^{p} (m' \sin\theta + n' \cos\theta)^{q} \, FD(m', n') \cos^{2}\theta \, dm' \, dn'   [using (11)–(17)].   (18)

So the rotation invariant \theta can be expressed as

\theta = \frac{1}{2} \arctan\left( \frac{2 \mu_{11}}{\mu_{20} - \mu_{02}} \right).   (19)
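The orientation measure in (19) can be sketched the same way; Atan2 is used so that the case mu20 = mu02 does not divide by zero. The class name is an illustrative assumption.

```csharp
using System;

// Sketch of (19): theta = 0.5 * arctan(2 * mu11 / (mu20 - mu02)).
public static class MomentOrientation
{
    public static double Theta(double[,] frame)
    {
        double mu11 = CentralMoments.Mu(frame, 1, 1);
        double mu20 = CentralMoments.Mu(frame, 2, 0);
        double mu02 = CentralMoments.Mu(frame, 0, 2);
        return 0.5 * Math.Atan2(2.0 * mu11, mu20 - mu02);
    }
}
```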

Figure 5: (a) Search window 1 for FD(m, n) at time t. (b) Search window 2 for FD(m, n) at time t. (c) Search window 3 for FD(m, n) at time t.

Figure 6: (a) I_B(m, n, t) at frame 101. (b) I_B(m, n, t) at frame 102. (c) Decomposition of I_B(m, n, t) into M × N blocks.

Here \theta in (19) is the rotation invariant central moment measurement of FD(m, n).

The output of the moments calculation proposed in this research is depicted in Figures 5(a)–5(c). Within a search window, the higher the moments values (that is, the first moments XY), the higher the probability that the search window contains the moving object.

Before applying the newly proposed SUED algorithm with the DMM model, the input frame is projected onto the search window (the black box) only, to reduce the risk of extraction failure. This research emphasizes that using a search window around the original object is very important: it limits the scope of segmentation to a smaller area, which means that the probability that the extraction is lost because of a similarly coloured object in the background is reduced. Furthermore, the limited area increases the processing speed, making the extraction very fast.
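The window selection step described above can be sketched as follows: each candidate search window is scored by the first moment XY of the difference image restricted to that window, and the highest-scoring window is kept for SUED. The Window type and the candidate list are illustrative assumptions, not part of the paper.

```csharp
using System.Collections.Generic;

// Sketch of the DMM search-window choice: keep the candidate window whose portion of
// FD(m, n) has the largest first moment XY (M11). All names here are illustrative.
public struct Window { public int Row, Col, Height, Width; }

public static class SearchWindowSelector
{
    public static Window SelectByFirstMomentXY(double[,] frame, IList<Window> candidates)
    {
        Window best = candidates[0];
        double bestM11 = double.NegativeInfinity;
        foreach (var w in candidates)
        {
            double m11 = 0.0;                              // M11 = sum of m * n * FD(m, n)
            for (int m = w.Row; m < w.Row + w.Height; m++)
                for (int n = w.Col; n < w.Col + w.Width; n++)
                    m11 += (double)m * n * frame[m, n];
            if (m11 > bestM11) { bestM11 = m11; best = w; }
        }
        return best;   // SUED segmentation is then run only inside this window
    }
}
```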

3.2. SUED Segmentation Algorithm. This research uses morphological dilation to ensure that the extracted moving region contains the moving object; the output of the morphological dilation is an edged image. First, each frame is decomposed into M × N uniform and nonoverlapping blocks of size B × B, as shown in Figure 6(c). Figures 6(a) and 6(b) show the original images at frames 101 and 102, obtained after the DMM step.

Figure 7: (a) Difference pixel structure of FD_r(m, n, t) using (21). (b) Difference image FD_r(m, n, t) using (21).

Let I(x, y, t) be the original frame at time t in the video sequence, where (x, y) denotes a pixel position in the original frame, and let I_B(m, n, t) be the corresponding decomposed image, where (m, n) denotes the block position of the highest feature density area in the decomposed image. This ensures robustness to noise, while the feature makes each block sensitive to motion. I_B(m, n, t) is defined by the following equation:

I_B(m, n, t) = mean(m, n, t) + (\alpha / \beta^{2}) (N_{1}(m, n, t) - N_{-1}(m, n, t)),   (20)

where (m, n) is the feature densed block, \alpha is a random constant smaller than 1, mean(m, n, t) is the mean gray level of all pixels within block (m, n) at frame t, N_{1}(m, n, t) is the number of pixels with gray levels greater than mean(m, n, t), and N_{-1}(m, n, t) is the number of pixels with gray levels smaller than mean(m, n, t). From (20), the difference image FD_r(m, n, t) of two consecutive block images is obtained by

FD_r(m, n, t) = round( |I_B(m, n, t) - I_B(m, n, t - 1)| / FD_max(t) × 256 ),   (21)

where FD_r(m, n, t) is the quantized image after the rounding operation and FD_max(t) is the maximum value of the block difference at frame t. Using (21), the resultant difference image is shown in Figures 7(a) and 7(b).

FD_r(m, n, t) is filtered by a 3 × 3 median filter, and FD_f(m, n, t) is obtained by the following formula:

FD_f(m, n, t) = 1 if FD_r(m, n, t) ≥ T(t), and FD_f(m, n, t) = 0 otherwise,   (22)

where T(t) = (mean of all blocks in FD_r(m, n, t) at time t) + positive weighting parameter × (largest peak of the histogram of FD_f(m, n, t) − largest peak of the histogram of FD_r(m, n, t)). The binary image FD_b(m, n, t) is obtained by the following condition, depicted in Figure 8(a):

if FD_f(m, n, t) = 1, then FD_b(m, n, t) ← FD_f(m, n, t); otherwise FD_b(m, n, t) = 0.   (23)
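A simplified sketch of (20)–(22) is given below. The histogram-peak term of T(t) is replaced by a plain weighting offset, and \beta is taken to be the block size B; both are assumptions made only to keep the sketch short, not the paper's exact choices.

```csharp
using System;

// Simplified sketch of (20)-(22): block feature image I_B, quantized block difference
// FD_r, and thresholding to the binary-valued FD_f. Names and simplifications are illustrative.
public static class SuedBlocks
{
    // (20): one feature value per B x B block, taking beta = B (assumption).
    public static double[,] BlockImage(byte[,] gray, int B, double alpha)
    {
        int M = gray.GetLength(0) / B, N = gray.GetLength(1) / B;
        var ib = new double[M, N];
        for (int m = 0; m < M; m++)
            for (int n = 0; n < N; n++)
            {
                double mean = 0; int n1 = 0, nMinus1 = 0;
                for (int i = 0; i < B; i++)
                    for (int j = 0; j < B; j++) mean += gray[m * B + i, n * B + j];
                mean /= (double)(B * B);
                for (int i = 0; i < B; i++)
                    for (int j = 0; j < B; j++)
                    {
                        if (gray[m * B + i, n * B + j] > mean) n1++;          // N1
                        else if (gray[m * B + i, n * B + j] < mean) nMinus1++; // N-1
                    }
                ib[m, n] = mean + alpha / (B * B) * (n1 - nMinus1);
            }
        return ib;
    }

    // (21): absolute difference of two consecutive block images, scaled to 0..256 and rounded.
    public static double[,] Difference(double[,] ibT, double[,] ibT1)
    {
        int M = ibT.GetLength(0), N = ibT.GetLength(1);
        var fd = new double[M, N];
        double max = 1e-9;
        for (int m = 0; m < M; m++)
            for (int n = 0; n < N; n++)
            {
                fd[m, n] = Math.Abs(ibT[m, n] - ibT1[m, n]);
                max = Math.Max(max, fd[m, n]);
            }
        for (int m = 0; m < M; m++)
            for (int n = 0; n < N; n++) fd[m, n] = Math.Round(fd[m, n] / max * 256.0);
        return fd;
    }

    // (22), simplified: FD_f = 1 where FD_r >= T, else 0, with T = mean of FD_r + weight.
    public static int[,] Threshold(double[,] fdr, double weight)
    {
        int M = fdr.GetLength(0), N = fdr.GetLength(1);
        double mean = 0;
        foreach (double v in fdr) mean += v;
        mean /= (M * N);
        var fdf = new int[M, N];
        for (int m = 0; m < M; m++)
            for (int n = 0; n < N; n++) fdf[m, n] = fdr[m, n] >= mean + weight ? 1 : 0;
        return fdf;
    }
}
```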

FD_b(m, n, t) may have discontinuous boundaries and holes. To ensure that FD_b(m, n, t) contains the moving object, edge based morphological dilation is proposed here. FD_e(m, n, t) represents the edge image, whose edge response Edge(x) can be obtained by a gradient operator such as the Sobel operator. FD_d(m, n, t), the edge based morphological dilation of the image FD_e(m, n, t), is obtained by the following formula:

FD_d(m, n, t) = FD_e(m, n, t) ∪ { x | x = i + j, i ∈ FD_e(m, n, t), j ∈ L, Edge(x) = 0 },   (24)

where L is the structuring element that contains the elements j. Equation (24) is considered edge based dilation, which is expected to prevent undesired regions from being integrated into the result, according to the edge based morphological dilation characteristics shown in Figure 8(d). Here { x | x = i + j, i ∈ FD_e(m, n, t), j ∈ L } is the conventional dilation applied after obtaining FD_e(m, n, t), which can be defined from FD_b(m, n, t) as FD_b(m, n, t) ⊕ L = { x | x = i + j, i ∈ FD_b(m, n, t), j ∈ L }, shown in Figure 8(b).
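The edge based dilation of (24) can be sketched as below: starting from the edge image FD_e, each structuring-element neighbour x = i + j of an edge-image pixel i is added only when the condition on Edge(x) stated in (24) holds. The 3 × 3 structuring element is an assumption.

```csharp
// Sketch of (24). fdE is the binary edge image FD_e(m, n, t); edge holds the gradient
// (for example, Sobel) response Edge(x). A dilated pixel x = i + j is added only when
// Edge(x) = 0, as (24) is written; the 3 x 3 structuring element L is an assumption.
public static class EdgeBasedDilation
{
    public static int[,] Dilate(int[,] fdE, int[,] edge)
    {
        int M = fdE.GetLength(0), N = fdE.GetLength(1);
        var fdD = (int[,])fdE.Clone();            // the union in (24) keeps FD_e itself
        for (int m = 0; m < M; m++)
            for (int n = 0; n < N; n++)
            {
                if (fdE[m, n] == 0) continue;     // i ranges over pixels of FD_e
                for (int dm = -1; dm <= 1; dm++)  // j ranges over the structuring element L
                    for (int dn = -1; dn <= 1; dn++)
                    {
                        int x = m + dm, y = n + dn;
                        if (x < 0 || y < 0 || x >= M || y >= N) continue;
                        if (edge[x, y] == 0) fdD[x, y] = 1;   // condition Edge(x) = 0 from (24)
                    }
            }
        return fdD;
    }
}
```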

After a frame has been selected by the DMM approach, this research proposes the following segmentation using edge based dilation (SUED) algorithm for moving object extraction from UAV aerial images:

(1) Start.
(2) FD_r(m, n, t) ← decomposed difference image between the two block images I_B(m, n, t) and I_B(m, n, t − 1) at times t and t − 1.
(3) FD_f(m, n, t) ← FD_r(m, n, t).
(4) FD_b(m, n, t) ← FD_f(m, n, t).
(5) FD_e(m, n, t) ← FD_b(m, n, t).
(6) FD_d(m, n, t) ← FD_e(m, n, t).
(7) End.

4. Experiment and Discussion

For the experiments, this research used the IMAGE PROCESSING LAB (IPLAB), available at http://www.aforge.net, embedded in Visual Studio 2012 and using the C# programming language. The experimental analysis evaluates the dynamic motion model (DMM) first and then evaluates the proposed SUED embedded with DMM to extract moving objects from UAV aerial images.

Figure 8: (a) FD_b(m, n, t). (b) Conventional dilation FD_b(m, n, t) ⊕ L = { x | x = i + j, i ∈ FD_b(m, n, t), j ∈ L }. (c) Analysis of edge based dilation from FD_e(m, n, t). (d) Edge based dilation FD_d(m, n, t) = FD_e(m, n, t) ∪ { x | x = i + j, i ∈ FD_e(m, n, t), j ∈ L, Edge(x) = 0 }. (e) Output after SUED, FD_e(m, n, t).

Let FD_e(m, n, t) be the labelled result of the SUED segmentation algorithm, shown in Figure 8(e). Each region in this image indicates coherence in intensity and motion. Moving objects are discriminated from the background by the fusion module, which combines DMM (dynamic motion model) and SUED (segmentation using edge based dilation). Let SUED(i, t) be a SUED region in FD(m, n, t) and A_s(i, t) the corresponding area, and let A_c(i, t) be the area of the union of SUED(i, t); the coverage ratio P(i, t) is then defined as

P(i, t) = A_c(i, t) / A_s(i, t).   (25)

If the value of P(i, t) is greater than a given fusion threshold T_s, then SUED(i, t) is considered foreground; otherwise it is background. In general, the threshold T_s varies for different sequences. In this research, T_s is always set to 0.99; it is very close to 1 and does not need to be adjusted, as the obtained area can contain the complete desired object. There may be some smaller blobs in the resulting SUED(i, t); those regions with areas smaller than the threshold T_s are considered noisy regions. Figure 8(e) shows the extracted moving object as the result of the fusion of the DMM and SUED methodology.
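The fusion rule of (25) reduces to a simple ratio test; the areas A_c and A_s are taken as inputs here, since their construction belongs to the fusion module. This is a minimal sketch with illustrative names.

```csharp
// Sketch of (25): a SUED region is kept as foreground when P = Ac / As exceeds Ts (0.99).
public static class DmmSuedFusion
{
    public const double Ts = 0.99;   // fusion threshold used in this research

    public static bool IsForeground(double areaCovered, double areaSued)
    {
        double p = areaCovered / areaSued;   // P(i, t) = Ac(i, t) / As(i, t)
        return p > Ts;
    }
}
```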

4.1. Datasets. This research used two UAV video datasets (actions1.mpg and actions2.mpg) from the Center for Research in Computer Vision (CRCV) at the University of Central Florida (http://crcv.ucf.edu/data/UCF_Aerial_Action.php). These video datasets were obtained using an RC-controlled blimp equipped with an HD camera. The collection represents a diverse pool of action features at different heights and aerial viewpoints. Multiple instances of each action were recorded at different flying altitudes, ranging from 400 to 500 feet, and were performed by different actors.

4.2. Results. This research extracted 395 frames at a rate of 1 frame/second from the actions1.mpg video dataset and 529 frames at the same frame rate from the actions2.mpg video dataset. The frame size is 355 × 216. Figures 6(a) and 6(b) show two consecutive frames (the 101st and 102nd) after the DMM step from actions1.mpg. Figures 7(a) and 7(b) show the pixel structure of FD_r(m, n, t). Figure 8(a) shows FD_b(m, n, t), Figure 8(b) shows the conventional dilation of FD_b(m, n, t), while Figures 8(c) and 8(d) show the edge based dilation. Finally, Figure 8(e) shows the result of the proposed DMM and SUED based moving object detection.

4.2.1. DMM Evaluation. Figures 5(a), 5(b), and 5(c) show three search windows, depicted as black rectangles on the selected frame FD(m, n) at time t. The brighter the moments values of the pixels, the higher the probability that the search window belongs to the moving object. The measured moments are shown in Table 2.


Table 2: Moments measurement from different search windows for SUED.

Search window   Zeroth moment   First moment XY   Second moment X   Second moment Y
1               9.8E6           1.21E11           3.1E11            1.16E11
2               1.6E7           3.94E11           8.69E11           2.95E11
3               9.3E6           2.49E11           5.03E11           1.84E11

Figure 9: 3D line chart of the moments measurement (first moments XY, second moments X, and second moments Y) for search windows 1, 2, and 3.

In Table 2, search window 2 shows the highest moment quantities among the three search windows, which indicates that search window 2 has the highest probability of containing the moving object. Figure 9 shows a 3D line chart depicting the highest pixel intensity probability in search window 2. Based on the highest moments distribution, this research used search window 2 to extract the moving object with SUED, because it limits the scope of segmentation to a smaller area, reduces complexity, and reduces time. This means that the probability that the extraction is lost because of a similarly coloured object in the background is reduced. Furthermore, the limited area increases the processing speed by making the extraction faster.

4.2.2. SUED Evaluation. This section presents some of the experimental analysis and results for the proposed SUED algorithm. The proposed approach was evaluated on the actions1.mpg and actions2.mpg videos. In order to evaluate the SUED algorithm, two metrics, the detection rate (DR) and the false alarm rate (FAR), are defined. These metrics are based on the following parameters (a small computation sketch follows the list):

(i) True positive (TP): detected regions that correspond to a moving object.
(ii) False positive (FP): detected regions that do not correspond to a moving object.
(iii) False negative (FN): moving objects that are not detected.
(iv) Detection rate or precision rate: DR = (TP/(TP + FN)) × 100.
(v) False alarm rate or recall rate: FAR = (FP/(TP + FP)) × 100.
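The two metrics defined above reduce to the following small sketch; with the actions2.mpg counts from Table 3 (TP = 320, FP = 113, FN = 83), it gives approximately 79% and 26%.

```csharp
// DR = TP / (TP + FN) * 100 and FAR = FP / (TP + FP) * 100, as defined in (iv) and (v).
public static class DetectionMetrics
{
    public static double DetectionRate(int tp, int fn) => 100.0 * tp / (tp + fn);

    public static double FalseAlarmRate(int tp, int fp) => 100.0 * fp / (tp + fp);
}
```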

From the dataset actions1.mpg this research extracted 395 frames at 1 frame per second, and from actions2.mpg it extracted 529 frames at the same frame rate. Details of the measured true positives (TP), false positives (FP), false negatives (FN), detection rate (DR), and false alarm rate (FAR) are given in Table 3.

The detection rate increases as the number of input frames increases. The detection rate for the given total frames of the two video datasets is displayed in Figure 10.

The false alarm rate for the given number of frames from the two video datasets is shown in Figure 11.

The recall-precision (RPC) characterization of the performance of the proposed research is given in Figure 12.

This research measures the detection and false alarm rates based on the number of frames extracted from each input video dataset. Compared with [1, 15, 36, 40], which used frame difference approaches where the moments feature for motion modelling was not included, this research achieved a detection rate of 79% (for the video dataset actions2.mpg), which is a good indication for handling the motion of moving objects in future research. Thus, the proposed DMM model helps to reduce the segmentation task by providing the highest probability area to be segmented using the proposed SUED, instead of segmenting the whole area of the given frame, thereby increasing processing speed.

Table 3: Details of measurement of true positive (TP), false positive (FP), false negative (FN), detection rate (DR), and false alarm rate (FAR).

Datasets       Number of frames   TP    FP    FN   DR (%)   FAR (%)
actions1.mpg   395                200   100   75   75       31
actions2.mpg   529                320   113   83   79       26

Figure 10: Detection rate (precision rate) for 100, 395, and 527 extracted frames (approximately 15%, 74%, and 79%).

Figure 11: False alarm rate (recall rate) for 100, 395, and 527 extracted frames (approximately 15%, 31%, and 26%).

Figure 12: RPC characterization of the proposed approach.

5. Conclusion

The primary purpose of this research is to apply a moments based dynamic motion model under the proposed frame difference based segmentation approach, which ensures robust handling of motion, since translation invariant, scale invariant, and rotation invariant moments values are unique. As computer vision leverages probability theory, this research used moments based motion analysis, which provides search windows around the original object and limits the scope of SUED segmentation to a smaller area. This means that the probability that the extraction is lost because of a similarly coloured object in the background is reduced. Since moments are a unique distribution of pixel intensity, the experimental results of the proposed DMM and SUED are very promising for the robust extraction of moving objects from UAV aerial images. Judging from previous research in the computer vision field, it is certain that the proposed research will facilitate UAV operators and related researchers in further research and investigation in areas where access is restricted, rescue areas, human or vehicle identification in specific areas, crowd flux statistics, anomaly detection, intelligent traffic management, and so forth.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by the Ministry of Higher Education Malaysia research grant schemes FRGS/1/2012/SG05/UKM/02/12 and ERGS/1/2012/STG07/UKM/02/7.

References

[1] H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, "Vehicle detection in aerial surveillance using dynamic Bayesian networks," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152–2159, 2012.
[2] V. Reilly, B. Solmaz, and M. Shah, "Shadow casting out of plane (SCOOP) candidates for human and vehicle detection in aerial imagery," International Journal of Computer Vision, vol. 101, no. 2, pp. 350–366, 2013.
[3] Y. Yingqian, L. Fuqiang, W. Ping, L. Pingting, and L. Xiaofeng, "Vehicle detection methods from an unmanned aerial vehicle platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '12), pp. 411–415, 2012.
[4] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[5] Z. Zezhong, W. Xiaoting, Z. Guoqing, and J. Ling, "Vehicle detection based on morphology from highway aerial images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 5997–6000, 2012.
[6] T. Moranduzzo and F. Melgani, "A SIFT-SVM method for detecting cars in UAV images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 6868–6871, 2012.
[7] S. A. Cheraghi and U. U. Sheikh, "Moving object detection using image registration for a moving camera platform," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '12), pp. 355–359, 2012.
[8] H. Bhaskar, "Integrated human target detection, identification and tracking for surveillance applications," in Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS '12), pp. 467–475, 2012.
[9] C. Long, J. Zhiguo, Y. Junli, and M. Yibing, "A coarse-to-fine approach for vehicles detection from aerial images," in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 221–225, 2012.
[10] L. Pingting, L. Fuqiang, L. Xiaofeng, and Y. Yingqian, "Stationary vehicle detection in aerial surveillance with a UAV," in Proceedings of the 8th International Conference on Information Science and Digital Content Technology (ICIDT '12), pp. 567–570, 2012.
[11] A. Kembhavi, D. Harwood, and L. S. Davis, "Vehicle detection using partial least squares," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1250–1265, 2011.
[12] J. Maier and K. Ambrosch, "Distortion compensation for movement detection based on dense optical flow," in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin, et al., Eds., pp. 168–179, Springer, Berlin, Germany, 2011.
[13] M. Tarhan and E. Altug, "A catadioptric and pan-tilt-zoom camera pair object tracking system for UAVs," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 61, no. 1–4, pp. 119–134, 2011.
[14] K. Bhuvaneswari and H. A. Rauf, "Edgelet based human detection and tracking by combined segmentation and soft decision," in Proceedings of the International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), pp. 1–6, June 2009.
[15] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, 2009.
[16] Z. Jiang, W. Ding, and H. Li, "Aerial video image object detection and tracing based on motion vector compensation and statistic analysis," in Proceedings of the 1st Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics (PrimeAsia '09), pp. 302–305, November 2009.
[17] Y.-C. Chung and Z. He, "Low-complexity and reliable moving objects detection and tracking for aerial video surveillance with small UAVs," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '07), pp. 2670–2673, May 2007.
[18] D. Wei, G. Zhenbang, X. Shaorong, and Z. Hairong, "Real-time vision-based object tracking from a moving platform in the air," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 681–685, October 2006.
[19] S. Zhang, "Object tracking in Unmanned Aerial Vehicle (UAV) videos using a combined approach," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), pp. 681–684, March 2005.
[20] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[21] I. H. Mtir, K. Kaaniche, M. Chtourou, and P. Vasseur, "Aerial sequence registration for vehicle detection," in Proceedings of the 9th International Multi-Conference on Systems, Signals and Devices (SSD '12), pp. 1–6, 2012.
[22] S. Mohamed and K. Takaya, "Motion vector search for 2.5D modeling of moving objects in a video scene," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265–268, May 2005.
[23] R. Li, S. Yu, and X. Yang, "Efficient spatio-temporal segmentation for extracting moving objects in video sequences," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1161–1167, 2007.
[24] Y. Zhou, Y. Liu, and J. Zhang, "A video semantic object extraction method based on motion feature and visual attention," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 845–849, October 2010.
[25] T. Dutta, D. P. Dogra, and B. Jana, "Object extraction using novel region merging and multidimensional features," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 356–361, November 2010.
[26] J. Pan, S. Li, and Y.-Q. Zhang, "Automatic extraction of moving objects using multiple features and multiple frames," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '00), pp. 36–39, Geneva, Switzerland, May 2000.
[27] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Real time vision based object detection from UAV aerial images: a conceptual framework," in Intelligent Robotics Systems: Inspiring the NEXT, K. Omar, M. Nordin, P. Vadakkepat, et al., Eds., vol. 376, pp. 265–274, Springer, Berlin, Germany, 2013.
[28] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive motion pattern analysis for machine vision based moving detection from UAV aerial images," in Advances in Visual Informatics, H. Zaman, P. Robinson, P. Olivier, T. Shih, and S. Velastin, Eds., pp. 104–114, Springer, 2013.
[29] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "A review of machine vision based on moving objects: object detection from UAV aerial images," International Journal of Advanced Computer Technology, vol. 5, no. 15, pp. 57–72, 2013.
[30] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive long term motion pattern analysis for moving object detection using UAV aerial images," International Journal of Information System and Engineering, vol. 1, no. 1, pp. 50–59, 2013.
[31] H. H. Trihaminto, A. S. Prabuwono, A. F. M. S. Saif, and T. B. Adji, "A conceptual framework: dynamic path planning system for simultaneous localization and mapping multirotor UAV," Advanced Science Letters, vol. 20, no. 1, pp. 299–303, 2014.
[32] M. Sayyouri, A. Hmimid, and H. Qjidaa, "Improving the performance of image classification by Hahn moment invariants," Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 30, no. 11, pp. 2381–2394, 2013.
[33] R. Ozawa and F. Chaumette, "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach," Advanced Robotics, vol. 27, no. 9, pp. 683–696, 2013.
[34] G. Jia, X. Wang, H. Wei, and Z. Zhang, "Modeling image motion in airborne three-line-array (TLA) push-broom cameras," Photogrammetric Engineering and Remote Sensing, vol. 79, no. 1, pp. 67–78, 2013.
[35] J. Guo, D. Gu, S. Li, and H. Chang, "Moving object detection using motion enhancement and color distribution comparison," Journal of Beijing University of Aeronautics and Astronautics, vol. 38, no. 2, pp. 263–272, 2012.
[36] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2065–2070, 2011.
[37] A. Gaszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, Proceedings of SPIE, January 2011.
[38] M. Besita Augustin, S. Juliet, and S. Palanikumar, "Motion and feature based person tracking in surveillance videos," in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 605–609, March 2011.
[39] S. Wang, "Vehicle detection on aerial images by extracting corner features for rotational invariant shape matching," in Proceedings of the 11th IEEE International Conference on Computer and Information Technology, pp. 171–175, September 2011.
[40] X. Wang, C. Liu, L. Gong, L. Long, and S. Gong, "Pedestrian recognition based on saliency detection and Kalman filter algorithm in aerial video," in Proceedings of the 7th International Conference on Computational Intelligence and Security (CIS '11), pp. 1188–1192, December 2011.
[41] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, "A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera," Remote Sensing, vol. 4, no. 4, pp. 1090–1111, 2012.
[42] C.-H. Huang, Y.-T. Wu, and J.-H. Kao, "A hybrid moving object detection method for aerial images," in Advances in Multimedia Information Processing: PCM 2010, G. Qiu, K. Lam, and H. Kiya, Eds., pp. 357–368, Springer, Berlin, Germany, 2010.
[43] A. W. N. Ibrahim, P. W. Ching, G. L. Gerald Seet, W. S. Michael Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 456–461, November 2010.
[44] O. Oreifej, R. Mehran, and M. Shah, "Human identity recognition in aerial images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 709–716, June 2010.
[45] C. Wu, X. Cao, R. Lin, and F. Wang, "Registration-based moving vehicle detection for low-altitude urban traffic surveillance," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 373–378, July 2010.
[46] Q. Yu and G. Medioni, "Motion pattern interpretation and detection for tracking moving vehicles in airborne video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2671–2678, June 2009.
[47] M. Teutsch and W. Kruger, "Spatio-temporal fusion of object segmentation approaches for moving distant targets," in Proceedings of the 15th International Conference on Information Fusion (FUSION '12), pp. 1988–1995, 2012.
[48] C. Guilmart, S. Herbin, and P. Perez, "Context-driven moving object detection in aerial scenes with user input," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1781–1784, September 2011.
[49] M. Teutsch and W. Kruger, "Detection, segmentation, and tracking of moving objects in UAV videos," in Proceedings of the IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance (AVSS '12), pp. 313–318, 2012.
[50] R. B. Rodrigues, S. Pellegrino, and H. Pistori, "Combining color and haar wavelet responses for aerial image classification," in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., pp. 583–591, Berlin, Germany, 2012.
[51] S. G. Fowers, D. J. Lee, D. A. Ventura, and J. K. Archibald, "The nature-inspired BASIS feature descriptor for UAV imagery and its hardware implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756–768, 2013.
[52] J. Lu, P. Fang, and Y. Tian, "An objects detection framework in UAV videos," in Advances in Computer Science and Education Applications, M. Zhou and H. Tan, Eds., pp. 113–119, Springer, Berlin, Germany, 2011.
[53] A. Camargo, Q. He, and K. Palaniappan, "Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 8355 of Proceedings of SPIE, 2012.
[54] M. Teutsch, W. Kruger, and N. Heinze, "Detection and classification of moving objects from UAVs with optical sensors," in Signal Processing, Sensor Fusion, and Target Recognition, vol. 8050 of Proceedings of SPIE, April 2011.
[55] L. L. Coulter, D. A. Stow, and Y. H. Tsai, "Automated detection of people and vehicles in natural environments using high temporal resolution airborne remote sensing," in Proceedings of the ASPRS Annual Conference, pp. 78–90, 2012.
[56] L. Wang, H. Zhao, S. Guo, Y. Mai, and S. Liu, "The adaptive compensation algorithm for small UAV image stabilization," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 4391–4394, 2012.
[57] M. Bhaskaranand and J. D. Gibson, "Low-complexity video encoding for UAV reconnaissance and surveillance," in Proceedings of the IEEE Military Communications Conference (MILCOM '11), pp. 1633–1638, November 2011.
[58] X. Tan, X. Yu, J. Liu, and W. Huang, "Object tracking based on unmanned aerial vehicle video," in Applied Informatics and Communication, D. Zeng, Ed., pp. 432–438, Springer, Berlin, Germany, 2011.
[59] Z. Lin, C. Ren, N. Yao, and F. Xie, "A shadow compensation method for aerial image," Geomatics and Information Science of Wuhan University, vol. 38, no. 4, pp. 431–435, 2013.
[60] K. Imamura, N. Kubo, and H. Hashimoto, "Automatic moving object extraction using X-means clustering," in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 246–249, December 2010.
[61] N. Suo and X. Qian, "A comparison of features extraction method for HMM-based motion recognition," in Proceedings of the 2nd Conference on Environmental Science and Information Application Technology (ESIAT '10), pp. 636–639, July 2010.
[62] Z. Shoujuan and Z. Quan, "New feature extraction algorithm for satellite image non-linear small objects," in Proceedings of the IEEE Symposium on Electrical & Electronics Engineering (EEESYM '12), pp. 626–629, 2012.
[63] X. F. Wu, X. S. Wang, and H. Z. Lu, "Motion feature extraction for stepped frequency radar based on Hough transform," IET Radar, Sonar & Navigation, vol. 4, no. 1, pp. 17–27, 2010.
[64] K. Huang, Y. Zhang, and T. Tan, "A discriminative model of motion and cross ratio for view-invariant action recognition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2187–2197, 2012.
[65] R. Porter, A. M. Fraser, and D. Hush, "Wide-area motion imagery," IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 56–65, 2010.


Page 2: Moving Object Detection Using Dynamic Motion Modelling from UAV

2 The Scientific World Journal

Figure 1: Existing approaches for motion based object detection: illumination compensation, parallax filtering, contextual information, and long-term motion pattern analysis.

Table 1: Comparison of parameters for various motion analysis approaches.

Parameters                | Illumination compensation | Parallax filtering | Contextual information | Long-term motion analysis
Strong parallax situation | No   | No   | No   | Yes
Level of motion detection | Low  | Low  | Low  | Low and high
Environmental condition   | No   | No   | Yes  | Yes
Lot of parameters         | Yes  | Yes  | Yes  | No
Computational complexity  | High | High | High | High

segmentation approach and can robustly deliver optimum detection performance.

2. Background Study

For accurate detection, motion must be detected using suitable methods, which are affected by a number of practical problems such as motion change over time and the unfixed direction of the moving object. Motion pattern analysis before detecting each moving object has started to receive attention in recent years, especially for crowded scenarios where detecting each individual is very difficult [46]. Through the modelling of object motion, the detection task becomes easier and noise can also be handled.

As the scene may contain different motion patterns at one location within a period of time (e.g., a road intersection), averaging or filtering before knowing the local structure of the motion patterns may destroy that structure. This paper proposes to use effective motion analysis for moving object extraction. Figure 1 shows four existing approaches for motion analysis.

The global illumination compensation approach works on brightness or illumination changes; because of its dependency on brightness in the real world, research on this approach has not progressed far. For the parallax filtering approach, a scene that contains strong parallax is still difficult for existing methods to segment well [46]. In the contextual information approach, contextual information has been applied to improve the detection of moving objects from UAV aerial images [46, 48]; this method assumes low level motion detection, and errors in low level motion segmentation under strong parallax are not considered. For the long-term motion pattern analysis approach [46], there is scope to use context information to improve both low level motion segmentation and high level reacquisition, even when there is only one single object in the scene [46]; however, this approach still needs further research to detect moving objects robustly.

Table 1 compares the suitability of the different approaches for motion based moving object detection from UAV aerial images. In Table 1, low level motion indicates motion without parallax, and high level motion indicates motion with parallax. Among these four types, the first three do not capture a distinctive property of the moving object: environmental conditions limit the first type, while the large number of parameter calculations needed by the second, third, and fourth types increases computational complexity.

Indeed, detection of motion and detection of the object are coupled: if proper motion detection is done, detection of the moving object from UAV aerial images becomes easier. Very little research concentrates on adaptive, robust handling of noise and unfixed motion change, as well as unfixed moving object direction. For that reason, an adaptive and dynamic motion analysis framework is needed for better detection of moving objects from UAV aerial images, where the overall motion analysis reduces the dependency on parameters. In other words, detection of motion means detection of motion pixels in the frames, which can be described as some function of the image pixel intensity; pixel intensity is simply the pixel color value. Moments are described with respect to their power, as in raised-to-the-power in mathematics. Very little previous research has used image moments for motion analysis. Thus, this paper proposes to use image moments before segmenting individual objects and to use the motion pattern, in turn, to facilitate the detection in each frame.

Moving object detection from UAV aerial images involves dealing with proper motion analysis. Previously, very few researchers used methods that involve effective motion analysis. In [1, 16], the authors proposed a Bayesian network method, which depends on a fixed-shape object constraint and also did not overcome the aspect ratio constraint. The image registration method does not suit, because the detection rate decreases as the number of motion blocks increases. Clustering based approaches do not suit well because of the complexity of the shortening environment and inconspicuous object features [10, 17]. The scale invariant feature transform (SIFT) uses only key points of the object and does not suit noisy environments well [6]. The cascade classifier based approach uses gray scale input images, which are unrealistic in real-time object detection [9]. The symmetric property based approach gives good results only for structured object shapes [5, 11]. The background subtraction approach does not overcome the blob-related problems mentioned above for the registration method [8, 13, 38]. The shadow based approach depends on lighting conditions [2, 7, 59]. The region based appearance matching approach does not give optimum results for crowded scenarios [44, 54]. The histogram of oriented gradients (HOG) approach rejects object backgrounds [36].

Figure 2: Detection rate [1, 15, 36, 40] and percentage of use of frame difference and segmentation approaches in previous research.

Among these methods, only the frame difference approach involves motion analysis, although most previous research does not provide proper motion estimation to handle the six uncertainty constraint factors (UCF) [29]. Frame difference causes the moving object to break into pieces because of the object's color homogeneity, even though it captures the motion information. Frame difference based approaches first register two consecutive frames and then compute the frame difference to find moving objects. These approaches are faster but usually cause a moving object to break into pieces, especially when the object's color distribution is homogeneous; frame difference detects the pixels with motion but cannot obtain a complete object [14, 37]. A segmentation based approach can overcome these shortcomings of the frame difference based approach. Image segmentation is used to determine the candidates of moving objects in each video frame [42]; it extracts a more complete shape of the objects and reduces the computation cost for moving object detection from UAV aerial images [39, 47]. However, it does not have the ability to distinguish moving regions from the static background. Using frame difference and segmentation together can achieve optimum detection performance, but the research in [42, 44] did not achieve reliable performance because it approximates a pure plane and requires the complexity to be low. Low detection rate and high computation time are the current research problems when applying frame difference and segmentation together [1, 15, 36, 40], as shown in Figure 2. Applying frame difference or segmentation separately, as is currently done, does not include motion analysis, which would increase the detection rate with low computational complexity.

This paper states that, since frame difference alone cannot obtain the motion of the complete object and segmentation does not have the ability to differentiate moving regions from the static background, applying frame difference and segmentation together is expected to give optimum detection results with high detection speed for moving object detection from UAV aerial images, instead of applying frame difference or segmentation separately. For that reason, this paper proposes moments based motion analysis applied under a frame difference based segmentation approach (SUED), which ensures the robustness of the proposed methodology.

3. Research Methodology

The proposed moments based motion modelling is described in Section 3.1, and the frame difference based segmentation approach, segmentation using edge based dilation (SUED), is described in Section 3.2. Each part of the methodology is proposed with a new approach to ensure the robustness and accuracy of the detection approach.

3.1. Proposed Dynamic Motion Model (DMM). In computer vision and information theory, moments are the uncertainty measurement of the object pixels. An image moment is a particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful for describing objects after segmentation. Properties of the image that are found via image moments are the centroid, the area of intensity, and the object orientation, as shown in Figure 3.

Each of these properties needs to be invariant in the following senses: translation invariant, scale invariant, and finally rotation invariant. A feature is called translation invariant if it does not change under different positions in space; in other words, translation invariance means that a particular translation does not change the object. Scale invariance is a feature of objects or laws that does not change if scales of length, energy, or other variables are multiplied by a common factor; the technical term for this process is dilation. A feature is said to be rotation invariant if its value does not change when arbitrary rotations are applied to its argument.

Figure 3: Properties of image moments: centroid, area of intensity, and object orientation.

Image moments can simply be described as some function of the image pixel intensity, where pixel intensity is simply the pixel color value, and moments are described with respect to their power, as in raised-to-the-power in mathematics. This research calculates the zeroth moment, first moments, second moments, and so forth from the raw moments; these moments are then transformed into translation, scale, and rotation invariant forms. The organized structure used to calculate the moments is shown in Figure 4.

Figure 4: Flow of the moments calculation in the proposed research: raw moments (zeroth moment M00; first moments X, Y, and XY: M10, M01, M11; second moments X and Y: M20, M02), centroid coordinates m̄ = M10/M00 and n̄ = M01/M00, central moments, and the translation, scale, and rotation invariant forms.

3.1.1. Raw Moments. Before finding the central moments, it is necessary to find the raw moments of FD(m, n) in frame t of the video sequence. If m̄ and n̄ are the components of the centroid, the raw moment of FD(m, n) of order (p + q) is defined as

$$M_{pq} = \sum_{m}\sum_{n} m^{p} n^{q}\, FD(m,n). \qquad (1)$$

If FD(m, n) is treated as a 2D continuous function, (1) can be expressed as

$$M_{pq} = \iint m^{p} n^{q}\, FD(m,n)\, dm\, dn, \qquad (2)$$

where the centroid coordinates and the low-order raw moments are

$$\bar{m} = \frac{M_{10}}{M_{00}}, \qquad \bar{n} = \frac{M_{01}}{M_{00}},$$

$$M_{00} = \text{zeroth moment} = \sum_{m}\sum_{n} m^{0} n^{0}\, FD(m,n) = \sum_{m}\sum_{n} FD(m,n),$$

$$M_{10} = \text{first moment } X = \sum_{m}\sum_{n} m^{1} n^{0}\, FD(m,n) = \sum_{m}\sum_{n} m\, FD(m,n),$$

$$M_{01} = \text{first moment } Y = \sum_{m}\sum_{n} m^{0} n^{1}\, FD(m,n) = \sum_{m}\sum_{n} n\, FD(m,n),$$

$$M_{11} = \text{first moment } XY = \sum_{m}\sum_{n} m^{1} n^{1}\, FD(m,n) = \sum_{m}\sum_{n} mn\, FD(m,n),$$

$$M_{20} = \text{second moment } X = \sum_{m}\sum_{n} m^{2} n^{0}\, FD(m,n) = \sum_{m}\sum_{n} m^{2}\, FD(m,n),$$

$$M_{02} = \text{second moment } Y = \sum_{m}\sum_{n} m^{0} n^{2}\, FD(m,n) = \sum_{m}\sum_{n} n^{2}\, FD(m,n). \qquad (3)$$
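To make the moment definitions in (1)-(3) concrete, the short sketch below computes the low-order raw moments and the centroid of a difference image stored as a 2D array. It is an illustrative NumPy sketch only (the experiments in this paper were implemented in C# with IPLab/AForge.NET); the array fd and its blob are hypothetical.

```python
import numpy as np

def raw_moments(fd):
    """Raw moments M_pq of a 2D intensity array fd, following (1)-(3)."""
    m_idx, n_idx = np.indices(fd.shape)  # m: row coordinate, n: column coordinate
    orders = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
    return {(p, q): np.sum((m_idx ** p) * (n_idx ** q) * fd) for p, q in orders}

def centroid(M):
    """Centroid coordinates (m_bar, n_bar) = (M10 / M00, M01 / M00)."""
    return M[(1, 0)] / M[(0, 0)], M[(0, 1)] / M[(0, 0)]

# Hypothetical difference image with one bright blob standing in for moving pixels.
fd = np.zeros((64, 64))
fd[20:30, 35:50] = 1.0
M = raw_moments(fd)
print("M00 =", M[(0, 0)], " centroid =", centroid(M))
```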

3.1.2. Central Moments. Using the centroid coordinates, the central moments of FD(m, n) are defined as

$$\mu_{pq} = \sum_{m}\sum_{n} (m-\bar{m})^{p} (n-\bar{n})^{q}\, FD(m,n). \qquad (4)$$


(1) Translation Invariant. To make FD(m, n) translation invariant, $\mu_{pq}$ can be derived from the raw moments as follows:

$$\text{(i)}\ \mu_{pq} = \iint (m-\bar{m})^{p}(n-\bar{n})^{q}\, FD(m,n)\, dm\, dn,$$

$$\text{(ii)}\ \mu_{pq} = \iint \sum_{r=0}^{p}\binom{p}{r} m^{r}(-\bar{m})^{p-r} \times \sum_{s=0}^{q}\binom{q}{s} n^{s}(-\bar{n})^{q-s}\, FD(m,n)\, dm\, dn \quad \Bigl(\text{by the binomial formula } (a+b)^{k}=\sum_{r=0}^{k}\binom{k}{r}a^{r}b^{k-r}\Bigr),$$

$$\text{(iii)}\ \mu_{pq} = \sum_{r=0}^{p}\sum_{s=0}^{q}\binom{p}{r}\binom{q}{s}(-\bar{m})^{p-r}(-\bar{n})^{q-s} \iint m^{r} n^{s}\, FD(m,n)\, dm\, dn \quad (\text{rearranging}),$$

$$\text{(iv)}\ \mu_{pq} = \sum_{r=0}^{p}\sum_{s=0}^{q}\binom{p}{r}\binom{q}{s}(-\bar{m})^{p-r}(-\bar{n})^{q-s}\, M_{rs} \quad (\text{using (2)}). \qquad (5)$$

Equation (5) expresses the central moments of FD(m, n), which are translation invariant and derived from the raw moments.
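The relation in (5) means the central moments can be obtained from the raw moments alone and that they do not change when the object is translated. The sketch below (again an illustrative NumPy sketch, not the authors' implementation) computes μ_pq directly about the centroid and checks the invariance on a shifted copy of a hypothetical blob.

```python
import numpy as np

def central_moments(fd, orders=((1, 1), (2, 0), (0, 2))):
    """Central moments mu_pq about the centroid, as in (4)-(5)."""
    m_idx, n_idx = np.indices(fd.shape)
    m00 = fd.sum()
    m_bar = (m_idx * fd).sum() / m00
    n_bar = (n_idx * fd).sum() / m00
    return {(p, q): (((m_idx - m_bar) ** p) * ((n_idx - n_bar) ** q) * fd).sum()
            for p, q in orders}

fd = np.zeros((64, 64))
fd[20:30, 35:50] = 1.0
shifted = np.roll(fd, shift=(7, -5), axis=(0, 1))      # translated copy of the blob
print(central_moments(fd)[(2, 0)], central_moments(shifted)[(2, 0)])  # identical values
```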

(2) Scale Invariant. Let FD'(m, n) be the scaled image of FD(m, n), scaled by λ, so that

$$FD'(m,n) = FD\!\left(\frac{m}{\lambda}, \frac{n}{\lambda}\right), \qquad (6)$$

where

$$m' = \frac{m}{\lambda}, \quad m = \lambda m', \quad dm = \lambda\, dm', \qquad n' = \frac{n}{\lambda}, \quad n = \lambda n', \quad dn = \lambda\, dn'. \qquad (7)$$

From (2), the following can be written:

$$\text{(i)}\ \mu'_{pq} = \iint m^{p} n^{q}\, FD\!\left(\frac{m}{\lambda}, \frac{n}{\lambda}\right) dm\, dn,$$

$$\text{(ii)}\ \mu'_{pq} = \iint (\lambda m')^{p} (\lambda n')^{q}\, FD(m', n')\, \lambda\, dm'\, \lambda\, dn',$$

$$\text{(iii)}\ \mu'_{pq} = \lambda^{p}\lambda^{q}\lambda^{2} \iint (m')^{p}(n')^{q}\, FD(m', n')\, dm'\, dn',$$

$$\text{(iv)}\ \mu'_{pq} = \lambda^{p+q+2}\, \mu_{pq} \quad (\text{using (2)}). \qquad (8)$$

For p = 0, q = 0, and assuming the total area to be 1, (8) becomes

$$\text{(i)}\ \mu'_{00} = \lambda^{2}\mu_{00}, \qquad \text{(ii)}\ \lambda^{2}\mu_{00} = 1, \qquad \text{(iii)}\ \lambda^{2} = \frac{1}{\mu_{00}}, \qquad \text{(iv)}\ \lambda = \mu_{00}^{-1/2}. \qquad (9)$$

Substituting λ into (8) gives $\eta_{pq} = (\mu_{00})^{-(p+q+2)/2}\,\mu_{pq}$, that is,

$$\eta_{pq} = \frac{\mu_{pq}}{(\mu_{00})^{(p+q+2)/2}}. \qquad (10)$$

Equation (10) gives the scale-invariant central moments of FD(m, n).
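Equation (10) simply normalizes each central moment by a power of μ00, so a scaled copy of the object yields the same values. A minimal sketch of the normalization, under the same illustrative assumptions as above, is:

```python
import numpy as np

def scale_invariant_moments(fd, orders=((1, 1), (2, 0), (0, 2))):
    """Normalized central moments eta_pq = mu_pq / mu_00^((p + q + 2) / 2), as in (10)."""
    m_idx, n_idx = np.indices(fd.shape)
    mu00 = fd.sum()
    m_bar = (m_idx * fd).sum() / mu00
    n_bar = (n_idx * fd).sum() / mu00
    eta = {}
    for p, q in orders:
        mu_pq = (((m_idx - m_bar) ** p) * ((n_idx - n_bar) ** q) * fd).sum()
        eta[(p, q)] = mu_pq / mu00 ** ((p + q + 2) / 2.0)
    return eta
```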

(3) Rotation Invariant. Let FD'(m, n) be the image of FD(m, n) rotated by θ, so that

$$FD'(m,n) = FD(m\cos\theta + n\sin\theta,\ -m\sin\theta + n\cos\theta). \qquad (11)$$

Using the variable transformation

$$m' = m\cos\theta + n\sin\theta, \qquad (12)$$
$$m = m'\cos\theta - n'\sin\theta, \qquad (13)$$
$$dm = \cos\theta\, dm', \qquad (14)$$
$$n' = -m\sin\theta + n\cos\theta, \qquad (15)$$
$$n = m'\sin\theta + n'\cos\theta, \qquad (16)$$
$$dn = \cos\theta\, dn', \qquad (17)$$

and using (2),

$$\text{(i)}\ \mu'_{pq} = \iint m^{p} n^{q}\, FD'(m,n)\, dm\, dn,$$

$$\text{(ii)}\ \mu'_{pq} = \iint m^{p} n^{q}\, FD(m\cos\theta + n\sin\theta,\ -m\sin\theta + n\cos\theta)\, dm\, dn \quad (\text{using (11)}),$$

$$\text{(iii)}\ \mu'_{pq} = \iint (m'\cos\theta - n'\sin\theta)^{p} (m'\sin\theta + n'\cos\theta)^{q}\, FD(m', n')\, \cos^{2}\theta\, dm'\, dn' \quad (\text{using (11)-(17)}). \qquad (18)$$

The rotation-invariant orientation θ can then be expressed as

$$\theta = \frac{1}{2}\arctan\!\left(\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right). \qquad (19)$$


Figure 5: (a) Search window 1 for FD(m, n) at time t; (b) search window 2 for FD(m, n) at time t; (c) search window 3 for FD(m, n) at time t.

Figure 6: (a) I_B(m, n, t) at frame 101; (b) I_B(m, n, t) at frame 102; (c) decomposition of I_B(m, n, t) into an M × N block grid.

Here θ in (19) is the rotation-invariant central moment measurement of FD(m, n).
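Given second-order central moments (for example from the central_moments sketch above, an assumed helper), the orientation in (19) is a one-line computation. Using arctan2 instead of a plain arctangent is a practical choice to keep the angle defined when μ20 = μ02; it is not prescribed by the paper.

```python
import numpy as np

def orientation(mu):
    """Orientation angle theta of (19), in radians, from mu[(1, 1)], mu[(2, 0)], mu[(0, 2)]."""
    return 0.5 * np.arctan2(2.0 * mu[(1, 1)], mu[(2, 0)] - mu[(0, 2)])
```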

The output of the moments calculated in this research is depicted in Figures 5(a)-5(c). Within a search window, the higher the moment values (in particular the first moment XY), the higher the probability that the search window contains the moving object.

Before applying the newly proposed SUED algorithm, the input frame is projected onto the search window (the black box) selected by the DMM model only, to reduce the risk of extraction failure. This research emphasizes that using a search window around the original object is very important: it limits the scope of segmentation to a smaller area, which means that the probability of the extraction getting lost because of a similarly coloured object in the background is reduced. Furthermore, the limited area increases the processing speed, making the extraction very fast.

3.2. SUED Segmentation Algorithm. This research uses morphological dilation to ensure that the extracted moving region contains the moving object; the output of morphological dilation is an edged image. First, each frame is decomposed into M × N uniform, nonoverlapping blocks of size B × B, as shown in Figure 6(c). Figures 6(a) and 6(b) show the original images at frames 101 and 102, obtained after the DMM step.

Let I(x, y, t) be the original frame at time t in the video sequence, where (x, y) denotes a pixel position in the original frame, and let I_B(m, n, t) be the corresponding decomposed image, where (m, n) denotes the block position of the highest feature-density area in the decomposed image; this ensures robustness to noise, while the feature makes each block sensitive to motion. I_B(m, n, t) is defined by

$$I_{B}(m,n,t) = \operatorname{mean}(m,n,t) + \frac{\alpha}{\beta^{2}}\underbrace{\bigl(N_{1}(m,n,t) - N_{-1}(m,n,t)\bigr)}_{\text{densed feature}}, \qquad (20)$$

where (m, n) is the feature-densed block, α is a random constant smaller than 1, mean(m, n, t) is the mean gray level of all pixels within block (m, n) at frame t, N₁(m, n, t) is the number of pixels with gray levels greater than mean(m, n, t), and N₋₁(m, n, t) is the number of pixels with gray levels smaller than mean(m, n, t). From (20), the difference image FD_r(m, n, t) of two consecutive block images is obtained by

$$FD_{r}(m,n,t) = \operatorname{round}\!\left(\frac{\bigl|I_{B}(m,n,t) - I_{B}(m,n,t-1)\bigr|}{FD_{\max}(t)} \times 256\right), \qquad (21)$$

where FD_r(m, n, t) is the quantized image after the rounding operation and FD_max(t) is the maximum value of the difference at time t. Using (21), the resultant difference image is shown in Figures 7(a) and 7(b).

Figure 7: (a) Difference pixel structure of FD_r(m, n, t); (b) difference image FD_r(m, n, t) obtained using (21).

FD_r(m, n, t) is then filtered by a 3 × 3 median filter, and FD_f(m, n, t) is obtained by

$$FD_{f}(m,n,t) = \begin{cases} 1, & FD_{r}(m,n,t) \geq T(t), \\ 0, & \text{otherwise}, \end{cases} \qquad (22)$$

where T(t) = (mean of all blocks in FD_r(m, n, t) at time t) + positive weighting parameter × (largest peak of the histogram of FD_f(m, n, t) − largest peak of the histogram of FD_r(m, n, t)). The binary image FD_b(m, n, t) is obtained by the following condition, depicted in Figure 8(a):

$$\text{If } FD_{f}(m,n,t) = 1, \text{ then } FD_{b}(m,n,t) \leftarrow FD_{f}(m,n,t); \text{ otherwise } FD_{b}(m,n,t) = 0. \qquad (23)$$
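A sketch of the filtering and thresholding in (22)-(23) follows. The 3 × 3 median filter is written directly in NumPy to keep the example self-contained; the weighting parameter w, thresholding the median-filtered image, and reading "largest peak" as the highest histogram count are illustrative interpretations of the text rather than the authors' implementation.

```python
import numpy as np

def median3x3(a):
    """Plain 3 x 3 median filter with edge padding (no external dependencies)."""
    p = np.pad(a, 1, mode="edge")
    shifts = [p[i:i + a.shape[0], j:j + a.shape[1]] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

def binarize_difference(fd_r, w=0.5):
    """FD_f and FD_b from (22)-(23) using the adaptive threshold T(t)."""
    fd_med = median3x3(fd_r)                                   # median-filtered FD_r
    hist_r, _ = np.histogram(fd_r, bins=256, range=(0, 256))
    hist_f, _ = np.histogram(fd_med, bins=256, range=(0, 256))
    T = fd_r.mean() + w * (hist_f.max() - hist_r.max())        # T(t) as described after (22)
    fd_f = (fd_med >= T).astype(np.uint8)                      # (22)
    fd_b = np.where(fd_f == 1, fd_f, 0)                        # (23): keep flagged blocks only
    return fd_b
```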

FD_b(m, n, t) may have discontinuous boundaries and holes. To ensure that FD_b(m, n, t) contains the moving object, edge-based morphological dilation is proposed here. FD_e(m, n, t) represents the edge image Edge(x), which can be obtained with a gradient operator such as the Sobel operator. FD_d(m, n, t), the edge-based morphological dilation of the image FD_e(m, n, t), is obtained by

$$FD_{d}(m,n,t) = FD_{e}(m,n,t) \cup \{x \mid x = i + j,\ i \in FD_{e}(m,n,t),\ j \in L,\ \mathrm{Edge}(x) = 0\}, \qquad (24)$$

where L is the structuring element containing the elements i, j. Equation (24) is regarded as edge-based dilation, which is expected to prevent undesired regions from being integrated into the result, according to the edge-based morphological dilation characteristics shown in Figure 8(d). Here {x | x = i + j, i ∈ FD_e(m, n, t), j ∈ L} is the conventional dilation applied after obtaining FD_e(m, n, t), which can be defined from FD_b(m, n, t) as FD_b(m, n, t) ⊕ L = {x | x = i + j, i ∈ FD_b(m, n, t), j ∈ L}, shown in Figure 8(b).
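Read literally, (24) adds to the edge image every pixel x = i + j reachable from an edge pixel i through the structuring element L for which Edge(x) = 0, then unites the result with FD_e. The sketch below implements that set expression directly; the 3 × 3 structuring element and the separate edge argument (which the paper identifies with FD_e itself) are illustrative assumptions.

```python
import numpy as np

# Assumed 3 x 3 structuring element L of offsets (the j values in (24)).
L = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1)]

def edge_based_dilation(fd_e, edge):
    """FD_d from (24): FD_e united with {x = i + j : i in FD_e, j in L, Edge(x) = 0}."""
    H, W = fd_e.shape
    fd_d = fd_e.copy()
    for r, c in zip(*np.nonzero(fd_e)):          # every pixel i in FD_e
        for dr, dc in L:                         # every offset j in L
            x, y = r + dr, c + dc
            if 0 <= x < H and 0 <= y < W and edge[x, y] == 0:
                fd_d[x, y] = 1
    return fd_d
```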

This research proposes the following segmentation using edge-based dilation (SUED) algorithm for moving object extraction from UAV aerial images, applied to the frame obtained from the DMM approach (a compact sketch of these steps is given after the list):

(1) Start.
(2) FD_r(m, n, t) ← decomposed difference image between the two block images I_B(m, n, t) and I_B(m, n, t − 1) at times t and t − 1, using (20)-(21).
(3) FD_f(m, n, t) ← FD_r(m, n, t), by thresholding with (22).
(4) FD_b(m, n, t) ← FD_f(m, n, t), by the binarization condition (23).
(5) FD_e(m, n, t) ← FD_b(m, n, t), by edge extraction.
(6) FD_d(m, n, t) ← FD_e(m, n, t), by edge-based dilation (24).
(7) End.
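For orientation, the steps can be chained as below. This composition reuses the hypothetical helpers sketched in the preceding subsections (block_feature_image, quantized_difference, binarize_difference, edge_based_dilation) and a crude gradient edge map in place of the Sobel step, so it is a sketch of the control flow rather than the authors' implementation.

```python
import numpy as np

def simple_edges(mask):
    """Crude gradient-magnitude edge map of a binary mask (stand-in for the Sobel step)."""
    gy, gx = np.gradient(mask.astype(float))
    return (np.hypot(gx, gy) > 0).astype(np.uint8)

def sued(frame_t, frame_prev, B=8):
    """SUED steps (2)-(6) for one frame pair, composing the helper sketches above."""
    fd_r = quantized_difference(block_feature_image(frame_t, B),
                                block_feature_image(frame_prev, B))   # (20)-(21)
    fd_b = binarize_difference(fd_r)                                  # (22)-(23)
    fd_e = simple_edges(fd_b)                                         # edge image FD_e
    return edge_based_dilation(fd_e, fd_e)                            # (24)
```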

4. Experiment and Discussion

For the experiments, this research used the Image Processing Lab (IPLab), available at http://www.aforge.net/, which is embedded with Visual Studio 2012 using the C# programming language. The proposed experimental analysis evaluates the dynamic motion modelling (DMM) first and then evaluates the proposed SUED embedded with DMM to extract moving objects from UAV aerial images.

Let FD_e(m, n, t) be the labeled result of the SUED segmentation algorithm, shown in Figure 8(e). Each region in this image indicates coherence in intensity and motion. Moving objects are discriminated from the background by the fusion module that combines DMM (dynamic motion modelling) and SUED (segmentation using edge-based dilation). Let SUED(i, t) be a SUED region in FD(m, n, t) and A_s(i, t) its corresponding area, and let A_c(i, t) be the area of the union of SUED(i, t); the coverage ratio P(i, t) is then defined as

$$P(i,t) = \frac{A_{c}(i,t)}{A_{s}(i,t)}. \qquad (25)$$

If the value of P(i, t) is greater than a given fusion threshold T_s, then SUED(i, t) is considered foreground; otherwise it is considered background. In general, the threshold T_s varies for different sequences. In this research T_s is always set to 0.99; it is very close to 1 and does not need to be adjusted, as the obtained area can contain the complete desired object. There may be some smaller blobs in the resulting SUED(i, t); regions with areas smaller than the threshold T_s are considered noisy regions. Figure 8(e) shows the extracted moving object as the result of the fusion of the DMM and SUED methodologies.

Figure 8: (a) FD_b(m, n, t); (b) conventional dilation FD_b(m, n, t) ⊕ L = {x | x = i + j, i ∈ FD_b(m, n, t), j ∈ L}; (c) analysis of edge-based dilation from FD_e(m, n, t); (d) edge-based dilation FD_d(m, n, t) = FD_e(m, n, t) ∪ {x | x = i + j, i ∈ FD_e(m, n, t), j ∈ L, Edge(x) = 0}; (e) output after SUED, FD_e(m, n, t).
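The fusion rule in (25) can be sketched as below. Because the text's definition of A_c is truncated, the sketch interprets it as the part of the SUED region covered by the DMM search window; that interpretation, the binary-mask inputs, and the function name are assumptions.

```python
import numpy as np

def is_foreground(sued_region, dmm_window_mask, t_s=0.99):
    """Fusion rule of (25): coverage ratio P = A_c / A_s compared with the threshold T_s."""
    a_s = sued_region.sum()                                     # A_s: pixel count of the region
    if a_s == 0:
        return False
    a_c = np.logical_and(sued_region, dmm_window_mask).sum()    # assumed A_c
    return (a_c / a_s) > t_s
```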

4.1. Datasets. This research used two UAV video datasets (actions1.mpg and actions2.mpg) from the Center for Research in Computer Vision (CRCV) at the University of Central Florida (http://crcv.ucf.edu/data/UCF_Aerial_Action.php). These video datasets were obtained using an RC-controlled blimp equipped with an HD camera. The collection represents a diverse pool of action features at different heights and aerial viewpoints. Multiple instances of each action were recorded at different flying altitudes, which ranged from 400 to 500 feet, and were performed by different actors.

4.2. Results. This research extracted 395 frames at a rate of 1 frame/second from the actions1.mpg video dataset and 529 frames at the same frame rate from the actions2.mpg video dataset; the frame size is 355 × 216. Figures 6(a) and 6(b) show two consecutive frames (the 101st and 102nd frames) from actions1.mpg after the DMM step. Figures 7(a) and 7(b) show the pixel structures of FD_r(m, n, t), Figure 8(a) shows FD_b(m, n, t), and Figure 8(b) shows the conventional dilation of FD_b(m, n, t), while Figures 8(c) and 8(d) show the edge-based dilation. Finally, Figure 8(e) shows the result of the proposed DMM- and SUED-based moving object detection.

4.2.1. DMM Evaluation. Figures 5(a), 5(b), and 5(c) show three search windows, depicted as black rectangles on the selected frame FD(m, n) at time t. The brighter the moment values of the pixels, the higher the probability that the search window belongs to the moving object. The measured moments are shown in Table 2.

Table 2: Moments measurement from the different search windows for SUED.

Search window | Zeroth moment | First moment XY | Second moment X | Second moment Y
1             | 9.8E6         | 1.21E11         | 3.1E11          | 1.16E11
2             | 1.6E7         | 3.94E11         | 8.69E11         | 2.95E11
3             | 9.3E6         | 2.49E11         | 5.03E11         | 1.84E11

Figure 9: 3D line chart of the moments measurement (first moments XY, second moments X, and second moments Y) for search windows 1, 2, and 3.

In Table 2, search window 2 shows the highest moment quantities among the three search windows, which indicates that search window 2 has the highest probability of containing the moving object. Figure 9 shows a 3D line chart depicting the highest pixel-intensity probability in search window 2. Based on the highest moment distribution among the search windows, this research used search window 2 to extract the moving object using SUED, because it limits the scope of segmentation to a smaller area, reduces complexity, and reduces time. This means that the probability of the extraction getting lost because of a similarly coloured object in the background is reduced; furthermore, the limited area increases the processing speed by making the extraction faster.
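The selection logic behind Table 2, picking the candidate window whose difference image gives the largest first moment XY, can be sketched as follows; the frame contents and window coordinates here are hypothetical.

```python
import numpy as np

def first_moment_xy(sub):
    """M11 of a sub-image, as in (3)."""
    m_idx, n_idx = np.indices(sub.shape)
    return (m_idx * n_idx * sub).sum()

def best_search_window(fd, windows):
    """Return the (row0, row1, col0, col1) window with the largest first moment XY."""
    return max(windows, key=lambda w: first_moment_xy(fd[w[0]:w[1], w[2]:w[3]]))

fd = np.random.rand(216, 355)                      # hypothetical difference frame
windows = [(10, 80, 20, 120), (60, 160, 150, 300), (100, 200, 5, 100)]
print("selected window:", best_search_window(fd, windows))
```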

4.2.2. SUED Evaluation. This section presents the experimental analysis and results for the proposed SUED algorithm. The evaluation of the proposed approach was tested on the actions1.mpg and actions2.mpg videos. In order to evaluate the SUED algorithm, two metrics, the detection rate (DR) and the false alarm rate (FAR), are defined. These metrics are based on the following parameters (a small computational sketch follows the list):

(i) True positive (TP): detected regions that correspond to a moving object.
(ii) False positive (FP): detected regions that do not correspond to a moving object.
(iii) False negative (FN): moving objects that are not detected.
(iv) Detection rate or precision rate: DR = (TP/(TP + FN)) × 100.
(v) False alarm rate or recall rate: FAR = (FP/(TP + FP)) × 100.
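The two metrics follow directly from the counts in Table 3; the sketch below reproduces the reported values for actions2.mpg (DR ≈ 79%, FAR ≈ 26%).

```python
def detection_rate(tp, fn):
    """DR = TP / (TP + FN) * 100, as defined above."""
    return 100.0 * tp / (tp + fn)

def false_alarm_rate(tp, fp):
    """FAR = FP / (TP + FP) * 100, as defined above."""
    return 100.0 * fp / (tp + fp)

# Counts for actions2.mpg from Table 3: TP = 320, FP = 113, FN = 83.
print(round(detection_rate(320, 83)), round(false_alarm_rate(320, 113)))   # 79 26
```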

From the actions1.mpg dataset this research extracted 395 frames at 1 frame per second, and from actions2.mpg it extracted 529 frames at the same frame rate. Details of the measured true positives (TP), false positives (FP), false negatives (FN), detection rate (DR), and false alarm rate (FAR) are given in Table 3.

The detection rate increases as the number of input frames increases. The detection rate for the given total numbers of frames from the two video datasets is displayed in Figure 10.

The false alarm rate for the given numbers of frames from the two video datasets is given in Figure 11.

The recall and precision rates (RPC) characterizing the performance of the proposed research are given in Figure 12.

Table 3: Details of measurement of true positive (TP), false positive (FP), false negative (FN), detection rate (DR), and false alarm rate (FAR).

Datasets     | Number of frames | TP  | FP  | FN | DR (%) | FAR (%)
actions1.mpg | 395              | 200 | 100 | 75 | 75     | 31
actions2.mpg | 529              | 320 | 113 | 83 | 79     | 26

Figure 10: Detection rate (precision rate) for different numbers of input frames (100, 395, and 527).

Figure 11: False alarm rate (recall rate) for different numbers of input frames (100, 395, and 527).

Figure 12: RPC characterization of the proposed approach.

This research measures the detection and false alarm rates based on the number of frames extracted from each input video dataset. Compared with [1, 15, 36, 40], which used frame difference approaches without a moments feature for motion modelling, this research achieved a detection rate of 79% (for the actions2.mpg dataset), which is a good indication of its ability to handle the motion of moving objects in future research. Thus, the proposed DMM model helps to reduce the segmentation task by providing the highest-probability area to be segmented using the proposed SUED, instead of segmenting the whole area of the given frame, thereby increasing processing speed.

5. Conclusion

The primary purpose of this research is to apply a moments based dynamic motion model under the proposed frame difference based segmentation approach, which ensures robust handling of motion, since translation-invariant, scale-invariant, and rotation-invariant moment values are unique. As computer vision leverages probability theory, this research used moments based motion analysis, which provides search windows around the original object and limits the scope of SUED segmentation to a smaller area. This means that the probability of the extraction getting lost because of a similarly colored object in the background is reduced. Since moments are a unique distribution of pixel intensity, the experimental results of the proposed DMM and SUED are very promising for robust extraction of moving objects from UAV aerial images. Judging from previous research in the computer vision field, it is certain that the proposed research will facilitate UAV operators and related researchers in further research or investigation in areas where access is restricted, rescue areas, human or vehicle identification in specific areas, crowd flux statistics, anomaly detection, intelligent traffic management, and so forth.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by the Ministry of Higher Education Malaysia research grant schemes FRGS12012SG05UKM0212 and ERGS12012STG07UKM027.

References

[1] H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, "Vehicle detection in aerial surveillance using dynamic Bayesian networks," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152-2159, 2012.
[2] V. Reilly, B. Solmaz, and M. Shah, "Shadow casting out of plane (SCOOP) candidates for human and vehicle detection in aerial imagery," International Journal of Computer Vision, vol. 101, no. 2, pp. 350-366, 2013.
[3] Y. Yingqian, L. Fuqiang, W. Ping, L. Pingting, and L. Xiaofeng, "Vehicle detection methods from an unmanned aerial vehicle platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '12), pp. 411-415, 2012.
[4] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15-22, 2012.
[5] Z. Zezhong, W. Xiaoting, Z. Guoqing, and J. Ling, "Vehicle detection based on morphology from highway aerial images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 5997-6000, 2012.
[6] T. Moranduzzo and F. Melgani, "A SIFT-SVM method for detecting cars in UAV images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 6868-6871, 2012.
[7] S. A. Cheraghi and U. U. Sheikh, "Moving object detection using image registration for a moving camera platform," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '12), pp. 355-359, 2012.
[8] H. Bhaskar, "Integrated human target detection, identification and tracking for surveillance applications," in Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS '12), pp. 467-475, 2012.
[9] C. Long, J. Zhiguo, Y. Junli, and M. Yibing, "A coarse-to-fine approach for vehicles detection from aerial images," in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 221-225, 2012.
[10] L. Pingting, L. Fuqiang, L. Xiaofeng, and Y. Yingqian, "Stationary vehicle detection in aerial surveillance with a UAV," in Proceedings of the 8th International Conference on Information Science and Digital Content Technology (ICIDT '12), pp. 567-570, 2012.
[11] A. Kembhavi, D. Harwood, and L. S. Davis, "Vehicle detection using partial least squares," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1250-1265, 2011.
[12] J. Maier and K. Ambrosch, "Distortion compensation for movement detection based on dense optical flow," in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin, et al., Eds., pp. 168-179, Springer, Berlin, Germany, 2011.
[13] M. Tarhan and E. Altug, "A catadioptric and pan-tilt-zoom camera pair object tracking system for UAVs," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 61, no. 1-4, pp. 119-134, 2011.
[14] K. Bhuvaneswari and H. A. Rauf, "Edgelet based human detection and tracking by combined segmentation and soft decision," in Proceedings of the International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), pp. 1-6, June 2009.
[15] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, 2009.
[16] Z. Jiang, W. Ding, and H. Li, "Aerial video image object detection and tracing based on motion vector compensation and statistic analysis," in Proceedings of the 1st Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics (PrimeAsia '09), pp. 302-305, November 2009.
[17] Y.-C. Chung and Z. He, "Low-complexity and reliable moving objects detection and tracking for aerial video surveillance with small UAVs," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '07), pp. 2670-2673, May 2007.
[18] D. Wei, G. Zhenbang, X. Shaorong, and Z. Hairong, "Real-time vision-based object tracking from a moving platform in the air," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 681-685, October 2006.
[19] S. Zhang, "Object tracking in Unmanned Aerial Vehicle (UAV) videos using a combined approach," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), pp. 681-684, March 2005.
[20] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15-22, 2012.
[21] I. H. Mtir, K. Kaaniche, M. Chtourou, and P. Vasseur, "Aerial sequence registration for vehicle detection," in Proceedings of the 9th International Multi-Conference on Systems, Signals and Devices (SSD '12), pp. 1-6, 2012.
[22] S. Mohamed and K. Takaya, "Motion vector search for 2.5D modeling of moving objects in a video scene," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265-268, May 2005.
[23] R. Li, S. Yu, and X. Yang, "Efficient spatio-temporal segmentation for extracting moving objects in video sequences," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1161-1167, 2007.
[24] Y. Zhou, Y. Liu, and J. Zhang, "A video semantic object extraction method based on motion feature and visual attention," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 845-849, October 2010.
[25] T. Dutta, D. P. Dogra, and B. Jana, "Object extraction using novel region merging and multidimensional features," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 356-361, November 2010.
[26] J. Pan, S. Li, and Y.-Q. Zhang, "Automatic extraction of moving objects using multiple features and multiple frames," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '00), pp. 36-39, Geneva, Switzerland, May 2000.
[27] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Real time vision based object detection from UAV aerial images: a conceptual framework," in Intelligent Robotics Systems: Inspiring the NEXT, K. Omar, M. Nordin, P. Vadakkepat, et al., Eds., vol. 376, pp. 265-274, Springer, Berlin, Germany, 2013.
[28] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive motion pattern analysis for machine vision based moving detection from UAV aerial images," in Advances in Visual Informatics, H. Zaman, P. Robinson, P. Olivier, T. Shih, and S. Velastin, Eds., pp. 104-114, Springer, 2013.
[29] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "A review of machine vision based on moving objects: object detection from UAV aerial images," International Journal of Advanced Computer Technology, vol. 5, no. 15, pp. 57-72, 2013.
[30] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive long term motion pattern analysis for moving object detection using UAV aerial images," International Journal of Information System and Engineering, vol. 1, no. 1, pp. 50-59, 2013.
[31] H. H. Trihaminto, A. S. Prabuwono, A. F. M. S. Saif, and T. B. Adji, "A conceptual framework: dynamic path planning system for simultaneous localization and mapping multirotor UAV," Advanced Science Letters, vol. 20, no. 1, pp. 299-303, 2014.
[32] M. Sayyouri, A. Hmimid, and H. Qjidaa, "Improving the performance of image classification by Hahn moment invariants," Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 30, no. 11, pp. 2381-2394, 2013.
[33] R. Ozawa and F. Chaumette, "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach," Advanced Robotics, vol. 27, no. 9, pp. 683-696, 2013.
[34] G. Jia, X. Wang, H. Wei, and Z. Zhang, "Modeling image motion in airborne three-line-array (TLA) push-broom cameras," Photogrammetric Engineering and Remote Sensing, vol. 79, no. 1, pp. 67-78, 2013.
[35] J. Guo, D. Gu, S. Li, and H. Chang, "Moving object detection using motion enhancement and color distribution comparison," Journal of Beijing University of Aeronautics and Astronautics, vol. 38, no. 2, pp. 263-272, 2012.
[36] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2065-2070, 2011.
[37] A. Gszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, Proceedings of SPIE, January 2011.
[38] M. Besita Augustin, S. Juliet, and S. Palanikumar, "Motion and feature based person tracking in surveillance videos," in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 605-609, March 2011.
[39] S. Wang, "Vehicle detection on aerial images by extracting corner features for rotational invariant shape matching," in Proceedings of the 11th IEEE International Conference on Computer and Information Technology, pp. 171-175, September 2011.
[40] X. Wang, C. Liu, L. Gong, L. Long, and S. Gong, "Pedestrian recognition based on saliency detection and Kalman filter algorithm in aerial video," in Proceedings of the 7th International Conference on Computational Intelligence and Security (CIS '11), pp. 1188-1192, December 2011.
[41] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, "A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera," Remote Sensing, vol. 4, no. 4, pp. 1090-1111, 2012.
[42] C.-H. Huang, Y.-T. Wu, and J.-H. Kao, "A hybrid moving object detection method for aerial images," in Advances in Multimedia Information Processing - PCM 2010, G. Qiu, K. Lam, and H. Kiya, Eds., pp. 357-368, Springer, Berlin, Germany, 2010.
[43] A. W. N. Ibrahim, P. W. Ching, G. L. Gerald Seet, W. S. Michael Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 456-461, November 2010.
[44] O. Oreifej, R. Mehran, and M. Shah, "Human identity recognition in aerial images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 709-716, June 2010.
[45] C. Wu, X. Cao, R. Lin, and F. Wang, "Registration-based moving vehicle detection for low-altitude urban traffic surveillance," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 373-378, July 2010.
[46] Q. Yu and G. Medioni, "Motion pattern interpretation and detection for tracking moving vehicles in airborne video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2671-2678, June 2009.
[47] M. Teutsch and W. Kruger, "Spatio-temporal fusion of object segmentation approaches for moving distant targets," in Proceedings of the 15th International Conference on Information Fusion (FUSION '12), pp. 1988-1995, 2012.
[48] C. Guilmart, S. Herbin, and P. Perez, "Context-driven moving object detection in aerial scenes with user input," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1781-1784, September 2011.
[49] M. Teutsch and W. Kruger, "Detection, segmentation, and tracking of moving objects in UAV videos," in Proceedings of the IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance (AVSS '12), pp. 313-318, 2012.
[50] R. B. Rodrigues, S. Pellegrino, and H. Pistori, "Combining color and Haar wavelet responses for aerial image classification," in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., pp. 583-591, Berlin, Germany, 2012.
[51] S. G. Fowers, D. J. Lee, D. A. Ventura, and J. K. Archibald, "The nature-inspired BASIS feature descriptor for UAV imagery and its hardware implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756-768, 2013.
[52] J. Lu, P. Fang, and Y. Tian, "An objects detection framework in UAV videos," in Advances in Computer Science and Education Applications, M. Zhou and H. Tan, Eds., pp. 113-119, Springer, Berlin, Germany, 2011.
[53] A. Camargo, Q. He, and K. Palaniappan, "Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 8355 of Proceedings of SPIE, 2012.
[54] M. Teutsch, W. Kruger, and N. Heinze, "Detection and classification of moving objects from UAVs with optical sensors," in Signal Processing, Sensor Fusion, and Target Recognition, vol. 8050 of Proceedings of SPIE, April 2011.
[55] L. L. Coulter, D. A. Stow, and Y. H. Tsai, "Automated detection of people and vehicles in natural environments using high temporal resolution airborne remote sensing," in Proceedings of the ASPRS Annual Conference, pp. 78-90, 2012.
[56] L. Wang, H. Zhao, S. Guo, Y. Mai, and S. Liu, "The adaptive compensation algorithm for small UAV image stabilization," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 4391-4394, 2012.
[57] M. Bhaskaranand and J. D. Gibson, "Low-complexity video encoding for UAV reconnaissance and surveillance," in Proceedings of the IEEE Military Communications Conference (MILCOM '11), pp. 1633-1638, November 2011.
[58] X. Tan, X. Yu, J. Liu, and W. Huang, "Object tracking based on unmanned aerial vehicle video," in Applied Informatics and Communication, D. Zeng, Ed., pp. 432-438, Springer, Berlin, Germany, 2011.
[59] Z. Lin, C. Ren, N. Yao, and F. Xie, "A shadow compensation method for aerial image," Geomatics and Information Science of Wuhan University, vol. 38, no. 4, pp. 431-435, 2013.
[60] K. Imamura, N. Kubo, and H. Hashimoto, "Automatic moving object extraction using X-means clustering," in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 246-249, December 2010.
[61] N. Suo and X. Qian, "A comparison of features extraction method for HMM-based motion recognition," in Proceedings of the 2nd Conference on Environmental Science and Information Application Technology (ESIAT '10), pp. 636-639, July 2010.
[62] Z. Shoujuan and Z. Quan, "New feature extraction algorithm for satellite image non-linear small objects," in Proceedings of the IEEE Symposium on Electrical & Electronics Engineering (EEESYM '12), pp. 626-629, 2012.
[63] X. F. Wu, X. S. Wang, and H. Z. Lu, "Motion feature extraction for stepped frequency radar based on Hough transform," IET Radar, Sonar & Navigation, vol. 4, no. 1, pp. 17-27, 2010.
[64] K. Huang, Y. Zhang, and T. Tan, "A discriminative model of motion and cross ratio for view-invariant action recognition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2187-2197, 2012.
[65] R. Porter, A. M. Fraser, and D. Hush, "Wide-area motion imagery," IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 56-65, 2010.


Page 3: Moving Object Detection Using Dynamic Motion Modelling from UAV

The Scientific World Journal 3

Frame differenceSegmentation

Frame difference Segmentation

Frame difference and segmentation

16

368

472

95(

)

90

85

80

75

70

9231

8500

8000

Frame difference andsegmentation

Detection rate

Figure 2 Detection rate [1 15 36 40] and percentage of using frame difference and segmentation approach in the previous research

method [8 13 38] Shadow based approach depends on light-ening condition [2 7 59] Region based appearancematchingapproach does not give the optimum result for crowdedscenarios [44 54] Histogram oriented gradients (HOG)approach rejects objects backgrounds [36]

Among these methods only frame difference approachis involved with motion analysis although most of previousresearch does not provide proper motion estimation to han-dle six uncertainty constraint factors (UCF) [29] Frame dif-ference causes themoving object to become into pieces due toobjectrsquos color homogeneity and also grabs the motion infor-mation Frame difference based approaches were performedby registering two consecutive frames first followed by framedifference to findmoving objectsThese approaches are fasterbut usually cause a moving object to become into piecesespeciallywhen the objectrsquos color distribution is homogenousFrame difference detects the pixels with motion but cannotobtain a complete object [14 37] Using segmentation basedapproach can overcome the shortcomings of using frame dif-ference based approach Image segmentation is used to deter-mine the candidates of moving objects in each video frame[42] Image segmentation extracts a more complete shape ofthe objects and reduces computation cost for moving objectdetection from UAV aerial images [39 47] However it doesnot have the ability to distinguish moving regions from thestatic background Using frame difference and segmentationtogether can achieve optimum detection performance butresearch in [42 44] did not achieve reliable performancebecause of approximating pure plane and complexity needs tobe low Low detection rate and high computation time arethe current research problems to apply frame difference andsegmentation together [1 15 36 40] as shown in Figure 2Applying frame difference or segmentation separately doesnot include motion analysis currently which increases detec-tion rate with low computation complexity

This paper states that as frame difference cannot obtainmotion for the complete object alone and segmentation doesnot have the ability to differentiate moving regions from thebasic static region background so applying frame differenceand segmentation together is expected to give optimumdetection result with high detection speed for moving object

detection from UAV aerial images instead of applying framedifference or segmentation separately For that reason thispaper proposesmoments basedmotion analysis to apply underframe difference based segmentation approach (SUED)which ensures robustness of the proposed methodology

3 Research Methodology

Proposed moments based motion modeling is depictedin Section 31 and frame difference based segmentationapproach segmentation using edge based dilation is depictedin Section 32 Each section of methodology is proposed withnew approach to ensure robustness and accuracy of detectionapproach

31 Proposed Dynamic Motion Model (DMM) In computervision information theorymoments are the uncertaintymea-surement of the object pixels Besides an image moment isa certain particular weighted average (moment) of the imagepixelrsquos intensities or a function of suchmoments usually cho-sen to have some attractive property or interpretation Imagemoments are useful to describe objects after segmentationProperties of the image which are found via image momentsare centroid area of intensity and object orientation as shownin Figure 3

Each of these properties needs to be invariant by the fol-lowing terms translational invariant scale invariant andfinally rotation invariant Any feature point is called trans-lational invariant if it does not distinguish under differentpoints in space Or translational invariant means that aparticular translation does not change the object Scaleinvariance is a feature of objects or laws that does not change ifscales of length energy or other variables are multiplied by acommon factor The technical term for this process is knownas dilation A feature point is said to have rotation invariant ifits value does not change when arbitrary rotations are appliedto its argument

Image moments can simply be described as functions of the image pixel intensity, where the pixel intensity is the pixel colour value.


Figure 3: Properties of image moments (centroid, area of intensity, object orientation).

Figure 4: Flow of moments calculation in the proposed research (raw moments: zeroth moment $M_{00}$; first moments $M_{10}$, $M_{01}$, $M_{11}$; second moments $M_{20}$, $M_{02}$; centroid coordinates $\bar{m} = M_{10}/M_{00}$, $\bar{n} = M_{01}/M_{00}$ → central moments → translation invariant → scale invariant → rotation invariant).

Moments are described with respect to their power, in the raised-to-the-power sense used in mathematics. This research calculates the zeroth moment, first moments, second moments, and so forth from the raw moments, and then transforms these moments into translation, scale, and rotation invariant forms. The organized structure used to calculate the moments is shown in Figure 4.

3.1.1. Raw Moments. Before finding the central moments, it is necessary to find the raw moments of $FD(m, n)$ at frame $t$ of the video sequence. If $\bar{m}$ and $\bar{n}$ are the components of the centroid, the raw moment of order $(p + q)$ of $FD(m, n)$ is defined as
$$M_{pq} = \sum_{m}\sum_{n} m^{p} n^{q}\, FD(m, n). \qquad (1)$$
When $FD(m, n)$ is considered as a 2D continuous function, (1) can be expressed as
$$M_{pq} = \iint m^{p} n^{q}\, FD(m, n)\, dm\, dn, \qquad (2)$$

where the centroid coordinates are
$$\bar{m} = \frac{M_{10}}{M_{00}}, \qquad \bar{n} = \frac{M_{01}}{M_{00}},$$
and the individual raw moments are
$$\begin{aligned}
M_{00} &= \text{zeroth moment} = \sum_{m}\sum_{n} m^{0} n^{0}\, FD(m, n) = \sum_{m}\sum_{n} FD(m, n),\\
M_{10} &= \text{first moment } X = \sum_{m}\sum_{n} m^{1} n^{0}\, FD(m, n) = \sum_{m}\sum_{n} m\, FD(m, n),\\
M_{01} &= \text{first moment } Y = \sum_{m}\sum_{n} m^{0} n^{1}\, FD(m, n) = \sum_{m}\sum_{n} n\, FD(m, n),\\
M_{11} &= \text{first moment } XY = \sum_{m}\sum_{n} m^{1} n^{1}\, FD(m, n) = \sum_{m}\sum_{n} m n\, FD(m, n),\\
M_{20} &= \text{second moment } X = \sum_{m}\sum_{n} m^{2} n^{0}\, FD(m, n) = \sum_{m}\sum_{n} m^{2}\, FD(m, n),\\
M_{02} &= \text{second moment } Y = \sum_{m}\sum_{n} m^{0} n^{2}\, FD(m, n) = \sum_{m}\sum_{n} n^{2}\, FD(m, n).
\end{aligned} \qquad (3)$$
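To make the definitions in (1)–(3) concrete, the following minimal numpy sketch computes the raw moments and the centroid of a frame-difference image. It is illustrative only; the function and variable names (raw_moment, centroid, fd) are ours, and the paper's own implementation was done in C# with IPLAB.

```python
import numpy as np

def raw_moment(fd: np.ndarray, p: int, q: int) -> float:
    """Raw moment M_pq of a 2D intensity image fd, as in (1)/(3)."""
    m_idx, n_idx = np.indices(fd.shape)            # pixel coordinate grids
    return float(((m_idx ** p) * (n_idx ** q) * fd).sum())

def centroid(fd: np.ndarray) -> tuple:
    """Centroid (m_bar, n_bar) = (M10/M00, M01/M00)."""
    m00 = raw_moment(fd, 0, 0)
    return raw_moment(fd, 1, 0) / m00, raw_moment(fd, 0, 1) / m00

# usage on a random frame-difference image of the paper's frame size (216 x 355)
fd = np.random.rand(216, 355)
print(centroid(fd), raw_moment(fd, 1, 1), raw_moment(fd, 2, 0))
```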

3.1.2. Central Moments. Using the centroid coordinates, the central moments of $FD(m, n)$ are defined as
$$\mu_{pq} = \sum_{m}\sum_{n} (m - \bar{m})^{p} (n - \bar{n})^{q}\, FD_{d}(m, n). \qquad (4)$$


(1) Translation Invariant. To make $FD_{d}(m, n)$ translation invariant, $\mu_{pq}$ can be written as
$$\begin{aligned}
\text{(i)}\quad \mu_{pq} &= \iint (m - \bar{m})^{p} (n - \bar{n})^{q}\, FD_{d}(m, n)\, dm\, dn,\\
\text{(ii)}\quad \mu_{pq} &= \iint \sum_{r=0}^{p} \binom{p}{r} m^{r} (-\bar{m})^{(p-r)} \times \sum_{s=0}^{q} \binom{q}{s} n^{s} (-\bar{n})^{(q-s)}\, FD_{d}(m, n)\, dm\, dn \quad \Bigl[\text{using } (a + b)^{k} = \sum_{r=0}^{k} \binom{k}{r} a^{r} b^{(k-r)}\Bigr],\\
\text{(iii)}\quad \mu_{pq} &= \sum_{r=0}^{p} \sum_{s=0}^{q} \binom{p}{r} \binom{q}{s} (-\bar{m})^{(p-r)} (-\bar{n})^{(q-s)} \iint m^{r} n^{s}\, FD_{d}(m, n)\, dm\, dn \quad [\text{rearranging}],\\
\text{(iv)}\quad \mu_{pq} &= \sum_{r=0}^{p} \sum_{s=0}^{q} \binom{p}{r} \binom{q}{s} (-\bar{m})^{(p-r)} (-\bar{n})^{(q-s)}\, M_{rs} \quad [\text{using } (2)].
\end{aligned} \qquad (5)$$
Equation (5) gives the central moments of $FD_{d}(m, n)$, which are translation invariant and derived from the raw moments.
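As a quick numerical check of (4) and (5), the central moments can be computed either directly from centred pixel coordinates or from the raw moments via the binomial expansion; both routes agree. The sketch below is ours (hypothetical names), not the paper's code.

```python
import numpy as np
from math import comb

def central_moment_direct(fd: np.ndarray, p: int, q: int) -> float:
    """mu_pq computed directly from centred coordinates, as in (4)."""
    m_idx, n_idx = np.indices(fd.shape)
    m00 = fd.sum()
    m_bar, n_bar = (m_idx * fd).sum() / m00, (n_idx * fd).sum() / m00
    return float((((m_idx - m_bar) ** p) * ((n_idx - n_bar) ** q) * fd).sum())

def central_moment_from_raw(fd: np.ndarray, p: int, q: int) -> float:
    """mu_pq expanded in terms of raw moments M_rs, as in (5)."""
    m_idx, n_idx = np.indices(fd.shape)
    M = lambda r, s: float(((m_idx ** r) * (n_idx ** s) * fd).sum())
    m_bar, n_bar = M(1, 0) / M(0, 0), M(0, 1) / M(0, 0)
    return sum(comb(p, r) * comb(q, s)
               * (-m_bar) ** (p - r) * (-n_bar) ** (q - s) * M(r, s)
               for r in range(p + 1) for s in range(q + 1))

fd = np.random.rand(60, 80)
assert np.isclose(central_moment_direct(fd, 2, 1), central_moment_from_raw(fd, 2, 1))
```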

(2) Scale Invariant. Let $FD'(m, n)$ be the image obtained by scaling $FD(m, n)$ by a factor $\lambda$, so that
$$FD'(m, n) = FD\!\left(\frac{m}{\lambda}, \frac{n}{\lambda}\right), \qquad (6)$$
where
$$m' = \frac{m}{\lambda}, \quad m = \lambda m', \quad dm = \lambda\, dm', \qquad n' = \frac{n}{\lambda}, \quad n = \lambda n', \quad dn = \lambda\, dn'. \qquad (7)$$
From (2) the scaled central moments can be written as
$$\begin{aligned}
\text{(i)}\quad \mu'_{pq} &= \iint m^{p} n^{q}\, FD\!\left(\frac{m}{\lambda}, \frac{n}{\lambda}\right) dm\, dn,\\
\text{(ii)}\quad \mu'_{pq} &= \iint (\lambda m')^{p} (\lambda n')^{q}\, FD(m', n') \cdot \lambda\, dm' \cdot \lambda\, dn',\\
\text{(iii)}\quad \mu'_{pq} &= \lambda^{p} \lambda^{q} \lambda^{2} \iint (m')^{p} (n')^{q}\, FD(m', n')\, dm'\, dn',\\
\text{(iv)}\quad \mu'_{pq} &= \lambda^{p+q+2}\, \mu_{pq} \quad [\text{using } (2)].
\end{aligned} \qquad (8)$$
For $p = 0$, $q = 0$ and assuming the total area to be 1, (8) becomes
$$\mu'_{00} = \lambda^{2} \mu_{00} = 1 \;\Longrightarrow\; \lambda^{2} = \frac{1}{\mu_{00}} \;\Longrightarrow\; \lambda = \mu_{00}^{-1/2}. \qquad (9)$$
Substituting $\lambda$ into (8) gives $\eta_{pq} = (\mu_{00})^{-(p+q+2)/2} \cdot \mu_{pq}$, that is,
$$\eta_{pq} = \frac{\mu_{pq}}{(\mu_{00})^{(p+q+2)/2}}. \qquad (10)$$
Equation (10) defines the scale invariant central moments of $FD(m, n)$.
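A direct reading of (10) in code, again only a sketch under our own naming; the last line illustrates the (approximate) invariance under a 2× pixel-replication upscaling.

```python
import numpy as np

def eta(fd: np.ndarray, p: int, q: int) -> float:
    """Scale invariant (normalized) central moment eta_pq = mu_pq / mu_00^((p+q+2)/2), as in (10)."""
    m_idx, n_idx = np.indices(fd.shape)
    m00 = fd.sum()
    m_bar, n_bar = (m_idx * fd).sum() / m00, (n_idx * fd).sum() / m00
    mu = lambda a, b: (((m_idx - m_bar) ** a) * ((n_idx - n_bar) ** b) * fd).sum()
    return float(mu(p, q) / mu(0, 0) ** ((p + q + 2) / 2.0))

fd = np.random.rand(64, 64)
print(eta(fd, 2, 0), eta(np.kron(fd, np.ones((2, 2))), 2, 0))   # roughly equal
```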

(3) Rotation Invariant. Let $FD'(m, n)$ be the image obtained by rotating $FD(m, n)$ by an angle $\theta$, so that
$$FD'(m, n) = f(m\cos\theta + n\sin\theta,\; -m\sin\theta + n\cos\theta). \qquad (11)$$
Using the variable transformation
$$\begin{aligned}
m' &= m\cos\theta + n\sin\theta, & m &= m'\cos\theta - n'\sin\theta, & dm &= \cos\theta\, dm',\\
n' &= -m\sin\theta + n\cos\theta, & n &= m'\sin\theta + n'\cos\theta, & dn &= \cos\theta\, dn'.
\end{aligned} \qquad (12)\text{–}(17)$$

Using (2),
$$\begin{aligned}
\text{(i)}\quad \mu'_{pq} &= \iint m^{p} n^{q}\, FD'_{d}(m, n)\, dm\, dn,\\
\text{(ii)}\quad \mu'_{pq} &= \iint m^{p} n^{q}\, f'(m\cos\theta + n\sin\theta,\; -m\sin\theta + n\cos\theta)\, dm\, dn \quad [\text{using } (11)],\\
\text{(iii)}\quad \mu'_{pq} &= \iint (m'\cos\theta - n'\sin\theta)^{p} (m'\sin\theta + n'\cos\theta)^{q}\, FD_{d}(m', n') \cos^{2}\theta\, dm'\, dn' \quad [\text{using } (11)\text{–}(17)].
\end{aligned} \qquad (18)$$
The rotation invariant orientation $\theta$ can then be expressed as
$$\theta = \frac{1}{2}\arctan\!\left(\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right). \qquad (19)$$


Figure 5: (a) Search window 1 for $FD(m, n)$ at time $t$; (b) search window 2 for $FD(m, n)$ at time $t$; (c) search window 3 for $FD(m, n)$ at time $t$.

Figure 6: (a) $I_B(m, n, t)$ at frame 101; (b) $I_B(m, n, t)$ at frame 102; (c) $I_B(m, n, t)$ decomposition into $M \times N$ blocks.

Here $\theta$ in (19) is the rotation invariant central moment measurement of $FD(m, n)$.
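Equation (19) is the usual image-orientation formula built from the second-order central moments; a short sketch (our naming) is shown below. arctan2 is used instead of a plain arctan to avoid division by zero when $\mu_{20} = \mu_{02}$, which is a robustness choice of ours rather than something stated in the paper.

```python
import numpy as np

def orientation(fd: np.ndarray) -> float:
    """theta = 0.5 * arctan(2*mu11 / (mu20 - mu02)), as in (19), in radians."""
    m_idx, n_idx = np.indices(fd.shape)
    m00 = fd.sum()
    m_bar, n_bar = (m_idx * fd).sum() / m00, (n_idx * fd).sum() / m00
    mu11 = ((m_idx - m_bar) * (n_idx - n_bar) * fd).sum()
    mu20 = (((m_idx - m_bar) ** 2) * fd).sum()
    mu02 = (((n_idx - n_bar) ** 2) * fd).sum()
    return float(0.5 * np.arctan2(2.0 * mu11, mu20 - mu02))
```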

The output of the moments calculated in this research is depicted in Figures 5(a)–5(c). Within a search window, the higher the moment values (e.g., the first moment $XY$), the higher the probability that the search window contains the moving object.

Before applying the newly proposed SUED algorithm, the input frame is restricted to the search window (the black box) produced by the DMM model, which reduces the risk of extraction failure. Using a search window around the original object is important: it limits the scope of segmentation to a smaller area, so the probability that the extraction fails because of a similarly coloured object in the background is reduced. Furthermore, the limited area increases the processing speed, making the extraction very fast.
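The paper does not give code for this window-selection step; one plausible sketch (our naming and window format) scores each candidate search window by its first moment $XY$ and keeps the highest-scoring one, mirroring the selection later reported in Table 2.

```python
import numpy as np

def best_search_window(fd: np.ndarray, windows: list) -> tuple:
    """Pick the candidate window (top, left, height, width) with the largest
    first moment XY (M11), i.e. the window most likely to contain the moving object."""
    def m11(patch: np.ndarray) -> float:
        m_idx, n_idx = np.indices(patch.shape)
        return float((m_idx * n_idx * patch).sum())
    scores = [m11(fd[t:t + h, l:l + w]) for (t, l, h, w) in windows]
    return windows[int(np.argmax(scores))]
```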

3.2. SUED Segmentation Algorithm. This research uses morphological dilation to ensure that the extracted moving region contains the moving objects; the output of the morphological dilation is an edged image. First, each frame is decomposed into $M \times N$ uniform and nonoverlapping blocks of size $B \times B$, as shown in Figure 6(c). Figures 6(a) and 6(b) show the original images at frames 101 and 102 obtained after the DMM step.

Let $I(x, y, t)$ be the original frame at time $t$ in a video sequence, where $(x, y)$ denotes a pixel position in the original frame, and let $I_B(m, n, t)$ be the corresponding decomposed image, where $(m, n)$ denotes a block position.


Figure 7: (a) Difference pixel structure of $FD_r(m, n, t)$ using (2); (b) difference image $FD_r(m, n, t)$ using (21).

Each block is represented by its highest feature density area in the decomposed image, which ensures robustness to noise while the feature makes each block sensitive to motion. $I_B(m, n, t)$ is defined by
$$I_B(m, n, t) = \operatorname{mean}(m, n, t) + \frac{\alpha}{\beta^{2}}\bigl(N_{1}(m, n, t) - N_{-1}(m, n, t)\bigr), \qquad (20)$$
which is the densed feature of block $(m, n)$, where $(m, n)$ is the feature densed block, $\alpha$ is a random constant smaller than 1, $\operatorname{mean}(m, n, t)$ is the mean gray level of all pixels within block $(m, n)$ at frame $t$, $N_{1}(m, n, t)$ is the number of pixels with gray levels greater than $\operatorname{mean}(m, n, t)$, and $N_{-1}(m, n, t)$ is the number of pixels with gray levels smaller than $\operatorname{mean}(m, n, t)$.
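A sketch of the block decomposition and the densed-feature value of (20) follows. The choice of alpha and beta below is arbitrary (the paper only states that alpha is a constant smaller than 1), and the function name is ours.

```python
import numpy as np

def block_feature(frame: np.ndarray, B: int = 8, alpha: float = 0.5, beta: float = 2.0) -> np.ndarray:
    """Decompose a gray frame into non-overlapping BxB blocks and return
    I_B(m, n, t) = mean + (alpha / beta^2) * (N1 - N_minus1) per block, as in (20)."""
    H, W = (frame.shape[0] // B) * B, (frame.shape[1] // B) * B
    blocks = frame[:H, :W].reshape(H // B, B, W // B, B).swapaxes(1, 2)   # shape (M, N, B, B)
    mean = blocks.mean(axis=(2, 3), keepdims=True)
    n1 = (blocks > mean).sum(axis=(2, 3))            # pixels above the block mean
    n_minus1 = (blocks < mean).sum(axis=(2, 3))      # pixels below the block mean
    return mean[..., 0, 0] + (alpha / beta ** 2) * (n1 - n_minus1)
```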

From (20), the difference image $FD_r(m, n, t)$ of two consecutive block images is obtained by
$$FD_r(m, n, t) = \operatorname{round}\!\left(\frac{\bigl|I_B(m, n, t) - I_B(m, n, t-1)\bigr|}{FD_{\max}(t)} \cdot 256\right), \qquad (21)$$
where $FD_r(m, n, t)$ is the quantized image after the rounding operation and $FD_{\max}(t)$ is the maximum value of the block difference at time $t$. The resulting difference image obtained with (21) is shown in Figures 7(a) and 7(b).

$FD_r(m, n, t)$ is then filtered by a $3 \times 3$ median filter, and the thresholded image $FD_f(m, n, t)$ is obtained by
$$FD_f(m, n, t) = \begin{cases} 1, & FD_r(m, n, t) \ge T(t),\\ 0, & \text{otherwise,} \end{cases} \qquad (22)$$
where $T(t) = (\text{mean of all blocks in } FD_r(m, n, t) \text{ at time } t) + \text{positive weighting parameter} \times (\text{largest peak of the histogram of } FD_f(m, n, t) - \text{largest peak of the histogram of } FD_r(m, n, t))$. The binary image $FD_b(m, n, t)$, depicted in Figure 8(a), is obtained by the condition
$$\text{if } FD_f(m, n, t) = 1 \text{ then } FD_b(m, n, t) \leftarrow FD_f(m, n, t); \text{ otherwise } FD_b(m, n, t) = 0. \qquad (23)$$
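Equations (21)–(23) can be sketched as follows. Because the paper's $T(t)$ refers to a histogram peak of $FD_f$, which is only available after thresholding, the sketch uses a provisional mask for that term; this reading, and the weighting parameter w, are our assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def binary_motion_mask(ib_t: np.ndarray, ib_prev: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Quantized block difference (21), 3x3 median filtering, thresholding (22), binary mask (23)."""
    diff = np.abs(ib_t - ib_prev)
    fd_r = np.round(diff / max(diff.max(), 1e-9) * 256.0)          # (21)
    fd_r = median_filter(fd_r, size=3)                             # 3x3 median filter
    hist_peak = lambda img: np.histogram(img, bins=256)[0].max()   # largest histogram peak
    fd_f_prov = (fd_r >= fd_r.mean()).astype(np.uint8)             # provisional FD_f for T(t) (assumption)
    T = fd_r.mean() + w * (hist_peak(fd_f_prov) - hist_peak(fd_r))
    fd_f = (fd_r >= T).astype(np.uint8)                            # (22)
    return np.where(fd_f == 1, fd_f, 0).astype(np.uint8)           # (23)
```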

$FD_b(m, n, t)$ may have discontinuous boundaries and holes. To ensure that $FD_b(m, n, t)$ contains the moving object, edge based morphological dilation is proposed here. $FD_e(m, n, t)$ denotes the edge image, whose edge map $\operatorname{Edge}(x)$ can be obtained with a gradient operator such as the Sobel operator. $FD_d(m, n, t)$, the edge based morphological dilation of $FD_e(m, n, t)$, is obtained by
$$FD_d(m, n, t) = FD_e(m, n, t) \cup \{x \mid x = i + j,\; i \in FD_e(m, n, t),\; j \in L,\; \operatorname{Edge}(x) = 0\}, \qquad (24)$$
where $L$ is the structuring element that contains the elements $j$. Equation (24) is an edge based dilation that is expected to prevent undesired regions from being integrated into the result, according to the characteristics of edge based morphological dilation shown in Figure 8(d). Here $\{x \mid x = i + j,\; i \in FD_e(m, n, t),\; j \in L\}$ is the conventional dilation of $FD_e(m, n, t)$, which in turn is defined from $FD_b(m, n, t)$ as $FD_b(m, n, t) \oplus L = \{x \mid x = i + j,\; i \in FD_b(m, n, t),\; j \in L\}$, shown in Figure 8(b).
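One way to read (24) in code is to dilate the edge image with a conventional structuring element and keep only those newly added pixels that satisfy the edge condition; the sketch below is our interpretation (including the Sobel threshold), not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation, sobel

def edge_based_dilation(fd_b: np.ndarray, gray: np.ndarray, edge_thresh: float = 50.0) -> np.ndarray:
    """Edge based morphological dilation in the spirit of (24).
    fd_b  -- binary motion mask FD_b(m, n, t)
    gray  -- gray-level image used to compute the Sobel edge map Edge(x)."""
    fd_b = fd_b.astype(bool)
    edge = np.hypot(sobel(gray.astype(float), axis=0), sobel(gray.astype(float), axis=1))
    fd_e = fd_b & (edge > edge_thresh)                  # edge image FD_e
    L = np.ones((3, 3), dtype=bool)                     # structuring element L
    dilated = binary_dilation(fd_e, structure=L)        # conventional dilation FD_e (+) L
    grown = dilated & ~fd_e                             # pixels added by the dilation
    return fd_e | (grown & (edge <= edge_thresh))       # keep added pixels with Edge(x) = 0
```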

This research proposes the following segmentation using edge based dilation (SUED) algorithm for moving object extraction from UAV aerial images, applied to the frame produced by the DMM approach:

(1) Start.
(2) $FD_r(m, n, t) \leftarrow$ decomposed difference image between the two frames $I_B(m, n, t)$ and $I_B(m, n, t-1)$ at times $t$ and $t-1$.
(3) $FD_f(m, n, t) \leftarrow FD_r(m, n, t)$.
(4) $FD_b(m, n, t) \leftarrow FD_f(m, n, t)$.
(5) $FD_e(m, n, t) \leftarrow FD_b(m, n, t)$.
(6) $FD_d(m, n, t) \leftarrow FD_e(m, n, t)$.
(7) End.
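For orientation only, the steps above can be strung together as follows. This hypothetical sketch reuses the helper functions from the earlier sketches (block_feature, binary_motion_mask, edge_based_dilation) and our own window format, so it is not standalone and is not the paper's code.

```python
def sued_pipeline(frame_t, frame_prev, window, B=8):
    """DMM search window -> block features (20) -> binary motion mask (21)-(23) -> edge based dilation (24)."""
    top, left, h, w = window                                   # search window chosen by DMM
    roi_t = frame_t[top:top + h, left:left + w]
    roi_prev = frame_prev[top:top + h, left:left + w]
    ib_t, ib_prev = block_feature(roi_t, B), block_feature(roi_prev, B)
    fd_b = binary_motion_mask(ib_t, ib_prev)
    return edge_based_dilation(fd_b, ib_t)
```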

4. Experiment and Discussion

For the experiments, this research used the IMAGE PROCESSING LAB (IPLAB) library available at http://www.aforgenet.com, used from Visual Studio 2012 with the C# programming language. The experimental analysis first evaluates the dynamic motion modelling (DMM) and then evaluates the proposed SUED embedded with DMM to extract moving objects from UAV aerial images.

Let $FD_e(m, n, t)$ be the labelled result of the SUED segmentation algorithm shown in Figure 8(e). Each region in this image indicates coherence in intensity and motion. Moving objects are discriminated from the background by the fusion module that combines DMM (dynamic motion modelling) and SUED (segmentation using edge based dilation).


Figure 8: (a) $FD_b(m, n, t)$; (b) conventional dilation $FD_b(m, n, t) \oplus L = \{x \mid x = i + j,\; i \in FD_b(m, n, t),\; j \in L\}$; (c) analysis of edge based dilation from $FD_e(m, n, t)$; (d) edge based dilation $FD_d(m, n, t) = FD_e(m, n, t) \cup \{x \mid x = i + j,\; i \in FD_e(m, n, t),\; j \in L,\; \operatorname{Edge}(x) = 0\}$; (e) output after SUED, $FD_e(m, n, t)$.

Let $\text{SUED}(i, t)$ be a SUED region in $FD(m, n, t)$ and $A_s(i, t)$ its corresponding area, and let $A_c(i, t)$ be the area of the union of $\text{SUED}(i, t)$; the coverage ratio $P(i, t)$ is defined as
$$P(i, t) = \frac{A_c(i, t)}{A_s(i, t)}. \qquad (25)$$
If the value of $P(i, t)$ is greater than a given fusion threshold $T_s$, then $\text{SUED}(i, t)$ is considered foreground; otherwise it is background. In general the threshold $T_s$ varies between sequences; in this research $T_s$ is always set to 0.99. Being very close to 1, it does not need to be adjusted, as the obtained area can contain the complete desired object. There may be some smaller blobs in the resulting $\text{SUED}(i, t)$; regions with areas smaller than the threshold $T_s$ are considered noisy regions. Figure 8(e) shows the extracted moving object as the result of the fusion of the DMM and SUED methodology.
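A sketch of the fusion rule in (25). The paper describes $A_c$ as the area of a union, but since $P(i, t)$ is compared against a threshold close to 1, the sketch reads $A_c$ as the overlap between the SUED region and the binary region selected by DMM (dmm_mask); both the reading and the names are our assumptions.

```python
import numpy as np

def is_foreground(sued_region: np.ndarray, dmm_mask: np.ndarray, Ts: float = 0.99) -> bool:
    """Coverage ratio P(i, t) = A_c / A_s as in (25); foreground if P > Ts."""
    a_s = float(sued_region.sum())                                 # A_s: SUED region area
    a_c = float(np.logical_and(sued_region, dmm_mask).sum())       # A_c: overlap with DMM mask (assumption)
    return a_s > 0 and (a_c / a_s) > Ts
```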

4.1. Datasets. This research used two UAV video datasets (actions1.mpg and actions2.mpg) from the Center for Research in Computer Vision (CRCV) at the University of Central Florida (http://crcv.ucf.edu/data/UCF_Aerial_Action.php). These video datasets were obtained using an RC-controlled blimp equipped with an HD camera. The collection represents a diverse pool of actions captured at different heights and aerial viewpoints. Multiple instances of each action were recorded at flying altitudes ranging from 400 to 500 feet and performed by different actors.

4.2. Results. This research extracted 395 frames at a rate of 1 frame/second from the actions1.mpg video dataset and 529 frames at the same rate from the actions2.mpg video dataset. The frame size is 355 × 216. Figures 6(a) and 6(b) show two consecutive frames (the 101st and 102nd) after the DMM step from actions1.mpg. Figures 7(a) and 7(b) show the pixel structures of $FD_r(m, n, t)$, Figure 8(a) shows $FD_b(m, n, t)$, Figure 8(b) shows the conventional dilation of $FD_b(m, n, t)$, and Figures 8(c) and 8(d) show the edge based dilation. Finally, Figure 8(e) shows the result of the proposed DMM and SUED based moving object detection.

4.2.1. DMM Evaluation. Figures 5(a), 5(b), and 5(c) show three search windows, depicted as black rectangles on the selected frame $FD(m, n)$ at time $t$.


Table 2: Moments measurement from different search windows for SUED.

Search window | Zeroth moment | First moment XY | Second moment X | Second moment Y
1             | 9.8E6         | 1.21E11         | 3.1E11          | 1.16E11
2             | 1.6E7         | 3.94E11         | 8.69E11         | 2.95E11
3             | 9.3E6         | 2.49E11         | 5.03E11         | 1.84E11

Figure 9: 3D line chart of the moment measurements (first moments XY, second moments X, second moments Y) for the three search windows.

The brighter the moment values of the pixels, the higher the probability that the search window belongs to the moving object. The measured moments are reported in Table 2.

In Table 2, search window 2 shows the highest moment values among the three search windows, which indicates that search window 2 has the highest probability of containing the moving object. Figure 9 shows a 3D line chart depicting the higher pixel-intensity moments in search window 2. Based on this highest moment distribution, this research used search window 2 to extract the moving object with SUED, because it limits the scope of segmentation to a smaller area, reducing complexity and processing time. The probability that the extraction fails because of a similarly coloured object in the background is thereby reduced, and the limited area increases the processing speed, making the extraction faster.

4.2.2. SUED Evaluation. This section presents the experimental analysis and results for the proposed SUED algorithm, evaluated on the actions1.mpg and actions2.mpg videos. To evaluate the SUED algorithm, two metrics, the detection rate (DR) and the false alarm rate (FAR), are defined. These metrics are based on the following parameters (a small computation sketch follows the list):

(i) True positive (TP): detected regions that correspond to a moving object.
(ii) False positive (FP): detected regions that do not correspond to a moving object.
(iii) False negative (FN): moving objects that are not detected.
(iv) Detection rate or precision rate: DR = (TP/(TP + FN)) × 100.
(v) False alarm rate or recall rate: FAR = (FP/(TP + FP)) × 100.
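The two metrics follow directly from the counts; a trivial sketch, using the paper's reported counts for actions2.mpg (Table 3) as example input:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> tuple:
    """Detection rate DR = TP/(TP+FN) and false alarm rate FAR = FP/(TP+FP), both in percent."""
    dr = 100.0 * tp / (tp + fn)
    far = 100.0 * fp / (tp + fp)
    return dr, far

print(detection_metrics(320, 113, 83))   # actions2.mpg counts -> (about 79, about 26)
```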

From the actions1.mpg dataset this research extracted 395 frames at 1 frame per second, and from actions2.mpg it extracted 529 frames at the same frame rate. Details of the measured true positives (TP), false positives (FP), false negatives (FN), detection rate (DR), and false alarm rate (FAR) are given in Table 3.

The detection rate increases as the number of input frames increases. The detection rate for the given numbers of frames from the two video datasets is displayed in Figure 10.

The false alarm rate for the given numbers of frames from the two video datasets is shown in Figure 11.

The recall and precision characteristics (RPC) of the proposed approach are given in Figure 12.

This research measures the detection and false alarm rates as functions of the number of frames extracted from each input video dataset. The approaches compared against in [1, 15, 36, 40] used frame difference without a moments feature for motion modelling; in contrast, this research achieved a detection rate of 79% (for the actions2.mpg video dataset), which is a good indication for handling the motion of moving objects in future research.


Table 3: Details of the measured true positives (TP), false positives (FP), false negatives (FN), detection rate (DR), and false alarm rate (FAR).

Dataset      | Number of frames | TP  | FP  | FN | DR (%) | FAR (%)
actions1.mpg | 395              | 200 | 100 | 75 | 75     | 31
actions2.mpg | 529              | 320 | 113 | 83 | 79     | 26

Figure 10: Detection rate (precision rate) versus number of frames (15% at 100 frames, 74% at 395 frames, and 79% at 527 frames).

Figure 11: False alarm rate (recall rate) versus number of frames (15% at 100 frames, 31% at 395 frames, and 26% at 527 frames).

Figure 12: RPC characterization of the proposed approach (detection rates of 15%, 74%, and 79% at recall rates of 5%, 32%, and 26%).

Thus the proposed DMM model helps to reduce the segmentation task by providing the highest-probability area to be segmented with the proposed SUED, instead of segmenting the whole area of the given frame, which increases the processing speed.

5. Conclusion

The primary purpose of this research is to apply a moments based dynamic motion model under the proposed frame difference based segmentation approach, which ensures robust handling of motion, since translation, scale, and rotation invariant moment values are unique. As computer vision leverages probability theory, this research used moments based motion analysis, which provides search windows around the original object and limits the scope of SUED segmentation to a smaller area; the probability that the extraction fails because of a similarly coloured object in the background is thereby reduced. Since moments describe a unique distribution of pixel intensity, the experimental results of the proposed DMM and SUED are very promising for robust extraction of moving objects from UAV aerial images. Judging from previous research in the computer vision field, the proposed approach should assist UAV operators and related researchers in further investigations, for example, in areas where access is restricted, in rescue areas, for human or vehicle identification in specific areas, for crowd flux statistics, for anomaly detection, and for intelligent traffic management.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by the Ministry of Higher Education Malaysia under research Grants FRGS/1/2012/SG05/UKM/02/12 and ERGS/1/2012/STG07/UKM/02/7.

References

[1] H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, "Vehicle detection in aerial surveillance using dynamic Bayesian networks," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152–2159, 2012.
[2] V. Reilly, B. Solmaz, and M. Shah, "Shadow casting out of plane (SCOOP) candidates for human and vehicle detection in aerial imagery," International Journal of Computer Vision, vol. 101, no. 2, pp. 350–366, 2013.
[3] Y. Yingqian, L. Fuqiang, W. Ping, L. Pingting, and L. Xiaofeng, "Vehicle detection methods from an unmanned aerial vehicle platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '12), pp. 411–415, 2012.
[4] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[5] Z. Zezhong, W. Xiaoting, Z. Guoqing, and J. Ling, "Vehicle detection based on morphology from highway aerial images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 5997–6000, 2012.
[6] T. Moranduzzo and F. Melgani, "A SIFT-SVM method for detecting cars in UAV images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 6868–6871, 2012.
[7] S. A. Cheraghi and U. U. Sheikh, "Moving object detection using image registration for a moving camera platform," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '12), pp. 355–359, 2012.
[8] H. Bhaskar, "Integrated human target detection, identification and tracking for surveillance applications," in Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS '12), pp. 467–475, 2012.
[9] C. Long, J. Zhiguo, Y. Junli, and M. Yibing, "A coarse-to-fine approach for vehicles detection from aerial images," in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 221–225, 2012.
[10] L. Pingting, L. Fuqiang, L. Xiaofeng, and Y. Yingqian, "Stationary vehicle detection in aerial surveillance with a UAV," in Proceedings of the 8th International Conference on Information Science and Digital Content Technology (ICIDT '12), pp. 567–570, 2012.
[11] A. Kembhavi, D. Harwood, and L. S. Davis, "Vehicle detection using partial least squares," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1250–1265, 2011.
[12] J. Maier and K. Ambrosch, "Distortion compensation for movement detection based on dense optical flow," in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin et al., Eds., pp. 168–179, Springer, Berlin, Germany, 2011.
[13] M. Tarhan and E. Altug, "A catadioptric and pan-tilt-zoom camera pair object tracking system for UAVs," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 61, no. 1–4, pp. 119–134, 2011.
[14] K. Bhuvaneswari and H. A. Rauf, "Edgelet based human detection and tracking by combined segmentation and soft decision," in Proceedings of the International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), pp. 1–6, June 2009.
[15] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, 2009.
[16] Z. Jiang, W. Ding, and H. Li, "Aerial video image object detection and tracing based on motion vector compensation and statistic analysis," in Proceedings of the 1st Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics (PrimeAsia '09), pp. 302–305, November 2009.
[17] Y.-C. Chung and Z. He, "Low-complexity and reliable moving objects detection and tracking for aerial video surveillance with small UAVs," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '07), pp. 2670–2673, May 2007.
[18] D. Wei, G. Zhenbang, X. Shaorong, and Z. Hairong, "Real-time vision-based object tracking from a moving platform in the air," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 681–685, October 2006.
[19] S. Zhang, "Object tracking in Unmanned Aerial Vehicle (UAV) videos using a combined approach," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '05), pp. 681–684, March 2005.
[20] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[21] I. H. Mtir, K. Kaaniche, M. Chtourou, and P. Vasseur, "Aerial sequence registration for vehicle detection," in Proceedings of the 9th International Multi-Conference on Systems, Signals and Devices (SSD '12), pp. 1–6, 2012.
[22] S. Mohamed and K. Takaya, "Motion vector search for 2.5D modeling of moving objects in a video scene," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265–268, May 2005.
[23] R. Li, S. Yu, and X. Yang, "Efficient spatio-temporal segmentation for extracting moving objects in video sequences," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1161–1167, 2007.
[24] Y. Zhou, Y. Liu, and J. Zhang, "A video semantic object extraction method based on motion feature and visual attention," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 845–849, October 2010.
[25] T. Dutta, D. P. Dogra, and B. Jana, "Object extraction using novel region merging and multidimensional features," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 356–361, November 2010.
[26] J. Pan, S. Li, and Y.-Q. Zhang, "Automatic extraction of moving objects using multiple features and multiple frames," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '00), pp. 36–39, Geneva, Switzerland, May 2000.
[27] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Real time vision based object detection from UAV aerial images: a conceptual framework," in Intelligent Robotics Systems: Inspiring the NEXT, K. Omar, M. Nordin, P. Vadakkepat et al., Eds., vol. 376, pp. 265–274, Springer, Berlin, Germany, 2013.
[28] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive motion pattern analysis for machine vision based moving detection from UAV aerial images," in Advances in Visual Informatics, H. Zaman, P. Robinson, P. Olivier, T. Shih, and S. Velastin, Eds., pp. 104–114, Springer, 2013.
[29] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "A review of machine vision based on moving objects: object detection from UAV aerial images," International Journal of Advanced Computer Technology, vol. 5, no. 15, pp. 57–72, 2013.
[30] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive long term motion pattern analysis for moving object detection using UAV aerial images," International Journal of Information System and Engineering, vol. 1, no. 1, pp. 50–59, 2013.
[31] H. H. Trihaminto, A. S. Prabuwono, A. F. M. S. Saif, and T. B. Adji, "A conceptual framework: dynamic path planning system for simultaneous localization and mapping multirotor UAV," Advanced Science Letters, vol. 20, no. 1, pp. 299–303, 2014.
[32] M. Sayyouri, A. Hmimid, and H. Qjidaa, "Improving the performance of image classification by Hahn moment invariants," Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 30, no. 11, pp. 2381–2394, 2013.
[33] R. Ozawa and F. Chaumette, "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach," Advanced Robotics, vol. 27, no. 9, pp. 683–696, 2013.
[34] G. Jia, X. Wang, H. Wei, and Z. Zhang, "Modeling image motion in airborne three-line-array (TLA) push-broom cameras," Photogrammetric Engineering and Remote Sensing, vol. 79, no. 1, pp. 67–78, 2013.
[35] J. Guo, D. Gu, S. Li, and H. Chang, "Moving object detection using motion enhancement and color distribution comparison," Journal of Beijing University of Aeronautics and Astronautics, vol. 38, no. 2, pp. 263–272, 2012.
[36] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2065–2070, 2011.
[37] A. Gaszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, Proceedings of SPIE, January 2011.
[38] M. Besita Augustin, S. Juliet, and S. Palanikumar, "Motion and feature based person tracking in surveillance videos," in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 605–609, March 2011.
[39] S. Wang, "Vehicle detection on aerial images by extracting corner features for rotational invariant shape matching," in Proceedings of the 11th IEEE International Conference on Computer and Information Technology, pp. 171–175, September 2011.
[40] X. Wang, C. Liu, L. Gong, L. Long, and S. Gong, "Pedestrian recognition based on saliency detection and Kalman filter algorithm in aerial video," in Proceedings of the 7th International Conference on Computational Intelligence and Security (CIS '11), pp. 1188–1192, December 2011.
[41] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, "A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera," Remote Sensing, vol. 4, no. 4, pp. 1090–1111, 2012.
[42] C.-H. Huang, Y.-T. Wu, and J.-H. Kao, "A hybrid moving object detection method for aerial images," in Advances in Multimedia Information Processing—PCM 2010, G. Qiu, K. Lam, and H. Kiya, Eds., pp. 357–368, Springer, Berlin, Germany, 2010.
[43] A. W. N. Ibrahim, P. W. Ching, G. L. Gerald Seet, W. S. Michael Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 456–461, November 2010.
[44] O. Oreifej, R. Mehran, and M. Shah, "Human identity recognition in aerial images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 709–716, June 2010.
[45] C. Wu, X. Cao, R. Lin, and F. Wang, "Registration-based moving vehicle detection for low-altitude urban traffic surveillance," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 373–378, July 2010.
[46] Q. Yu and G. Medioni, "Motion pattern interpretation and detection for tracking moving vehicles in airborne video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2671–2678, June 2009.
[47] M. Teutsch and W. Kruger, "Spatio-temporal fusion of object segmentation approaches for moving distant targets," in Proceedings of the 15th International Conference on Information Fusion (FUSION '12), pp. 1988–1995, 2012.
[48] C. Guilmart, S. Herbin, and P. Perez, "Context-driven moving object detection in aerial scenes with user input," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1781–1784, September 2011.
[49] M. Teutsch and W. Kruger, "Detection, segmentation, and tracking of moving objects in UAV videos," in Proceedings of the IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance (AVSS '12), pp. 313–318, 2012.
[50] R. B. Rodrigues, S. Pellegrino, and H. Pistori, "Combining color and haar wavelet responses for aerial image classification," in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., pp. 583–591, Berlin, Germany, 2012.
[51] S. G. Fowers, D. J. Lee, D. A. Ventura, and J. K. Archibald, "The nature-inspired BASIS feature descriptor for UAV imagery and its hardware implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756–768, 2013.
[52] J. Lu, P. Fang, and Y. Tian, "An objects detection framework in UAV videos," in Advances in Computer Science and Education Applications, M. Zhou and H. Tan, Eds., pp. 113–119, Springer, Berlin, Germany, 2011.
[53] A. Camargo, Q. He, and K. Palaniappan, "Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 8355 of Proceedings of SPIE, 2012.
[54] M. Teutsch, W. Kruger, and N. Heinze, "Detection and classification of moving objects from UAVs with optical sensors," in Signal Processing, Sensor Fusion, and Target Recognition, vol. 8050 of Proceedings of SPIE, April 2011.
[55] L. L. Coulter, D. A. Stow, and Y. H. Tsai, "Automated detection of people and vehicles in natural environments using high temporal resolution airborne remote sensing," in Proceedings of the ASPRS Annual Conference, pp. 78–90, 2012.
[56] L. Wang, H. Zhao, S. Guo, Y. Mai, and S. Liu, "The adaptive compensation algorithm for small UAV image stabilization," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 4391–4394, 2012.
[57] M. Bhaskaranand and J. D. Gibson, "Low-complexity video encoding for UAV reconnaissance and surveillance," in Proceedings of the IEEE Military Communications Conference (MILCOM '11), pp. 1633–1638, November 2011.
[58] X. Tan, X. Yu, J. Liu, and W. Huang, "Object tracking based on unmanned aerial vehicle video," in Applied Informatics and Communication, D. Zeng, Ed., pp. 432–438, Springer, Berlin, Germany, 2011.
[59] Z. Lin, C. Ren, N. Yao, and F. Xie, "A shadow compensation method for aerial image," Geomatics and Information Science of Wuhan University, vol. 38, no. 4, pp. 431–435, 2013.
[60] K. Imamura, N. Kubo, and H. Hashimoto, "Automatic moving object extraction using X-means clustering," in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 246–249, December 2010.
[61] N. Suo and X. Qian, "A comparison of features extraction method for HMM-based motion recognition," in Proceedings of the 2nd Conference on Environmental Science and Information Application Technology (ESIAT '10), pp. 636–639, July 2010.
[62] Z. Shoujuan and Z. Quan, "New feature extraction algorithm for satellite image non-linear small objects," in Proceedings of the IEEE Symposium on Electrical & Electronics Engineering (EEESYM '12), pp. 626–629, 2012.
[63] X. F. Wu, X. S. Wang, and H. Z. Lu, "Motion feature extraction for stepped frequency radar based on Hough transform," IET Radar, Sonar & Navigation, vol. 4, no. 1, pp. 17–27, 2010.
[64] K. Huang, Y. Zhang, and T. Tan, "A discriminative model of motion and cross ratio for view-invariant action recognition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2187–2197, 2012.
[65] R. Porter, A. M. Fraser, and D. Hush, "Wide-area motion imagery," IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 56–65, 2010.


Page 4: Moving Object Detection Using Dynamic Motion Modelling from UAV

4 The Scientific World Journal

Centroid Area of intensity Object orientation

Figure 3 Properties of image moments

Rawmoments

Centralmoments

Translationinvariant

Scaleinvariant

Rotationinvariant

Zeroth momentsM00

First moments XM10

First moments YM01

First moments XYM11

Second moments XM20

Second moments YM02

Centroid coordinates m =M10

M00

n =M01

M00

Figure 4 Flow of moments calculation in the proposed research

to their power as in raised-to-the-power in mathematicsThis research calculates zerothmoment firstmoment secondmoment and so forth from rawmoments Later this researchtransformed these moments into translational scale androtation invariant This research presents the following orga-nized structure to calculate moments shown in Figure 4

311 Raw Moments Before finding central moments it isnecessary to find raw moments of 119865119863(119898 119899) in 119905 frame videosequence If119898 and 119899 are the components of the centroids rawmoments of 119865119863(119898 119899) for (119901 + 119902) can be defined as

119872119901119902 = sum

119901

sum

119902

119898119901119899119902119865119863 (119898 119899) (1)

In case of considering 119865119863(119898 119899) as a 2D continuous function(1) can be expressed as

119872119901119902 = ∬119898119901119899119902119865119863 (119898 119899) (2)

where centroid coordinates are as follows

119898 =11987210

11987200

119899 =11987201

11987200

11987200 = zeroth moment = sum

119898

sum

119899

11989801198990119865119863 (119898 119899)

= sum

119898

sum

119899

119865119863 (119898 119899)

11987210 = first moment 119883 = sum

119898

sum

119899

11989811198990119865119863 (119898 119899)

= sum

119898

sum

119899

119898119865119863 (119898 119899)

11987201 = first moment 119884 = sum

119898

sum

119899

11989801198991119865119863 (119898 119899)

= sum

119898

sum

119899

119899119865119863 (119898 119899)

11987211 = first moment 119883119884 = sum

119898

sum

119899

11989811198991119865119863 (119898 119899)

= sum

119898

sum

119899

119898119899119865119863 (119898 119899)

11987220 = second moment 119883 = sum

119898

sum

119899

11989821198990119865119863 (119898 119899)

= sum

119898

sum

119899

1198982119865119863 (119898 119899)

11987202 = second moment 119884 = sum

119898

sum

119899

11989801198992119865119863 (119898 119899)

= sum

119898

sum

119899

1198992119865119863 (119898 119899)

(3)

312 Central Moments Using centroid coordinates centralmoments for 119865119863(119898 119899) can be defined as

120583119901119902 = sum

119901

sum

119902

(119898 minus 119898)119901(119899 minus 119899)

119902119865119863119889 (119898 119899) (4)

The Scientific World Journal 5

(1) Translation Invariant To make 119865119863119889(119898 119899) translationinvariant 120583119901119902 can be defined as

(i) 120583119901119902 = ∬(119898 minus 119898)119901(119899 minus 119899)

119902119865119863119889 (119898 119899) 119889119898119889119899

(ii) 120583119901119902 = ∬

119901

sum

119903=0

(119901

119903) times 119898

119903sdot (minus119898)

(119901minus119903)

times

119902

sum

119904=0

(119902

119904) 119899119904sdot (minus119899)(119902minus119904)

119865119863119889 (119898 119899) 119889119898119889119899

(by using the formula (119886 + 119887)119896=

119896

sum

119903=0

(119896

119903) 119886119903sdot 119887(119896minus119903)

)

(iii) 120583119901119902 =

119901

sum

119903=0

119902

sum

119904=0

(119901

119903)(

119902

119904) (minus119898)

(119901minus119903)(minus119899)(119902minus119904)

times ∬times119898119903sdot times119899119904sdot 119865119863119889 (119898 119899) 119889119898119889119899

[rearranging]

(iv) 120583119901119902 =

119901

sum

119903=0

119902

sum

119904=0

(119901

119903)(

119902

119904) (minus119898)

(119901minus119903)(minus119899)(119902minus119904)

sdot 119872119901119902

[using (2)]

(5)

Equation (5) is central moments of 119865119863119889(119898 119899) which istranslation invariant and derived from raw moments

(2) Scale Invariant Let 1198651198631015840(119898 119899) be the scaled image of

119865119863(119898 119899) scaled by 120582 So

1198651198631015840(119898 119899) = 119865119863

1015840(119898

120582119899

120582) (6)

where

1198981015840=

119898

120582 119898 = 120582119898

1015840 119889119898 = 120582119889119898

1015840

1198991015840=

119899

120582 119899 = 120582119899

1015840 119889119899 = 120582119889119899

1015840

(7)

From (2) the following equation can be written

(i) 1205831015840

119901119902= ∬119898

119901119899119902119865119863(

119898

120582119899

120582) 119889119898119889119899

(ii) ∬(120582119898)119901(120582119899)119902119865119863(119898

1015840 1198991015840) sdot 120582119889119898

1015840sdot 1205821198891198991015840

(iii) 1205831015840

119901119902= 1205821199011205821199021205822∬(119898)

119901(119899)119902119865119863(

119898

120582119899

120582) 11988911989810158401198891198991015840

(iv) 1205831015840

119901119902= 120582119901+119902+2

120583119901119902 [Using (2)]

(8)

For 119901 = 0 119902 = 0 and assuming total area to be 1 (8)becomes

(i) 1205831015840

00= 120582212058300

(ii) 120582212058300 = 1

(iii) 1205822 = 1

radic12058300

(iv) 120582 = 12058300minus(12)

(9)

Putting 120582 into (8) becomes 120578119901119902 = (12058300)minus(12)sdot119901+119902+2

sdot 120583119901119902

(i) 120578119901119902 =1

(12058300)(119901+119902+2)2

sdot 120583119901119902 (10)

Equation (10) is scale invariant central moments of 119865119863(119898 119899)

(3) Rotation Invariant Let 1198651198631015840(119898 119899) be the new image of119865119863(119898 119899) rotated by 120579 So

1198651198631015840(119898 119899) = 119891 (119898 cos 120579 + 119899 sin 120579 minus119898 sin 120579 + 119899 cos 120579)

(11)

Using variable transformation

1198981015840= 119898 cos 120579 + 119899 sin 120579 (12)

119898 = 1198981015840 cos 120579 minus 119899

1015840 sin 120579 (13)

119889119898 = cos 1205791198891198981015840 (14)

1198991015840= 119898 sin 120579 + 119899 cos 120579 (15)

119899 = 1198981015840 sin 120579 + 119899

1015840 cos 120579 (16)

119889119899 = cos 1205791198891198991015840 (17)

Using (2)

(i) 1205831015840

119901119902= ∬119898

1199011198991199021198651198631015840

119889(119898 119899) 119889119898119889119899

(ii) 1205831015840

119901119902= ∬119898

1199011198991199021198911015840

times (119898 cos 120579 + 119899 sin 120579 minus119898 sin 120579 + 119899 cos 120579) 119889119898119889119899

[Using (11)]

(iii) 1205831015840

119901119902= ∬(119898

1015840 cos 120579 minus 1198991015840 sin 120579)

119901

(1198981015840 sin 120579 + 119899

1015840 cos 120579)119902

times 119865119863119889 (1198981015840 1198991015840) cos212057911988911989810158401198891198991015840

[Using (11)ndash(17)] (18)

So rotation invariant 120579 can be expressed as

120579 =1

2arctan(

212058311

12058320 minus 12058302

) (19)

6 The Scientific World Journal

(a) (b)

(c)

Figure 5 (a) Search window 1 for 119865119863(119898 119899) in 119905 time (b) Search window 2 for 119865119863(119898 119899) in 119905 time (c) Search window 3 for 119865119863(119898 119899) in 119905 time

(a) (b)

M

N

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middotmiddotmiddot

middotmiddotmiddot

middotmiddotmiddot

middotmiddotmiddot

middotmiddotmiddot

middotmiddotmiddot

middot middot middot

(c)

Figure 6 (a) 119868119861(119898 119899 119905) at frame 101 (b) 119868119861(119898 119899 119905) at frame 102 (c) 119868119861(119898 119899 119905) decomposition

Here 120579 or (19) is the rotation invariant central moment mea-surement of 119865119863(119898 119899)

Output of calculated moments proposed in this researchis depicted in Figures 5(a)ndash5(c) In the search windowthe higher the moments value that is first moments 119883119884more higher probability that search window contains movingobject

Before going to newly proposed SUED algorithm usingDMMmodel input frame needs to be projected on the search(black box) only to reduce the risk of extraction failureThis research emphasizes that using a search window aroundthe original object is very important It limits the scope ofsegmentation to a smaller areaThis means that the probabil-ity of the extraction is getting lost because a similar colouredobject in the background is reduced Furthermore limited

area increases the processing speed making the extractionvery fast

32 SUED Segmentation Algorithm This research used mor-phological dilation to ensure that extracted moving regioncontains moving objects The output of morphological dila-tion is an edged image First each frame is decomposed into119872 times 119873 uniform and nonoverlapped blocks with the size of119861 times 119861 as shown in Figure 6(c) Figures 6(a) and 6(b) showoriginal images at frames 101 and 102 achieved after DMMapproach

Let 119868(119909 119910 119905) be the original frame at frame 119905 in a videosequence where (119909 119910) denotes a pixel position in the originalframe and let 119868119861(119898 119899 119905) be the corresponding decomposedimage where (119898 119899) denotes block position for highest feature

The Scientific World Journal 7

(a) (b)

Figure 7 (a) Difference pixel structure of 119865119863119903(119898 119899 119905) using (2) (b) Difference image 119865119863119903(119898 119899 119905) using (21)

density area in the decomposed image which ensures therobustness to noise while the feature makes each block sen-sitive to motion 119868119861(119898 119899 119905) is defined by the followingequation

119868119861 (119898 119899 119905) = mean (119898 119899 119905)

+

120572

1205732(1198731 (119898 119899 119905) minus 119873minus1 (119898 119899 119905))

densed feature

(20)

where (119898 119899) is feature densed block 120572 is the random con-stant smaller than 1 mean(119898 119899 119905) is mean gray level of allpixels within the block (119898 119899) at frame 119905 1198731(119898 119899 119905) is thenumber of pixels with gray levels greater than mean(119898 119899 119905)119873minus1(119898 119899 119905) is the number of pixels with gray levels smallerthan mean(119898 119899 119905) From (20) difference image 119865119863(119898 119899 119905)

of two consecutive block images is obtained by

119865119863119903 (119898 119899 119905) = round(

1003816100381610038161003816119868119861 (119898 119899 119905) minus 119868119861 (119898 119899 119905 minus 1)1003816100381610038161003816

119865119863max (119905) 256)

(21)

where 119865119863(119898 119899 119905) is quantized image after rounded opera-tion 119865119863max is maximum value in 119865119863(119898 119899 119905) Using (21) theresultant difference is shown in Figures 7(a) and 7(b)

119865119863119903(119898 119899 119905) is filtered by 3 times 3median filter119865119863119891(119898 119899 119905)

is obtained by following formula

119865119863119891 (119898 119899 119905) = 1 119865119863 (119898 119899 119905) ge 119879 (119905)

0 otherwise(22)

where 119879(119905) = (Mean of all blocks in 119865119863119903(119898 119899 119905) at time 119905) +Positive weighting parameter lowast (Largest peak of histogram of119865119863119891(119898 119899 119905)-Largest peak of histogram of 119865119863119903(119898 119899 119905))Binary image 119865119863119887(119898 119899 119905) is obtained by the following con-dition depicted in Figure 4(a)

If 119865119863119891 (119898 119899 119905) = 1

Then 119865119863119887 (119898 119899 119905) larr997888 119865119863119891 (119898 119899 119905)

Otherwise 119865119863119891 (119898 119899 119905) = 0

(23)

119865119863119887(119898 119899 119905) may have discontinuous boundaries and holesTo ensure that 119865119863119887(119898 119899 119905) contains moving object edgebased morphological dilation is proposed here 119865119863119890(119898 119899 119905)

represents the edge image Edge(119909) that can be obtained bygradient operator like Sobel operator 119865119863119889(119898 119899 119905) is the edge

based morphological dilation of the image 119865119863119890(119898 119899 119905) isobtained by the following formula

119865119863119889 (119898 119899 119905)

= 119865119863119890 (119898 119899 119905) cup 119909 | 119909 = 119894 + 119895 119894 isin 119865119863119890 (119898 119899 119905)

119895 isin 119871 Edge (119909) = 0

(24)

where 119871 is the structuring element that contains elements 119894 119895Equation (24) is considered as edge based dilation that is

expected to avoid undesired regions to be integrated in theresult according to edge based morphological dilation char-acteristics as shown in Figure 8(d) Here 119909 | 119909 = 119894 + 119895 119894 isin

119865119863119890(119898 119899 119905) is the conventional dilation after obtaining119865119863119890(119898 119899 119905) which can be defined from 119865119863119887(119898 119899 119905) as119865119863119887(119898 119899 119905)oplus119871 = 119909 | 119909 = 119894+119895 119894 isin 119865119863119887(119898 119899 119905) 119895 isin 119871 shownin Figure 8(b)

This research proposed the following segmentationapproach segmentation using edged based dilation (SUED)algorithm for moving object extraction from UAV aerialimages after extracting frame from DMM approach

(1) Start(2) 119865119863119903(119898 119899 119905) larr Decomposed image between 2

frames 119868119861(119898 119899 119905) minus 119868119861(119898 119899 119905 minus 1) at 119905 and 119905 minus 1 time(3) 119865119863119891(119898 119899 119905) larr 119865119863119903(119898 119899 119905)

(4) 119865119863119887(119898 119899 119905) larr 119865119863119891(119898 119899 119905)

(5) 119865119863119890(119898 119899 119905) larr 119865119863119887(119898 119899 119905)

(6) 119865119863119889(119898 119899 119905) larr 119865119863119890(119898 119899 119905)

(7) End

4 Experiment and Discussion

For the experiment purpose this research used IMAGE PRO-CESSING LAB (IPLAB) available at httpwwwaforgenetwhich is embedded with Visual Studio 2012 using C sharpprogramming language Proposed experimental analysisevaluates dynamic motion modeling (DMM) first and thenexperimented the proposed SUED embedded with DMM toextract moving object from UAV aerial images

Let 119865119863119890(119898 119899 119905) be the labeled result of the SUEDsegmentation algorithm shown in Figure 8(e) Each regionin this image indicates coherence in intensity and motionMoving objects are discriminated from backgrounds by thefusion module that combines DMM (dynamic motion mod-ule) and SUED (segmentation using edge based dilation) Let

8 The Scientific World Journal

(a) (b)

(c) (d)

(e)Figure 8 (a) 119865119863119887(119898 119899 119905) (b) Conventional dilation 119865119863119887(119898 119899 119905) oplus 119871 = 119909 | 119909 = 119894 + 119895 119894 isin 119865119863119890(119898 119899 119905) 119895 isin 119871 (c) Analysis of edge baseddilation from 119865119863119890(119898 119899 119905) (d) Edge based dilation 119865119863119889(119898 119899 119905) = 119865119863119890(119898 119899 119905) cup 119909 | 119909 = 119894 + 119895 119894 isin 119865119863119890(119898 119899 119905) 119895 isin 119871Edge(119909) = 0 (e)Output after SUED 119865119863119890(119898 119899 119905)

SUED(119894 119905) be a SUED region in 119865119863(119898 119899 119905) and 119860 119904(119894 119905) thecorresponding area Let 119860119888(119894 119905) be the area of union of theSUED(119894 119905) and the coverage ratio 119875(119894 119905) is defined as

\[
P(i, t) = \frac{A_c(i, t)}{A_s(i, t)}. \tag{25}
\]

If the value of P(i, t) is greater than a given fusion threshold T_s, then SUED(i, t) is considered foreground; otherwise, it is background. In general, the threshold T_s varies for different sequences; in this research, T_s is always set to 0.99. This value is very close to 1 and does not need to be adjusted, as the obtained area can contain the complete desired object. There may be some smaller blobs in the resulting SUED(i, t); regions with areas smaller than the threshold T_s are considered noisy regions. Figure 8(e) shows the extracted moving object as the result of the fusion of the DMM and SUED methodology.
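The fusion test of (25) then reduces to a single comparison against T_s; a minimal helper with hypothetical names might look as follows.

```csharp
static class SuedFusion
{
    // Fusion check following (25): a SUED region is kept as foreground when its
    // coverage ratio P = Ac / As exceeds the fusion threshold Ts (0.99 in this work).
    public static bool IsForeground(double unionArea, double regionArea, double ts = 0.99)
    {
        if (regionArea <= 0) return false;          // degenerate blob, treat as noise
        double coverage = unionArea / regionArea;   // P(i, t) of (25)
        return coverage > ts;
    }
}
```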

4.1. Datasets. This research used two UAV video datasets (actions1.mpg and actions2.mpg) from the Center for Research in Computer Vision (CRCV) at the University of Central Florida (http://crcv.ucf.edu/data/UCF_Aerial_Action.php). These video datasets were obtained using an RC-controlled blimp equipped with an HD camera. The collection represents a diverse pool of action features at different heights and aerial viewpoints. Multiple instances of each action were recorded at different flying altitudes, ranging from 400 to 500 feet, and were performed by different actors.

4.2. Results. This research extracted 395 frames at a rate of 1 frame/second from the actions1.mpg video dataset and 529 frames at the same rate from the actions2.mpg video dataset. The frame size is 355 × 216. Figures 6(a) and 6(b) show two consecutive frames (the 101st and 102nd) from actions1.mpg after the DMM step. Figures 7(a) and 7(b) show pixel structures of FD_r(m, n, t). Figure 8(a) shows FD_b(m, n, t), Figure 8(b) shows the conventional dilation of FD_b(m, n, t), while Figures 8(c) and 8(d) show the edge based dilation. Finally, Figure 8(e) shows the result of the proposed DMM and SUED based moving object detection.

4.2.1. DMM Evaluation. Figures 5(a), 5(b), and 5(c) show three search windows, which are depicted as black rectangles on the selected frame FD(m, n) at time t.


Table 2: Moments measurement from different search windows for SUED.

Search window | Zeroth moment | First moment XY | Second moment X | Second moment Y
1 | 9.8E6 | 1.21E11 | 3.1E11 | 1.16E11
2 | 1.6E7 | 3.94E11 | 8.69E11 | 2.95E11
3 | 9.3E6 | 2.49E11 | 5.03E11 | 1.84E11

Figure 9: 3D line chart of the moments measurement (first moments XY, second moments X, and second moments Y) for search windows 1-3.

The brighter the moments value for the pixels, the higher the probability that the search window belongs to the moving object. The measured moments are shown in Table 2.

In Table 2, search window 2 shows the highest moment values among the three search windows, which indicates that search window 2 has the highest probability of containing the moving object. Figure 9 shows a 3D line chart depicting the highest pixel intensity probability in search window 2. This research therefore used search window 2, selected by the highest moments distribution, to extract the moving object using SUED, because it limits the scope of segmentation to a smaller area, reduces complexity, and reduces processing time. This means that the probability that the extraction is lost because of a similarly coloured object in the background is reduced. Furthermore, the limited area increases the processing speed, making the extraction faster.
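For illustration, the raw moments compared in Table 2 can be accumulated directly over a rectangular search window of the difference image. The sketch below assumes the window is given by its top-left corner and size, and that the "first XY" and "second" columns correspond to the moment orders (1,1), (2,0), and (0,2); this mapping is an interpretation, not a statement from the text.

```csharp
using System;

static class WindowMoments
{
    // Raw moment Mpq = sum over the window of m^p * n^q * FD(m, n), accumulated
    // over a rectangular search window of the frame difference image.
    public static double RawMoment(double[,] fd, int top, int left,
                                   int height, int width, int p, int q)
    {
        double sum = 0;
        for (int m = top; m < top + height; m++)
            for (int n = left; n < left + width; n++)
                sum += Math.Pow(m, p) * Math.Pow(n, q) * fd[m, n];
        return sum;
    }
}
```

Evaluating these sums for each candidate window and keeping the window with the largest values reproduces the selection of search window 2 described above.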

4.2.2. SUED Evaluation. This section presents the experimental analysis and results for the proposed SUED algorithm. The proposed approach was evaluated on the actions1.mpg and actions2.mpg videos. In order to evaluate the SUED algorithm, two metrics, the detection rate (DR) and the false alarm rate (FAR), are defined. These metrics are based on the following parameters:

(i) True positive (TP): detected regions that correspond to a moving object.

(ii) False positive (FP): detected regions that do not correspond to a moving object.

(iii) False negative (FN): moving objects not detected.

(iv) Detection rate or precision rate: DR = (TP/(TP + FN)) × 100.

(v) False alarm rate or recall rate: FAR = (FP/(TP + FP)) × 100.
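These two rates follow directly from the counts; a minimal helper is sketched below. Applied to the actions2.mpg counts reported in Table 3 (TP = 320, FP = 113, FN = 83), it gives DR of about 79% and FAR of about 26%.

```csharp
static class DetectionMetrics
{
    // DR = TP / (TP + FN) * 100 and FAR = FP / (TP + FP) * 100, as defined above.
    public static double DetectionRate(int tp, int fn)
    {
        return 100.0 * tp / (tp + fn);
    }

    public static double FalseAlarmRate(int tp, int fp)
    {
        return 100.0 * fp / (tp + fp);
    }
}
```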

From the dataset actions1.mpg, this research extracted 395 frames at 1 frame per second, and from actions2.mpg, 529 frames at the same frame rate. Details of the measured true positives (TP), false positives (FP), false negatives (FN), detection rate (DR), and false alarm rate (FAR) are given in Table 3.

The detection rate increases as the number of input frames increases. The detection rate for the given total frames of the two video datasets is displayed in Figure 10.

The false alarm rate for the given numbers of frames from the two video datasets is given in Figure 11.

The recall and precision rate characteristic (RPC) of the proposed approach is given in Figure 12.

This research measures the detection and false alarm rates based on the number of frames extracted from each input video dataset. Compared with [1, 15, 36, 40], which used a frame difference approach without a moments feature for motion modeling, this research achieved a detection rate of 79% (for the actions2.mpg video dataset), which is a good indication for handling the motion of moving objects in future research. Thus, the proposed DMM model helps to reduce the segmentation task by providing the highest probability area to be segmented using the proposed SUED, instead of segmenting the whole area of the given frame, thereby increasing the processing speed.


Table 3: Details of measurement of true positive (TP), false positive (FP), false negative (FN), detection rate (DR), and false alarm rate (FAR).

Datasets | Number of frames | TP | FP | FN | DR (%) | FAR (%)
actions1.mpg | 395 | 200 | 100 | 75 | 75 | 31
actions2.mpg | 529 | 320 | 113 | 83 | 79 | 26

Figure 10: Detection rate or precision rate for the given numbers of frames (15, 74, and 79 for 100, 395, and 527 frames, respectively).

Figure 11: False alarm rate or recall rate for the given numbers of frames (15, 31, and 26 for 100, 395, and 527 frames, respectively).

Figure 12: RPC characterization of the proposed approach (precision rates of 15, 74, and 79 against recall rates of 5, 32, and 26).


5. Conclusion

The primary purpose of this research is to apply a moments based dynamic motion model within the proposed frame difference based segmentation approach, which ensures robust handling of motion, since translation invariant, scale invariant, and rotation invariant moments values are unique. As computer vision leverages probability theory, this research used moments based motion analysis, which provides search windows around the original object and limits the scope of SUED segmentation to a smaller area. This means that the probability that the extraction is lost because of a similarly coloured object in the background is reduced. Since moments are a unique distribution of pixel intensity, the experimental results of the proposed DMM and SUED are very promising for robust extraction of moving objects from UAV aerial images. Judging from previous research in the computer vision field, it is certain that the proposed research will facilitate UAV operators and related researchers in further research and investigation in areas where access is restricted, rescue areas, human or vehicle identification in specific areas, crowd flux statistics, anomaly detection, intelligent traffic management, and so forth.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by the Ministry of Higher Education Malaysia research grant schemes FRGS12012SG05UKM0212 and ERGS12012STG07UKM027.

References

[1] H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, "Vehicle detection in aerial surveillance using dynamic Bayesian networks," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152-2159, 2012.
[2] V. Reilly, B. Solmaz, and M. Shah, "Shadow casting out of plane (SCOOP) candidates for human and vehicle detection in aerial imagery," International Journal of Computer Vision, vol. 101, no. 2, pp. 350-366, 2013.
[3] Y. Yingqian, L. Fuqiang, W. Ping, L. Pingting, and L. Xiaofeng, "Vehicle detection methods from an unmanned aerial vehicle platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '12), pp. 411-415, 2012.
[4] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15-22, 2012.
[5] Z. Zezhong, W. Xiaoting, Z. Guoqing, and J. Ling, "Vehicle detection based on morphology from highway aerial images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 5997-6000, 2012.
[6] T. Moranduzzo and F. Melgani, "A SIFT-SVM method for detecting cars in UAV images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 6868-6871, 2012.
[7] S. A. Cheraghi and U. U. Sheikh, "Moving object detection using image registration for a moving camera platform," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '12), pp. 355-359, 2012.
[8] H. Bhaskar, "Integrated human target detection, identification and tracking for surveillance applications," in Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS '12), pp. 467-475, 2012.
[9] C. Long, J. Zhiguo, Y. Junli, and M. Yibing, "A coarse-to-fine approach for vehicles detection from aerial images," in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 221-225, 2012.
[10] L. Pingting, L. Fuqiang, L. Xiaofeng, and Y. Yingqian, "Stationary vehicle detection in aerial surveillance with a UAV," in Proceedings of the 8th International Conference on Information Science and Digital Content Technology (ICIDT '12), pp. 567-570, 2012.
[11] A. Kembhavi, D. Harwood, and L. S. Davis, "Vehicle detection using partial least squares," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1250-1265, 2011.
[12] J. Maier and K. Ambrosch, "Distortion compensation for movement detection based on dense optical flow," in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin et al., Eds., pp. 168-179, Springer, Berlin, Germany, 2011.
[13] M. Tarhan and E. Altug, "A catadioptric and pan-tilt-zoom camera pair object tracking system for UAVs," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 61, no. 1-4, pp. 119-134, 2011.
[14] K. Bhuvaneswari and H. A. Rauf, "Edgelet based human detection and tracking by combined segmentation and soft decision," in Proceedings of the International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), pp. 1-6, June 2009.
[15] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, 2009.
[16] Z. Jiang, W. Ding, and H. Li, "Aerial video image object detection and tracing based on motion vector compensation and statistic analysis," in Proceedings of the 1st Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics (PrimeAsia '09), pp. 302-305, November 2009.
[17] Y.-C. Chung and Z. He, "Low-complexity and reliable moving objects detection and tracking for aerial video surveillance with small UAVs," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '07), pp. 2670-2673, May 2007.
[18] D. Wei, G. Zhenbang, X. Shaorong, and Z. Hairong, "Real-time vision-based object tracking from a moving platform in the air," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 681-685, October 2006.
[19] S. Zhang, "Object tracking in Unmanned Aerial Vehicle (UAV) videos using a combined approach," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), pp. 681-684, March 2005.
[20] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15-22, 2012.
[21] I. H. Mtir, K. Kaaniche, M. Chtourou, and P. Vasseur, "Aerial sequence registration for vehicle detection," in Proceedings of the 9th International Multi-Conference on Systems, Signals and Devices (SSD '12), pp. 1-6, 2012.
[22] S. Mohamed and K. Takaya, "Motion vector search for 2.5D modeling of moving objects in a video scene," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265-268, May 2005.
[23] R. Li, S. Yu, and X. Yang, "Efficient spatio-temporal segmentation for extracting moving objects in video sequences," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1161-1167, 2007.
[24] Y. Zhou, Y. Liu, and J. Zhang, "A video semantic object extraction method based on motion feature and visual attention," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 845-849, October 2010.
[25] T. Dutta, D. P. Dogra, and B. Jana, "Object extraction using novel region merging and multidimensional features," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 356-361, November 2010.
[26] J. Pan, S. Li, and Y.-Q. Zhang, "Automatic extraction of moving objects using multiple features and multiple frames," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '00), pp. 36-39, Geneva, Switzerland, May 2000.
[27] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Real time vision based object detection from UAV aerial images: a conceptual framework," in Intelligent Robotics Systems: Inspiring the NEXT, K. Omar, M. Nordin, P. Vadakkepat et al., Eds., vol. 376, pp. 265-274, Springer, Berlin, Germany, 2013.
[28] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive motion pattern analysis for machine vision based moving detection from UAV aerial images," in Advances in Visual Informatics, H. Zaman, P. Robinson, P. Olivier, T. Shih, and S. Velastin, Eds., pp. 104-114, Springer, 2013.
[29] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "A review of machine vision based on moving objects: object detection from UAV aerial images," International Journal of Advanced Computer Technology, vol. 5, no. 15, pp. 57-72, 2013.
[30] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive long term motion pattern analysis for moving object detection using UAV aerial images," International Journal of Information System and Engineering, vol. 1, no. 1, pp. 50-59, 2013.
[31] H. H. Trihaminto, A. S. Prabuwono, A. F. M. S. Saif, and T. B. Adji, "A conceptual framework: dynamic path planning system for simultaneous localization and mapping multirotor UAV," Advanced Science Letters, vol. 20, no. 1, pp. 299-303, 2014.
[32] M. Sayyouri, A. Hmimid, and H. Qjidaa, "Improving the performance of image classification by Hahn moment invariants," Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 30, no. 11, pp. 2381-2394, 2013.
[33] R. Ozawa and F. Chaumette, "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach," Advanced Robotics, vol. 27, no. 9, pp. 683-696, 2013.
[34] G. Jia, X. Wang, H. Wei, and Z. Zhang, "Modeling image motion in airborne three-line-array (TLA) push-broom cameras," Photogrammetric Engineering and Remote Sensing, vol. 79, no. 1, pp. 67-78, 2013.
[35] J. Guo, D. Gu, S. Li, and H. Chang, "Moving object detection using motion enhancement and color distribution comparison," Journal of Beijing University of Aeronautics and Astronautics, vol. 38, no. 2, pp. 263-272, 2012.
[36] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2065-2070, 2011.
[37] A. Gszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, Proceedings of SPIE, January 2011.
[38] M. Besita Augustin, S. Juliet, and S. Palanikumar, "Motion and feature based person tracking in surveillance videos," in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 605-609, March 2011.
[39] S. Wang, "Vehicle detection on aerial images by extracting corner features for rotational invariant shape matching," in Proceedings of the 11th IEEE International Conference on Computer and Information Technology, pp. 171-175, September 2011.
[40] X. Wang, C. Liu, L. Gong, L. Long, and S. Gong, "Pedestrian recognition based on saliency detection and Kalman filter algorithm in aerial video," in Proceedings of the 7th International Conference on Computational Intelligence and Security (CIS '11), pp. 1188-1192, December 2011.
[41] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, "A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera," Remote Sensing, vol. 4, no. 4, pp. 1090-1111, 2012.
[42] C.-H. Huang, Y.-T. Wu, and J.-H. Kao, "A hybrid moving object detection method for aerial images," in Advances in Multimedia Information Processing - PCM 2010, G. Qiu, K. Lam, and H. Kiya, Eds., pp. 357-368, Springer, Berlin, Germany, 2010.
[43] A. W. N. Ibrahim, P. W. Ching, G. L. Gerald Seet, W. S. Michael Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 456-461, November 2010.
[44] O. Oreifej, R. Mehran, and M. Shah, "Human identity recognition in aerial images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 709-716, June 2010.
[45] C. Wu, X. Cao, R. Lin, and F. Wang, "Registration-based moving vehicle detection for low-altitude urban traffic surveillance," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 373-378, July 2010.
[46] Q. Yu and G. Medioni, "Motion pattern interpretation and detection for tracking moving vehicles in airborne video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2671-2678, June 2009.
[47] M. Teutsch and W. Kruger, "Spatio-temporal fusion of object segmentation approaches for moving distant targets," in Proceedings of the 15th International Conference on Information Fusion (FUSION '12), pp. 1988-1995, 2012.
[48] C. Guilmart, S. Herbin, and P. Perez, "Context-driven moving object detection in aerial scenes with user input," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1781-1784, September 2011.
[49] M. Teutsch and W. Kruger, "Detection, segmentation, and tracking of moving objects in UAV videos," in Proceedings of the IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance (AVSS '12), pp. 313-318, 2012.
[50] R. B. Rodrigues, S. Pellegrino, and H. Pistori, "Combining color and haar wavelet responses for aerial image classification," in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., pp. 583-591, Berlin, Germany, 2012.
[51] S. G. Fowers, D. J. Lee, D. A. Ventura, and J. K. Archibald, "The nature-inspired BASIS feature descriptor for UAV imagery and its hardware implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756-768, 2013.
[52] J. Lu, P. Fang, and Y. Tian, "An objects detection framework in UAV videos," in Advances in Computer Science and Education Applications, M. Zhou and H. Tan, Eds., pp. 113-119, Springer, Berlin, Germany, 2011.
[53] A. Camargo, Q. He, and K. Palaniappan, "Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 8355 of Proceedings of SPIE, 2012.
[54] M. Teutsch, W. Kruger, and N. Heinze, "Detection and classification of moving objects from UAVs with optical sensors," in Signal Processing, Sensor Fusion, and Target Recognition, vol. 8050 of Proceedings of SPIE, April 2011.
[55] L. L. Coulter, D. A. Stow, and Y. H. Tsai, "Automated detection of people and vehicles in natural environments using high temporal resolution airborne remote sensing," in Proceedings of the ASPRS Annual Conference, pp. 78-90, 2012.
[56] L. Wang, H. Zhao, S. Guo, Y. Mai, and S. Liu, "The adaptive compensation algorithm for small UAV image stabilization," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 4391-4394, 2012.
[57] M. Bhaskaranand and J. D. Gibson, "Low-complexity video encoding for UAV reconnaissance and surveillance," in Proceedings of the IEEE Military Communications Conference (MILCOM '11), pp. 1633-1638, November 2011.
[58] X. Tan, X. Yu, J. Liu, and W. Huang, "Object tracking based on unmanned aerial vehicle video," in Applied Informatics and Communication, D. Zeng, Ed., pp. 432-438, Springer, Berlin, Germany, 2011.
[59] Z. Lin, C. Ren, N. Yao, and F. Xie, "A shadow compensation method for aerial image," Geomatics and Information Science of Wuhan University, vol. 38, no. 4, pp. 431-435, 2013.
[60] K. Imamura, N. Kubo, and H. Hashimoto, "Automatic moving object extraction using X-means clustering," in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 246-249, December 2010.
[61] N. Suo and X. Qian, "A comparison of features extraction method for HMM-based motion recognition," in Proceedings of the 2nd Conference on Environmental Science and Information Application Technology (ESIAT '10), pp. 636-639, July 2010.
[62] Z. Shoujuan and Z. Quan, "New feature extraction algorithm for satellite image non-linear small objects," in Proceedings of the IEEE Symposium on Electrical & Electronics Engineering (EEESYM '12), pp. 626-629, 2012.
[63] X. F. Wu, X. S. Wang, and H. Z. Lu, "Motion feature extraction for stepped frequency radar based on Hough transform," Radar, Sonar & Navigation, IET, vol. 4, no. 1, pp. 17-27, 2010.
[64] K. Huang, Y. Zhang, and T. Tan, "A discriminative model of motion and cross ratio for view-invariant action recognition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2187-2197, 2012.
[65] R. Porter, A. M. Fraser, and D. Hush, "Wide-area motion imagery," IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 56-65, 2010.




[61] N Suo and X Qian ldquoA comparison of features extractionmethod for HMM-based motion recognitionrdquo in Proceedings ofthe 2nd Conference on Environmental Science and InformationApplication Technology (ESIAT rsquo10) pp 636ndash639 July 2010

[62] Z Shoujuan andZQuan ldquoNew feature extraction algorithm forsatellite image non-linear small objectsrdquo in Proceedings of theIEEE Symposium on Electrical amp Electronics Engineering(EEESYM rsquo12) pp 626ndash629 2012

[63] X F Wu X S Wang and H Z Lu ldquoMotion feature extractionfor stepped frequency radar based onHough transformrdquoRadarSonar amp Navigation IET vol 4 no 1 pp 17ndash27 2010

[64] K Huang Y Zhang and T Tan ldquoA discriminative model ofmotion and cross ratio for view-invariant action recognitionrdquoIEEE Transactions on Image Processing vol 21 no 4 pp 2187ndash2197 2012

[65] R Porter A M Fraser and D Hush ldquoWide-area motion imag-eryrdquo IEEE Signal Processing Magazine vol 27 no 5 pp 56ndash652010

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 6: Moving Object Detection Using Dynamic Motion Modelling from UAV

Figure 5: (a) Search window 1 for FD(m, n) at time t. (b) Search window 2 for FD(m, n) at time t. (c) Search window 3 for FD(m, n) at time t.

Figure 6: (a) I_B(m, n, t) at frame 101. (b) I_B(m, n, t) at frame 102. (c) I_B(m, n, t) decomposition into the M × N block grid.

Here θ in (19) is the rotation-invariant central moment measurement of FD(m, n).

The output of the calculated moments proposed in this research is depicted in Figures 5(a)-5(c). Within a search window, the higher the moment values (that is, the first moments XY), the higher the probability that the search window contains the moving object.
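As a concrete illustration of this window ranking, the following minimal C# sketch computes the zeroth, XY, and second moments of a candidate search window and selects the window with the largest XY moment. The exact moment orders are inferred from the column names of Table 2, and the class and method names are illustrative; the paper's own C#/IPLab implementation is not reproduced here.

```csharp
// Sketch of the moment measurements used to rank DMM search windows
// (see Table 2): zeroth moment, "first moment XY", and second moments
// X and Y, computed over the pixel intensities of a candidate window.
static class WindowMoments
{
    // window: grayscale intensities of one search window of FD(m, n).
    public static (double m00, double m11, double m20, double m02) Compute(double[,] window)
    {
        double m00 = 0, m11 = 0, m20 = 0, m02 = 0;
        for (int y = 0; y < window.GetLength(0); y++)
            for (int x = 0; x < window.GetLength(1); x++)
            {
                double v = window[y, x];
                m00 += v;               // zeroth moment: total intensity
                m11 += x * y * v;       // "first moment XY" (product moment)
                m20 += x * x * v;       // second moment X
                m02 += y * y * v;       // second moment Y
            }
        return (m00, m11, m20, m02);
    }

    // The window with the largest XY moment is taken as the most likely
    // region to contain the moving object (search window 2 in Table 2).
    public static int SelectWindow(double[][,] windows)
    {
        int best = 0; double bestM11 = double.MinValue;
        for (int i = 0; i < windows.Length; i++)
        {
            double m11 = Compute(windows[i]).m11;
            if (m11 > bestM11) { bestM11 = m11; best = i; }
        }
        return best;
    }
}
```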

Before applying the newly proposed SUED algorithm, the DMM model projects the input frame onto the search window (the black box) only, to reduce the risk of extraction failure. This research emphasizes that using a search window around the original object is very important: it limits the scope of segmentation to a smaller area. This means that the probability that the extraction is lost because of a similarly coloured object in the background is reduced. Furthermore, the limited area increases the processing speed, making the extraction very fast.

3.2. SUED Segmentation Algorithm. This research uses morphological dilation to ensure that the extracted moving region contains the moving objects. The output of morphological dilation is an edged image. First, each frame is decomposed into M × N uniform and nonoverlapping blocks of size B × B, as shown in Figure 6(c). Figures 6(a) and 6(b) show the original images at frames 101 and 102 obtained after the DMM approach.

Figure 7: (a) Difference pixel structure of FD_r(m, n, t) using (2). (b) Difference image FD_r(m, n, t) using (21).

Let I(x, y, t) be the original frame at time t in a video sequence, where (x, y) denotes a pixel position in the original frame, and let I_B(m, n, t) be the corresponding decomposed image, where (m, n) denotes the block position of the highest feature density area in the decomposed image, which ensures robustness to noise while the feature makes each block sensitive to motion. I_B(m, n, t) is defined by the following equation:

\[
I_B(m, n, t) = \operatorname{mean}(m, n, t) + \underbrace{\frac{\alpha}{\beta^{2}}\left(N_{1}(m, n, t) - N_{-1}(m, n, t)\right)}_{\text{densed feature}} \qquad (20)
\]

where (m, n) is the feature-densed block, α is a random constant smaller than 1, mean(m, n, t) is the mean gray level of all pixels within block (m, n) at frame t, N_1(m, n, t) is the number of pixels with gray levels greater than mean(m, n, t), and N_{-1}(m, n, t) is the number of pixels with gray levels smaller than mean(m, n, t). From (20), the difference image FD_r(m, n, t) of two consecutive block images is obtained by

\[
FD_r(m, n, t) = \operatorname{round}\left(\frac{\left|I_B(m, n, t) - I_B(m, n, t-1)\right|}{FD_{\max}(t)} \times 256\right) \qquad (21)
\]

where FD_r(m, n, t) is the quantized image after the rounding operation and FD_max(t) is the maximum value of the block difference at time t. Using (21), the resultant difference image is shown in Figures 7(a) and 7(b).
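A minimal sketch of (20) and (21) follows, assuming grayscale frames stored as double[,] arrays. The blockSize, alpha, and beta parameters are illustrative inputs, since the paper does not report their values, and the class and method names are not the paper's own code.

```csharp
using System;

// Sketch of the block feature I_B(m, n, t) of (20) and the quantized
// block difference FD_r(m, n, t) of (21).
static class BlockDifference
{
    public static double[,] BlockFeature(double[,] frame, int blockSize, double alpha, double beta)
    {
        int rows = frame.GetLength(0) / blockSize, cols = frame.GetLength(1) / blockSize;
        var ib = new double[rows, cols];
        for (int m = 0; m < rows; m++)
            for (int n = 0; n < cols; n++)
            {
                double sum = 0; int count = blockSize * blockSize;
                for (int y = 0; y < blockSize; y++)
                    for (int x = 0; x < blockSize; x++)
                        sum += frame[m * blockSize + y, n * blockSize + x];
                double mean = sum / count;
                int n1 = 0, nMinus1 = 0;
                for (int y = 0; y < blockSize; y++)
                    for (int x = 0; x < blockSize; x++)
                    {
                        double v = frame[m * blockSize + y, n * blockSize + x];
                        if (v > mean) n1++; else if (v < mean) nMinus1++;
                    }
                // Equation (20): block mean plus the "densed feature" term.
                ib[m, n] = mean + alpha / (beta * beta) * (n1 - nMinus1);
            }
        return ib;
    }

    // Equation (21): quantized absolute difference of two consecutive block images.
    public static double[,] Difference(double[,] ibCurrent, double[,] ibPrevious)
    {
        int rows = ibCurrent.GetLength(0), cols = ibCurrent.GetLength(1);
        var diff = new double[rows, cols];
        double max = double.Epsilon;                       // guards the division below
        for (int m = 0; m < rows; m++)
            for (int n = 0; n < cols; n++)
            {
                diff[m, n] = Math.Abs(ibCurrent[m, n] - ibPrevious[m, n]);
                if (diff[m, n] > max) max = diff[m, n];    // FD_max(t)
            }
        for (int m = 0; m < rows; m++)
            for (int n = 0; n < cols; n++)
                diff[m, n] = Math.Round(diff[m, n] / max * 256.0);
        return diff;
    }
}
```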

FD_r(m, n, t) is then filtered by a 3 × 3 median filter, and FD_f(m, n, t) is obtained by the following formula:

\[
FD_f(m, n, t) = \begin{cases} 1, & FD_r(m, n, t) \ge T(t), \\ 0, & \text{otherwise}, \end{cases} \qquad (22)
\]

where T(t) = (mean of all blocks in FD_r(m, n, t) at time t) + positive weighting parameter × (largest peak of the histogram of FD_f(m, n, t) − largest peak of the histogram of FD_r(m, n, t)). The binary image FD_b(m, n, t) is obtained by the following condition, depicted in Figure 4(a):

\[
\text{If } FD_f(m, n, t) = 1 \text{ then } FD_b(m, n, t) \leftarrow FD_f(m, n, t); \text{ otherwise } FD_b(m, n, t) = 0. \qquad (23)
\]
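A sketch of the filtering and thresholding of (22)-(23) is given below. Because T(t) as printed references the histogram peak of FD_f, which is the image being computed, the sketch substitutes the peak of the median-filtered difference image; that substitution and the value of the positive weighting parameter are assumptions, not the paper's code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the 3 x 3 median filtering and the adaptive threshold T(t)
// used in (22)-(23) to produce the binary block image FD_b.
static class AdaptiveThreshold
{
    public static double[,] Median3x3(double[,] img)
    {
        int rows = img.GetLength(0), cols = img.GetLength(1);
        var outImg = new double[rows, cols];
        var window = new List<double>(9);
        for (int m = 0; m < rows; m++)
            for (int n = 0; n < cols; n++)
            {
                window.Clear();
                for (int dm = -1; dm <= 1; dm++)
                    for (int dn = -1; dn <= 1; dn++)
                    {
                        int r = Math.Min(Math.Max(m + dm, 0), rows - 1);   // clamp at borders
                        int c = Math.Min(Math.Max(n + dn, 0), cols - 1);
                        window.Add(img[r, c]);
                    }
                window.Sort();
                outImg[m, n] = window[4];   // median of the 9 neighbourhood samples
            }
        return outImg;
    }

    // Largest histogram peak: the most frequent rounded gray level.
    static double LargestPeak(double[,] img) =>
        img.Cast<double>().GroupBy(v => Math.Round(v))
           .OrderByDescending(g => g.Count()).First().Key;

    // Equations (22)-(23): threshold the filtered difference image against T(t).
    public static int[,] BinarizeDifference(double[,] fdr, double weight)
    {
        double[,] filtered = Median3x3(fdr);
        double t = fdr.Cast<double>().Average()
                 + weight * (LargestPeak(filtered) - LargestPeak(fdr));
        int rows = fdr.GetLength(0), cols = fdr.GetLength(1);
        var fdb = new int[rows, cols];
        for (int m = 0; m < rows; m++)
            for (int n = 0; n < cols; n++)
                fdb[m, n] = filtered[m, n] >= t ? 1 : 0;
        return fdb;
    }
}
```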

FD_b(m, n, t) may have discontinuous boundaries and holes. To ensure that FD_b(m, n, t) contains the moving object, edge based morphological dilation is proposed here. FD_e(m, n, t) represents the edge image Edge(x), which can be obtained by a gradient operator such as the Sobel operator. FD_d(m, n, t), the edge based morphological dilation of the image FD_e(m, n, t), is obtained by the following formula:

\[
FD_d(m, n, t) = FD_e(m, n, t) \cup \{\, x \mid x = i + j,\ i \in FD_e(m, n, t),\ j \in L,\ \operatorname{Edge}(x) = 0 \,\} \qquad (24)
\]

where L is the structuring element that contains the elements i, j. Equation (24) is considered as edge based dilation, which is expected to prevent undesired regions from being integrated in the result, according to the characteristics of edge based morphological dilation shown in Figure 8(d). Here {x | x = i + j, i ∈ FD_e(m, n, t), j ∈ L} is the conventional dilation after obtaining FD_e(m, n, t), which can be defined from FD_b(m, n, t) as FD_b(m, n, t) ⊕ L = {x | x = i + j, i ∈ FD_b(m, n, t), j ∈ L}, shown in Figure 8(b).
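The edge based dilation of (24) can be sketched as below, implementing the Edge(x) = 0 membership test exactly as printed and assuming a 3 × 3 structuring element L, which the paper does not specify.

```csharp
// Sketch of the edge based dilation of (24). fde is the edge image FD_e and
// edge supplies the Edge(x) test; a dilated pixel x = i + j is accepted only
// when Edge(x) = 0, as (24) is printed.
static class EdgeBasedDilation
{
    public static bool[,] Dilate(bool[,] fde, bool[,] edge)
    {
        int rows = fde.GetLength(0), cols = fde.GetLength(1);
        var fdd = (bool[,])fde.Clone();                // FD_d starts from FD_e
        for (int m = 0; m < rows; m++)
            for (int n = 0; n < cols; n++)
            {
                if (!fde[m, n]) continue;              // i in FD_e(m, n, t)
                for (int dm = -1; dm <= 1; dm++)       // j in L (3 x 3 neighbourhood)
                    for (int dn = -1; dn <= 1; dn++)
                    {
                        int r = m + dm, c = n + dn;
                        if (r < 0 || r >= rows || c < 0 || c >= cols) continue;
                        if (!edge[r, c]) fdd[r, c] = true;   // Edge(x) = 0 condition of (24)
                    }
            }
        return fdd;
    }
}
```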

This research proposes the following segmentation using edge based dilation (SUED) algorithm for moving object extraction from UAV aerial images, applied after extracting the frame from the DMM approach (a pipeline sketch is given after the listed steps):

(1) Start.
(2) FD_r(m, n, t) ← decomposed difference image between the two frames I_B(m, n, t) and I_B(m, n, t − 1) at times t and t − 1.
(3) FD_f(m, n, t) ← FD_r(m, n, t).
(4) FD_b(m, n, t) ← FD_f(m, n, t).
(5) FD_e(m, n, t) ← FD_b(m, n, t).
(6) FD_d(m, n, t) ← FD_e(m, n, t).
(7) End.
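The sketch below chains steps (1)-(7) using the helper classes sketched earlier in this section (BlockDifference, AdaptiveThreshold, EdgeBasedDilation). The simple boundary-block edge map stands in for the Sobel operator and is an assumption, not the paper's implementation.

```csharp
// Sketch of the SUED steps (1)-(7) chained end to end.
static class SuedPipeline
{
    public static bool[,] Run(double[,] framePrev, double[,] frameCurr,
                              int blockSize, double alpha, double beta, double weight)
    {
        // Step (2): block difference FD_r between I_B(t) and I_B(t - 1).
        var ibPrev = BlockDifference.BlockFeature(framePrev, blockSize, alpha, beta);
        var ibCurr = BlockDifference.BlockFeature(frameCurr, blockSize, alpha, beta);
        var fdr = BlockDifference.Difference(ibCurr, ibPrev);

        // Steps (3)-(4): median filtering, adaptive threshold, binary image FD_b.
        int[,] fdb = AdaptiveThreshold.BinarizeDifference(fdr, weight);

        // Step (5): edge image FD_e; boundary blocks of FD_b stand in for Sobel edges.
        int rows = fdb.GetLength(0), cols = fdb.GetLength(1);
        var fde = new bool[rows, cols];
        for (int m = 0; m < rows; m++)
            for (int n = 0; n < cols; n++)
            {
                bool interior = m > 0 && m < rows - 1 && n > 0 && n < cols - 1
                    && fdb[m - 1, n] == 1 && fdb[m + 1, n] == 1
                    && fdb[m, n - 1] == 1 && fdb[m, n + 1] == 1;
                fde[m, n] = fdb[m, n] == 1 && !interior;
            }

        // Step (6): edge based dilation FD_d of (24); FD_e supplies both the
        // set being dilated and the Edge(x) test.
        return EdgeBasedDilation.Dilate(fde, fde);
    }
}
```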

4. Experiment and Discussion

For the experiments, this research used the IMAGE PROCESSING LAB (IPLAB) library, available at http://www.aforgenet.com/, embedded with Visual Studio 2012 and the C# programming language. The proposed experimental analysis first evaluates dynamic motion modelling (DMM) and then evaluates the proposed SUED embedded with DMM to extract moving objects from UAV aerial images.

Figure 8: (a) FD_b(m, n, t). (b) Conventional dilation FD_b(m, n, t) ⊕ L = {x | x = i + j, i ∈ FD_b(m, n, t), j ∈ L}. (c) Analysis of edge based dilation from FD_e(m, n, t). (d) Edge based dilation FD_d(m, n, t) = FD_e(m, n, t) ∪ {x | x = i + j, i ∈ FD_e(m, n, t), j ∈ L, Edge(x) = 0}. (e) Output after SUED, FD_e(m, n, t).

Let FD_e(m, n, t) be the labeled result of the SUED segmentation algorithm, shown in Figure 8(e). Each region in this image indicates coherence in intensity and motion. Moving objects are discriminated from the background by the fusion module that combines DMM (dynamic motion model) and SUED (segmentation using edge based dilation). Let SUED(i, t) be a SUED region in FD(m, n, t) and A_s(i, t) its corresponding area, and let A_c(i, t) be the area of the union of SUED(i, t). The coverage ratio P(i, t) is defined as

\[
P(i, t) = \frac{A_c(i, t)}{A_s(i, t)}. \qquad (25)
\]

If the value of P(i, t) is greater than a given fusion threshold T_s, then SUED(i, t) is considered as foreground; otherwise it is background. In general, the threshold T_s varies for different sequences. In this research, the threshold T_s is always set to 0.99; it is very close to 1 and does not need to be adjusted, as the obtained area can contain the complete desired object. There may be some smaller blobs in the resulting SUED(i, t); those regions with areas smaller than the threshold T_s are considered as noisy regions. Figure 8(e) shows the extracted moving object as the result of the fusion of the DMM and SUED methodology.
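The fusion rule around (25) reduces to a single comparison against T_s = 0.99. A minimal sketch, with the region areas assumed to be precomputed elsewhere, is:

```csharp
// Sketch of the DMM/SUED fusion rule of (25): a SUED region is kept as
// foreground when its coverage ratio exceeds the fusion threshold T_s.
static class RegionFusion
{
    public static bool IsForeground(double areaUnion, double areaRegion, double ts = 0.99)
    {
        if (areaRegion <= 0) return false;   // degenerate region: treat as background
        double p = areaUnion / areaRegion;   // coverage ratio P(i, t) of (25)
        return p > ts;                       // foreground if P(i, t) > T_s
    }
}
```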

4.1. Datasets. This research used two UAV video datasets (actions1.mpg and actions2.mpg) from the Center for Research in Computer Vision (CRCV) at the University of Central Florida (http://crcv.ucf.edu/data/UCF_Aerial_Action.php). These video datasets were obtained using an RC-controlled blimp equipped with an HD camera. The collection represents a diverse pool of actions featured at different heights and aerial viewpoints. Multiple instances of each action were recorded at different flying altitudes, which ranged from 400 to 500 feet, and were performed by different actors.

4.2. Results. This research extracted 395 frames at a rate of 1 frame/second from the actions1.mpg video dataset and 529 frames at the same rate from the actions2.mpg video dataset. The frame size is 355 × 216. Figures 6(a) and 6(b) show two consecutive frames (the 101st and 102nd frames) after the DMM step from actions1.mpg. Figures 7(a) and 7(b) show the pixel structure of FD_r(m, n, t). Figure 8(a) shows FD_b(m, n, t), and Figure 8(b) shows the conventional dilation of FD_b(m, n, t), while Figures 8(c) and 8(d) show the edge based dilation. Finally, Figure 8(e) shows the result of the proposed DMM and SUED based moving object detection.

4.2.1. DMM Evaluation. Figures 1(a), 1(b), and 1(c) show three search windows, which are depicted as the black rectangle on the selected frame FD(m, n) at time t. The brighter the moment values of the pixels, the higher the probability that the search window belongs to the moving object. The measured moments are shown in Table 2.

Table 2: Moments measurement from different search windows for SUED.

Search window   Zeroth moment   First moment XY   Second moment X   Second moment Y
1               9.8E6           1.21E11           3.1E11            1.16E11
2               1.6E7           3.94E11           8.69E11           2.95E11
3               9.3E6           2.49E11           5.03E11           1.84E11

Figure 9: 3D line chart for the moments measurement of the three search windows.

In Table 2, search window 2 shows the highest moment quantities among the three search windows, which indicates that search window 2 has the highest probability of containing the moving object. Figure 9 shows a 3D line chart depicting the highest pixel intensity probability in search window 2. This research therefore used search window 2, based on the highest moment distribution among the search windows, to extract the moving object using SUED, because it limits the scope of segmentation to a smaller area, reduces complexity, and reduces time. This means that the probability that the extraction is lost because of a similarly coloured object in the background is reduced. Furthermore, the limited area increases the processing speed, making the extraction faster.

4.2.2. SUED Evaluation. This section presents the experimental analysis and results for the proposed SUED algorithm. The evaluation of the proposed approach was tested on the actions1.mpg and actions2.mpg videos. In order to evaluate the SUED algorithm, two metrics, the detection rate (DR) and the false alarm rate (FAR), are defined. These metrics are based on the following parameters:

(i) True positive (TP): detected regions that correspond to a moving object.
(ii) False positive (FP): detected regions that do not correspond to a moving object.
(iii) False negative (FN): moving objects that are not detected.
(iv) Detection rate or precision rate: DR = (TP / (TP + FN)) × 100.
(v) False alarm rate or recall rate: FAR = (FP / (TP + FP)) × 100.

From the dataset actions1.mpg this research extracted 395 frames at 1 frame per second, and from actions2.mpg it extracted 529 frames at the same frame rate. Details of the measured true positives (TP), false positives (FP), false negatives (FN), detection rate (DR), and false alarm rate (FAR) are given in Table 3.
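A small sketch of the DR and FAR formulas above, checked against the actions2.mpg counts reported in Table 3 (TP = 320, FP = 113, FN = 83), which round to the reported 79% and 26%:

```csharp
using System;

// Sketch of the detection rate (DR) and false alarm rate (FAR) metrics.
static class DetectionMetrics
{
    public static (double dr, double far) Compute(int tp, int fp, int fn)
    {
        double dr = (double)tp / (tp + fn) * 100.0;   // detection (precision) rate
        double far = (double)fp / (tp + fp) * 100.0;  // false alarm (recall) rate
        return (dr, far);
    }

    public static void Main()
    {
        var (dr, far) = Compute(320, 113, 83);        // actions2.mpg row of Table 3
        Console.WriteLine($"DR = {dr:F1}%, FAR = {far:F1}%");  // DR = 79.4%, FAR = 26.1%
    }
}
```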

The detection rate increases as the number of input frames increases. The detection rate for the given total number of frames from the two video datasets is displayed in Figure 10.

The false alarm rate for the given number of frames from the two video datasets is given in Figure 11.

The recall and precision rates (RPC) characterizing the performance of the proposed research are given in Figure 12.

Table 3: Details of measurement of true positive (TP), false positive (FP), false negative (FN), detection rate (DR), and false alarm rate (FAR).

Datasets       Number of frames   TP    FP    FN   DR (%)   FAR (%)
Actions1.mpg   395                200   100   75   75       31
Actions2.mpg   529                320   113   83   79       26

Figure 10: Detection rate or precision rate.

Figure 11: False alarm rate or recall rate.

Figure 12: RPC characterization.

This research measures the detection and false alarm rates based on the number of frames extracted from each input video dataset. Compared with [1, 15, 36, 40], where a frame difference approach was used and the moments feature for motion modelling was not included, this research achieved a detection rate of 79% (for the video dataset actions2.mpg), which is a good indication for handling the motion of moving objects in future research. Thus, the proposed DMM model helps to reduce the segmentation task by providing the highest probability area to be segmented using the proposed SUED, instead of segmenting the whole area of the given frame, thereby increasing processing speed.

5. Conclusion

The primary purpose of this research is to apply a moments based dynamic motion model under the proposed frame difference based segmentation approach, which ensures robust handling of motion, as translation invariant, scale invariant, and rotation invariant moment values are unique. As computer vision leverages probability theory, this research used moments based motion analysis, which provides search windows around the original object and limits the scope of SUED segmentation to a smaller area. This means that the probability that the extraction is lost because of a similarly colored object in the background is reduced. Since moments are a unique distribution of pixel intensity, the experimental results of the proposed DMM and SUED are very promising for robust extraction of moving objects from UAV aerial images. Judging from previous research in the computer vision field, it is expected that the proposed research will facilitate UAV operators and related researchers in further research or investigation, for example in areas where access is restricted, rescue areas, human or vehicle identification in specific areas, crowd flux statistics, anomaly detection, and intelligent traffic management.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by the Ministry of Higher Education Malaysia under research grant schemes FRGS12012SG05UKM0212 and ERGS12012STG07UKM027.

References

[1] H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, "Vehicle detection in aerial surveillance using dynamic Bayesian networks," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152–2159, 2012.
[2] V. Reilly, B. Solmaz, and M. Shah, "Shadow casting out of plane (SCOOP) candidates for human and vehicle detection in aerial imagery," International Journal of Computer Vision, vol. 101, no. 2, pp. 350–366, 2013.
[3] Y. Yingqian, L. Fuqiang, W. Ping, L. Pingting, and L. Xiaofeng, "Vehicle detection methods from an unmanned aerial vehicle platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '12), pp. 411–415, 2012.
[4] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[5] Z. Zezhong, W. Xiaoting, Z. Guoqing, and J. Ling, "Vehicle detection based on morphology from highway aerial images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 5997–6000, 2012.
[6] T. Moranduzzo and F. Melgani, "A SIFT-SVM method for detecting cars in UAV images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 6868–6871, 2012.
[7] S. A. Cheraghi and U. U. Sheikh, "Moving object detection using image registration for a moving camera platform," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '12), pp. 355–359, 2012.
[8] H. Bhaskar, "Integrated human target detection, identification and tracking for surveillance applications," in Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS '12), pp. 467–475, 2012.
[9] C. Long, J. Zhiguo, Y. Junli, and M. Yibing, "A coarse-to-fine approach for vehicles detection from aerial images," in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 221–225, 2012.
[10] L. Pingting, L. Fuqiang, L. Xiaofeng, and Y. Yingqian, "Stationary vehicle detection in aerial surveillance with a UAV," in Proceedings of the 8th International Conference on Information Science and Digital Content Technology (ICIDT '12), pp. 567–570, 2012.
[11] A. Kembhavi, D. Harwood, and L. S. Davis, "Vehicle detection using partial least squares," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1250–1265, 2011.
[12] J. Maier and K. Ambrosch, "Distortion compensation for movement detection based on dense optical flow," in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin et al., Eds., pp. 168–179, Springer, Berlin, Germany, 2011.
[13] M. Tarhan and E. Altug, "A catadioptric and pan-tilt-zoom camera pair object tracking system for UAVs," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 61, no. 1–4, pp. 119–134, 2011.
[14] K. Bhuvaneswari and H. A. Rauf, "Edgelet based human detection and tracking by combined segmentation and soft decision," in Proceedings of the International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), pp. 1–6, June 2009.
[15] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, 2009.
[16] Z. Jiang, W. Ding, and H. Li, "Aerial video image object detection and tracing based on motion vector compensation and statistic analysis," in Proceedings of the 1st Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics (PrimeAsia '09), pp. 302–305, November 2009.
[17] Y.-C. Chung and Z. He, "Low-complexity and reliable moving objects detection and tracking for aerial video surveillance with small UAVs," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '07), pp. 2670–2673, May 2007.
[18] D. Wei, G. Zhenbang, X. Shaorong, and Z. Hairong, "Real-time vision-based object tracking from a moving platform in the air," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 681–685, October 2006.
[19] S. Zhang, "Object tracking in Unmanned Aerial Vehicle (UAV) videos using a combined approach," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '05), pp. 681–684, March 2005.
[20] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[21] I. H. Mtir, K. Kaaniche, M. Chtourou, and P. Vasseur, "Aerial sequence registration for vehicle detection," in Proceedings of the 9th International Multi-Conference on Systems, Signals and Devices (SSD '12), pp. 1–6, 2012.
[22] S. Mohamed and K. Takaya, "Motion vector search for 2.5D modeling of moving objects in a video scene," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265–268, May 2005.
[23] R. Li, S. Yu, and X. Yang, "Efficient spatio-temporal segmentation for extracting moving objects in video sequences," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1161–1167, 2007.
[24] Y. Zhou, Y. Liu, and J. Zhang, "A video semantic object extraction method based on motion feature and visual attention," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 845–849, October 2010.
[25] T. Dutta, D. P. Dogra, and B. Jana, "Object extraction using novel region merging and multidimensional features," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 356–361, November 2010.
[26] J. Pan, S. Li, and Y.-Q. Zhang, "Automatic extraction of moving objects using multiple features and multiple frames," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '00), pp. 36–39, Geneva, Switzerland, May 2000.
[27] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Real time vision based object detection from UAV aerial images: a conceptual framework," in Intelligent Robotics Systems: Inspiring the NEXT, K. Omar, M. Nordin, P. Vadakkepat et al., Eds., vol. 376, pp. 265–274, Springer, Berlin, Germany, 2013.
[28] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive motion pattern analysis for machine vision based moving detection from UAV aerial images," in Advances in Visual Informatics, H. Zaman, P. Robinson, P. Olivier, T. Shih, and S. Velastin, Eds., pp. 104–114, Springer, 2013.
[29] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "A review of machine vision based on moving objects: object detection from UAV aerial images," International Journal of Advanced Computer Technology, vol. 5, no. 15, pp. 57–72, 2013.
[30] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive long term motion pattern analysis for moving object detection using UAV aerial images," International Journal of Information System and Engineering, vol. 1, no. 1, pp. 50–59, 2013.
[31] H. H. Trihaminto, A. S. Prabuwono, A. F. M. S. Saif, and T. B. Adji, "A conceptual framework: dynamic path planning system for simultaneous localization and mapping multirotor UAV," Advanced Science Letters, vol. 20, no. 1, pp. 299–303, 2014.
[32] M. Sayyouri, A. Hmimid, and H. Qjidaa, "Improving the performance of image classification by Hahn moment invariants," Journal of the Optical Society of America A: Optics and Image Science and Vision, vol. 30, no. 11, pp. 2381–2394, 2013.
[33] R. Ozawa and F. Chaumette, "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach," Advanced Robotics, vol. 27, no. 9, pp. 683–696, 2013.
[34] G. Jia, X. Wang, H. Wei, and Z. Zhang, "Modeling image motion in airborne three-line-array (TLA) push-broom cameras," Photogrammetric Engineering and Remote Sensing, vol. 79, no. 1, pp. 67–78, 2013.
[35] J. Guo, D. Gu, S. Li, and H. Chang, "Moving object detection using motion enhancement and color distribution comparison," Journal of Beijing University of Aeronautics and Astronautics, vol. 38, no. 2, pp. 263–272, 2012.
[36] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2065–2070, 2011.
[37] A. Gszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, Proceedings of SPIE, January 2011.
[38] M. Besita Augustin, S. Juliet, and S. Palanikumar, "Motion and feature based person tracking in surveillance videos," in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 605–609, March 2011.
[39] S. Wang, "Vehicle detection on aerial images by extracting corner features for rotational invariant shape matching," in Proceedings of the 11th IEEE International Conference on Computer and Information Technology, pp. 171–175, September 2011.
[40] X. Wang, C. Liu, L. Gong, L. Long, and S. Gong, "Pedestrian recognition based on saliency detection and Kalman filter algorithm in aerial video," in Proceedings of the 7th International Conference on Computational Intelligence and Security (CIS '11), pp. 1188–1192, December 2011.
[41] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, "A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera," Remote Sensing, vol. 4, no. 4, pp. 1090–1111, 2012.
[42] C.-H. Huang, Y.-T. Wu, and J.-H. Kao, "A hybrid moving object detection method for aerial images," in Advances in Multimedia Information Processing-PCM 2010, G. Qiu, K. Lam, and H. Kiya, Eds., pp. 357–368, Springer, Berlin, Germany, 2010.
[43] A. W. N. Ibrahim, P. W. Ching, G. L. Gerald Seet, W. S. Michael Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 456–461, November 2010.
[44] O. Oreifej, R. Mehran, and M. Shah, "Human identity recognition in aerial images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 709–716, June 2010.
[45] C. Wu, X. Cao, R. Lin, and F. Wang, "Registration-based moving vehicle detection for low-altitude urban traffic surveillance," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 373–378, July 2010.
[46] Q. Yu and G. Medioni, "Motion pattern interpretation and detection for tracking moving vehicles in airborne video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2671–2678, June 2009.
[47] M. Teutsch and W. Kruger, "Spatio-temporal fusion of object segmentation approaches for moving distant targets," in Proceedings of the 15th International Conference on Information Fusion (FUSION '12), pp. 1988–1995, 2012.
[48] C. Guilmart, S. Herbin, and P. Perez, "Context-driven moving object detection in aerial scenes with user input," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1781–1784, September 2011.
[49] M. Teutsch and W. Kruger, "Detection, segmentation, and tracking of moving objects in UAV videos," in Proceedings of the IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance (AVSS '12), pp. 313–318, 2012.
[50] R. B. Rodrigues, S. Pellegrino, and H. Pistori, "Combining color and haar wavelet responses for aerial image classification," in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., pp. 583–591, Berlin, Germany, 2012.
[51] S. G. Fowers, D. J. Lee, D. A. Ventura, and J. K. Archibald, "The nature-inspired BASIS feature descriptor for UAV imagery and its hardware implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756–768, 2013.
[52] J. Lu, P. Fang, and Y. Tian, "An objects detection framework in UAV videos," in Advances in Computer Science and Education Applications, M. Zhou and H. Tan, Eds., pp. 113–119, Springer, Berlin, Germany, 2011.
[53] A. Camargo, Q. He, and K. Palaniappan, "Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 8355 of Proceedings of SPIE, 2012.
[54] M. Teutsch, W. Kruger, and N. Heinze, "Detection and classification of moving objects from UAVs with optical sensors," in Signal Processing, Sensor Fusion, and Target Recognition, vol. 8050 of Proceedings of SPIE, April 2011.
[55] L. L. Coulter, D. A. Stow, and Y. H. Tsai, "Automated detection of people and vehicles in natural environments using high temporal resolution airborne remote sensing," in Proceedings of the ASPRS Annual Conference, pp. 78–90, 2012.
[56] L. Wang, H. Zhao, S. Guo, Y. Mai, and S. Liu, "The adaptive compensation algorithm for small UAV image stabilization," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 4391–4394, 2012.
[57] M. Bhaskaranand and J. D. Gibson, "Low-complexity video encoding for UAV reconnaissance and surveillance," in Proceedings of the IEEE Military Communications Conference (MILCOM '11), pp. 1633–1638, November 2011.
[58] X. Tan, X. Yu, J. Liu, and W. Huang, "Object tracking based on unmanned aerial vehicle video," in Applied Informatics and Communication, D. Zeng, Ed., pp. 432–438, Springer, Berlin, Germany, 2011.
[59] Z. Lin, C. Ren, N. Yao, and F. Xie, "A shadow compensation method for aerial image," Geomatics and Information Science of Wuhan University, vol. 38, no. 4, pp. 431–435, 2013.
[60] K. Imamura, N. Kubo, and H. Hashimoto, "Automatic moving object extraction using X-means clustering," in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 246–249, December 2010.
[61] N. Suo and X. Qian, "A comparison of features extraction method for HMM-based motion recognition," in Proceedings of the 2nd Conference on Environmental Science and Information Application Technology (ESIAT '10), pp. 636–639, July 2010.
[62] Z. Shoujuan and Z. Quan, "New feature extraction algorithm for satellite image non-linear small objects," in Proceedings of the IEEE Symposium on Electrical & Electronics Engineering (EEESYM '12), pp. 626–629, 2012.
[63] X. F. Wu, X. S. Wang, and H. Z. Lu, "Motion feature extraction for stepped frequency radar based on Hough transform," Radar, Sonar & Navigation, IET, vol. 4, no. 1, pp. 17–27, 2010.
[64] K. Huang, Y. Zhang, and T. Tan, "A discriminative model of motion and cross ratio for view-invariant action recognition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2187–2197, 2012.
[65] R. Porter, A. M. Fraser, and D. Hush, "Wide-area motion imagery," IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 56–65, 2010.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 7: Moving Object Detection Using Dynamic Motion Modelling from UAV

The Scientific World Journal 7

(a) (b)

Figure 7 (a) Difference pixel structure of 119865119863119903(119898 119899 119905) using (2) (b) Difference image 119865119863119903(119898 119899 119905) using (21)

density area in the decomposed image which ensures therobustness to noise while the feature makes each block sen-sitive to motion 119868119861(119898 119899 119905) is defined by the followingequation

119868119861 (119898 119899 119905) = mean (119898 119899 119905)

+

120572

1205732(1198731 (119898 119899 119905) minus 119873minus1 (119898 119899 119905))

densed feature

(20)

where (119898 119899) is feature densed block 120572 is the random con-stant smaller than 1 mean(119898 119899 119905) is mean gray level of allpixels within the block (119898 119899) at frame 119905 1198731(119898 119899 119905) is thenumber of pixels with gray levels greater than mean(119898 119899 119905)119873minus1(119898 119899 119905) is the number of pixels with gray levels smallerthan mean(119898 119899 119905) From (20) difference image 119865119863(119898 119899 119905)

of two consecutive block images is obtained by

119865119863119903 (119898 119899 119905) = round(

1003816100381610038161003816119868119861 (119898 119899 119905) minus 119868119861 (119898 119899 119905 minus 1)1003816100381610038161003816

119865119863max (119905) 256)

(21)

where 119865119863(119898 119899 119905) is quantized image after rounded opera-tion 119865119863max is maximum value in 119865119863(119898 119899 119905) Using (21) theresultant difference is shown in Figures 7(a) and 7(b)

119865119863119903(119898 119899 119905) is filtered by 3 times 3median filter119865119863119891(119898 119899 119905)

is obtained by following formula

119865119863119891 (119898 119899 119905) = 1 119865119863 (119898 119899 119905) ge 119879 (119905)

0 otherwise(22)

where 119879(119905) = (Mean of all blocks in 119865119863119903(119898 119899 119905) at time 119905) +Positive weighting parameter lowast (Largest peak of histogram of119865119863119891(119898 119899 119905)-Largest peak of histogram of 119865119863119903(119898 119899 119905))Binary image 119865119863119887(119898 119899 119905) is obtained by the following con-dition depicted in Figure 4(a)

If 119865119863119891 (119898 119899 119905) = 1

Then 119865119863119887 (119898 119899 119905) larr997888 119865119863119891 (119898 119899 119905)

Otherwise 119865119863119891 (119898 119899 119905) = 0

(23)

119865119863119887(119898 119899 119905) may have discontinuous boundaries and holesTo ensure that 119865119863119887(119898 119899 119905) contains moving object edgebased morphological dilation is proposed here 119865119863119890(119898 119899 119905)

represents the edge image Edge(119909) that can be obtained bygradient operator like Sobel operator 119865119863119889(119898 119899 119905) is the edge

based morphological dilation of the image 119865119863119890(119898 119899 119905) isobtained by the following formula

119865119863119889 (119898 119899 119905)

= 119865119863119890 (119898 119899 119905) cup 119909 | 119909 = 119894 + 119895 119894 isin 119865119863119890 (119898 119899 119905)

119895 isin 119871 Edge (119909) = 0

(24)

where 119871 is the structuring element that contains elements 119894 119895Equation (24) is considered as edge based dilation that is

expected to avoid undesired regions to be integrated in theresult according to edge based morphological dilation char-acteristics as shown in Figure 8(d) Here 119909 | 119909 = 119894 + 119895 119894 isin

119865119863119890(119898 119899 119905) is the conventional dilation after obtaining119865119863119890(119898 119899 119905) which can be defined from 119865119863119887(119898 119899 119905) as119865119863119887(119898 119899 119905)oplus119871 = 119909 | 119909 = 119894+119895 119894 isin 119865119863119887(119898 119899 119905) 119895 isin 119871 shownin Figure 8(b)

This research proposed the following segmentationapproach segmentation using edged based dilation (SUED)algorithm for moving object extraction from UAV aerialimages after extracting frame from DMM approach

(1) Start(2) 119865119863119903(119898 119899 119905) larr Decomposed image between 2

frames 119868119861(119898 119899 119905) minus 119868119861(119898 119899 119905 minus 1) at 119905 and 119905 minus 1 time(3) 119865119863119891(119898 119899 119905) larr 119865119863119903(119898 119899 119905)

(4) 119865119863119887(119898 119899 119905) larr 119865119863119891(119898 119899 119905)

(5) 119865119863119890(119898 119899 119905) larr 119865119863119887(119898 119899 119905)

(6) 119865119863119889(119898 119899 119905) larr 119865119863119890(119898 119899 119905)

(7) End

4 Experiment and Discussion

For the experiment purpose this research used IMAGE PRO-CESSING LAB (IPLAB) available at httpwwwaforgenetwhich is embedded with Visual Studio 2012 using C sharpprogramming language Proposed experimental analysisevaluates dynamic motion modeling (DMM) first and thenexperimented the proposed SUED embedded with DMM toextract moving object from UAV aerial images

Let 119865119863119890(119898 119899 119905) be the labeled result of the SUEDsegmentation algorithm shown in Figure 8(e) Each regionin this image indicates coherence in intensity and motionMoving objects are discriminated from backgrounds by thefusion module that combines DMM (dynamic motion mod-ule) and SUED (segmentation using edge based dilation) Let

8 The Scientific World Journal

(a) (b)

(c) (d)

(e)Figure 8 (a) 119865119863119887(119898 119899 119905) (b) Conventional dilation 119865119863119887(119898 119899 119905) oplus 119871 = 119909 | 119909 = 119894 + 119895 119894 isin 119865119863119890(119898 119899 119905) 119895 isin 119871 (c) Analysis of edge baseddilation from 119865119863119890(119898 119899 119905) (d) Edge based dilation 119865119863119889(119898 119899 119905) = 119865119863119890(119898 119899 119905) cup 119909 | 119909 = 119894 + 119895 119894 isin 119865119863119890(119898 119899 119905) 119895 isin 119871Edge(119909) = 0 (e)Output after SUED 119865119863119890(119898 119899 119905)

SUED(119894 119905) be a SUED region in 119865119863(119898 119899 119905) and 119860 119904(119894 119905) thecorresponding area Let 119860119888(119894 119905) be the area of union of theSUED(119894 119905) and the coverage ratio 119875(119894 119905) is defined as

119875 (119894 119905) =119860119888 (119894 119905)

119860 119904 (119894 119905) (25)

If the value of 119875(119894 119905) is greater than a given fusion threshold119879119904 then SUED(119894 119905) is considered as foreground otherwisebackground In general threshold 119879119904 varies for differentsequence In this research threshold119879119904 is always set to 099 Itis very close to 1 and does not need to be adjusted as obtainedarea can contain complete desired objectThere may be somesmaller blobs in the resulting SUED(119894 119905) Those regions withthe areas smaller than the threshold119879119904 are considered as noisyregions Figure 8(e) shows the extracted moving object as theresultant of fusion for DMM and SUED methodology

41 Datasets This research used two UAV video datasets(actions1mpg and actions2mpg) fromCenter for Research inComputer Vision (CRCV) in University of Central Florida(httpcrcvucfedudataUCF Aerial Actionphp) These video

datasets were obtained using RC controlled blimp equippedwith HD camera The collection represents a diverse poolof action features at different heights and aerial view pointsMultiple instance of each action was recorded at differentflying altitudes which ranged from 400 to 500 feet and wasperformed with different actors

42 Results This research extracted 395 frames using 1framesecond video frame rate from actions1mpg videodatasets and 529 frames using same frame rate fromactions2mpg video datasets Frame size is 355 times 216Figures 6(a) and 6(b) show two consecutive frames from(101th and 102th frames) after DMM step from actions1mpgFigures 7(a) and 7(b) show pixel structures of 119865119863119903(119898 119899 119905)Figure 8(a) shows 119865119863119887(119898 119899 119905) Figure 8(b) shows conven-tional dilation of 119865119863119887(119898 119899 119905) while Figures 8(c) and 8(d)show edge based dilation Finally Figure 8(e) shows resultantof the proposed DMM and SUED based moving objectdetection

421 DMM Evaluation Figures 1(a) 1(b) and 1(c) showthree search windows which are depicted as the black rect-

The Scientific World Journal 9

Table 2 Moments measurement from different search windows for SUED

Search window Zeroth moment First moment 119883119884 Second moment119883 Second moment 1198841 981198646 12111986411 3111986411 11611986411

2 161198647 39411986411 86911986411 295119864113 931198646 24911986411 50311986411 18411986411

0

2

4

6

8

Search Search Search

First moments XY

First moments XY

Second moments Y

Second moments Y

Second moments X

Second moments X

3D line chart for momentrsquos measurement

1window2window

3window

Figure 9 3D line chart for momentrsquos measurement

angle on selected frame in 119905 time 119865119863(119898 119899) The brighter themoments value for pixels the higher the probability thatsearch windows belong to the moving object In Table 2measurement of moments is shown

In Table 2 search window 2 shows the highest momentquantity among 3 searchwindowswhich indicates that searchwindow 2 contains the highest probability to contain movingobjects Figure 9 shows 3D line chart to depict the highestprobability of pixels intensity in search window 2 Thisresearch used search window 2 as based on the probabilityof the highest moments distribution in the search windows toextractmoving object using SUED because it limits the scopeof segmentation within smaller area reduces complexityand reduces time This means that the probability of theextraction is getting lost because a similar coloured object inthe background is reduced Furthermore limited areaincreases the processing speed by making the extractionfaster

422 SUED Evaluation This section presents some of theexperimental analysis and results for the proposed SUEDalgorithm The evaluation of the proposed approach wastested on actions1mpg and actions2mpg video analysis Inorder to evaluate SUEDalgorithm twometrics detection rate(DR) and the false alarm rate (FAR) are defined Thesemetrics are based on the following parameters

(i) True positive (TP) detected regions that correspondto moving object

(ii) False positive (FP) detected regions that do notcorrespond to a moving object

(iii) False negative (FN) moving object not detected(iv) Detection rate or precision rate DR = (TP(TP +

FN)) times 100(v) False alarm rate or recall rate FAR = (FP(TP+FP))times

100

From dataset actions1mpg this research extracts 395 framesusing 1 frame per second and from actions2mpg thisresearch extracted 529 frames using the same frame rateDetails of measurement for true positive (TP) false positive(FP) false negative (FN) detection rate (DR) and false alarmrate (FAR) are mentioned in Table 3

Detection rate increases if the number of input frames isincreased The detection rate of the given total frame for twovideo datasets is displayed in Figure 10

The false alarm rate for the given number of frame fromtwo video datasets is given in Figure 11

Recall and precision rate (RPC) characterizing the perfor-mance of the proposed research are given in Figure 12

This research measures detection and false alarm ratebased on the number of frames extracted from each videodataset input Compared with [1 15 36 40] this researchused frame difference approach where moments feature formotion modeling was not included This research achieveddetection rate of 79 (for video datasets actions2mpg) whichis a good indication to handle motion of moving objectin future research Thus the proposed DMM model ishelpful to reduce segmentation task by providing the highest

10 The Scientific World Journal

Table 3 Details of measurement of true positive (TP) false positive (FP) false negative (FN) detection rate (DR) and false alarm rate (FAR)

Datasets Number of frames True positive (TP) False positive (FP) False negative (FN) Detection rate (DR) False alarm rate (FAR)Actions1mpg 395 200 100 75 75 31Actions2mpg 529 320 113 83 79 26

15

74 79

0

20

40

60

80

100

Number of frames

Frame number-100Frame number-395

Frame number-527

Figure 10 Detection rate or precision rate

15

31 26

010203040

Number of frames

Frame number-100Frame number-395

Frame number-527

Figure 11 False alarm rate or recall rate

15

74 79

0

20

40

60

80

100

Recall rate

Recall rate 5Recall rate 32

Recall rate 26

Figure 12 RPC characterization

probability area to be segmented using the proposed SUEDinstead of segmenting the whole area for the given frame byincreasing processing speed

5 Conclusion

The primary purpose of this research is to apply momentsbased dynamic motion model under the proposed framedifference based segmentation approach which ensures thatrobust handling of motion as translation invariant scaleinvariant and rotation invariant moments value is uniqueAs computer vision leverages probability theory this researchused moments based motion analysis which provides searchwindows around the original object and limits the scopeof SEUD segmentation to a smaller area This means thatthe probability of extraction is getting lost because a similarcolored object in the background is reduced Since momentsare the unique distribution of pixel intensity so experimental

result of the proposed DMM and SUED is very promising forrobust extraction of moving object from UAV aerial imagesJudging from the previous research in computer vision fieldit is certain that the proposed research will facilitate UAVoperator or related researchers for further research or inves-tigation in areas where access is restricted or rescue areashuman or vehicle identification in specific areas crowd fluxstatistics anomaly detection and intelligent traffic manage-ment and so forth

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgment

This research is supported by Ministry of Higher EducationMalaysia research Grant scheme of FRGS12012SG05UKM0212 and ERGS12012STG07UKM027

References

[1] H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, "Vehicle detection in aerial surveillance using dynamic Bayesian networks," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152–2159, 2012.
[2] V. Reilly, B. Solmaz, and M. Shah, "Shadow casting out of plane (SCOOP) candidates for human and vehicle detection in aerial imagery," International Journal of Computer Vision, vol. 101, no. 2, pp. 350–366, 2013.
[3] Y. Yingqian, L. Fuqiang, W. Ping, L. Pingting, and L. Xiaofeng, "Vehicle detection methods from an unmanned aerial vehicle platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '12), pp. 411–415, 2012.
[4] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[5] Z. Zezhong, W. Xiaoting, Z. Guoqing, and J. Ling, "Vehicle detection based on morphology from highway aerial images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 5997–6000, 2012.
[6] T. Moranduzzo and F. Melgani, "A SIFT-SVM method for detecting cars in UAV images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 6868–6871, 2012.
[7] S. A. Cheraghi and U. U. Sheikh, "Moving object detection using image registration for a moving camera platform," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '12), pp. 355–359, 2012.
[8] H. Bhaskar, "Integrated human target detection, identification and tracking for surveillance applications," in Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS '12), pp. 467–475, 2012.
[9] C. Long, J. Zhiguo, Y. Junli, and M. Yibing, "A coarse-to-fine approach for vehicles detection from aerial images," in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 221–225, 2012.
[10] L. Pingting, L. Fuqiang, L. Xiaofeng, and Y. Yingqian, "Stationary vehicle detection in aerial surveillance with a UAV," in Proceedings of the 8th International Conference on Information Science and Digital Content Technology (ICIDT '12), pp. 567–570, 2012.
[11] A. Kembhavi, D. Harwood, and L. S. Davis, "Vehicle detection using partial least squares," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1250–1265, 2011.
[12] J. Maier and K. Ambrosch, "Distortion compensation for movement detection based on dense optical flow," in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin et al., Eds., pp. 168–179, Springer, Berlin, Germany, 2011.
[13] M. Tarhan and E. Altug, "A catadioptric and pan-tilt-zoom camera pair object tracking system for UAVs," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 61, no. 1–4, pp. 119–134, 2011.
[14] K. Bhuvaneswari and H. A. Rauf, "Edgelet based human detection and tracking by combined segmentation and soft decision," in Proceedings of the International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), pp. 1–6, June 2009.
[15] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, 2009.
[16] Z. Jiang, W. Ding, and H. Li, "Aerial video image object detection and tracing based on motion vector compensation and statistic analysis," in Proceedings of the 1st Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics (PrimeAsia '09), pp. 302–305, November 2009.
[17] Y.-C. Chung and Z. He, "Low-complexity and reliable moving objects detection and tracking for aerial video surveillance with small UAVs," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '07), pp. 2670–2673, May 2007.
[18] D. Wei, G. Zhenbang, X. Shaorong, and Z. Hairong, "Real-time vision-based object tracking from a moving platform in the air," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 681–685, October 2006.
[19] S. Zhang, "Object tracking in Unmanned Aerial Vehicle (UAV) videos using a combined approach," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '05), pp. 681–684, March 2005.
[20] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[21] I. H. Mtir, K. Kaaniche, M. Chtourou, and P. Vasseur, "Aerial sequence registration for vehicle detection," in Proceedings of the 9th International Multi-Conference on Systems, Signals and Devices (SSD '12), pp. 1–6, 2012.
[22] S. Mohamed and K. Takaya, "Motion vector search for 2.5D modeling of moving objects in a video scene," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265–268, May 2005.
[23] R. Li, S. Yu, and X. Yang, "Efficient spatio-temporal segmentation for extracting moving objects in video sequences," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1161–1167, 2007.
[24] Y. Zhou, Y. Liu, and J. Zhang, "A video semantic object extraction method based on motion feature and visual attention," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 845–849, October 2010.
[25] T. Dutta, D. P. Dogra, and B. Jana, "Object extraction using novel region merging and multidimensional features," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 356–361, November 2010.
[26] J. Pan, S. Li, and Y.-Q. Zhang, "Automatic extraction of moving objects using multiple features and multiple frames," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '00), pp. 36–39, Geneva, Switzerland, May 2000.
[27] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Real time vision based object detection from UAV aerial images: a conceptual framework," in Intelligent Robotics Systems: Inspiring the NEXT, K. Omar, M. Nordin, P. Vadakkepat et al., Eds., vol. 376, pp. 265–274, Springer, Berlin, Germany, 2013.
[28] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive motion pattern analysis for machine vision based moving detection from UAV aerial images," in Advances in Visual Informatics, H. Zaman, P. Robinson, P. Olivier, T. Shih, and S. Velastin, Eds., pp. 104–114, Springer, 2013.
[29] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "A review of machine vision based on moving objects: object detection from UAV aerial images," International Journal of Advanced Computer Technology, vol. 5, no. 15, pp. 57–72, 2013.
[30] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive long term motion pattern analysis for moving object detection using UAV aerial images," International Journal of Information System and Engineering, vol. 1, no. 1, pp. 50–59, 2013.
[31] H. H. Trihaminto, A. S. Prabuwono, A. F. M. S. Saif, and T. B. Adji, "A conceptual framework: dynamic path planning system for simultaneous localization and mapping multirotor UAV," Advanced Science Letters, vol. 20, no. 1, pp. 299–303, 2014.
[32] M. Sayyouri, A. Hmimid, and H. Qjidaa, "Improving the performance of image classification by Hahn moment invariants," Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 30, no. 11, pp. 2381–2394, 2013.
[33] R. Ozawa and F. Chaumette, "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach," Advanced Robotics, vol. 27, no. 9, pp. 683–696, 2013.
[34] G. Jia, X. Wang, H. Wei, and Z. Zhang, "Modeling image motion in airborne three-line-array (TLA) push-broom cameras," Photogrammetric Engineering and Remote Sensing, vol. 79, no. 1, pp. 67–78, 2013.
[35] J. Guo, D. Gu, S. Li, and H. Chang, "Moving object detection using motion enhancement and color distribution comparison," Journal of Beijing University of Aeronautics and Astronautics, vol. 38, no. 2, pp. 263–272, 2012.
[36] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2065–2070, 2011.
[37] A. Gaszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, Proceedings of SPIE, January 2011.
[38] M. Besita Augustin, S. Juliet, and S. Palanikumar, "Motion and feature based person tracking in surveillance videos," in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 605–609, March 2011.
[39] S. Wang, "Vehicle detection on aerial images by extracting corner features for rotational invariant shape matching," in Proceedings of the 11th IEEE International Conference on Computer and Information Technology, pp. 171–175, September 2011.
[40] X. Wang, C. Liu, L. Gong, L. Long, and S. Gong, "Pedestrian recognition based on saliency detection and Kalman filter algorithm in aerial video," in Proceedings of the 7th International Conference on Computational Intelligence and Security (CIS '11), pp. 1188–1192, December 2011.
[41] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, "A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera," Remote Sensing, vol. 4, no. 4, pp. 1090–1111, 2012.
[42] C.-H. Huang, Y.-T. Wu, and J.-H. Kao, "A hybrid moving object detection method for aerial images," in Advances in Multimedia Information Processing-PCM 2010, G. Qiu, K. Lam, and H. Kiya, Eds., pp. 357–368, Springer, Berlin, Germany, 2010.
[43] A. W. N. Ibrahim, P. W. Ching, G. L. Gerald Seet, W. S. Michael Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 456–461, November 2010.
[44] O. Oreifej, R. Mehran, and M. Shah, "Human identity recognition in aerial images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 709–716, June 2010.
[45] C. Wu, X. Cao, R. Lin, and F. Wang, "Registration-based moving vehicle detection for low-altitude urban traffic surveillance," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 373–378, July 2010.
[46] Q. Yu and G. Medioni, "Motion pattern interpretation and detection for tracking moving vehicles in airborne video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2671–2678, June 2009.
[47] M. Teutsch and W. Kruger, "Spatio-temporal fusion of object segmentation approaches for moving distant targets," in Proceedings of the 15th International Conference on Information Fusion (FUSION '12), pp. 1988–1995, 2012.
[48] C. Guilmart, S. Herbin, and P. Perez, "Context-driven moving object detection in aerial scenes with user input," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1781–1784, September 2011.
[49] M. Teutsch and W. Kruger, "Detection, segmentation, and tracking of moving objects in UAV videos," in Proceedings of the IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance (AVSS '12), pp. 313–318, 2012.
[50] R. B. Rodrigues, S. Pellegrino, and H. Pistori, "Combining color and haar wavelet responses for aerial image classification," in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., pp. 583–591, Berlin, Germany, 2012.
[51] S. G. Fowers, D. J. Lee, D. A. Ventura, and J. K. Archibald, "The nature-inspired BASIS feature descriptor for UAV imagery and its hardware implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756–768, 2013.
[52] J. Lu, P. Fang, and Y. Tian, "An objects detection framework in UAV videos," in Advances in Computer Science and Education Applications, M. Zhou and H. Tan, Eds., pp. 113–119, Springer, Berlin, Germany, 2011.
[53] A. Camargo, Q. He, and K. Palaniappan, "Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 8355 of Proceedings of SPIE, 2012.
[54] M. Teutsch, W. Kruger, and N. Heinze, "Detection and classification of moving objects from UAVs with optical sensors," in Signal Processing, Sensor Fusion, and Target Recognition, vol. 8050 of Proceedings of SPIE, April 2011.
[55] L. L. Coulter, D. A. Stow, and Y. H. Tsai, "Automated detection of people and vehicles in natural environments using high temporal resolution airborne remote sensing," in Proceedings of the ASPRS Annual Conference, pp. 78–90, 2012.
[56] L. Wang, H. Zhao, S. Guo, Y. Mai, and S. Liu, "The adaptive compensation algorithm for small UAV image stabilization," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 4391–4394, 2012.
[57] M. Bhaskaranand and J. D. Gibson, "Low-complexity video encoding for UAV reconnaissance and surveillance," in Proceedings of the IEEE Military Communications Conference (MILCOM '11), pp. 1633–1638, November 2011.
[58] X. Tan, X. Yu, J. Liu, and W. Huang, "Object tracking based on unmanned aerial vehicle video," in Applied Informatics and Communication, D. Zeng, Ed., pp. 432–438, Springer, Berlin, Germany, 2011.
[59] Z. Lin, C. Ren, N. Yao, and F. Xie, "A shadow compensation method for aerial image," Geomatics and Information Science of Wuhan University, vol. 38, no. 4, pp. 431–435, 2013.
[60] K. Imamura, N. Kubo, and H. Hashimoto, "Automatic moving object extraction using X-means clustering," in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 246–249, December 2010.
[61] N. Suo and X. Qian, "A comparison of features extraction method for HMM-based motion recognition," in Proceedings of the 2nd Conference on Environmental Science and Information Application Technology (ESIAT '10), pp. 636–639, July 2010.
[62] Z. Shoujuan and Z. Quan, "New feature extraction algorithm for satellite image non-linear small objects," in Proceedings of the IEEE Symposium on Electrical & Electronics Engineering (EEESYM '12), pp. 626–629, 2012.
[63] X. F. Wu, X. S. Wang, and H. Z. Lu, "Motion feature extraction for stepped frequency radar based on Hough transform," Radar, Sonar & Navigation, IET, vol. 4, no. 1, pp. 17–27, 2010.
[64] K. Huang, Y. Zhang, and T. Tan, "A discriminative model of motion and cross ratio for view-invariant action recognition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2187–2197, 2012.
[65] R. Porter, A. M. Fraser, and D. Hush, "Wide-area motion imagery," IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 56–65, 2010.





[55] L L Coulter D A Stow and Y H Tsai ldquoAutomated detectionof people and vehicles in natural environments using hightemporal resolution airborne remote sensingrdquo in Proceedings ofthe ASPRS Annual Conference pp 78ndash90 2012

[56] LWangH Zhao S Guo YMai and S Liu ldquoThe adaptive com-pensation algorithm for small UAV image stabilizationrdquo inProceedings of the IEEE International Geoscience and RemoteSensing Symposium (IGARSS rsquo12) pp 4391ndash4394 2012

[57] M Bhaskaranand and J D Gibson ldquoLow-complexity videoencoding for UAV reconnaissance and surveillancerdquo in Pro-ceedings of the IEEEMilitary Communications Conference (MIL-COM rsquo11) pp 1633ndash1638 November 2011

[58] X Tan X Yu J Liu and W Huang ldquoObject tracking based onunmanned aerial vehicle videordquo in Applied Informatics andCommunication D Zeng Ed pp 432ndash438 Springer BerlinGermany 2011

[59] Z Lin C Ren N Yao and F Xie ldquoA shadow compensationmethod for aerial imagerdquo Geomatics and Information Science ofWuhan University vol 38 no 4 pp 431ndash435 2013

[60] K Imamura N Kubo and H Hashimoto ldquoAutomatic movingobject extraction using X-means clusteringrdquo in Proceedings ofthe 28th Picture Coding Symposium (PCS rsquo10) pp 246ndash249December 2010

[61] N Suo and X Qian ldquoA comparison of features extractionmethod for HMM-based motion recognitionrdquo in Proceedings ofthe 2nd Conference on Environmental Science and InformationApplication Technology (ESIAT rsquo10) pp 636ndash639 July 2010

[62] Z Shoujuan andZQuan ldquoNew feature extraction algorithm forsatellite image non-linear small objectsrdquo in Proceedings of theIEEE Symposium on Electrical amp Electronics Engineering(EEESYM rsquo12) pp 626ndash629 2012

[63] X F Wu X S Wang and H Z Lu ldquoMotion feature extractionfor stepped frequency radar based onHough transformrdquoRadarSonar amp Navigation IET vol 4 no 1 pp 17ndash27 2010

[64] K Huang Y Zhang and T Tan ldquoA discriminative model ofmotion and cross ratio for view-invariant action recognitionrdquoIEEE Transactions on Image Processing vol 21 no 4 pp 2187ndash2197 2012

[65] R Porter A M Fraser and D Hush ldquoWide-area motion imag-eryrdquo IEEE Signal Processing Magazine vol 27 no 5 pp 56ndash652010


Table 3: Details of measurement of true positive (TP), false positive (FP), false negative (FN), detection rate (DR), and false alarm rate (FAR).

Dataset        Number of frames    TP     FP     FN     DR (%)    FAR (%)
Actions1.mpg   395                 200    100    75     75        31
Actions2.mpg   529                 320    113    83     79        26
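The detection rate and false alarm rate in Table 3 are consistent with the conventional definitions DR = TP/(TP + FN) and FAR = FP/(TP + FP). The short Python sketch below is only an illustration of these formulas under that assumption (the helper names are hypothetical, not part of the original experiments); it reproduces the Actions2.mpg row, while other rows may differ slightly due to rounding.

```python
# Minimal sketch assuming the conventional definitions of DR and FAR;
# function names are illustrative, not taken from the paper's implementation.

def detection_rate(tp: int, fn: int) -> float:
    """DR = TP / (TP + FN), expressed as a percentage."""
    return 100.0 * tp / (tp + fn)

def false_alarm_rate(tp: int, fp: int) -> float:
    """FAR = FP / (TP + FP), expressed as a percentage."""
    return 100.0 * fp / (tp + fp)

if __name__ == "__main__":
    # Actions2.mpg row of Table 3: TP = 320, FP = 113, FN = 83
    print(round(detection_rate(tp=320, fn=83)))     # -> 79
    print(round(false_alarm_rate(tp=320, fp=113)))  # -> 26
```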

Figure 10: Detection rate or precision rate, plotted against the number of frames (values 15, 74, and 79 for frame numbers 100, 395, and 527).

Figure 11: False alarm rate or recall rate, plotted against the number of frames (values 15, 31, and 26 for frame numbers 100, 395, and 527).

Figure 12: RPC characterization (recall rates of 5, 32, and 26).

probability area to be segmented using the proposed SUED instead of segmenting the whole area of the given frame, thereby increasing processing speed.
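The search-window idea can be made concrete with a small sketch. The Python fragment below is a minimal illustration under assumed details (the difference threshold, the fixed half-window size, and the function name are hypothetical, not the authors' DMM implementation): it thresholds the difference of two consecutive grayscale frames, computes the raw moments m00, m10, and m01 of the resulting motion mask, and returns a window centred on the centroid so that segmentation only has to run inside that window rather than over the whole frame.

```python
import numpy as np

def motion_search_window(prev_gray, curr_gray, diff_thresh=25, half_size=40):
    """Illustrative sketch: derive a search window from frame-difference moments.

    prev_gray, curr_gray: 2-D uint8 arrays (consecutive grayscale frames).
    Returns (x0, y0, x1, y1), a window centred on the centroid of the
    thresholded difference image, clipped to the frame bounds, or None
    when no motion is detected.
    """
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    mask = (diff > diff_thresh).astype(np.float64)

    # Raw image moments of the binary motion mask.
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = mask.sum()
    if m00 == 0:                      # no motion in this frame pair
        return None
    cx = (xs * mask).sum() / m00      # centroid x = m10 / m00
    cy = (ys * mask).sum() / m00      # centroid y = m01 / m00

    x0 = int(max(cx - half_size, 0)); x1 = int(min(cx + half_size, w))
    y0 = int(max(cy - half_size, 0)); y1 = int(min(cy + half_size, h))
    return x0, y0, x1, y1             # segment (e.g. with SUED) only inside this window
```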

5. Conclusion

The primary purpose of this research is to apply a moments-based dynamic motion model within the proposed frame-difference-based segmentation approach, which ensures robust handling of motion because moment values are unique and invariant to translation, scale, and rotation. As computer vision leverages probability theory, this research used moments-based motion analysis, which provides search windows around the original object and limits the scope of SUED segmentation to a smaller area. This reduces the probability that the extraction is lost because of a similarly colored object in the background. Since moments represent a unique distribution of pixel intensity, the experimental results of the proposed DMM and SUED are very promising for robust extraction of moving objects from UAV aerial images. Judging from previous research in the computer vision field, the proposed approach can facilitate UAV operators and related researchers in further research or investigation of restricted-access or rescue areas, human or vehicle identification in specific areas, crowd flux statistics, anomaly detection, intelligent traffic management, and so forth.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by the Ministry of Higher Education Malaysia under research grant schemes FRGS12012SG05UKM0212 and ERGS12012STG07UKM027.

References

[1] H.-Y. Cheng, C.-C. Weng, and Y.-Y. Chen, "Vehicle detection in aerial surveillance using dynamic Bayesian networks," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2152–2159, 2012.
[2] V. Reilly, B. Solmaz, and M. Shah, "Shadow casting out of plane (SCOOP) candidates for human and vehicle detection in aerial imagery," International Journal of Computer Vision, vol. 101, no. 2, pp. 350–366, 2013.
[3] Y. Yingqian, L. Fuqiang, W. Ping, L. Pingting, and L. Xiaofeng, "Vehicle detection methods from an unmanned aerial vehicle platform," in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES '12), pp. 411–415, 2012.
[4] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[5] Z. Zezhong, W. Xiaoting, Z. Guoqing, and J. Ling, "Vehicle detection based on morphology from highway aerial images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 5997–6000, 2012.
[6] T. Moranduzzo and F. Melgani, "A SIFT-SVM method for detecting cars in UAV images," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 6868–6871, 2012.
[7] S. A. Cheraghi and U. U. Sheikh, "Moving object detection using image registration for a moving camera platform," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '12), pp. 355–359, 2012.
[8] H. Bhaskar, "Integrated human target detection, identification and tracking for surveillance applications," in Proceedings of the 6th IEEE International Conference on Intelligent Systems (IS '12), pp. 467–475, 2012.
[9] C. Long, J. Zhiguo, Y. Junli, and M. Yibing, "A coarse-to-fine approach for vehicles detection from aerial images," in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 221–225, 2012.
[10] L. Pingting, L. Fuqiang, L. Xiaofeng, and Y. Yingqian, "Stationary vehicle detection in aerial surveillance with a UAV," in Proceedings of the 8th International Conference on Information Science and Digital Content Technology (ICIDT '12), pp. 567–570, 2012.
[11] A. Kembhavi, D. Harwood, and L. S. Davis, "Vehicle detection using partial least squares," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1250–1265, 2011.
[12] J. Maier and K. Ambrosch, "Distortion compensation for movement detection based on dense optical flow," in Advances in Visual Computing, G. Bebis, R. Boyle, B. Parvin et al., Eds., pp. 168–179, Springer, Berlin, Germany, 2011.
[13] M. Tarhan and E. Altug, "A catadioptric and pan-tilt-zoom camera pair object tracking system for UAVs," Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 61, no. 1–4, pp. 119–134, 2011.
[14] K. Bhuvaneswari and H. A. Rauf, "Edgelet based human detection and tracking by combined segmentation and soft decision," in Proceedings of the International Conference on Control, Automation, Communication and Energy Conservation (INCACEC '09), pp. 1–6, June 2009.
[15] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proceedings of the 24th International Conference on Unmanned Air Vehicle Systems, 2009.
[16] Z. Jiang, W. Ding, and H. Li, "Aerial video image object detection and tracing based on motion vector compensation and statistic analysis," in Proceedings of the 1st Asia Pacific Conference on Postgraduate Research in Microelectronics and Electronics (PrimeAsia '09), pp. 302–305, November 2009.
[17] Y.-C. Chung and Z. He, "Low-complexity and reliable moving objects detection and tracking for aerial video surveillance with small UAVs," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '07), pp. 2670–2673, May 2007.
[18] D. Wei, G. Zhenbang, X. Shaorong, and Z. Hairong, "Real-time vision-based object tracking from a moving platform in the air," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 681–685, October 2006.
[19] S. Zhang, "Object tracking in Unmanned Aerial Vehicle (UAV) videos using a combined approach," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '05), pp. 681–684, March 2005.
[20] T. Pollard and M. Antone, "Detecting and tracking all moving objects in wide-area aerial video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '12), pp. 15–22, 2012.
[21] I. H. Mtir, K. Kaaniche, M. Chtourou, and P. Vasseur, "Aerial sequence registration for vehicle detection," in Proceedings of the 9th International Multi-Conference on Systems, Signals and Devices (SSD '12), pp. 1–6, 2012.
[22] S. Mohamed and K. Takaya, "Motion vector search for 2.5D modeling of moving objects in a video scene," in Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265–268, May 2005.
[23] R. Li, S. Yu, and X. Yang, "Efficient spatio-temporal segmentation for extracting moving objects in video sequences," IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1161–1167, 2007.
[24] Y. Zhou, Y. Liu, and J. Zhang, "A video semantic object extraction method based on motion feature and visual attention," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS '10), pp. 845–849, October 2010.
[25] T. Dutta, D. P. Dogra, and B. Jana, "Object extraction using novel region merging and multidimensional features," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 356–361, November 2010.
[26] J. Pan, S. Li, and Y.-Q. Zhang, "Automatic extraction of moving objects using multiple features and multiple frames," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '00), pp. 36–39, Geneva, Switzerland, May 2000.
[27] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Real time vision based object detection from UAV aerial images: a conceptual framework," in Intelligent Robotics Systems: Inspiring the NEXT, K. Omar, M. Nordin, P. Vadakkepat et al., Eds., vol. 376, pp. 265–274, Springer, Berlin, Germany, 2013.
[28] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive motion pattern analysis for machine vision based moving detection from UAV aerial images," in Advances in Visual Informatics, H. Zaman, P. Robinson, P. Olivier, T. Shih, and S. Velastin, Eds., pp. 104–114, Springer, 2013.
[29] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "A review of machine vision based on moving objects: object detection from UAV aerial images," International Journal of Advanced Computer Technology, vol. 5, no. 15, pp. 57–72, 2013.
[30] A. F. M. S. Saif, A. Prabuwono, and Z. Mahayuddin, "Adaptive long term motion pattern analysis for moving object detection using UAV aerial images," International Journal of Information System and Engineering, vol. 1, no. 1, pp. 50–59, 2013.
[31] H. H. Trihaminto, A. S. Prabuwono, A. F. M. S. Saif, and T. B. Adji, "A conceptual framework: dynamic path planning system for simultaneous localization and mapping multirotor UAV," Advanced Science Letters, vol. 20, no. 1, pp. 299–303, 2014.
[32] M. Sayyouri, A. Hmimid, and H. Qjidaa, "Improving the performance of image classification by Hahn moment invariants," Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 30, no. 11, pp. 2381–2394, 2013.
[33] R. Ozawa and F. Chaumette, "Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach," Advanced Robotics, vol. 27, no. 9, pp. 683–696, 2013.
[34] G. Jia, X. Wang, H. Wei, and Z. Zhang, "Modeling image motion in airborne three-line-array (TLA) push-broom cameras," Photogrammetric Engineering and Remote Sensing, vol. 79, no. 1, pp. 67–78, 2013.
[35] J. Guo, D. Gu, S. Li, and H. Chang, "Moving object detection using motion enhancement and color distribution comparison," Journal of Beijing University of Aeronautics and Astronautics, vol. 38, no. 2, pp. 263–272, 2012.
[36] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2065–2070, 2011.
[37] A. Gaszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, Proceedings of SPIE, January 2011.
[38] M. Besita Augustin, S. Juliet, and S. Palanikumar, "Motion and feature based person tracking in surveillance videos," in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 605–609, March 2011.
[39] S. Wang, "Vehicle detection on aerial images by extracting corner features for rotational invariant shape matching," in Proceedings of the 11th IEEE International Conference on Computer and Information Technology, pp. 171–175, September 2011.
[40] X. Wang, C. Liu, L. Gong, L. Long, and S. Gong, "Pedestrian recognition based on saliency detection and Kalman filter algorithm in aerial video," in Proceedings of the 7th International Conference on Computational Intelligence and Security (CIS '11), pp. 1188–1192, December 2011.
[41] G. R. Rodríguez-Canosa, S. Thomas, J. del Cerro, A. Barrientos, and B. MacDonald, "A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera," Remote Sensing, vol. 4, no. 4, pp. 1090–1111, 2012.
[42] C.-H. Huang, Y.-T. Wu, and J.-H. Kao, "A hybrid moving object detection method for aerial images," in Advances in Multimedia Information Processing-PCM 2010, G. Qiu, K. Lam, and H. Kiya, Eds., pp. 357–368, Springer, Berlin, Germany, 2010.
[43] A. W. N. Ibrahim, P. W. Ching, G. L. Gerald Seet, W. S. Michael Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in Proceedings of the 4th Pacific-Rim Symposium on Image and Video Technology (PSIVT '10), pp. 456–461, November 2010.
[44] O. Oreifej, R. Mehran, and M. Shah, "Human identity recognition in aerial images," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 709–716, June 2010.
[45] C. Wu, X. Cao, R. Lin, and F. Wang, "Registration-based moving vehicle detection for low-altitude urban traffic surveillance," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 373–378, July 2010.
[46] Q. Yu and G. Medioni, "Motion pattern interpretation and detection for tracking moving vehicles in airborne video," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2671–2678, June 2009.
[47] M. Teutsch and W. Kruger, "Spatio-temporal fusion of object segmentation approaches for moving distant targets," in Proceedings of the 15th International Conference on Information Fusion (FUSION '12), pp. 1988–1995, 2012.
[48] C. Guilmart, S. Herbin, and P. Perez, "Context-driven moving object detection in aerial scenes with user input," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1781–1784, September 2011.
[49] M. Teutsch and W. Kruger, "Detection, segmentation, and tracking of moving objects in UAV videos," in Proceedings of the IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance (AVSS '12), pp. 313–318, 2012.
[50] R. B. Rodrigues, S. Pellegrino, and H. Pistori, "Combining color and haar wavelet responses for aerial image classification," in Artificial Intelligence and Soft Computing, L. Rutkowski, M. Korytkowski, R. Scherer, R. Tadeusiewicz, L. Zadeh, and J. Zurada, Eds., pp. 583–591, Berlin, Germany, 2012.
[51] S. G. Fowers, D. J. Lee, D. A. Ventura, and J. K. Archibald, "The nature-inspired BASIS feature descriptor for UAV imagery and its hardware implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756–768, 2013.
[52] J. Lu, P. Fang, and Y. Tian, "An objects detection framework in UAV videos," in Advances in Computer Science and Education Applications, M. Zhou and H. Tan, Eds., pp. 113–119, Springer, Berlin, Germany, 2011.
[53] A. Camargo, Q. He, and K. Palaniappan, "Performance evaluation of optimization methods for super-resolution mosaicking on UAS surveillance videos," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 8355 of Proceedings of SPIE, 2012.
[54] M. Teutsch, W. Kruger, and N. Heinze, "Detection and classification of moving objects from UAVs with optical sensors," in Signal Processing, Sensor Fusion, and Target Recognition, vol. 8050 of Proceedings of SPIE, April 2011.
[55] L. L. Coulter, D. A. Stow, and Y. H. Tsai, "Automated detection of people and vehicles in natural environments using high temporal resolution airborne remote sensing," in Proceedings of the ASPRS Annual Conference, pp. 78–90, 2012.
[56] L. Wang, H. Zhao, S. Guo, Y. Mai, and S. Liu, "The adaptive compensation algorithm for small UAV image stabilization," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 4391–4394, 2012.
[57] M. Bhaskaranand and J. D. Gibson, "Low-complexity video encoding for UAV reconnaissance and surveillance," in Proceedings of the IEEE Military Communications Conference (MILCOM '11), pp. 1633–1638, November 2011.
[58] X. Tan, X. Yu, J. Liu, and W. Huang, "Object tracking based on unmanned aerial vehicle video," in Applied Informatics and Communication, D. Zeng, Ed., pp. 432–438, Springer, Berlin, Germany, 2011.
[59] Z. Lin, C. Ren, N. Yao, and F. Xie, "A shadow compensation method for aerial image," Geomatics and Information Science of Wuhan University, vol. 38, no. 4, pp. 431–435, 2013.
[60] K. Imamura, N. Kubo, and H. Hashimoto, "Automatic moving object extraction using X-means clustering," in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 246–249, December 2010.
[61] N. Suo and X. Qian, "A comparison of features extraction method for HMM-based motion recognition," in Proceedings of the 2nd Conference on Environmental Science and Information Application Technology (ESIAT '10), pp. 636–639, July 2010.
[62] Z. Shoujuan and Z. Quan, "New feature extraction algorithm for satellite image non-linear small objects," in Proceedings of the IEEE Symposium on Electrical & Electronics Engineering (EEESYM '12), pp. 626–629, 2012.
[63] X. F. Wu, X. S. Wang, and H. Z. Lu, "Motion feature extraction for stepped frequency radar based on Hough transform," Radar, Sonar & Navigation, IET, vol. 4, no. 1, pp. 17–27, 2010.
[64] K. Huang, Y. Zhang, and T. Tan, "A discriminative model of motion and cross ratio for view-invariant action recognition," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2187–2197, 2012.
[65] R. Porter, A. M. Fraser, and D. Hush, "Wide-area motion imagery," IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 56–65, 2010.
