8/12/2019 Vehicle Detection Full Doc
Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks
Abstract
We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we depart from the stereotypical existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding-window based, and instead design a pixel-wise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixel-wise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and nonvehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixel-wise classification via the DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
Introduction:
Aerial surveillance has a long history in the military for observing enemy activities and in the commercial world for monitoring resources such as forests and crops. Similar imaging techniques are used in aerial news gathering and search and rescue. Traditionally, aerial surveillance has been performed primarily using film or electronic framing cameras. The objective has been to gather high-resolution still images of an area under surveillance that could later be examined by human or machine analysts to derive information of interest. Currently, there is growing interest in using video cameras for these tasks. Video captures dynamic events that cannot be understood from aerial still images. It enables feedback and triggering of actions based on dynamic events and provides crucial and timely intelligence and understanding that is not otherwise available. Video observations can be used to detect and geolocate moving objects in real time and to control the camera, for example, to follow detected vehicles or constantly monitor a site. However, video also brings new technical challenges. Video cameras have lower resolution than framing cameras. In order to get the resolution required to identify objects on the ground, it is generally necessary to use a telephoto lens with a narrow field of view. This leads to the most serious shortcoming of video in surveillance: it provides only a "soda straw" view of the scene. The camera must then be scanned to cover extended regions of interest. An observer watching this video must pay constant attention, as objects of interest move rapidly in and out of the camera's field of view. The video also lacks a larger visual context: the observer has difficulty relating the locations of objects seen at one point in time to objects seen moments before. In addition, geodetic coordinates for objects of interest seen in the video are not available.
One of the main topics in aerial image analysis is scene registration and alignment. Another very important topic in intelligent aerial surveillance is vehicle detection and tracking. The challenges of vehicle detection in aerial surveillance include camera motions such as panning, tilting, and rotation. In addition, airborne platforms at different heights result in different sizes of target objects.
In this paper, we design a new vehicle detection framework that preserves the advantages of the existing works and avoids their drawbacks. The framework can be divided into a training phase and a detection phase. In the training phase, we extract multiple features, including local edge and corner features as well as vehicle colors, to train a dynamic Bayesian network (DBN). In the detection phase, we first perform background color removal. Afterward, the same feature extraction procedure is performed as in the training phase. The extracted features serve as the evidence to infer the unknown state of the trained DBN, which indicates whether a pixel belongs to a vehicle or not. In this paper, we do not perform region-based classification, which would depend highly on the results of color segmentation algorithms such as mean shift. There is no need to generate multiscale sliding windows either. The distinguishing feature of the proposed framework is that the detection task is based on pixel-wise classification. However, the features are extracted from a neighborhood region of each pixel. Therefore, the extracted features comprise not only pixel-level information but also the relationships among neighboring pixels in a region. Such a design is more effective and efficient than region-based or multiscale sliding-window detection methods.
Literature Survey
Vehicle Detection in Aerial Images Using Generic Features, Grouping, and Context
This work introduces a new approach to automatic vehicle detection in monocular large-scale aerial images. The extraction is based on a hierarchical model that describes the prominent vehicle features at different levels of detail. Besides the object properties, the model also comprises contextual knowledge, i.e., relations between a vehicle and other objects such as the pavement beside a vehicle and the sun causing a vehicle's shadow projection. This approach neither relies on external information like digital maps or site models, nor is it limited to specific vehicle models.
Models
The knowledge about vehicles that is represented by the model is summarized in Fig. 1, which reflects the almost vertical views obtained from aerial imagery.
Extraction strategy
The extraction strategy is derived from the vehicle model and, consequently, follows the two paradigms "coarse to fine" and "hypothesize and test". It consists of the following steps: (1) creation of
parabolic surface. As a side effect, both algorithms for line and surface extraction return the contours of their bounding edges (bold black lines in Fig. 2c). These contours and additional edges extracted in their neighborhood (thin white lines in Fig. 2c) are approximated by line segments. Then, we group the segments into rectangles and, furthermore, into rectangle sequences. Fitting the rectangle sequences to the dominant direction of the underlying image edges yields the final 2D vehicle hypotheses (Fig. 2d).
Hypotheses validation and selection
Before constructing 3D models from the 2D hypotheses, we check the hypotheses for validity. Each rectangle sequence is evaluated regarding the geometric criteria of length, width, and length/width ratio, as well as the radiometric criteria of homogeneity of an individual rectangle and gray-value constancy of rectangles connected with a "windshield rectangle". The validation is based on fuzzy-set theory and ensures that only promising, non-overlapping hypotheses are selected (Fig. 2e).
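The fuzzy-set validation described above can be sketched as follows. The trapezoidal membership functions, the fuzzy AND combination by minimum, and all numeric limits are illustrative assumptions, not the values used in the cited work:

```java
// Sketch of fuzzy-set validation for a rectangle sequence (geometric criteria only).
class FuzzyValidation {

    // Trapezoidal membership: full score inside [lo, hi], linear falloff over `slope`.
    static double membership(double x, double lo, double hi, double slope) {
        if (x >= lo && x <= hi) return 1.0;
        double d = (x < lo) ? lo - x : x - hi;
        return Math.max(0.0, 1.0 - d / slope);
    }

    // Combine the criteria with the fuzzy AND (minimum). Lengths are in meters
    // on the ground plane (assumed known from georeferencing).
    public static double score(double length, double width) {
        double mLen   = membership(length, 3.5, 6.0, 2.0);          // typical car length
        double mWid   = membership(width, 1.4, 2.2, 1.0);           // typical car width
        double mRatio = membership(length / width, 1.8, 3.5, 1.0);  // length/width ratio
        return Math.min(mLen, Math.min(mWid, mRatio));
    }
}
```

A hypothesis would then be kept when its score exceeds some acceptance threshold; the radiometric criteria would contribute further membership terms in the same way.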
3D model generation and verification
In order to approximate the 3D shape of a vehicle hypothesis, a particular height profile is selected from a set of predefined profiles. Please note that the hypothesis' underlying rectangles remain unchanged, i.e., the height values of the profile refer to the image edges perpendicular to the vehicle direction. The selection of the profile depends on the extracted substructures, i.e., the shape of the validated rectangle sequence. We distinguish rectangle sequences corresponding to three types of vehicles: hatchback cars, saloon cars, and other vehicles such as vans, small trucks, etc. In contrast to hatchback and saloon cars, the derivation of an accurate height profile for the last category would require a deeper analysis of the hypotheses (e.g., for an unambiguous determination of the vehicle orientation). Hence, in this case, we approximate the height profile only roughly by an elliptic arc having a constant height offset above the ground. After creating a 3D model from the 2D hypothesis and the respective height profile, we are able to predict the boundary of a vehicle's shadow projection on the underlying road surface. A vehicle hypothesis is judged as verified if a dark and homogeneous region is extracted beside the shadowed part of the vehicle and a bright and homogeneous region beside the illuminated part, respectively (Fig. 2f).
Vehicle Detection Using Normalized Color and Edge Map
This work presents a novel vehicle detection approach for detecting vehicles from static images using color and edges. Different from traditional methods, which use motion features to detect vehicles, this method introduces a new color transform model to find important "vehicle colors" for quickly locating possible vehicle candidates. Since vehicles have various colors under different weather and lighting conditions, few works have been proposed for the detection of vehicles using colors. The new color transform model has excellent capabilities for distinguishing vehicle pixels from the background, even though the pixels are lit under varying illuminations. After finding possible vehicle candidates, three important features, including corners, edge maps, and coefficients of wavelet transforms, are used for constructing a cascade multichannel classifier. According to this classifier, an effective scanning can be performed to verify all possible candidates quickly. The scanning process can be achieved quickly because most background pixels are eliminated in advance by the color feature.
This work presents a novel system to detect vehicles from static images using edges and vehicle colors. The flowchart of this system is shown in Fig. 3. At the beginning, a color transformation projects all the colors of input pixels onto a color space in which vehicle pixels can be easily distinguished from the background. Here, a Bayesian classifier is used for this identification. Then, each detected vehicle pixel will correspond to a possible vehicle. Since vehicles have different sizes and orientations, different vehicle hypotheses are generated from each detected vehicle pixel. For verifying each hypothesis, we use three kinds of vehicle features to filter out all impossible vehicle candidates. The features include edges, coefficients of the wavelet transform, and corners. Using proper weights obtained from a set of training samples, these features can then be combined to form an optimal vehicle classifier. Then, desired vehicles can be verified and detected from static images very robustly and accurately.
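The color-based candidate step can be illustrated with the following sketch. The cited work defines its own color transform model; here a generic chromaticity normalization and hand-picked Gaussian class models stand in for it, so the projection and all numeric parameters are assumptions:

```java
// Sketch: project a pixel to an illumination-insensitive color space, then
// apply the Bayes rule to decide whether it is a vehicle-color pixel.
class ColorVehicleFilter {

    // Project (R,G,B) to normalized chromaticity (u,v) = (R/(R+G+B), G/(R+G+B)).
    public static double[] project(int r, int g, int b) {
        double s = r + g + b + 1e-9;
        return new double[]{ r / s, g / s };
    }

    // One-dimensional Gaussian likelihood used by the Bayes rule below.
    static double gauss(double x, double mean, double sd) {
        double z = (x - mean) / sd;
        return Math.exp(-0.5 * z * z) / (sd * Math.sqrt(2 * Math.PI));
    }

    // Posterior probability that the pixel has a vehicle color. The class
    // means, deviations, and prior are illustrative; in practice they are
    // estimated from labeled training pixels.
    public static double posterior(int r, int g, int b) {
        double[] uv = project(r, g, b);
        double likeVehicle = gauss(uv[0], 0.33, 0.02) * gauss(uv[1], 0.33, 0.02); // near-gray body colors
        double likeBack    = gauss(uv[0], 0.30, 0.10) * gauss(uv[1], 0.40, 0.10); // greenish background
        double prior = 0.2; // assumed prior probability of "vehicle"
        double num = likeVehicle * prior;
        return num / (num + likeBack * (1 - prior));
    }
}
```

Thresholding this posterior at 0.5 would give the quick candidate mask that the cascade classifier then verifies.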
Detection of Vehicles and Vehicle Queues for Road Monitoring Using High-Resolution Aerial Images
This work introduces a new approach to automatic vehicle detection in monocular high-resolution aerial images. The extraction relies upon both local and global features of vehicles and vehicle queues, respectively. To model a vehicle at the local level, a 3D wireframe representation is used that describes the prominent geometric and radiometric features of cars, including their shadow region. The model is adaptive because, during extraction, the expected saliencies of various edge features are automatically adjusted depending on the viewing angle, the vehicle color measured from the image, and the current illumination direction. The extraction is carried out by matching this model "top-down" to the image and evaluating the support found in the image. At the global level, the detailed local description is extended by more generic knowledge about vehicles, as they are often part of vehicle queues. Such groupings of vehicles are modeled by ribbons that exhibit the typical symmetries and spacings of vehicles over a larger distance. Queue extraction includes the computation of directional edge symmetry measures resulting in a symmetry map, in which dominant, smooth curvilinear structures are searched for. By fusing vehicles found using the local and the global model, the overall extraction becomes more complete and more correct.
Vehicle detection is carried out by a top-down matching algorithm. A comparison with a grouping scheme that groups image features such as edges and homogeneous regions into car-like structures has shown that matching the complete geometric model top-down to the image is more robust. A reason for this is that, in general, bottom-up grouping needs reliable features as seed hypotheses, which are hardly given in the case of such small objects as cars. Another disadvantage of grouping refers to the fact that we must constrain our detection algorithm to monocular images, since vehicles may move within the time between two exposures. The matching proceeds as follows:
Adapt the expected saliency of the edge features depending on position, orientation, color, and sun direction.
Measure features from the image: edge amplitude support of each model edge, edge direction support of each model edge, color constancy, darkness of shadow, and intensity at the roof region.
Compute a matching score (a likelihood) by comparing the measured values with the expected values.
Based on the likelihood, decide whether the car hypothesis is accepted or not.
Vehicle Queue Detection
Vehicle queue detection is based on searching for one-vehicle-wide ribbons that are characterized by:
Significant directional symmetries of gray-value edges, with symmetry maxima defining the queue's center line
Frequent intersections of short and perpendicularly oriented edges, with a homogeneous distribution along the center line
High parallel edge support on both sides of the center line
Sufficient length
Fast Vehicle Detection and Tracking in Aerial Image Bursts
This work presents a combination of vehicle detection and tracking that is adapted to the special restrictions on image size and flow but nevertheless yields reliable information about the traffic situation. By combining a set of modified edge filters, it is possible to detect cars of different sizes and orientations with minimal computing effort if some a priori information about the street network is used. The detected vehicles are tracked between two consecutive images by an algorithm using Singular Value Decomposition. Based on their distance and correlation, the features are assigned pairwise with respect to their global positioning among each other. By choosing only the best-correlating assignments, it is possible to compute reliable values for the average velocities.
Preprocessing
To identify the active regions as well as the orientation of images relative to each other, they have to be georeferenced, which means that their absolute geographic position and dimensions have to be defined. Using IMU information and a digital terrain model, the image data is projected into GeoTIFF images, which are planar and oriented toward north. This is useful for combining the recorded images with existing datasets like maps or street data. To avoid examining the whole image, only the street area given by a database is considered.
Detection
For fast detection of traffic objects in the large images, a set of modified edge filters that represents a two-dimensional car model is used.
Based on orientation, objects within a certain distance of each other are discarded, and only the one with the strongest intensity remains.
Vehicle Detection and Tracking Using the Block Matching Algorithm
This work describes an approach to vehicle detection and tracking fully based on the Block Matching Algorithm (BMA), which is the motion estimation algorithm employed in the MPEG compression standard. BMA partitions the current frame into small, fixed-size blocks and matches them against the previous frame in order to estimate block displacements (referred to as motion vectors) between two successive frames. The detection and tracking approach is as follows. BMA provides motion vectors, which are then regularized using a Vector Median Filter. After the regularization step, motion vectors are grouped based on their adjacency and similarity, and a set of vehicles is built for each frame. Finally, the tracking algorithm establishes the correspondences between the vehicles detected in each frame of the sequence, allowing the estimation of their trajectories as well as the detection of new entries and exits. The tracking algorithm is strongly based on the BMA.
BMA consists of partitioning each frame of a given sequence into square blocks of a small fixed size and detecting block displacements between the current frame and the previous one by searching inside a given scan area. It provides a field of displacement vectors (DVF) associated with the frame. Each block encloses a part of the image and is a matrix containing the gray tones of that image part. What we have to do is estimate each block's position in the previous frame in order to calculate displacements and speeds (knowing the time interval Δt between frames). Each block defines, in the previous frame, a "scan area" centered at the block's center. The block is shifted pixel by pixel inside the scan area, and a match measure is calculated at each shift position. The comparison is aimed at determining the pixel set most similar to the block among its possible positions in the scan area. Among these positions, the scan-area subpart defined by its center will be the matrix with the best match measure.
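A minimal version of this exhaustive block matching is sketched below. The sum of absolute differences (SAD) is used as the match measure; the cited description does not fix a particular measure, so SAD is an assumption:

```java
// Sketch of exhaustive block matching between two gray-level frames [y][x].
class BlockMatcher {

    // Sum of absolute differences between the block in `cur` at (bx,by) and a
    // candidate position (cx,cy) in `prev`, for an n x n block.
    static int sad(int[][] cur, int[][] prev, int bx, int by, int cx, int cy, int n) {
        int s = 0;
        for (int y = 0; y < n; y++)
            for (int x = 0; x < n; x++)
                s += Math.abs(cur[by + y][bx + x] - prev[cy + y][cx + x]);
        return s;
    }

    // Exhaustive search inside a (2r+1) x (2r+1) scan area centered on the
    // block's own position; returns the motion vector {dx, dy} with best match.
    public static int[] motionVector(int[][] cur, int[][] prev, int bx, int by, int n, int r) {
        int h = prev.length, w = prev[0].length;
        int best = Integer.MAX_VALUE;
        int[] mv = {0, 0};
        for (int dy = -r; dy <= r; dy++) {
            for (int dx = -r; dx <= r; dx++) {
                int cx = bx + dx, cy = by + dy;
                if (cx < 0 || cy < 0 || cx + n > w || cy + n > h) continue; // stay inside frame
                int s = sad(cur, prev, bx, by, cx, cy, n);
                if (s < best) { best = s; mv = new int[]{dx, dy}; }
            }
        }
        return mv;
    }
}
```

Dividing the returned displacement by the inter-frame interval Δt gives the block's speed, as described above.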
Vehicle Detection and Tracking for the Urban Challenge
The Urban Challenge 2007 was a race of autonomous vehicles through an urban environment organized by the U.S. government. During the competition, the vehicles encountered various typical scenarios of urban driving. Here they had to interact with other traffic, which was human or machine driven. This work describes the perception approach taken by Team Tartan.
Vision-Based Detection, Tracking, and Classification of Vehicles Using Stable Features with Automatic Camera Calibration
A system for detection, tracking, and classification of vehicles based on feature-point tracking is presented in this chapter. An overview of the system is shown in the figure. Feature points are automatically detected and tracked through the video sequence, and features lying on the background or on shadows are removed by background subtraction, leaving only features on the moving vehicles. These features are then separated into two categories: stable and unstable. Using a plumb line projection (PLP), the 3D coordinates of the stable features are computed; these stable features are grouped together to provide a segmentation of the vehicles, and the unstable features are then assigned to these groups. The final step involves eliminating groups that do not appear to be vehicles, establishing correspondence between groups detected in different image frames to achieve long-term tracking, and classifying vehicles based upon the number of unstable features in the group. The details of these steps are described in the following subsections.
Vehicle Detection and Tracking in Car Video Based on Motion Model
This work aims at real-time in-car video analysis to detect and track vehicles ahead for safety, auto-driving, and target tracing. It describes a comprehensive approach to localizing target vehicles in video under various environmental conditions. The extracted geometry features from the video are projected onto a 1D profile continuously and are tracked constantly. We rely on temporal information about features and their motion behaviors for vehicle identification, which compensates for the complexity of recognizing vehicle shapes, colors, and types. We model the motion in the field of view probabilistically according to the scene characteristics and a vehicle motion model. A Hidden Markov Model is used for separating target vehicles from the background and tracking them probabilistically.
Existing System
Hinz and Baumgartner utilized a hierarchical model that describes different levels of detail of vehicle features. No specific vehicle model is assumed, which makes the method flexible. However, their system would miss vehicles when the contrast is weak or when the influences of neighboring objects are present.
Cheng and Butler considered multiple clues and used a mixture of experts to merge the clues for vehicle detection in aerial images. They performed color segmentation via the mean-shift algorithm and motion analysis via change detection. In addition, they presented a trainable sequential maximum a posteriori method for multiscale analysis and enforcement of contextual information. However, the motion analysis algorithm applied in their system cannot deal with the aforementioned camera motions and complex background changes. Moreover, in the information fusion step, their algorithm depends highly on the color segmentation results.
Lin et al. proposed a method that subtracts the background colors of each frame and then refines vehicle candidate regions by enforcing size constraints of vehicles. However, they assumed too many parameters, such as the largest and smallest sizes of vehicles and the height and focus of the airborne camera. Assuming these parameters as known priors might not be realistic in real applications.
Other authors proposed a moving-vehicle detection method based on cascade classifiers. A large number of positive and negative training samples need to be collected for the training purpose. Moreover, multiscale sliding windows are generated at the detection stage. The main disadvantage of this method is that there are many missed detections of rotated vehicles. Such results are not surprising given the experience of face detection using cascade classifiers: if only frontal faces are trained, then faces in other poses are easily missed; however, if faces with poses are added as positive samples, the number of false alarms surges.
Disadvantages
The hierarchical-model system would miss vehicles when the contrast is weak or when the influences of neighboring objects are present.
The results of existing methods depend highly on color segmentation.
There are many missed detections of rotated vehicles.
A vehicle tends to be separated into many regions, since car roofs and windshields usually have different colors.
High computational complexity.
Proposed System
In this paper, we design a new vehicle detection framework that preserves the advantages of the existing works and avoids their drawbacks. The framework can be divided into a training phase and a detection phase. In the training phase, we extract multiple features, including local edge and corner features as well as vehicle colors, to train a dynamic Bayesian network (DBN). In the detection phase, we first perform background color removal. Afterward, the same feature extraction procedure is performed as in the training phase. The extracted features serve as the evidence to infer the unknown state of the trained DBN, which indicates whether a pixel belongs to a vehicle or not. In this paper, we do not perform region-based classification, which would depend highly on the results of color segmentation algorithms such as mean shift. There is no need to generate multiscale sliding windows either. The distinguishing feature of the proposed framework is that the detection task is based on pixel-wise classification. However, the features are extracted from a neighborhood region of each pixel. Therefore, the extracted features comprise not only pixel-level information but also the relationships among neighboring pixels in a region. Such a design is more effective and efficient than region-based or multiscale sliding-window detection methods.
Advantages
More effective and efficient.
It does not require a large number of training samples.
It increases the adaptability and the accuracy of detection in various aerial images.
Hardware requirements:
Processor: Any processor above 500 MHz.
System Architecture
Modules
1. Frame Extraction
2. Background Color Removal
3. Feature Extraction
4. Classification
5. Post Processing
Module Description
1. Frame Extraction
In this module, we read the input video and extract its frames.
(Flow: Video → Frame Extraction → Frame Image)
2. Background Color Removal
In this module, we construct the color histogram of each frame and remove the colors that appear most frequently in the scene. The removed pixels do not need to be considered in subsequent detection processes. Performing background color removal not only reduces false alarms but also speeds up the detection process.
(Flow: Frame Image → Construct Color Histogram)
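A compact sketch of this module is given below. The color quantization level and the frequency cutoff are assumptions for illustration; the actual bin counts and thresholds are not specified in the text:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of background color removal via a color histogram of the frame.
class BackgroundColorRemoval {

    // Quantize each channel to `levels` bins so that similar colors share a
    // histogram entry (the bin count is an assumption).
    static int quantize(int rgb, int levels) {
        int r = ((rgb >> 16) & 255) * levels / 256;
        int g = ((rgb >> 8) & 255) * levels / 256;
        int b = (rgb & 255) * levels / 256;
        return (r * levels + g) * levels + b;
    }

    // Mark as background every pixel whose quantized color covers more than
    // `frac` of the frame; returns a mask (true = keep for later detection).
    public static boolean[] foregroundMask(int[] pixels, int levels, double frac) {
        Map<Integer, Integer> hist = new HashMap<>();
        for (int p : pixels) hist.merge(quantize(p, levels), 1, Integer::sum);
        int cutoff = (int) (frac * pixels.length);
        boolean[] keep = new boolean[pixels.length];
        for (int i = 0; i < pixels.length; i++)
            keep[i] = hist.get(quantize(pixels[i], levels)) <= cutoff;
        return keep;
    }
}
```

Only the pixels left `true` in the mask need to be passed to feature extraction and classification, which is where the speedup comes from.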
3. Feature Extraction
In this module, we extract features from each image frame. The module performs edge detection, corner detection, color transformation, and color classification.
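The automatic Canny thresholding mentioned in the abstract can be sketched as follows. The formulas implement Tsai's bilevel moment-preserving thresholding applied to a histogram of gradient magnitudes; using the result directly as the Canny high threshold (with a fixed ratio for the low one) is our simplifying assumption:

```java
// Moment-preserving bilevel threshold: choose the threshold so that the
// binarized data keeps the first three moments of the input histogram.
class MomentThreshold {

    public static int threshold(int[] histogram) {
        long n = 0;
        double m1 = 0, m2 = 0, m3 = 0;
        for (int z = 0; z < histogram.length; z++) {
            long c = histogram[z];
            n += c;
            m1 += (double) c * z;
            m2 += (double) c * z * z;
            m3 += (double) c * z * z * z;
        }
        m1 /= n; m2 /= n; m3 /= n;                    // normalized moments (m0 = 1)
        double cd = m2 - m1 * m1;                     // system determinant
        double c0 = (m1 * m3 - m2 * m2) / cd;
        double c1 = (m1 * m2 - m3) / cd;
        double disc = Math.sqrt(Math.max(0, c1 * c1 - 4 * c0));
        double z0 = 0.5 * (-c1 - disc);               // two representative levels
        double z1 = 0.5 * (-c1 + disc);
        double p0 = (z1 - m1) / (z1 - z0);            // fraction of pixels below threshold
        long target = (long) (p0 * n), acc = 0;       // threshold = p0-quantile
        for (int z = 0; z < histogram.length; z++) {
            acc += histogram[z];
            if (acc >= target) return z;
        }
        return histogram.length - 1;
    }
}
```

Fed with the gradient-magnitude histogram of each frame, this yields a threshold that adapts to images taken at different heights and camera angles, rather than a hand-tuned constant.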
4. Classification
In this module, we perform pixel-wise classification for vehicle detection using a dynamic Bayesian network (DBN). In the training stage, we obtain the conditional probability tables of the DBN model via the expectation-maximization algorithm by providing the ground-truth labeling of each pixel and its corresponding observed features from several training videos. In the detection phase, the Bayesian rule is used to obtain the probability that a pixel belongs to a vehicle.
(Flow: Frame Image → Detect Edges / Detect Corners / Transform Color; Frame Image → Get Pixel Values → Apply DBN → Detect Vehicle)
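A minimal two-slice DBN filter for a single pixel is sketched below. The conditional probability tables are illustrative stand-ins for the tables that would be learned with EM from the ground-truth labelings, and the observation is reduced to one discretized feature symbol:

```java
// Sketch: per-pixel DBN filtering. State V_t is background (0) or vehicle (1);
// observation O_t is a discretized feature symbol in {0, 1}.
class DbnPixelClassifier {

    static final double[][] TRANS = { {0.9, 0.1},   // P(V_t | V_{t-1} = background)
                                      {0.2, 0.8} }; // P(V_t | V_{t-1} = vehicle)
    static final double[][] OBS   = { {0.7, 0.3},   // P(O_t | V_t = background)
                                      {0.1, 0.9} }; // P(O_t | V_t = vehicle)

    // One filtering step via the Bayes rule:
    // belief_t(v) ∝ P(O_t | v) * Σ_{v'} P(v | v') * belief_{t-1}(v').
    public static double[] step(double[] belief, int obs) {
        double[] next = new double[2];
        for (int v = 0; v < 2; v++) {
            double predicted = TRANS[0][v] * belief[0] + TRANS[1][v] * belief[1];
            next[v] = OBS[v][obs] * predicted;
        }
        double z = next[0] + next[1];                 // normalize
        next[0] /= z;
        next[1] /= z;
        return next; // next[1] = posterior probability the pixel is a vehicle
    }
}
```

Running `step` once per frame with that pixel's observed feature symbol accumulates evidence over time, so a pixel repeatedly showing vehicle-like features ends up with a high vehicle posterior.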
5. Post Processing
In this module, we use morphological operations to enhance the detection mask and perform connected-component labeling to obtain the vehicle objects. The size and aspect-ratio constraints are applied again after the morphological operations in the post-processing stage to eliminate objects that cannot be vehicles.
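The connected-component labeling and the size/aspect-ratio test can be sketched as follows; the area and aspect limits are illustrative assumptions:

```java
import java.util.ArrayDeque;

// Sketch: 4-connected component labeling on a binary detection mask, followed
// by the size and aspect-ratio constraints described above.
class PostProcessing {

    public static int countVehicles(boolean[][] mask, int minArea, int maxArea, double maxAspect) {
        int h = mask.length, w = mask[0].length, vehicles = 0;
        boolean[][] seen = new boolean[h][w];
        for (int sy = 0; sy < h; sy++) for (int sx = 0; sx < w; sx++) {
            if (!mask[sy][sx] || seen[sy][sx]) continue;
            // Flood-fill one component, tracking its area and bounding box.
            int area = 0, minX = sx, maxX = sx, minY = sy, maxY = sy;
            ArrayDeque<int[]> stack = new ArrayDeque<>();
            stack.push(new int[]{sx, sy});
            seen[sy][sx] = true;
            while (!stack.isEmpty()) {
                int[] p = stack.pop();
                int x = p[0], y = p[1];
                area++;
                minX = Math.min(minX, x); maxX = Math.max(maxX, x);
                minY = Math.min(minY, y); maxY = Math.max(maxY, y);
                int[][] nb = {{x + 1, y}, {x - 1, y}, {x, y + 1}, {x, y - 1}};
                for (int[] q : nb) {
                    if (q[0] >= 0 && q[0] < w && q[1] >= 0 && q[1] < h
                            && mask[q[1]][q[0]] && !seen[q[1]][q[0]]) {
                        seen[q[1]][q[0]] = true;
                        stack.push(q);
                    }
                }
            }
            // Keep only components whose area and bounding-box aspect ratio
            // are plausible for a vehicle.
            double bw = maxX - minX + 1, bh = maxY - minY + 1;
            double aspect = Math.max(bw, bh) / Math.min(bw, bh);
            if (area >= minArea && area <= maxArea && aspect <= maxAspect) vehicles++;
        }
        return vehicles;
    }
}
```

In the full pipeline the mask would first pass through morphological opening/closing; this sketch shows only the labeling and filtering step.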
Data Flow Diagram
(Flow: Detected Vehicle → Apply Post Processing → Mask the Detected Vehicles → Display the Result)
UML Diagram
Use Case Diagram
(Use cases: Input Video → Frame Extraction → Background Color Removal → Feature Extraction → Classification → Detected Frame)
User
Get Input Video
Extract Frames
Remove Background Color
Extract Features
Classification
Post Processing
Detected Vehicle
Class Diagram
Main
+String fn
+Video video1
+Image frame
+void readInputVideo()
+void extractFrame()

VehicleDetection
+Image frame
+void removeBackground()
+void extractFeature()
+void classification()
+void postProcessing()
+void storeDetectedFrame()
Activity Diagram
Input Video
Extract Frame
Sequence Diagram
Training Phase / Detection Phase
1 : Input Video()
2 : Extract Frame()
3 : Frames()
4 : Feature Extraction()
5 : Remove Background()
6 : Features()
7 : Classification()
8 : Post Processing()
Collaboration diagram
Training Phase / Detection Phase
1 : Input Video()
2 : Extract Frame()
3 : Frames()
4 : Feature Extraction()
5 : Remove Background()
6 : Features()
7 : Classification()
8 : Post Processing()
Software Description
Java Technology
Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be characterized by all of the following buzzwords:
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or an iMac.
The Java Platform
A platform is the hardware or software environment in which a program runs. We've already mentioned some of the most popular platforms, like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it's a software-only platform that runs on top of other hardware-based platforms.
The Java platform has two components:
The Java Virtual Machine (Java VM)
The Java Application Programming Interface (Java API)
You've already been introduced to the Java VM. It's the base for the Java platform and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, "What Can Java Technology Do?", highlights what functionality some of the packages in the Java API provide.
The following figure depicts a program that's running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.
THE JAVA PLATFORM
Native code is code that, after you compile it, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time bytecode compilers can bring performance close to that of native code without threatening portability.
What Can Java Technology Do?
The most common types of programs written in the Java programming language are applets and applications. If you've surfed the Web, you're probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.
However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet. A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they
are runtime extensions of applications. Instead of working in browsers, though, servlets
run within ava Web servers, configuring or tailoring the server.
How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.

Applets: The set of conventions used by applets.

Networking: URLs, TCP and UDP sockets, and IP addresses.
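A tiny sketch can make "the essentials" above concrete: strings, data structures, threads, and system properties, all from the standard library. The class name EssentialsDemo and the sample values are ours, not part of this project's code.

```java
import java.util.ArrayList;
import java.util.List;

public class EssentialsDemo {
    public static void main(String[] args) throws InterruptedException {
        // Strings and data structures
        List<String> servers = new ArrayList<>();
        servers.add("web");
        servers.add("mail");
        System.out.println("servers=" + String.join(",", servers));

        // Threads: start a worker and wait for it to finish
        Thread t = new Thread(() -> System.out.println("worker done"));
        t.start();
        t.join();

        // System properties (the value varies by machine, so only test presence)
        System.out.println("has java.version=" + (System.getProperty("java.version") != null));
    }
}
```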
FIGURE 3: JAVA 2 SDK
ODBC

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.
Windows 95 does not install the ODBC system files on your system.
JDBC

In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of relational database management systems (RDBMSs).

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:
1. SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to "generate" JDBC code and to hide many of JDBC's complexities from the end user.
2. SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle nonstandard functionality in a manner that is suitable for its users.
3. JDBC must be implementable on top of common database interfaces

The JDBC SQL API must "sit" on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system

Because of Java's acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.
5. Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.
6. Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; also, fewer errors appear at runtime.
7. Keep the common cases simple

Because more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
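The call pattern the goals above describe can be sketched with the standard java.sql classes. This is a minimal illustration, not the project's own code: the data-source name "SalesFigures" mirrors the ODBC example earlier in this document and is hypothetical, and on a machine without a matching driver the connection attempt simply fails and is reported.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {
    // Legacy JDBC-ODBC bridge style URL ("jdbc:odbc:<data source name>")
    static String dataSourceUrl(String dsn) {
        return "jdbc:odbc:" + dsn;
    }

    public static void main(String[] args) {
        String url = dataSourceUrl("SalesFigures");
        System.out.println("url=" + url);
        // A simple SELECT: open a connection, run a query, walk the result set
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        } catch (SQLException e) {
            // Expected when no ODBC driver or data source is installed
            System.out.println("no suitable driver or data source available");
        }
    }
}
```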
Source Code Editor

Syntax highlighting for Java, JavaScript, XML, HTML, CSS, JSP, IDL
Customizable fonts, colors, and keyboard shortcuts
Live parsing and error marking
Popup Javadoc for quick access to documentation
Advanced code completion
Automatic indentation, which is customizable
Word matching with the same initial prefixes
Navigation of current class and commonly used features
Macros and abbreviations
Goto declaration and Goto class
Matching brace highlighting
JumpList allows you to return the cursor to a previous modification
UI Builder

Fully WYSIWYG designer with Test Form feature
Support for visual and nonvisual forms
Extensible Component Palette with preinstalled Swing and AWT components
Component Inspector showing a component's tree and properties
Automatic one-way code generation, fully customizable
Support for AWT/Swing layout managers, drag-and-drop layout customization
Powerful visual editor
Support for null layout
In-place editing of text labels of components, such as labels, buttons, and text fields
JavaBeans support, including installing, using, and customizing properties, events, and customizers
Visual JavaBean customization, with the ability to create forms from any JavaBean classes
Connecting beans using the Connection wizard
Zoom view ability
Database Support

Database schema browsing to see the tables, views, and stored procedures defined in a database
Database schema editing using wizards
Data view to see data stored in tables
SQL and DDL command execution to help you write and execute more complicated SQL or DDL commands
Migration of table definitions across databases from different vendors
Works with databases such as MySQL, PostgreSQL, Oracle, IBM DB2, Microsoft SQL Server, PointBase, Sybase, Informix, Cloudscape, Derby, and more
The NetBeans IDE also provides full-featured refactoring tools, which allow you to rename and move classes, fields, and methods, as well as change method parameters. In addition, you get a debugger and an Ant-based project system.
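The Ant-based project system mentioned above is driven by a build.xml file. A minimal hand-written equivalent might look like the fragment below; the project name, directory names, and targets are illustrative, not taken from this project.

```xml
<project name="VehicleDetection" default="compile" basedir=".">
    <property name="src.dir" value="src"/>
    <property name="build.dir" value="build"/>

    <!-- Compile all sources under src/ into build/ -->
    <target name="compile">
        <mkdir dir="${build.dir}"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}" includeantruntime="false"/>
    </target>

    <!-- Remove generated class files -->
    <target name="clean">
        <delete dir="${build.dir}"/>
    </target>
</project>
```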
Screen Shot
Sample Coding

import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
import javax.media.*;
import javax.media.control.TrackControl;
import javax.media.format.RGBFormat;
import javax.media.format.VideoFormat;
import javax.media.util.BufferToImage;
import javax.swing.*;

public class FrameCapture implements ControllerListener
{
    Processor p;
    Object waitSync = new Object();
    boolean stateTransitionOK = true;
    public boolean alreadyPrnt = false;
    int temp = 1;
    MainFrame mf;

    FrameCapture(MainFrame m)
    {
        mf = m;
    }

    public boolean open(MediaLocator ml)
    {
        try
        {
            p = Manager.createProcessor(ml);
        }
        catch (Exception e)
        {
            System.err.println("Failed to create a processor from the given url: " + e);
            return false;
        }
        p.addControllerListener(this);
        p.configure();
        if (!waitForState(Processor.Configured))
        {
            System.err.println("Failed to configure the processor.");
            return false;
        }
        p.setContentDescriptor(null);
        TrackControl tc[] = p.getTrackControls();
        if (tc == null)
        {
            System.err.println("Failed to obtain track controls from the processor.");
            return false;
        }
        TrackControl videoTrack = null;
        for (int i = 0; i < tc.length; i++)
        {
            if (tc[i].getFormat() instanceof VideoFormat)
                videoTrack = tc[i];
            else tc[i].setEnabled(false);
        }
        if (videoTrack == null)
        {
            System.err.println("The input media does not contain a video track.");
            return false;
        }
        String videoFormat = videoTrack.getFormat().toString();
        Dimension videoSize = parseVideoSize(videoFormat);
        try
        {
            Codec codec[] = { new PostAccessCodec(videoSize) };
            videoTrack.setCodecChain(codec);
        }
        catch (Exception e)
        {
            System.err.println("The process does not support effects.");
        }
        p.prefetch();
        if (!waitForState(Processor.Prefetched))
        {
            System.err.println("Failed to realise the processor.");
            return false;
        }
        p.start();
        return true;
    }

    public Dimension parseVideoSize(String videoSize)
    {
        int x = 300, y = 200;
        StringTokenizer strtok = new StringTokenizer(videoSize, ", ");
        strtok.nextToken();
        String size = strtok.nextToken();
        StringTokenizer sizeStrtok = new StringTokenizer(size, "x");
        try
        {
            x = Integer.parseInt(sizeStrtok.nextToken());
            y = Integer.parseInt(sizeStrtok.nextToken());
        }
        catch (NumberFormatException e)
        {
            System.out.println("unable to find video size, assuming default of 300x200");
        }
        return new Dimension(x, y);
    }

    boolean waitForState(int state)
    {
        synchronized (waitSync)
        {
            try
            {
                while (p.getState() != state && stateTransitionOK)
                    waitSync.wait();
            }
            catch (Exception e)
            {
            }
        }
        return stateTransitionOK;
    }

    // A page break in the source cut this method; restored to the standard
    // JMF FrameAccess form.
    public void controllerUpdate(ControllerEvent evt)
    {
        if (evt instanceof ConfigureCompleteEvent || evt instanceof RealizeCompleteEvent
                || evt instanceof PrefetchCompleteEvent)
        {
            synchronized (waitSync)
            {
                stateTransitionOK = true;
                waitSync.notifyAll();
            }
        }
        else if (evt instanceof ResourceUnavailableEvent)
        {
            synchronized (waitSync)
            {
                stateTransitionOK = false;
                waitSync.notifyAll();
            }
        }
        else if (evt instanceof EndOfMediaEvent)
        {
            p.close();
            System.exit(0);
        }
    }

    static void prUsage()
    {
        System.err.println("Usage: java FrameAccess <url>");
    }

    public class PreAccessCodec implements Codec
    {
        void accessFrame(Buffer frame)
        {
            long t = (long) (frame.getTimeStamp() / 10000000f);
            if (frame.getLength() == 0)
            {
                p.stop();
            }
        }

        protected Format supportedIns[] = new Format[] { new VideoFormat(null) };
        protected Format supportedOuts[] = new Format[] { new VideoFormat(null) };
        Format input = null, output = null;

        public String getName()
        {
            return "PreAccess Codec";
        }

        public void open() {}
        public void close() {}
        public void reset() {}

        public Format[] getSupportedInputFormats()
        {
            return supportedIns;
        }

        public Format[] getSupportedOutputFormats(Format in)
        {
            if (in == null)
                return supportedOuts;
            else
            {
                Format outs[] = new Format[1];
                outs[0] = in;
                return outs;
            }
        }

        public Format setInputFormat(Format format)
        {
            input = format;
            return input;
        }

        public Format setOutputFormat(Format format)
        {
            output = format;
            return output;
        }

        public int process(Buffer in, Buffer out)
        {
            accessFrame(in);
            Object data = in.getData();
            in.setData(out.getData());
            out.setData(data);
            out.setFlags(Buffer.FLAG_NO_SYNC);
            out.setFormat(in.getFormat());
            out.setLength(in.getLength());
            out.setOffset(in.getOffset());
            return BUFFER_PROCESSED_OK;
        }
    }

    public class PostAccessCodec extends PreAccessCodec
    {
        public PostAccessCodec(Dimension size)
        {
            // The constructor body and the start of accessFrame() fall on a
            // page lost in the source; the lines below follow the standard
            // JMF frame-extraction pattern.
            supportedIns = new Format[] { new RGBFormat(size, Format.NOT_SPECIFIED,
                    Format.byteArray, Format.NOT_SPECIFIED, 24, 3, 2, 1) };
            this.size = size;
        }

        void accessFrame(Buffer frame)
        {
            // Convert the frame buffer to an image and write it out as a JPEG
            BufferToImage stopBuffer = new BufferToImage((VideoFormat) frame.getFormat());
            Image stopImage = stopBuffer.createImage(frame);
            try
            {
                BufferedImage outImage = new BufferedImage(size.width, size.height,
                        BufferedImage.TYPE_INT_RGB);
                Graphics og = outImage.getGraphics();
                og.drawImage(stopImage, 0, 0, size.width, size.height, null);
                Iterator writers = ImageIO.getImageWritersByFormatName("jpg");
                ImageWriter writer = (ImageWriter) writers.next();
                ImageOutputStream ios = ImageIO.createImageOutputStream(
                        new File("frame" + temp + ".jpg"));
                temp++;
                writer.setOutput(ios);
                writer.write(outImage);
                ios.close();
            }
            catch (IOException e)
            {
                System.out.println("Error :" + e);
            }
            long t = (long) (frame.getTimeStamp() / 10000000f);
            if (frame.getLength() == 0)
            {
                System.out.println("stop");
                p.stop();
                p.close();
                JOptionPane.showMessageDialog(new Frame(), "Frame Extracted");
                mf.jButton1.setEnabled(true);
            }
        }

        public String getName()
        {
            return "PostAccess Codec";
        }

        private Dimension size;
    }
}
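The size-parsing step in the listing above can be exercised on its own: JMF video format strings carry the frame size as a "WxH" token, and the parse logic is plain StringTokenizer work with no JMF dependency. The sample format string below is illustrative.

```java
import java.awt.Dimension;
import java.util.StringTokenizer;

public class ParseVideoSizeDemo {
    // Standalone version of the parseVideoSize() logic from the listing above
    static Dimension parseVideoSize(String videoSize) {
        int x = 300, y = 200;                        // fallback default
        StringTokenizer strtok = new StringTokenizer(videoSize, ", ");
        strtok.nextToken();                          // skip the encoding name
        String size = strtok.nextToken();            // e.g. "320x240"
        StringTokenizer sizeStrtok = new StringTokenizer(size, "x");
        try {
            x = Integer.parseInt(sizeStrtok.nextToken());
            y = Integer.parseInt(sizeStrtok.nextToken());
        } catch (NumberFormatException e) {
            System.out.println("unable to find video size, assuming default of 300x200");
        }
        return new Dimension(x, y);
    }

    public static void main(String[] args) {
        Dimension d = parseVideoSize("RGB, 320x240, FrameRate=30.0");
        System.out.println(d.width + "x" + d.height);
    }
}
```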
Conclusion

In this paper, we have proposed an automatic vehicle detection system for aerial surveillance that does not assume any prior information of camera heights, vehicle sizes, and aspect ratios. In this system, we have not performed region-based classification, which would highly depend on computationally intensive color segmentation algorithms such as mean shift. We have not generated multiscale sliding windows that are not suitable for detecting rotated vehicles either. Instead, we have proposed a pixel-wise classification method for the vehicle detection using DBNs. In spite of performing pixel-wise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. Therefore, the extracted features comprise not only pixel-level information but also region-level information. Since the colors of the vehicles would not dramatically change due to the influence of the camera angles and heights, we use only a small number of positive and negative samples to train the SVM for vehicle color classification. Moreover, the number of frames required to train the DBN is very small. Overall, the entire framework does not require a large amount of training samples. We have also applied moment preserving to enhance the Canny edge detector, which increases the adaptability and the accuracy for detection in various aerial images. The experimental results demonstrate flexibility and good generalization abilities of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles. For future work, performing vehicle tracking on the detected vehicles can further stabilize the detection results. Automatic vehicle detection and tracking could serve as the foundation for event analysis in intelligent aerial surveillance systems.
Reference

[1]
[4] H. Cheng and D. Butler, "Segmentation of aerial surveillance video using a mixture of experts," in Proc. IEEE Digit. Imaging Comput. Tech. Appl., 2005, p. 66.
[5]