Terrorists
Team members: Ágnes Bartha
György Kovács
Imre Hajagos
Wojciech Zyla
The Project
• What is our goal?
• Who are they?
• How to start?
• How to recognize a face?
– Face detection
– Face feature detection
– Eyes, mouth, nose
What is our goal?
• Find out if someone is a terrorist.
• Try to identify them even if they are disguised.
• We have a problem…
Who are they?
• They are the ones who
– blow up cars and buildings
– kill people
– try to seize control
• Enough reason to do something
How to start?
• Database
– Images of terrorists
– Training images for identification (by computer)
• Take a picture of a suspicious person
• Write a program that decides whether someone is a terrorist
How to recognize face?
• Problems
– Disguised person
– Others: rotated head, glasses
• Use some algorithms
– PCA
– LDA
• OpenCV
– Haar object detection
– AdaBoost
PCA
• Principal Component Analysis
• reduce the dimensionality of the data while retaining as much as possible of the variation present in the original dataset
• implies information loss
• The best low-dimensional space is determined by the "best" eigenvectors of the covariance matrix
• (i.e., the eigenvectors corresponding to the "largest" eigenvalues, also called "principal components").
• PCA projects the data along the directions where the data varies the most.
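As a minimal sketch of these steps (the toy data and the choice of k = 2 components are made up; real Eigenfaces would use vectorized face images):

```python
import numpy as np

# Toy dataset: 6 samples, 4 features (stand-ins for vectorized face images).
X = np.array([
    [2.0, 0.0, 1.0, 3.0],
    [4.0, 1.0, 2.0, 5.0],
    [6.0, 2.0, 3.0, 7.0],
    [8.0, 3.0, 4.0, 9.0],
    [1.0, 0.5, 1.5, 2.0],
    [9.0, 3.5, 4.5, 10.0],
])

# Center the data and build the covariance matrix.
mean = X.mean(axis=0)
Xc = X - mean
cov = np.cov(Xc, rowvar=False)

# Eigen-decomposition; the "best" eigenvectors are those with the
# largest eigenvalues (the principal components).
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
k = 2
components = eigvecs[:, order[:k]]

# Project onto the low-dimensional space, then reconstruct
# (with information loss when k is too small).
projected = Xc @ components
reconstructed = projected @ components.T + mean
```

Here the toy data happens to lie in a 2-D subspace, so two components reconstruct it exactly; with fewer components, reconstruction error measures the information lost.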
Problems of Eigenface technique
• Sensitive to rotation, scale, and translation
• Sensitive to lighting variations
• Background interference
• Face images should be preprocessed to lessen the effects of possible variations.
• Variations such as lighting and rotation can also be taken into account during training. The training dataset may include samples with such variations.
LDA
• Linear Discriminant Analysis
• The objective of LDA is to perform dimensionality reduction while preserving as much of the class discriminatory information as possible
• It seeks to find directions along which the classes are best separated.
• It does so by taking into consideration the scatter within-classes but also the scatter between-classes.
• It is also more capable of distinguishing image variation due to person identity from variation due to other sources such as illumination and expression.
• μr: mean feature vector for class r
• Kr: number of training samples from class r
• LDA computes a transformation that maximizes the between-class scatter while minimizing the within-class scatter:
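Written out with the symbols defined above (a standard formulation; μ here denotes the overall mean, which is not named on the slide):

```latex
S_W = \sum_{r=1}^{R}\ \sum_{x \in \mathrm{class}\ r} (x - \mu_r)(x - \mu_r)^{T},
\qquad
S_B = \sum_{r=1}^{R} K_r\, (\mu_r - \mu)(\mu_r - \mu)^{T}

W^{*} = \arg\max_{W}\ \frac{\lvert W^{T} S_B\, W \rvert}{\lvert W^{T} S_W\, W \rvert}
```

The columns of W* are the leading eigenvectors of S_W^{-1} S_B; since S_B has rank at most R−1, at most R−1 eigenvalues are nonzero.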
LDA 2.
• Limitations:
– at most R−1 nonzero eigenvalues
– the matrix Sw^-1 does not always exist
• need at least N + R training samples, which is not practical
• Use PCA first to reduce the dimension
• When the number of training samples is large and representative for each class, LDA outperforms PCA.
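A sketch of the scatter-matrix computation with two made-up classes (the pseudo-inverse stands in for Sw^-1 when Sw is ill-conditioned; in practice the vectors would be PCA-reduced face features):

```python
import numpy as np

# Two made-up classes of 3-D feature vectors (4 samples each).
X = np.array([
    [1.0, 2.0, 0.3], [1.3, 1.8, 0.1], [0.8, 2.3, 0.2], [1.1, 2.1, 0.4],
    [4.1, 0.4, 0.2], [3.8, 0.7, 0.3], [4.3, 0.5, 0.1], [4.0, 0.6, 0.4],
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

mean_all = X.mean(axis=0)
d = X.shape[1]
Sw = np.zeros((d, d))  # within-class scatter
Sb = np.zeros((d, d))  # between-class scatter
for r in np.unique(y):
    Xr = X[y == r]
    mu_r = Xr.mean(axis=0)            # class mean (the slide's mu_r)
    Sw += (Xr - mu_r).T @ (Xr - mu_r)
    diff = (mu_r - mean_all).reshape(-1, 1)
    Sb += len(Xr) * (diff @ diff.T)   # weighted by K_r samples

# Best-separating direction: leading eigenvector of Sw^-1 Sb
# (pinv used for robustness when Sw is singular or ill-conditioned).
eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])

# Projecting onto w separates the two classes much more than it
# spreads the samples within each class.
proj = X @ w
```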
OpenCV
• Open Source Computer Vision Library
– Extensive vision support
• Convolution, thresholding, flood fills, histogramming
• Pyramidal subsampling
• Learning-based vision
• Feature detection
– Edge detection
– Blob finders, …
– Haar cascade classifier
IplImage
OpenCV -- Haar
• OpenCV has a Haar-features-based face detection module.
• It uses local features such as edges and line patterns, and scans a given image at different scales, as in template matching.
• Scale, translation, and light invariant.
• However, it is sensitive to rotation.
– Rotate the image and run again
Advantages of using OpenCV Haar object detection
• Face detector already implemented
• Its only argument is an XML file
• Detection at any scale
• Face detection (for videos) at 15 frames per second for 384×288-pixel images
• 90% of objects detected, achievable with 2 weeks of training
Haar-Like Features
• Each Haar-like feature consists of two or three joined "black" and "white" rectangles.
• The value of a Haar-like feature is the difference between the sums of the pixel gray-level values within the black and the white rectangular regions:
f(x) = Sum_black rectangle(pixel gray level) − Sum_white rectangle(pixel gray level)
• Compared with raw pixel values, Haar-like features can reduce/increase the in-class/out-of-class variability, thus making classification easier.
Figure 1: A set of basic Haar-like features.
Figure 2: A set of extended Haar-like features.
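A two-rectangle feature of this kind can be computed directly; a toy sketch on a made-up 4×8 patch containing an ideal vertical edge:

```python
# Tiny 4x8 "grayscale patch": left half dark (0), right half bright (255),
# i.e. an ideal vertical edge.
patch = [[0, 0, 0, 0, 255, 255, 255, 255] for _ in range(4)]

def region_sum(img, top, left, h, w):
    """Sum of pixel gray levels in an h x w rectangle."""
    return sum(img[r][c]
               for r in range(top, top + h)
               for c in range(left, left + w))

def two_rect_feature(img, top, left, h, w):
    """Two-rectangle Haar-like feature: a black rectangle on the left and a
    white rectangle of the same size immediately to its right."""
    black = region_sum(img, top, left, h, w)
    white = region_sum(img, top, left + w, h, w)
    return black - white

# On the edge, the response is strongly negative: 0 - 16 * 255 = -4080.
f = two_rect_feature(patch, 0, 0, 4, 4)
```

On a flat patch the two sums cancel, giving 0; a large magnitude signals an edge-like pattern under the feature.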
Haar-Like Features
• The rectangle Haar-like features can be computed rapidly using the "integral image".
• The integral image at location (x, y) contains the sum of the pixel values above and to the left of (x, y), inclusive:
• Haar features are computed in constant time.
• The sum of pixel values within "D":
P(x, y) = Σ_{x' ≤ x, y' ≤ y} i(x', y')

For four adjacent rectangles laid out as
A B
C D
with integral-image values P1 = Sum(A), P2 = Sum(A + B), P3 = Sum(A + C), P4 = Sum(A + B + C + D):

Sum(D) = P4 + P1 − P2 − P3
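The integral image and the four-corner rectangle sum can be sketched in a few lines (plain Python, with a made-up 6×6 test image):

```python
def integral_image(img):
    """ii[y][x] = sum of img pixels above and to the left of (x, y), inclusive."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, h, w):
    """Rectangle sum in constant time via Sum(D) = P4 + P1 - P2 - P3."""
    total = ii[top + h - 1][left + w - 1]                 # P4
    if top > 0:
        total -= ii[top - 1][left + w - 1]                # P2
    if left > 0:
        total -= ii[top + h - 1][left - 1]                # P3
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]                    # P1
    return total

# Made-up 6x6 image with pixel value x + 6*y, so sums are easy to check.
img = [[x + 6 * y for x in range(6)] for y in range(6)]
ii = integral_image(img)
s = rect_sum(ii, 2, 3, 2, 2)   # rows 2..3, cols 3..4: 15+16+21+22 = 74
```

Once the integral image is built (one pass over the pixels), every rectangle sum, and hence every Haar feature, costs a constant number of lookups regardless of rectangle size.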
Adaboost classifier
• Selects a small number of critical visual features
• Combines a collection of weak classification functions to form a strong classifier
The first and second features selected by AdaBoost for face detection
Haar-Like Features (cont'd)
• For example, to detect a hand, the image is scanned by a sub-window containing a Haar-like feature.
• Based on each Haar-like feature fj, a weak classifier hj(x) is defined as:
hj(x) = 1 if pj fj(x) < pj θj, and 0 otherwise,
where x is a sub-window and θj is a threshold; pj indicates the direction of the inequality sign.
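A direct transcription of this weak-classifier rule (the threshold and parity values below are made up for illustration):

```python
def weak_classifier(feature_value, theta, p):
    """h_j(x) = 1 if p * f_j(x) < p * theta, else 0.
    p in {+1, -1} flips the direction of the inequality."""
    return 1 if p * feature_value < p * theta else 0

# With p = +1 the classifier fires on feature values below the threshold
# (e.g. the strongly negative response of an edge feature on a real edge).
a = weak_classifier(-4080, -1000, +1)   # -4080 < -1000 -> 1
b = weak_classifier(0, -1000, +1)       # 0 is not < -1000 -> 0
```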
Adaboost
• The computation cost of using Haar-like features:
Example: original image size 320×240, sub-window size 24×24. The total number of sub-window positions with one Haar-like feature: (320−24)×(240−24) = 63,936.
Considering the scaling factor and the total number of Haar-like features, the computation cost is huge.
• AdaBoost (Adaptive Boost) is an iterative learning algorithm to construct a “strong” classifier using only a training set and a “weak” learning algorithm. A “weak” classifier with the minimum classification error is selected by the learning algorithm at each iteration.
• AdaBoost is adaptive in the sense that later classifiers are tuned up in favor of those sub-windows misclassified by previous classifiers.
Adaboost
• The algorithm:
Adaboost starts with a uniform distribution of "weights" over the training examples; the weights tell the learning algorithm the importance of each example.
Obtain a weak classifier hj(x) from the weak learning algorithm.
Increase the weights of the training examples that were misclassified.
Repeat. At the end, carefully form a linear combination of the weak classifiers obtained at all iterations.
f_final(x) = α_final,1 h1(x) + … + α_final,n hn(x)
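As an illustration of the loop above, a minimal AdaBoost sketch over decision stumps on made-up 1-D feature values (the data, candidate thresholds, and three iterations are arbitrary choices):

```python
import math

# Made-up 1-D training set: (feature value, label); +1 = object, -1 = non-object.
data = [(-4080, 1), (-3000, 1), (-2500, 1), (0, -1), (150, -1), (-100, -1)]

def make_stump(threshold, p):
    # Weak classifier: +1 when p * value < p * threshold, else -1.
    return lambda v: 1 if p * v < p * threshold else -1

# A small pool of candidate weak classifiers.
stumps = [make_stump(t, p) for t in (-3500, -1500, 50) for p in (1, -1)]

n = len(data)
weights = [1.0 / n] * n   # start with a uniform distribution of weights
strong = []               # accumulated (alpha, weak classifier) pairs

for _ in range(3):
    # Select the weak classifier with the minimum weighted error.
    errors = [sum(w for (v, label), w in zip(data, weights) if s(v) != label)
              for s in stumps]
    best = min(range(len(stumps)), key=lambda i: errors[i])
    err = max(errors[best], 1e-10)
    alpha = 0.5 * math.log((1 - err) / err)
    strong.append((alpha, stumps[best]))
    # Increase the weights of misclassified examples, then renormalize.
    weights = [w * math.exp(-alpha * label * stumps[best](v))
               for (v, label), w in zip(data, weights)]
    z = sum(weights)
    weights = [w / z for w in weights]

def classify(v):
    # Final strong classifier: sign of the linear combination.
    return 1 if sum(a * s(v) for a, s in strong) > 0 else -1
```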
Adaboost
• Simple to implement
• But:
– suboptimal solution
– overfits in the presence of noise
The Cascade of Classifiers
• A series of classifiers is applied to every sub-window.
• Increases speed.
• The first classifier eliminates a large number of negative sub-windows and passes almost all positive sub-windows (high false-positive rate) with very little processing.
• Subsequent layers eliminate additional negative sub-windows (passed by the first classifier) but require more computation.
• After several stages of processing, the number of negative sub-windows has been reduced radically.
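The early-rejection structure described above can be sketched as follows (the stages here are made-up threshold functions on a single score per "window"; real stages are boosted classifiers over many Haar features):

```python
def cascade_classify(window, stages):
    """A sub-window must pass every stage; most negatives are rejected
    cheaply by the early stages, so later, costlier stages run rarely."""
    for stage in stages:
        if not stage(window):
            return False   # rejected: no further processing
    return True            # survived all stages: candidate face

# Made-up stages of increasing strictness; each "window" is reduced to a
# single feature score for illustration.
stages = [
    lambda score: score < 0,       # stage 1: very cheap, rejects many negatives
    lambda score: score < -1000,   # stage 2: stricter
    lambda score: score < -3000,   # stage 3: strictest
]

windows = [-4080, -2000, -500, 100]
hits = [w for w in windows if cascade_classify(w, stages)]   # only -4080 survives
```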
The Cascade of Classifiers
• Negative samples: non-object images. Negative samples are taken from arbitrary images; these images must not contain representations of the object.
• Positive samples: images containing the object (a hand in our case). The hand in the positive samples must be marked out for classifier training.
Recognition pipeline (flowchart):
image → detecting face → detecting features → cropping face → normalizing → creating feature vector (001…, 010…) → comparing vectors with the database of terrorists → results (match, or "is not in the database")
(Image sizes in the pipeline: 256×256 input; 144×150 and 90×130 cropped faces.)
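The "comparing vectors" step might look like this minimal sketch (the database entries, vector length, and distance threshold are all made-up examples; a real system would compare PCA/LDA feature vectors):

```python
import math

# Made-up binary feature vectors, as in the 001.../010... flowchart labels.
database = {
    "suspect_a": [0, 0, 1, 1, 0, 1],
    "suspect_b": [0, 1, 0, 0, 1, 0],
}

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match(query, db, threshold=1.0):
    """Return the closest entry, or None when nothing is close enough
    (the "is not in the database" outcome)."""
    name, dist = min(((k, euclidean(query, v)) for k, v in db.items()),
                     key=lambda kv: kv[1])
    return name if dist <= threshold else None

m1 = match([0, 0, 1, 1, 0, 1], database)   # exact hit -> "suspect_a"
m2 = match([1, 1, 1, 1, 1, 1], database)   # too far from everyone -> None
```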
Eye detection with Haar
• eye_haarcascade_classifier
• Create a growable sequence of eyes, detect the objects, and store them in the sequence
Thank you for your attention