

SEMINAR REPORT

ON

3D Face Recognition System

By

Sutar Jyoti Adinath

Guided by

Prof. S M Bhagat

DEPARTMENT OF INFORMATION TECHNOLOGY

MAHARASHTRA ACADEMY OF ENGINEERING

ALANDI (DEVACHI), PUNE

2010 - 2011


MAHARASHTRA ACADEMY OF ENGINEERING

ALANDI (DEVACHI), PUNE

DEPARTMENT OF INFORMATION TECHNOLOGY

CERTIFICATE

This is to certify that the seminar entitled "3D Face Recognition System" has been

carried out by Sutar Jyoti Adinath under my guidance in partial fulfillment of Third

Year of Engineering in Information Technology of Pune University, Pune during the

academic year 2010-2011. To the best of my knowledge and belief this seminar work has

not been submitted elsewhere.

Prof. S M Bhagat (Guide)          Prof. S M Bhagat (Head)


Acknowledgement

I take this opportunity to thank my seminar guide and Head of the Department

Prof. S M Bhagat and Mrs. Nanda Yadav for their valuable guidance and for providing

all the necessary facilities, which were indispensable in the completion of this project. I

am also thankful to all the staff members of the Department of Information Technology of

Maharashtra Academy Of Engineering Alandi(D) Pune for their valuable time, support,

comments, suggestions and persuasion.

I would also like to thank the institute for providing the required facilities, Internet access and important books.

Sutar Jyoti Adinath


Abstract

Wouldn't you love to replace password-based access control, to avoid having to reset forgotten passwords and worry about the integrity of your system? Wouldn't you like to rest secure in the comfort that your healthcare system does not rely merely on your social security number as proof of your identity when granting access to your medical records?

Because each of these questions is becoming more and more important, access to reliable personal identification is becoming increasingly essential. Conventional methods of identification based on the possession of ID cards, or on exclusive knowledge such as a social security number or a password, are not altogether reliable. ID cards can be lost, forged or misplaced; passwords can be forgotten or compromised. But a face is undeniably connected to its owner. It cannot be borrowed, stolen or easily forged.

Face recognition technology may solve this problem, since a face is undeniably connected to its owner except in the case of identical twins. It is nontransferable. The

system can then compare scans to records stored in a central or local database or even on

a smart card. In this report, we present a new 3D face recognition approach. Full automation

is provided through the use of advanced multi-stage alignment algorithms, resilience to

facial expressions by employing a deformable model framework, and invariance to 3D

capture devices through suitable preprocessing steps. In addition, scalability in both

time and space is achieved by converting 3D facial scans into compact wavelet metadata.

We present results on the largest known, and now publicly-available, Face Recognition

Grand Challenge 3D facial database consisting of several thousand scans. To the best of

our knowledge, our approach has achieved the highest accuracy on this dataset.


Contents

1 Introduction

2 Database Building
   2.1 Data Acquisition
   2.2 Data Pre-processing

3 Face Recognition
   3.1 Face Feature Detection
       3.1.1 Face Segmentation
       3.1.2 Eye Area and Corner Detection
       3.1.3 Mouth Area Detection
       3.1.4 Nose Area and Tip Detection
   3.2 Face Alignment
   3.3 Face Matching

4 Advantages and Disadvantages
   4.1 Advantages
   4.2 Disadvantages

5 Success Rate of System

6 Applications

7 Conclusion and Future Scope

Bibliography


List of Figures

2.1 Acquiring an image
2.2 An output example of the reconstruction process, with and without texture
2.3 A face before and after cleaning
3.1 Ellipse-shaped face mask
3.2 Decreasing threshold value
3.3 Original face and eye map obtained
3.4 Nose area and tip detection
3.5 Choosing nose tip
3.6 Nose tip detection for side-facing images
3.7 Aligning unknown face with database image
3.8 Stripe division of the facial points in the x-y plane
3.9 Surface matching
3.10 Choosing probable faces by PCA
5.1 Nose detection results


List of Tables

5.1 Success Rate of System in Percentage
5.2 Recognition Rate over Other Methods in Percentage


Chapter 1

Introduction

Facial recognition is used either to identify an unknown person or to verify whether an unknown person is who he claims to be. An important advantage face recognition has over other types of biometric recognition, such as thumbprint and iris, is that it obtains information about the face without the need to touch the subject. This non-intrusive quality makes it suitable for security purposes. Earlier works focused on face recognition using two-dimensional (2D) images. A popular method is the eigenface method developed by Turk and Pentland [1], which uses Principal Component Analysis (PCA). Other methods include Linear Discriminant Analysis (LDA) [2] and Bayesian methods [3]. However, the problem with 2D images is that they are affected by pose and illumination variations [4]. For pose variations, it is difficult to achieve recognition if the unknown person is facing to the side, because most databases keep only frontal images. For illumination variations, different lighting for the probe and database faces may cause recognition errors even though the two faces belong to the same person.

Therefore, attention started to be given to face recognition using 3D range images, which contain depth information. These models have the advantage that they can be rotated, thereby solving the pose issue. The illumination problem is also avoided, since the depth and curvature of the face are not affected by lighting. Gordon [5] calculated the Gaussian curvatures of the face and compared the depth and curvature features of the unknown and database faces to obtain recognition. The curvature method


was also used by Tanaka et al. [6]. However, expression changes may render some of the features found useless. Therefore, methods that deform the original face model to mimic different expressions were explored, as was done by Lu et al. [7]. Deformation was also used to improve alignment performance with the Iterative Closest Point (ICP) algorithm [8]. However, deformation may cause some face information to be lost.


In this seminar, a new 3D face matching technique is proposed for an automatic 3D model-based face recognition system. This technique uses a combination of surface matching followed by PCA+LDA to match an unknown face with those in the database. Surface matching begins by first detecting certain feature points on the face. These points are detected automatically and are then used to align the faces together. After alignment, surface matching is performed to determine which faces in the database are close matches to the unknown probe face. The top 20 matching faces are then used to perform PCA+LDA on their 3D range images rather than their 2D intensity images. The database face which has the lowest Euclidean distance to the probe face is identified as the unknown person.

This proposed technique manages to avoid the three main issues of face recognition, which are pose, illumination and expression changes [4]. To reduce the pose problem, recognition using 3D images is used. To reduce the illumination problem, depth values are used instead of color information. Finally, to reduce the expression change problem, the mouth area is avoided, since this area is the most susceptible to expression: the mouth shape changes when a person smiles, frowns or talks. Since the proposed 3D face matching technique does not require user intervention, it is suitable for implementation in an automatic 3D face recognition system. The following sections further discuss the proposed 3D face matching technique.


Chapter 2

Database Building

2.1 Data Acquisition

We can use a commercial stereo camera system for our 3D data acquisition. The stereo

camera system is made up of three video cameras and a speckled pattern projector. The

projector projects a random light pattern of dots on the surface of the face. The speckled

pattern is used to establish correspondences between two of the three cameras allowing

the retrieval of depth information. The output is an accurate 3D surface face model. The

third camera captures the texture information and uses a filter to eliminate the speckled

pattern projected onto the face.

Figure 2.1: Acquiring an image

We have chosen this technology over laser scanners because of its speed of acqui-

sition (up to 30 frames/sec can be captured) and the speed of the data reconstruction (under 5 sec on a 1 GHz machine), thus allowing near-real-time processing in realistic scenarios.

The speed of the data acquisition also prevents motion artifacts from being introduced in the 3D acquisition process. Finally, the system is built in a cost-effective fashion, thus greatly reducing hardware costs compared to other capturing techniques. The accuracy of the system is high, with an RMS error of less than 1 mm for a typical face acquisition. Studies on the camera system have been validated in clinical settings. The drawbacks of the acquisition system are its relative bulkiness and, since the cameras and projector use lenses, its limited depth of field. This makes 3D face acquisition more intrusive, as the face needs to be placed within a specific distance from the camera, just as with laser scanners. Example datasets are displayed in Fig. 2.2.

Figure 2.2: An output example of the reconstruction process, with and without texture


2.2 Data pre-processing

In order to speed up the processing and reduce registration errors, every subject's face was preprocessed. First, an ellipse outlining the subject's face was drawn manually on the 2D texture image of the subject, and all vertices of the mesh whose texture coordinates fall outside the aforementioned ellipse were deleted. In this fashion it is possible to eliminate those parts of the mesh that correspond to the neck and hair, which delay and confuse the registration process. An example of this processing is shown in Fig. 2.3. Future implementations involve automatic cleaning of the datasets using a template face.

Figure 2.3: A face before and after cleaning

Before rigid registration is performed on the faces, the centre of mass of each face is moved to the origin of the coordinate system. This compensates for large differences in the distances between subjects.
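The centre-of-mass step above can be sketched in a few lines (a minimal NumPy illustration; the function name is ours, not from the report):

```python
import numpy as np

def centre_at_origin(vertices):
    """Translate a face mesh so that its centre of mass sits at the origin,
    compensating for differences in subject-to-camera distance."""
    v = np.asarray(vertices, float)
    return v - v.mean(axis=0)
```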


Chapter 3

Face Recognition

3.1 Face Feature Detection

An important step to ensure a good match is aligning the probe and database faces properly. To achieve a good alignment between the unknown probe faces and those in the database, the feature points that need to be found are the eye corners and the nose tip. The mouth corners are not needed, since the mouth is susceptible to expression changes. The proposed method manages to automatically detect all eye corners for good alignment using a combination of an eye map with a curvature map, unlike Lu's [9] method, which requires some manual intervention, or Gordon's [5] method, which only locates the eye area and not the corners. The nose tip is also successfully detected using a modified version of Xu's [10] method. To obtain these feature points, the following steps are used.

3.1.1 Face Segmentation

The first step is to locate the face in an image. For a headshot like those found in our face database, this is achieved by locating the skin region in the image. Although Hsu's [11] method is good at locating the skin area, this area may have an irregular shape, as shown in Fig. 3.1, with holes in the skin area due to the eyes and eyebrows, which are not skin-colored. For frontal images, these holes can easily be filled. However, for side-facing images, this solution is not feasible. Since the eyes, nose and mouth will be located within the skin


area, the face mask obtained from Hsu's [11] method is not suitable.

Therefore, to solve this problem, we propose using a combination of Hsu's [11] method to locate the skin cluster, followed by our method of drawing an ellipse around the skin cluster, to obtain a suitable skin mask for locating the eyes, nose and mouth later. The elliptical shape is obtained automatically by identifying the top, bottom, left and right boundaries of the skin cluster. With this information, the ellipse can be drawn around the skin cluster using (1) and (2) [12].

Figure 3.1: Ellipse shape face mask

x = h + a cos(t)   (1)

y = k + b sin(t)   (2)

where (h, k) is the center point of the ellipse, a is half of the ellipse width, and b is half of the ellipse height. The width is the distance between the left and right boundaries, while the height is the distance between the top and bottom boundaries.

The area within the ellipse is used to search for the eyes, mouth and nose. The ellipse shape is used because it most resembles the shape of the face.
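As an illustration, the ellipse of (1) and (2) can be realized as an interior test over the skin cluster's bounding box (a sketch under our own naming; the skin mask is assumed to come from the skin-cluster step above):

```python
import numpy as np

def ellipse_face_mask(skin_mask):
    """Fit an ellipse to the bounding box of a boolean skin-cluster mask:
    centre (h, k), half-width a and half-height b are taken from the
    cluster's top, bottom, left and right boundaries, as in (1)-(2)."""
    ys, xs = np.nonzero(skin_mask)
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    h, k = (left + right) / 2.0, (top + bottom) / 2.0   # centre point (h, k)
    a, b = (right - left) / 2.0, (bottom - top) / 2.0   # half width/height
    yy, xx = np.mgrid[0:skin_mask.shape[0], 0:skin_mask.shape[1]]
    # Interior test equivalent to the parametric form x = h + a cos t, y = k + b sin t
    return ((xx - h) / a) ** 2 + ((yy - k) / b) ** 2 <= 1.0
```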


3.1.2 Eye area and corner detection

The next step is to locate the eye area. Although Hsu's [11] method of calculating an eye map is good, it may not work for side-facing faces, because it always assumes that two eyes are present. For a 3D face recognition system, however, the face could be facing at any angle, which means that at certain angles only one eye can be detected.

Therefore, a method that is able to detect either one or two eye blobs in an eye map is proposed. First, an eye map is calculated using Hsu's [11] method; in the eye map, the eye region is brighter than other parts of the image. The map is then scanned with a decreasing threshold through a certain range, and if two possible eye blobs are still not found, it is assumed that only one eye blob can be found, and the search through the predetermined threshold range is restarted for one eye blob. The threshold range used is from 255 to 0, since the eye map was normalized to this range.
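The decreasing-threshold scan can be sketched as follows (a plain-Python illustration; Hsu's eye-map computation is assumed to have already produced `eye_map`, and the helper names are ours):

```python
import numpy as np

def count_blobs(img, thr):
    """Count 4-connected regions of pixels whose value exceeds thr."""
    seen = np.zeros(img.shape, bool)
    blobs = 0
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] > thr and not seen[i, j]:
                blobs += 1
                stack = [(i, j)]          # flood-fill this blob
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                                and img[ny, nx] > thr and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return blobs

def find_eye_threshold(eye_map, want=2):
    """Scan the threshold downward from 255; return the first threshold
    at which exactly `want` blobs appear, or None if none is found."""
    for thr in range(255, -1, -1):
        if count_blobs(eye_map, thr) == want:
            return thr
    return None
```

In the same spirit, the system would first scan with `want=2` and fall back to `want=1` when two blobs never appear.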

A surface on which the Gaussian curvature is everywhere positive is called synclastic, while a surface on which it is everywhere negative is called anticlastic. Surfaces with constant Gaussian curvature include the cone, cylinder, Kuen surface, plane, pseudosphere, and sphere. Of these, the cone and cylinder are the only flat surfaces of revolution.

Before any two blobs are considered as the two eye areas, a few criteria must be fulfilled. If two blobs are found, they are only considered as eyes when their y-coordinates are almost the same and their x-coordinate difference is more than a predetermined number of pixels, which depends on the size of the image. This ensures that not just any two blobs, such as an eye and an eyebrow, are mistakenly considered the eye areas.


Figure 3.2: Decreasing Threshold Value

Figure 3.3: Original face and eye map obtained


3.1.3 Mouth area detection

Although the mouth corners are not required for face alignment, the mouth area is still needed to help determine the nose area, which lies between the eyes and the mouth. The location of the mouth is obtained by calculating the mouth map. Similar to the eye map, the mouth region in the mouth map is brighter than other regions, as shown in Fig. 3.4. Therefore, the mouth can also be located by setting a certain threshold.

To locate the mouth, we propose setting a maximum threshold and then gradually reducing this threshold value over a certain range until a suitable mouth blob is obtained. The criteria for determining whether a blob is the mouth depend on the eye positions found earlier. The threshold range used is from 255 to 0, since the mouth map was normalized to this range.

If two eyes were found, the mouth blob is only accepted if it is located between and below them. If one eye was found, the mouth blob will be the one that is below the eye at a predetermined distance, which depends on the image size.

3.1.4 Nose area and tip detection

To locate the nose area and tip, Xu's [10] method of calculating the effective energy of neighboring pixels, as well as the mean and variance, is useful. However, to determine the nose tip location, that method used a support vector machine (SVM) [16]. Since we want to avoid such a complex system, another method to locate the nose is used.

In the proposed method, the effective energy, mean and variance values calculated using Xu's [10] method are used in combination with our proposed nose area detection to replace the SVM step.

Potential nose candidates are detected based on their neighboring pixels. Since the nose is a protruding area, all its neighbors should be at a lower


height in the protruding direction. Using (3), the effective energy of each neighboring pixel of each pixel is calculated.

Effective Energy = ||P1 − P|| cos θ   (3)

In (3), ||P1 − P|| is the distance between the pixel and its neighbor, while θ is the angle between the normal vector and the P1 − P vector. In this proposed method, neighboring pixels are the pixels that surround the evaluated pixel in the mesh.

The normal vector at each pixel is calculated using the Principal Component Analysis (PCA) method [1]. By doing so, the normal at each point can be estimated, enabling the protruding nose direction to be found for different face angles.
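Estimating a point's normal by PCA over its mesh neighbourhood can be sketched as below: the eigenvector of the neighbourhood's covariance matrix with the smallest eigenvalue is the direction of least spread, which serves as the normal (a minimal sketch; the function name is ours):

```python
import numpy as np

def estimate_normal(points):
    """Estimate the surface normal at a point from its mesh neighbourhood
    using PCA: the smallest-eigenvalue eigenvector of the covariance
    matrix of the neighbourhood is taken as the normal direction."""
    pts = np.asarray(points, float)
    centred = pts - pts.mean(axis=0)
    cov = centred.T @ centred
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least spread
```

The sign of the returned vector is arbitrary; in practice it would be oriented outward, toward the camera.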

A pixel is considered a potential nose candidate when every neighboring pixel has a negative effective energy. This is because, for a neighboring pixel to be lower than the main pixel along the protruding axis, the angle θ must be more than 90°; the effective energy obtained is therefore negative.
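Equation (3) and the all-negative criterion can be sketched as follows (a toy illustration under our own naming; the normal estimated earlier via PCA is simply passed in):

```python
import numpy as np

def effective_energy(p, p1, normal):
    """Effective energy of neighbour P1 relative to pixel P, per (3):
    ||P1 - P|| cos(theta), computed as the dot product of (P1 - P)
    with the unit normal at P."""
    d = np.asarray(p1, float) - np.asarray(p, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    return float(d @ n)          # equals ||d|| cos(theta)

def is_nose_candidate(p, neighbours, normal):
    """A pixel is a potential nose candidate when every neighbour has
    negative effective energy, i.e. all neighbours lie below it along
    the protruding (normal) direction."""
    return all(effective_energy(p, q, normal) < 0 for q in neighbours)
```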

Unfortunately, besides the nose candidates, other face areas like the forehead and cheeks, and certain folds in the clothing, will also have negative effective energy values. Therefore, the mean and variance need to be calculated to further narrow down the nose candidates.

From Fig. 3.5, it can be observed that after thresholding there is a concentration of nose candidates in the nose, chin, lips and cheek areas. This is because those areas also protrude, like the nose. Therefore, choosing the point with the densest concentration of nose candidates might not produce the correct nose point.

Consequently, the candidate that lies within the nose area is taken as the nose tip. The nose area is between the eyes and the mouth area, which were found earlier. Since the face might not be frontal facing, our proposed method was programmed to


Figure 3.4: Nose area and tip detection

Figure 3.5: Choosing nose tip


locate either one or two eye blobs; the nose area will be within the triangle shown in Fig. 3.6. For one eye blob, the nose area will be within the polygon formed by the line drawn in Fig. 3.7, the skin boundary of the other half of the face, and horizontal lines joining the line and boundary at the top and bottom. A left face boundary is chosen if the single eye blob is a right eye, and vice versa. Whether the single eye blob is a right or left eye is known by observing the position of the eye relative to the mouth found earlier. By combining the nose area and nose candidate information, the nose tip is located. Compared to the Mian et al. method of nose detection, the proposed method is able to locate the nose for various face angles. This is because the Mian et al. method locates the nose by taking horizontal slices of the face and finding the most protruding point on each slice. That method works well for frontal faces but has problems with faces that are turned almost 90° to the side, because with side faces the nose is almost at the edge and part of it may be missing; the nose then does not protrude much compared to the cheek. Besides that, the method will fail if the subject's head is tilted to the left, right, top or bottom, because at these positions the nose on a horizontal slice does not have the largest protrusion compared to other parts of the face. Comparatively, our proposed method is able to handle different face angles and tilting.

Figure 3.6: Nose tip detection for side facing images


3.2 Face Alignment

For most surface matching methods, face alignment is achieved using ICP or a modified version of it. However, this involves performing ICP for each unknown probe face against every face in the database, which can be inefficient. Besides that, if the point sets are large, the convergence time for one pair of matches will be very long. Another drawback of ICP is that the alignment may be wrong if there is no proper initial transformation matrix, because the algorithm may converge to a wrong local minimum.

In the proposed method, alignment is achieved by creating a database in which all the faces face the front and the head is not tilted. Therefore, when an unknown probe face needs to be compared with the database, it only needs to be rotated to the front and tilted straight once before it can be compared with the whole database, without changing its position for every face in the database. For the rotation and tilting, the feature points found earlier are used.

To rotate the face to the front, the first step is to find the original face angle. This was done during the nose detection stage, when the normal at each point was estimated using PCA. Therefore, by rotating the face so that the normal of the nose tip points directly forward, the face becomes frontal facing. Rotation is performed using (4) and (5).

Xnew = X cos A + Z sin A   (4)

Ynew = Y

Znew = −X sin A + Z cos A   (5)

where X, Y and Z are the original values, Xnew, Ynew and Znew are the rotated values, and A is the angle of rotation. Besides the rotation, the face is also translated so that the nose tip found earlier is at the (0, 0, 0) position. This makes it convenient


for the alignment between the unknown and database faces.
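Under the sign convention of (4) and (5), the translate-then-rotate step can be sketched as (a NumPy sketch; the function name is ours and the angle is in radians):

```python
import numpy as np

def align_to_front(points, nose_tip, angle):
    """Translate the face so the nose tip sits at the origin, then rotate
    by `angle` about the y-axis per (4)-(5):
    Xnew = X cos A + Z sin A, Ynew = Y, Znew = -X sin A + Z cos A."""
    pts = np.asarray(points, float) - np.asarray(nose_tip, float)
    c, s = np.cos(angle), np.sin(angle)
    rot_y = np.array([[c,   0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s,  0.0, c]])
    return pts @ rot_y.T
```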

To determine whether the head is tilted, the eye corners found earlier are observed. If the corners do not align in a horizontal straight line, the face is rotated using a variation of (4) to obtain a straight horizontal line. This time, instead of rotating about the y-axis, the tilted face is rotated about the z-axis.
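The tilt correction can be sketched as: estimate the roll angle from the eye corners, then rotate about the z-axis by its negative to level the eye line (a sketch under our own naming):

```python
import numpy as np

def roll_angle_from_eyes(left_eye, right_eye):
    """Head tilt (roll) estimated from the two eye corners: the angle the
    eye line makes with the horizontal in the x-y plane."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.arctan2(dy, dx)

def rotate_about_z(points, angle):
    """In-plane rotation about the z-axis (the z-axis variant of (4))."""
    c, s = np.cos(angle), np.sin(angle)
    rot_z = np.array([[c,  -s,  0.0],
                      [s,   c,  0.0],
                      [0.0, 0.0, 1.0]])
    return np.asarray(points, float) @ rot_z.T
```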

Figure 3.7: Aligning unknown face with database image

3.3 Face Matching

For face matching, a combination of the surface matching method and PCA+LDA [1][2] is used to achieve recognition.


First, the surface matching method is used. The face row containing the nose tip is located, and horizontal face slices are segmented out slice by slice between the nose row and 100 rows above it. This covers the area between the forehead and nose, which is less susceptible to facial expression. The distance between the database candidate and unknown probe slices is calculated as the vertical distance. Since each contour slice does not have a line equation, a substitute is to connect each pair of neighboring points with a straight line. The vertical distance line will then intersect two lines, one from the database slice and one from the unknown slice, and the distance between the two intersection points is the distance wanted. Fig. 3.9 shows an example of the surface matching method used.
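The piecewise-linear vertical distance between two slices can be sketched with `np.interp`, which evaluates exactly the polyline through the given points (a sketch; names ours, and the x-coordinates of each slice are assumed increasing):

```python
import numpy as np

def slice_distance(xs_probe, z_probe, xs_db, z_db, n_samples=50):
    """Mean vertical (z) distance between a probe contour slice and a
    database slice. Neighbouring points of each slice are joined by
    straight lines, so np.interp gives the polyline height at any x;
    the distance is averaged over the overlapping x range."""
    lo = max(min(xs_probe), min(xs_db))
    hi = min(max(xs_probe), max(xs_db))
    xs = np.linspace(lo, hi, n_samples)
    z1 = np.interp(xs, xs_probe, z_probe)   # polyline height of probe slice
    z2 = np.interp(xs, xs_db, z_db)         # polyline height of database slice
    return float(np.mean(np.abs(z1 - z2)))
```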

For the PCA+LDA method, instead of using 2D intensity images, the proposed method uses 3D range information to create the LDA eigenspace. For each face in the database, each horizontal face slice between the nose and forehead is taken, and the range value at fixed intervals from left to right is recorded. All of these values are then systematically stacked into one column per face of a matrix; if the database has 100 faces, the matrix will have 100 columns. PCA and then LDA are performed to obtain the LDA eigenspace. After that, each unknown probe face is projected onto the LDA eigenspace, and the nearest face is considered the most likely candidate for the probe.
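A compact NumPy sketch of the PCA+LDA construction described above (our own minimal implementation, not the report's code; `X` holds one column of range values per face, and `labels` gives each face's identity):

```python
import numpy as np

def pca_lda_eigenspace(X, labels, n_pca=20):
    """Build a PCA+LDA projection. X: (n_features, n_faces), one column per
    face. PCA (via SVD) reduces dimensionality; LDA then maximizes the
    ratio of between-class to within-class scatter in the PCA space."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    W_pca = U[:, :n_pca]                      # top principal directions
    Y = W_pca.T @ Xc                          # faces in PCA space
    classes = np.unique(labels)
    mu = Y.mean(axis=1, keepdims=True)
    Sw = np.zeros((n_pca, n_pca))             # within-class scatter
    Sb = np.zeros((n_pca, n_pca))             # between-class scatter
    for c in classes:
        Yc = Y[:, labels == c]
        mc = Yc.mean(axis=1, keepdims=True)
        Sw += (Yc - mc) @ (Yc - mc).T
        Sb += Yc.shape[1] * (mc - mu) @ (mc - mu).T
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    W_lda = evecs[:, order[:len(classes) - 1]].real
    return W_pca @ W_lda                      # combined projection matrix

def nearest_face(W, X, labels, probe):
    """Project the gallery and the probe; return the label of the nearest
    gallery face by Euclidean distance in the LDA eigenspace."""
    G = W.T @ (X - X.mean(axis=1, keepdims=True))
    p = W.T @ (probe - X.mean(axis=1)).reshape(-1, 1)
    d = np.linalg.norm(G - p, axis=0)
    return labels[int(np.argmin(d))]
```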

Since range data between the nose and forehead is used to avoid expression changes, two different faces may sometimes have the same surface distance value. This means that surface matching alone may not be sufficient to match an unknown face. Therefore, it is proposed that PCA+LDA also be performed for face matching.

Surface matching is used as an initial face candidate filter to increase the chances of a correct match. PCA is followed by LDA because, although PCA is good for dimensionality reduction, it lacks discrimination ability [19]; LDA is therefore performed after PCA to optimize the inter- and intra-class differences that separate the classes. PCA+LDA is performed on the range values and not the intensity values because, after rotation, the 3D model and the corresponding original 2D image will no longer be aligned.


Figure 3.8: Stripes division of the facial points in x-y plane

Figure 3.9: Surface matching


Figure 3.10: Choosing probable faces by PCA

For each face in the database matrix, an LDA flag is set for those faces that have a higher chance of matching, as explained in Fig. 3.10. In a further step, LDA is applied only to the faces whose LDA flag is set. This reduces the LDA processing time taken for each face.
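The flag-and-filter step can be sketched as follows (a plain-Python sketch with our own names; the surface-matching distances are assumed already computed):

```python
def lda_flags(surface_distances, k=20):
    """Set the LDA flag for the k gallery faces with the smallest
    surface-matching distance; only flagged faces proceed to the
    PCA+LDA step, reducing its processing time."""
    order = sorted(range(len(surface_distances)),
                   key=lambda i: surface_distances[i])
    flags = [False] * len(surface_distances)
    for i in order[:k]:
        flags[i] = True
    return flags
```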


Chapter 4

Advantages and Disadvantages

4.1 Advantages

1. The system is able to compare facial structures as they appear in different poses or lighting conditions, variables that could distort a face seen as a two-dimensional image.

2. Aging, cosmetic surgery and significant changes to the facial surface, such as growing or removing a beard, do not disrupt the matching process.

3. The distances are reconfigured as straight lines in three-dimensional space, creating a new, abstracted image or signature of the human face built on precise mathematical calculations.

4.2 Disadvantages

1. A fully automatic face recognition system like those we see in science fiction movies is still to be made.

2. A good non-intrusive recognition system would probably combine face recognition


with other biometrics, such as fingerprints; DNA could be added for some applications.

3. Face recognition systems are not able to recognize faces in many different imaging situations.

4. The technology would not work with existing two-dimensional images of suspects.


Chapter 5

Success Rate of System

The UND database [20]-[22] was used to verify this method, because this database provides a 3D range map as well as the corresponding 2D color image for each face. Therefore, the x-y positions of the face features in the color and range maps are the same. Besides that, this database contains faces at different angles, fulfilling our experimental needs.

For the face feature point detection stage, the success rates of the proposed face feature and corner detection methods are shown in Table 5.1.


Table 5.1: Success Rate of System in Percentage

                          Frontal Face   Non-Frontal Face
Face Feature Detection        90               83
Corners Detection             90               80
Nose Detection                93               68

From Table 5.1, it can be observed that by using curvature values, the eye corners can be detected quite well. The major roadblock is the face feature detection stage, since if the feature detection part fails, the eye corners cannot be detected. Non-frontal faces have a lower detection rate because, when the face is almost facing the side, the ear can be mistaken for the mouth, since it is also a bright region in the mouth map.

For the proposed nose detection method, Figure 5.1 shows examples of nose detection at different face angles on four different subjects. Figures 5.1(a) to (c) show the results of three nose detection methods. Figure 5.1(a) uses a method that locates the nose by seeking the densest nose-candidate area without using the nose area. Figure 5.1(c) uses the Mian et al. method. This method takes each horizontal slice of the face and draws a circle centered on each point in the slice. In each circle, a triangle is drawn between the centre point and the two intersections between the slice and the circle. The point with the highest triangle height in each slice is noted. After processing all the slices, the points with the highest triangle heights form the nose ridge. From this group of points, the one with the greatest height is taken as the nose tip. From the results, it can be observed that out of the three methods tested, our proposed method is the only one that is able to locate the nose correctly for all four subjects.
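The slice-and-triangle procedure of the Mian et al. method can be sketched in Python as follows. This is an illustrative reconstruction, not the cited implementation: each slice is assumed to be a list of (x, depth) points ordered along x, and the circle-slice intersections are approximated by the nearest slice points at or beyond the chosen radius.

```python
import math

def triangle_height(p, a, b):
    """Height of the triangle a-p-b over the base a-b, i.e. the
    perpendicular distance from point p to the chord between a and b."""
    (ax, az), (bx, bz), (px, pz) = a, b, p
    base = math.hypot(bx - ax, bz - az)
    if base == 0:
        return 0.0
    # Twice the triangle's area via the cross product, divided by the base.
    area2 = abs((bx - ax) * (pz - az) - (bz - az) * (px - ax))
    return area2 / base

def nose_tip(slices, radius=20.0):
    """For every point in every horizontal slice, approximate the two
    circle-slice intersections by the nearest points at or beyond `radius`
    on either side, and score the point by its triangle height. The
    highest-scoring point over all slices is the nose tip candidate."""
    best_point, best_h = None, -1.0
    for slice_pts in slices:
        for i, p in enumerate(slice_pts):
            left = [q for q in slice_pts[:i]
                    if math.hypot(q[0] - p[0], q[1] - p[1]) >= radius]
            right = [q for q in slice_pts[i + 1:]
                     if math.hypot(q[0] - p[0], q[1] - p[1]) >= radius]
            if not left or not right:
                continue  # circle does not intersect the slice on one side
            h = triangle_height(p, left[-1], right[0])
            if h > best_h:
                best_h, best_point = h, p
    return best_point
```

On a synthetic wedge-shaped slice peaking at x = 0, the apex scores the largest triangle height and is returned as the tip, which matches the intuition that the nose is the most protruding point of each slice.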

The densest nose candidate method without using the nose area works for side faces but not for most frontal faces, as shown in Fig. 5.1. This is because, for frontal faces, dense candidate points also appear around the lips area, causing errors in nose detection.

For the Mian et al. method, it works well with frontal faces but has problems with faces turned almost 90° to the side. This is because with side faces, the nose will be


Figure 5.1: Nose detection results


almost at the side and part of the nose may be missing. Therefore, the nose will not have much protrusion compared to the cheek, as shown by the second subject in Fig. 3.10. Besides that, this method will fail if the subject's head is tilted left, right, up, or down, because at these positions the nose will not have the largest protrusion in a horizontal slice compared to other parts of the face.

After testing with the UND database, which consists of frontal and non-frontal faces, the results in Table 5.1 were obtained.

From the detection rates in Table 5.1, it is observed that the frontal face detection rate is higher than the non-frontal one. This is because for non-frontal faces turned almost 90° to the side, there is loss of data in the acquired model because the nose lies at the edge. Sometimes, information at the edge is distorted and the nose may not appear properly.

For the proposed face matching technique, a training set of 80 different people, each with 3 different pictures, giving 240 training faces, was created from the UND database [20]-[22]. The training set is needed when performing PCA + LDA. A total of 40 unknown probe faces were tested with the proposed face matching technique. All of them were rotated to the front with the nose at position (0,0,0), to make it easier for a probe face to align itself to the faces in the database. Table 5.2 shows the results obtained from the surface matching method, the PCA + LDA method, and the proposed surface matching combined with PCA + LDA.

The proposed method has the highest recognition rates. Since ICP was not used, the surface matching recognition rate, especially at Rank 1, was not high. However, by Rank 20 the recognition rate reaches about 90 percent. Our simulations also show that using PCA + LDA on the 3D range values instead of intensity achieves better results. LDA is performed after PCA because a weakness of PCA is its lack of discrimination ability; LDA is therefore added to minimize within-class scatter and maximize between-class scatter. Consequently, by reducing the number of database faces PCA + LDA is performed on, the recognition rate should be higher. Since Rank 20 of surface


matching produces a good recognition rate, PCA + LDA is performed on the top 20 surface matching results, and our experiments show that the Rank 1 recognition rate increases to about 83 percent. This shows that our proposed combination of surface matching followed by PCA + LDA is successful. Table 6 shows the comparison of the simulation results with the results of other methods: the central vertical profile matching method proposed by Nagamine et al., contour matching, and the combination of central vertical profile matching and contour matching proposed by Li et al. The central vertical profile matching method extracts the vertical profile of the face along the nose to perform matching.
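The pipeline discussed above, surface matching to shortlist candidates followed by PCA + LDA on range values to re-rank them, can be sketched as follows. This is an illustrative reconstruction in Python with NumPy, not the report's actual implementation; the feature layout, the regularization term, and the nearest-class-mean decision rule are assumptions.

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    mu = X.mean(axis=0)
    # Rows of Vt are the principal directions of the centred data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T
    return (X - mu) @ W, mu, W

def lda_directions(Z, labels, d):
    """Top-d discriminant directions in the PCA-reduced space Z:
    maximize between-class scatter relative to within-class scatter."""
    mu = Z.mean(axis=0)
    Sw = np.zeros((Z.shape[1], Z.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(labels):
        Zc = Z[labels == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Zc) * (diff @ diff.T)
    # Generalized eigenproblem Sb v = lambda Sw v, lightly regularized.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(len(Sw)), Sb))
    order = np.argsort(-vals.real)
    return vecs.real[:, order[:d]]

def rerank(probe, gallery, labels, top_ids, k=40, d=2):
    """Re-rank the surface-matching shortlist: run PCA + LDA on the
    shortlisted gallery faces only, then pick the nearest class mean."""
    mask = np.isin(labels, top_ids)
    Z, mu, W = pca(gallery[mask], k)
    L = lda_directions(Z, labels[mask], d)
    zp = ((probe - mu) @ W) @ L
    best, best_dist = None, np.inf
    for c in top_ids:
        mc = (Z[labels[mask] == c] @ L).mean(axis=0)
        dist = np.linalg.norm(zp - mc)
        if dist < best_dist:
            best, best_dist = c, dist
    return best
```

Restricting PCA + LDA to the shortlist mirrors the report's observation: with fewer gallery identities in play, the discriminant projection has an easier separation task, which is why the Rank 1 rate improves.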


Table 5.2: Recognition Rate over Other Methods (in percent)

Rank   Surface Matching   PCA + LDA   Proposed Method
  1          40               53             83
  2          58               65             85
  3          63               65             95
  4          65               65             95
  5          78               73             95

For contour matching, the contour 30 mm below the nose tip is used for matching. Li et al. proposed combining the ranks of central vertical profile matching and contour matching with the product rule to obtain a match. This combined method has a significantly higher recognition rate than the central vertical profile matching method and the contour matching method alone. The proposed method also achieves a higher Rank 1 recognition rate than the Li et al. method. This shows that the proposed method of performing surface matching followed by PCA + LDA is a viable face recognition method.
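The product-rule rank combination attributed to Li et al. can be illustrated with a short sketch. The candidate identifiers and the tie-breaking behaviour here are illustrative assumptions; the cited work's exact scoring may differ.

```python
def product_rule_fusion(rank_a, rank_b):
    """Fuse two rankings of the same candidate set by multiplying each
    candidate's rank positions; smaller products rank higher overall."""
    pos_a = {c: i + 1 for i, c in enumerate(rank_a)}
    pos_b = {c: i + 1 for i, c in enumerate(rank_b)}
    return sorted(pos_a, key=lambda c: pos_a[c] * pos_b[c])

# A candidate ranked near the top by both matchers wins overall:
fused = product_rule_fusion(["A", "B", "C", "D"], ["B", "D", "A", "C"])
# fused == ["B", "A", "D", "C"]
```

Multiplying ranks penalizes a candidate that either matcher places far down the list, which is why the fused ranking tends to beat either matcher alone.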


Chapter 6

Applications

1. Face recognition research spans several disciplines such as computer vision, pattern

recognition and machine learning.

2. Face recognition encompasses law enforcement as well as several commercial applica-

tions. Crowd surveillance, electronic line-ups, store security and mug shot matching are

some of the security applications.

3. It could also assist in computerized aging simulations, as in studies where shape- and

texture-normalized 3D faces were judged to be more attractive and younger than the

original faces.

4. It could assist in the reconstruction of partially damaged face images, as in work where

PCA analysis of a face database enabled researchers to fill in the information in partially

occluded faces.

5. Research in categorizing gender from the biological motion of faces could also benefit

from face recognition algorithms. Traditional face recognition relies on 2D photographs;

however, 2D face recognition systems tend to give a high standard of recognition only

when images are of good quality and the acquisition process can be tightly controlled.


Chapter 7

Conclusion and Future Scope

The automatic 3D model-based face recognition system can robustly perform face matching for faces at various angles. The eye corners were successfully detected using a modified version of Xu's [11] method. Using these feature points, the database and unknown probe faces were properly aligned. Using our proposed technique of surface matching followed by PCA + LDA on the range values, the unknown probe face was successfully identified. The pose problem was reduced by using 3D images, the illumination problem was reduced by using range values, and the expression problem was reduced by using only the section between the nose and forehead for face matching. The proposed 3D face matching technique produced good recognition rates and is fully automatic: no user intervention was needed at any step, from the facial feature detection section to the face recognition section. Therefore, the proposed technique is suitable for implementation in an automatic 3D face recognition system.


Bibliography

[1] B. Moghaddam and A. Pentland, "Probabilistic Visual Learning for Object Representation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, pp. 696-710, 1997.

[2] W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips, "Face Recognition: A Literature Survey," UMD CfAR Technical Report CAR-TR-948, 2000.

[3] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7):711-729, Jul. 1997.

[4] P. Besl and N. McKay, "A Method for Registration of 3-D Shapes," IEEE Trans. Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.

[5] X. Lu, "3D Face Recognition Across Pose and Expression," Doctoral dissertation, Department of Computer Science and Engineering, Michigan State University, Michigan, USA, 2006.

[6] X. Zhou, H. Seibert, C. Busch, and W. Funk, "A 3D Face Recognition Algorithm Using Histogram-based Features," GRIS, TU Darmstadt; Fraunhofer IGD; ZGDV e.V., Germany. http://www.3dface.org/files/papers/zhou-EG08-histogram-face-rec.pdf

[7] C. Xu, Y. Wang, T. Tan, and L. Quan, "Robust Nose Detection in 3D Facial Data Using Local Characteristics," Proc. ICIP'04, pp. 1995-1998, 2004.

[8] E. W. Weisstein, "Principal Curvature," from MathWorld, A Wolfram Web Resource. http://mathworld.wolfram.com/PrincipalCurvature.html

[9] T. Papatheodorou and D. Rueckert, "Evaluation of Automatic 4D Face Recognition Using Surface and Texture Registration," Visual Information Processing Group, Imperial College London. http://www.akademik.unsri.ac.id/download/journal/files/gdr/21220321.pdf
