
Research Article
Multithread Face Recognition in Cloud

Dakshina Ranjan Kisku (1) and Srinibas Rana (2)

(1) Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur, Bardhaman, West Bengal 713205, India
(2) Department of Computer Science and Engineering, Jalpaiguri Government Engineering College, Jalpaiguri, West Bengal 735102, India

Correspondence should be addressed to Dakshina Ranjan Kisku; [email protected]

Received 12 June 2016; Revised 13 September 2016; Accepted 12 October 2016

Academic Editor: Carlos Ruiz

Copyright © 2016 D. R. Kisku and S. Rana. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hindawi Publishing Corporation, Journal of Sensors, Volume 2016, Article ID 2575904, 21 pages. http://dx.doi.org/10.1155/2016/2575904

Faces are highly challenging and dynamic objects that are employed as biometrics evidence in identity verification. Recently, biometrics systems have proven to be essential security tools, in which bulk matching of enrolled people and watch lists is performed every day. To facilitate this process, organizations with large computing facilities need to maintain these facilities. To minimize the burden of maintaining these costly facilities for enrollment and recognition, multinational companies can transfer this responsibility to third-party vendors who maintain cloud computing infrastructures for recognition. In this paper, we showcase cloud computing-enabled face recognition, which utilizes PCA-characterized face instances and reduces the number of invariant SIFT points that are extracted from each face. To achieve high interclass and low intraclass variances, a set of six PCA-characterized face instances is computed on the columns of each face image by varying the number of principal components. Extracted SIFT keypoints are fused using sum and max fusion rules. A novel cohort selection technique is applied to increase the overall performance. The proposed prototype is tested on the BioID and FEI face databases, and the efficacy of the system is demonstrated by the obtained results. We also compare the proposed method with other well-known methods.

1. Introduction

Two-dimensional face recognition [1, 2] is considered an unsolved problem in the achievement of robust performance in the area of human identity. Face analysis with various feature representation techniques has been explored in many studies. Among the various feature extraction approaches, appearance-based, feature-based, and model-based techniques are popular. Due to changes in illumination, clutter, head poses, and facial expressions (happy, angry, sad, confused, and surprised), major salient features and occlusion can cause degradation in face recognition performance, even after substantial matching is performed. A limited number of studies address face recognition that considers noisy features and redundant outliers combined with distinctive facial characteristics for matching. These noisy and redundant features are frequently associated with regular facial characteristics during template generation and matching, and they can negatively impact overall face recognition performance despite considerable efforts to denoise the effect of redundant characteristics. To overcome this situation, suitable feature descriptors [3] and feature dimensionality reduction techniques [4] can be employed to obtain compact representations. The presence of facial expressions and different lighting conditions can also increase the load in the matching process and complicate face recognition. A robust face recognition system must effectively address these issues.

Due to an increase in subject enrollment and bulk matching, a significant number of computing resources may need to be housed within an organization's computing facilities. However, these types of facilities have some demerits: maintaining such computing resources in-house is costly and requires a separate setup. We can overcome these shortcomings by transferring the responsibility of maintaining biometrics resources to a third-party service provider who maintains cloud computing infrastructures housed at their own premises. Integrating cloud computing facilities with a face recognition system can facilitate the recognition of bulk faces captured from devices such as CCTV cameras, webcams, mobile phones, and tablet PCs. This paradigm can be employed to handle a large number of people at different times, and cloud-enabled services allow enrollment and matching to be conducted remotely.

1.1. Cloud Framework. With the advancement of cloud computing [5, 6], many organizations are rapidly adopting IT-enabled services that are hosted by cloud service providers. Because these services are provided over a network, the cost of hosting services is fixed and predictable. Cloud computing is very convenient: it provides on-demand access to a shared pool of configurable computing resources (servers, networks, storage, applications, and services) over a network. This on-demand service can be availed by organizations with minimal resource effort, with reliable cloud service providers hosting the cloud infrastructures. Three types of cloud computing models are available, namely, Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS); they are collectively known as the SPI model. The SaaS model includes various software and applications that are hosted and run by vendors and service providers and made available to customers over a network. The PaaS model includes delivering operating systems and development tools to customers over a network without the need to download and install them. The IaaS model involves requesting on-demand services of servers, storage, networking equipment, and various support tools over a network.

Cloud-based biometric infrastructures [7, 8] can be developed and hosted at a service provider's location, with the on-demand services made available to businesses via network connectivity. The three models (PaaS, SaaS, and IaaS) can then be employed for appropriate physiological or behavioral biometrics applications. Servers and storage can be employed for storing biometric templates, which can be used for verification or identification. Biometric sensors can be installed on business premises with Internet connectivity via various networks and can be connected to cloud infrastructures to access stored templates for matching and enrollment. The biometric sensors are employed for enrollment and matching, and the process can run with the help of user interfaces, applications, support tools, networking equipment, storage, servers, and operating systems at the service provider's end, where the biometrics cloud is hosted. Businesses and organizations that want to avail themselves of a cloud-based facility for enrollment, authentication, and identification purposes need only biometric sensors and Internet connectivity. The SPI model can be employed for preprocessing, feature extraction, template generation, face matching, and decisions, which can be modeled as software modules and application programs hosted at the service provider's cloud facility.

A few biometrics authentication systems [7–9] have been successfully employed in cloud computing infrastructures. They have facilitated the use of biometrics cloud concepts to minimize the efforts of resource utilization and bulk matching.

1.2. Studies on Baseline Face Recognition. Because we introduce a cloud-based biometric facility to be integrated with a face recognition system, a brief review of baseline face recognition algorithms is advantageous for developing an efficient cloud-enabled biometric system. Face recognition [1, 2] is a long-standing computer vision problem that has gained the attention of researchers, and appearance-based techniques are commonly employed to analyze the face and reduce dimensionality. Projecting a face onto a sufficiently low-dimensional feature space while retaining the distinctive facial characteristics in a feature vector serves a crucial role in recognizing typical faces. The application of appearance-based approaches in face recognition is discussed in [10–15]. Principal component analysis (PCA), linear discriminant analysis (LDA), kernel PCA, Fisher linear discriminant analysis (FLDA), canonical covariate, and the fusion of PCA and LDA are popular approaches in face recognition.

Feature-based techniques [16–18] have been introduced and successfully applied to represent facial characteristics and encode them into invariant descriptors for face analysis and recognition. Many models, such as EBGM [16], SIFT [17–19], and SURF [20], have been employed in face recognition. Local feature-based descriptors can also be employed for object detection, object recognition, and image retrieval problems. These descriptors are robust to lighting conditions, image locations, and projective transformation and are insensitive to noise and image correlations. Local descriptors are localized and detected at local peaks in a scale-space search; after a number of stage-filtering approaches, interest points that are stable over transformations are preserved. Two requirements must be met when local feature descriptors are employed: first, the algorithm must have the ability to create a distinctive feature description for differentiating one interest point from other interest points; second, the algorithm should be invariant to camera position, subject position, and lighting conditions. A situation may arise in which a high-dimensional feature space is projected onto a low-dimensional feature space and local descriptors vary from one feature space to another; thus, the accuracy may change over the same object when interest points are detected in the scale-space over a number of encountered transformations. Due to scaling and low-dimensional projected variables, a higher variance among the observed variables should be retained from the high-dimensional data of a pattern. A reduced number of projected variables retain their characteristics even after they are represented in a low-dimensional feature space. We can achieve this description when appearance-based techniques are applied to raw images, without the need for preprocessing techniques to restore true pixels or remove noise due to image-capturing sensors. A number of representations exist from which we can extract invariant interest points. One appearance-based technique is principal component analysis (PCA) [11, 12].

PCA is a simple dimensionality reduction technique that has many potential applications in computer vision. Despite a few shortcomings (it is restricted to orthogonal linear combinations and carries implicit assumptions of Gaussian distributions), PCA has proven to be an acclaimed technique due to its simplicity. In this study, PCA is combined with a feature-based technique (the SIFT descriptor), and a number of face column instances are generated by varying the principal components. Face projections of column vectors range from one to six, in which an interval of one can produce high variances in the observed variables of the corresponding face image after projecting the high-dimensional face matrix onto a low-dimensional feature space. Thus, we obtain a set of low-dimensional feature spaces that correspond to each column of a single face image. The principal components are decided by a sequence of six integer numbers ranging from 1 to 6, considered in order, based on which six face instances are generated. Unlike a random sequence, this ordered default sequence follows from the mathematical definition of the eigenvector, and the arithmetic distance of any point to its predecessor and successor principal component is always one. The SIFT descriptor, which suits these representations, can produce multiple sets of invariant interest points without changing the dimension of each keypoint descriptor; this process can change the size of each vector of keypoint descriptors constructed on each projected PCA-characterized face instance. In addition, the SIFT descriptor is robust to partial illumination, projective transform, image locations, rotation, and scaling. The efficacy of the proposed approach has been tested on frontal view face images with mixed facial expressions; however, efficacy is compromised when the head position of a face image is modified.
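As a concrete illustration of this PCA-characterized representation, the following minimal sketch (our assumption, not code from the paper; `pca_face_instances` is a hypothetical helper name) builds six low-rank instances of one grayscale face image with scikit-learn by varying the number of principal components from one to six:

```python
# Minimal sketch (not the authors' code): generate six PCA-characterized
# instances of a single grayscale face image, one per npc in 1..6.
import numpy as np
from sklearn.decomposition import PCA

def pca_face_instances(face, max_npc=6):
    """face: 2-D uint8 array (h x w). Returns six low-rank reconstructions."""
    instances = []
    data = face.astype(np.float64)
    for npc in range(1, max_npc + 1):
        pca = PCA(n_components=npc)
        reduced = pca.fit_transform(data)        # project the columns onto npc components
        approx = pca.inverse_transform(reduced)  # low-dimensional approximation of the face
        instances.append(np.clip(approx, 0, 255).astype(np.uint8))
    return instances
```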

1.3. Relevant Face Recognition Approaches. In this section, we introduce some related studies and discuss their usefulness in face recognition. For example, the algorithm proposed in [21] employs the local gradient patch around each SIFT point neighborhood and creates PCA-based local descriptors that are compact and invariant. In contrast, the method proposed here does not encode a neighborhood gradient patch of each point; instead, it builds a projected feature representation in the low-dimensional feature space with variable numbers of principal components and extracts SIFT interest points from the reduced face instances. Another study [22] examines the usefulness of the SIFT descriptor and PCA-WT (WT: wavelet transform) in face recognition. An eigenface is extracted from the PCA-wavelets representation, and SIFT points are subsequently detected and encoded into a feature vector; however, the computational time increases due to the complex and layered representation. A comparative study [23] employs PCA for neighborhood gradient patch representation around each SIFT point and a SURF point for invariant feature detection and encoding. Although PCA reduces the dimensions of the keypoint descriptor and the performances of the SIFT and SURF descriptors are compared, that work is aimed not at face recognition but at the image retrieval problem.

The remainder of the manuscript is organized as follows. Section 2 presents a brief outline of the cloud-based face recognition system. Short descriptions of the SIFT descriptor and PCA are given in Section 3. Section 4 describes the framework and methodology of the proposed method. The fusion of matching proximities and heuristics-based cohort selection are presented in Section 5. Evaluation of the proposed technique and comparison with other face recognition systems are exhibited in Section 6. Section 7 computes time complexity. Conclusions and remarks are made in Section 8.

2. Outline of Cloud-Based Face Recognition

To develop a cloud-based face recognition system, a cloud infrastructure [5, 6] has been set up with the help of remote servers, and a webcam-enabled client terminal and a tablet PC are connected to the remote servers via Internet connectivity. Two independent IP addresses are provided to the client machine and the tablet PC; these two IPs help the cloud engine identify the client machines from the point at which the recognition task is performed. Figure 1 shows the outline of the cloud-enabled face recognition infrastructure, in which we establish three points with three different devices for enrollment and recognition tasks. All other software application modules (i.e., preprocessing, feature extraction, template generation, matching, fusion, and decision) and face databases are placed on servers, and a storage device is maintained in the cloud environment. During authentication or identification, sample face images are captured via cameras installed in both the client machine and the tablet PC, and the captured faces are sent to a remote server. In the remote server, application software is invoked to perform the necessary tasks. After the matching of probe face images with gallery images stored in the database, matching proximity is generated and a decision outcome is sent to the client machine over the network. At the client site, the client machine displays the decision on the screen, and the entry of malicious users is restricted. Although the proposed system is a cloud-based face recognition system, our main focus lies on the baseline face recognition system. Having given a brief introduction to cloud-based infrastructures for face recognition, and using publicly available face databases such as FEI and BioID, we assume that face images have already been captured with the sensor installed in the client machine and sent to a remote server for matching and decision.

Figure 1: Cloud-enabled face recognition infrastructures (PaaS software tools, SaaS applications, IaaS infrastructures, storage/DB, and three enrollment/recognition endpoints).
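To illustrate the client side of this capture-and-send flow, the sketch below posts one webcam frame to a hypothetical cloud endpoint and reads back the decision; the URL, form field, and response format are our assumptions and are not specified in the paper:

```python
# Hypothetical client-side sketch: capture a frame at the client terminal
# and send it to the cloud-hosted matcher. Endpoint and JSON fields are
# illustrative only, not part of the proposed system's actual interface.
import cv2
import requests

cap = cv2.VideoCapture(0)                 # webcam at the client terminal
ok, frame = cap.read()
cap.release()
if ok:
    _, buf = cv2.imencode(".jpg", frame)
    resp = requests.post("https://cloud.example.com/face/verify",
                         files={"probe": ("probe.jpg", buf.tobytes(), "image/jpeg")})
    print(resp.json().get("decision"))    # e.g., "accept" or "reject"
```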

The proposed approach is divided into the following steps:

(a) As part of the baseline face recognition system, the raw face image is localized and then aligned using the algorithm described in [24]. During the experiments, face images are employed both with and without localizing the face part.

(b) A histogram equalization technique [25], which is considered the most elementary image-enhancement technique, is applied in this step to enhance the contrast of the face image (steps (a)-(d) are sketched in code after this list).

(c) In this step, PCA [11] is applied to obtain multiple face instances, which are determined from each column of the original image (they are not the eigenfaces) by varying the number of principal components from one to six at one distance unit.

(d) From each instance representation, SIFT points [17, 18] are extracted in the scale-space to form an encoded feature vector of keypoint descriptors ($K_i$); the keypoint descriptors, rather than the spatial location, scale, and orientation, are considered the feature points.

(e) SIFT interest points extracted from the six different face instances (npc = 1, 2, 3, 4, 5, and 6) of a target face form six different feature vectors. They are employed to separately match the feature vectors obtained from a probe face; npc refers to the number of principal components.

(f) In this step, matching proximities are determined from the different matching modules and are subsequently fused using the "sum" and "max" fusion rules; a decision is made based on the fused matching scores.

(g) To enhance the performance and reduce the computational complexity, we exploit a heuristic-based cohort selection method during matching and apply the T-norm normalization technique to normalize the cohort scores.
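The sketch below condenses steps (a)-(d) for one face image; it assumes OpenCV 4.4 or later (where SIFT is in the main module), the hypothetical `pca_face_instances` helper from the earlier sketch, and an illustrative file path:

```python
# Sketch of steps (a)-(d), not the authors' code.
import cv2

face = cv2.imread("probe_face.jpg", cv2.IMREAD_GRAYSCALE)  # (a) assumed localized/aligned
face = cv2.equalizeHist(face)                              # (b) histogram equalization [25]
sift = cv2.SIFT_create()
feature_vectors = []
for instance in pca_face_instances(face):                  # (c) six instances, npc = 1..6
    kps, descs = sift.detectAndCompute(instance, None)     # (d) 128-D keypoint descriptors
    feature_vectors.append(descs)                          # one feature vector per instance
```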

3. Brief Review of SIFT Descriptor and PCA

In this section, the scale-invariant feature transform (SIFT) and principal component analysis (PCA) are described. The SIFT descriptor and PCA are well-known feature-based and appearance-based techniques, respectively, that have been successfully employed in many face recognition systems.

The SIFT descriptor [17–19] has gained significant attention due to its invariant nature and its ability to detect stable interest points around the extrema. It has been proven to be invariant to rotation, scaling, a projective transform, and partial illumination. The SIFT descriptor is robust to image noise and low-level transformations of images. In the proposed approach, the SIFT descriptor reduces the face matching complexity and computation time while detecting stable interest points on a face image. SIFT points are detected via a four-stage filtering approach, namely, (a) scale-space detection, (b) keypoint localization, (c) orientation assignment, and (d) keypoint descriptor computation. The keypoint descriptors are employed to generate feature vectors for face matching.

The proposed face matching algorithm aims to produce multiple face representations (six face instances) that are determined from each column of the face image. These face instances exhibit distinctive characteristics, obtained by reducing the dimensions of the features comprising the intensity values. Reduced dimensionality is achieved by applying a simple feature reduction technique known as principal component analysis (PCA). PCA projects a high-dimensional face image onto a low-dimensional feature space in which a face instance of higher variance in the observed variables, analogous to the leading eigenvectors, is determined. Details on PCA are provided in [11, 12].

4. Framework and Methodology

The main facial features used for frontal face recognition in the proposed experiment are a set of 128-dimensional vectors of squared patches (regions) centered at detected and localized keypoints at multiple scales. Each vector describes the local structure around a keypoint under its computed scale. A keypoint detected in a uniform region is not discriminating, because scale or rotational change does not make the point distinguishable from its neighbors. The keypoints detected by SIFT [17, 18] on a frontal face are basically various corner-like points (corners of the lips, corners of the eyes, the nonuniform contour between nose and cheek, and so forth) that exhibit intensity changes in two directions. SIFT detects these keypoints by approximating the Laplacian of Gaussian (LoG) in terms of the Difference of Gaussians (DoG). Because the Gaussian pyramid builds the image at various scales, SIFT-extracted keypoints are scale invariant, and the computed descriptors remain discriminating from coarse to fine matching. The keypoints could instead be detected by the Harris corner detector (not scale invariant) or the Hessian corner detector, but many of the points so detected might not be repeatable under large scale changes. Further, the 128-dimensional feature point descriptor obtained by the SIFT feature extraction method is orientation normalized and is therefore rotation invariant. Additionally, the SIFT feature descriptor is normalized to unit length to reduce the effect of contrast; the maximum value of each dimension of the vector is thresholded to 0.2, and the vector is once again normalized to make it robust to a certain range of irregular illumination.
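The normalization just described (unit length, per-dimension clipping at 0.2, renormalization) reduces to a few lines; this is our sketch of the standard SIFT step, not the authors' code:

```python
import numpy as np

def normalize_descriptor(desc, clip=0.2):
    # Unit-normalize, clip each of the 128 dimensions at 0.2, then
    # renormalize, as described above for illumination robustness.
    desc = desc / (np.linalg.norm(desc) + 1e-12)
    desc = np.minimum(desc, clip)
    return desc / (np.linalg.norm(desc) + 1e-12)
```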

The proposed face matching method has been developed with the concept of a varying number of principal components (npc = 1, 2, 3, 4, 5, and 6). These variations yield the following improvements when the system computes face instances for matching and recognition:

(a) The projection of a face image onto several face instances facilitates the construction of independent face matchers, whose performance can vary when SIFT descriptors are applied to extract invariant interest points; each instance-based matcher is verified for producing matching proximities.

(b) An individual matcher exhibits its own strength in recognizing faces, and the number of SIFT interest points extracted from each face instance changes substantially from one projected face to another as an effect of varying the number of principal components.

(c) The robustness of the overall system improves when the individual performances are consolidated into a single matcher by fusion of matching scores.

(d) Let $\varepsilon_i$, $i = 1, 2, \ldots, m$, be the $m$ eigenvalues arranged in descending order, and let $\varepsilon_i$ be associated with eigenvector $e_i$ (the $i$th principal eigenface in the face space). Then the percentage of variance accounted for by the $i$th principal component is $(\varepsilon_i / \sum_{i=1}^{m} \varepsilon_i) \times 100$. Generally, the first few principal components are sufficient to capture more than 95% of the variance, but that number of components depends on the training set of the image space and varies with the face dataset used. In our experiments, we observed that taking as few as 6 principal components gives a good result and captures variability very close to the total variability produced during the generation of multiple face instances (a minimal computation is sketched after this list).
Let the training face dataset contain $n$ instances, each of uniform size $p$ ($h \times w$ pixels); then the face space contains $n$ sample points in $p$ dimensions, and we can derive at most $n - 1$ eigenvectors, but each eigenvector is still $p$-dimensional ($n \ll p$). Now, to compare two face images, each containing $p$ pixels (i.e., a $p$-dimensional vector), each face image must be projected onto each of the $n - 1$ eigenvectors (each eigenvector represents one axis of the new $(n-1)$-dimensional coordinate system). So from each $p$-dimensional face we derive $n - 1$ scalar values by the dot product of the mean-centered image space face with each of the $n - 1$ face space eigenvectors. In the backward direction, given the $n - 1$ scalar values, we can reconstruct the original face image by the weighted combination of these eigenfaces plus the mean. In this reconstruction process, the $i$th eigenface contributes more than the $(i + 1)$th eigenface if they are ordered by decreasing eigenvalues. How accurate the reconstruction is depends on how many principal components (say $k$, $k = 1, 2, \ldots, n - 1$) we take into consideration. In practice, $k$ need not equal $n - 1$ to reconstruct the face satisfactorily: after a specific value of $k$ (say $t$), the contributions from the $(t + 1)$th eigenvector up to the $(n - 1)$th are so negligible that they may be discarded without losing significant information. Indeed, there are methods for choosing $t$, such as the Kaiser criterion (discard eigenvectors corresponding to eigenvalues less than 1) [11, 12] and the Scree test; sometimes the Kaiser method retains too many eigenvectors, whereas the Scree test retains too few. In essence, the exact value of $t$ is dataset dependent. Figures 4 and 6 clearly show that, as we continue to add one more principal component, the captured variability increases rapidly within the first 6 components. Starting from the 6th principal component, the variance capture is almost flat but does not reach the total variability (the 100% marked line) until the last principal component. So, despite the small contributions that principal components 7 onwards make, they are not redundant.

(e) Distinctive and classified characteristics, which are detected in the reduced low-dimensional face instance, support the integration of local texture information with local shape distortion and illumination changes of neighboring pixels around each keypoint, which comprises a vector of 128 elements of invariant nature.
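The percentage-of-variance computation in item (d) is a one-liner over the sorted eigenvalues; a minimal sketch:

```python
import numpy as np

def variance_explained_pct(eigenvalues):
    # Percentage of variance accounted for by each principal component,
    # with eigenvalues arranged in descending order (item (d) above).
    ev = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    return 100.0 * ev / ev.sum()
```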

The proposed methodology is evaluated from two different perspectives: the first system is implemented without face detection and localization, and the second system implements a face matcher with a localized and detected face image.

4.1. Face Matching Type I. During the initial stage of face recognition, we enhance the contrast of a face image by applying a histogram equalization technique; this contrast enhancement increases the count of SIFT points detected in the local scale-space of a face image. The face area is localized and aligned using the algorithm given in [24]. In the subsequent steps, the face area is projected onto the low-dimensional feature space, and an approximated face instance is formed. SIFT keypoints are detected and extracted from this approximated face instance, and a feature vector consisting of interest points is created. In this experiment, six different face instances are generated by varying the number of principal components from one to six, and they are derived from a single face image.

Figure 2: Six different face instances of a single cropped face image with variations of principal components.

PCA-characterized face instances are shown in Figure 2they are arranged according to the order of the consideredprincipal components The same set of PCA-characterizedface instances is extracted from a probe face and feature vec-tors that consist of SIFT interest points are formed Matchingis performedwith their corresponding face instances in termsof the SIFT points obtained from reference face instancesWe apply a k-nearest neighborhood (119896-NN) approach [26] toestablish correspondence and obtain the number of pairs ofmatching keypoints Figure 3 depicts matching pairs of SIFTkeypoints on two sets of face instances which correspondto reference and probe faces Figure 4 shows the amount ofvariance captured by all principal components because thefirst principal component explains approximately 70 of thevariance we expect that additional components are probablyneeded The first four principal components explain the totalvariability in the face image that is depicted in Figure 2

4.2. Face Matching Type II. The second type of face matching strategy utilizes the outliers available around the meaningful facial area to be recognized. This type of face matching examines the combined effect of outliers and legitimate features employed for face recognition. Outliers may be located on the forehead, above the legitimate and localized face area, around the face area under consideration, outside the meaningful area on both ears, and on the head. However, the effect of outliers is limited, because the legitimate interest points are primarily detected in major salient areas. This analysis is useful when face area localization cannot be performed, and sometimes outliers are an effective addition to the face matching process.

In the Type I face matching strategy, we employ dimensionality reduction, project an entire face onto a low-dimensional feature space using PCA, and construct six different face instances with principal components varying between one and six. We extract SIFT keypoints from the six multiscale face instances and create a set of feature vectors. The face matching task is performed using the k-NN approach, and matching scores are generated as matching proximities from a pair of reference and probe faces. The matching scores are passed through the fusion module and consolidated to form an integrated vector of matching proximities. Figure 5 demonstrates matching between pairs of face instances of an entire face, corresponding to the reference face and probe face, for a certain number of principal components. Figure 6 shows the amount of variance captured by all principal components; because the first principal component explains less than 50% of the variance, we expect that additional components are needed. The first two principal components explain approximately two-thirds of the total variability in the face image depicted in Figure 5.

5. Fusion of Matching Proximities

5.1. Baseline Approach to Fusion. To fuse [26–28] the matching proximities computed from all matchers (based on principal components) and form a new vector, we apply two popular fusion rules, namely, "sum" and "max" [26]. Let $\mathbf{ms} = [\mathrm{ms}_{ij}]$ be the match scores generated by multiple matchers $j$, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$; here $n$ denotes the number of match scores generated by each matcher, and $m$ represents the number of matchers presented to the face matching process. Consider the labels $\omega_0$ and $\omega_1$ to denote two different classes, referred to as the genuine class and the imposter class, respectively. We can assign $\mathbf{ms}$ to either label $\omega_0$ or label $\omega_1$ based on the class-conditional probability. The probability of error can be minimized by applying Bayesian decision theory [29] as follows:

$$\text{Assign } \mathbf{ms} \rightarrow \omega_i \quad \text{if } P(\omega_i \mid \mathbf{ms}) > P(\omega_j \mid \mathbf{ms}), \quad i \neq j, \; i, j = 0, 1. \tag{1}$$

The posterior probability $P(\omega_i \mid \mathbf{ms})$ can be derived from the class-conditional density function $p(\mathbf{ms} \mid \omega_i)$ using the Bayes formula as follows:

$$P(\omega_i \mid \mathbf{ms}) = \frac{p(\mathbf{ms} \mid \omega_i)\, P(\omega_i)}{p(\mathbf{ms})}. \tag{2}$$

Here, $P(\omega_i)$ is the prior probability of the class label $\omega_i$, and $p(\mathbf{ms})$ denotes the probability of encountering $\mathbf{ms}$. Thus, (1) can be rewritten as follows:

$$\text{Assign } \mathbf{ms} \rightarrow \omega_i \quad \text{if } \mathrm{LR} > \tau, \quad \text{where } \mathrm{LR} = \frac{p(\mathbf{ms} \mid \omega_i)}{p(\mathbf{ms} \mid \omega_j)}, \quad i \neq j, \; i, j = 0, 1. \tag{3}$$

The ratio LR is known as the likelihood ratio, and $\tau$ is the predefined threshold. The class-conditional density $p(\mathbf{ms} \mid \omega_i)$ can be determined from the training match score vectors using either parametric or nonparametric techniques. The class-conditional probability density function can be extended to the "sum" and "max" fusion rules. The max fusion rule can be expressed as follows:

$$p(\mathbf{ms} \mid \omega_i) = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_i), \quad i = 0, 1. \tag{4}$$


Figure 3: Matching pairs of face instances with variations of principal components for a pair of corresponding reference and probe face images.

Figure 4: Amount of variance accounted for by each principal component of the face image depicted in Figure 2.

Here we replace the joint density function by maximizing the marginal densities. The marginal density $p(\mathrm{ms}_j \mid \omega_i)$ for $j = 1, 2, \ldots, m$ and $i = 0, 1$ ($i$ refers to either a genuine sample or an imposter sample) can be estimated from the training vectors of the genuine and imposter scores that correspond to each of the $m$ matchers. Therefore, we can rewrite (4) as follows:

$$\mathrm{FS}_{\max}^{\omega_0} = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_0), \qquad \mathrm{FS}_{\max}^{\omega_1} = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_1). \tag{5}$$

$\mathrm{FS}_{\max}$ denotes the fused match scores obtained by fusing the $m$ matchers in terms of exploiting the maximum scores.

We can easily extend the "max" fusion rule to the "sum" fusion rule by assuming that the posterior probability does not significantly deviate from the prior probability. Therefore, we can write an equation for fusing the marginal densities, known as the "sum" rule, as follows:

$$\mathrm{FS}_{\mathrm{sum}}^{\omega_0} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_0), \qquad \mathrm{FS}_{\mathrm{sum}}^{\omega_1} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_1). \tag{6}$$

We independently apply the "max" and "sum" fusion rules to the genuine and imposter scores corresponding to each of the six matchers, which are determined from the six different face instances for which the principal components vary from one to six. Prior to the fusion of the matching proximities produced by multiple matchers, the proximities need to be normalized, mapping the data to the range [0, 1]. In this case, we use the min-max normalization technique [26] to map the proximities to the specified range, and the T-norm cohort selection technique [30, 31] is applied to improve the performance. To generate matching scores, we apply the k-NN approach [32].
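A sketch of this score-level pipeline, min-max normalization to [0, 1] followed by "sum" and "max" fusion across the six instance-based matchers; this is our illustration of (5) and (6), not the authors' code:

```python
import numpy as np

def min_max(scores):
    # Map one matcher's score vector to [0, 1] (min-max normalization [26]).
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(score_matrix, rule="sum"):
    # score_matrix: shape (n_comparisons, 6), one column per instance-based
    # matcher (npc = 1..6); normalize each column, then apply (5) or (6).
    cols = [min_max(c) for c in np.asarray(score_matrix, dtype=float).T]
    s = np.column_stack(cols)
    return s.sum(axis=1) if rule == "sum" else s.max(axis=1)
```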

5.2. Cohort Selection Approach to Fusion. Recent studies suggest that cohort selection [30, 31] and cohort-based score normalization [30] can exhibit robust performance and increase the robustness of biometric systems. To understand the usability of cohort selection, consider a cohort pool: the set of matching scores obtained from the nonmatch templates in the database when a probe sample is matched against the reference samples in the database. The matching process generates matching scores, and among this set of scores of the corresponding reference samples, one template is identified as the claimed identity. This claimed identity is the true match to the probe sample, and its matching proximity is significant. Apart from the claimed identity, the remainder of the matched scores are known as cohort scores, and we refer to the match score determined from the claimed identity as the true matching proximity. The cohort scores and the score determined from the claimed identity exhibit similar degradation. To improve the performance of the proposed system, we need to normalize the true matching proximity using the cohort scores. We can apply simple statistics, such as the mean, standard deviation, and variance, to compute the normalized score of the true reference template using the T-norm cohort normalization technique. We assume that the "most similar cohort scores" and the "most dissimilar cohort scores" contribute to the computation of normalized scores that carry more discriminatory information than the raw matching score. As a result, the false rejection rate may decrease, and the system can successfully identify a subject from a pool of reference templates.

Figure 5: Face matching strategy showing matching between pairs of face instances with outliers.

Figure 6: Amount of variance accounted for by each principal component of the face image depicted in Figure 5.

Two types of probe samples exist: genuine probe samples and imposter probe samples. When a genuine probe face is compared with the cohort models, the best-matched cohort model and a few models among the remaining cohort models are expected to be very similar, due to the similarity among the corresponding faces. The matching of a genuine probe face with the true cohort model and with the remaining cohort models produces matching scores with the lowest similarity when the true matched template and the remaining templates in the database are dissimilar. The comparison of an imposter face with the reference templates in the database generates matching scores that are independent of the set of cohort models.

Although cohort-based score normalization is considered extra overhead for the proposed system, it can improve performance. Computational complexity increases as the number of comparisons with cohort models grows. To reduce the overhead of integrating a cohort model, we select a subset of cohort models that contains the majority of the discriminating information, and we combine this cohort subset with the true match score to obtain a normalized score. This cohort subset is known as an "ordered cohort subset," which contains the majority of the discriminatory information. We can select a cohort subset for each true match template in the database to normalize each true match score when we have a number of probe faces to compare. In this context, we propose a novel cohort subset selection method that utilizes heuristic cohort selection statistics. Because the cohort selection strategy is substantially inspired by heuristic-based T-statistics and a baseline heuristic approach, we refer to this method as hybrid heuristics statistics, in which two-stage filtering is performed to generate the most discriminating cohort scores.

5.2.1. Methodology: Hybrid Heuristics Cohort. The proposed statistics begin with a cohort score set $\chi_i = \{x_{i1}, x_{i2}, \ldots, x_{in}\}$, where $i \in \{\text{genuine scores}, \text{imposter scores}\}$ and $n$ is the number of cohort scores, consisting of the genuine and imposter scores present in the set $\chi$. Each score is therefore labeled $x_{ij} \in \{\text{genuine scores}, \text{imposter scores}\}$, with $j \in \{1, 2, \ldots, n\}$ sample scores. From the cohort score set, we can calculate the mean and standard deviation for the class labels genuine and imposter. Let $\mu^{\text{genuine}}$ and $\mu^{\text{imposter}}$ be the mean values, and let $\delta^{\text{genuine}}$ and $\delta^{\text{imposter}}$ be the standard deviations for the two class labels. Using $T$-statistics [33], we can determine a set of correlation scores corresponding to the cohort scores:

$$T(x_j) = \frac{\left| \mu^{\text{genuine}} - \mu^{\text{imposter}} \right|}{\sqrt{\left(\delta_j^{\text{genuine}}\right)^2 / n^{\text{genuine}} + \left(\delta_j^{\text{imposter}}\right)^2 / n^{\text{imposter}}}}. \tag{7}$$

In (7), $n^{\text{genuine}}$ and $n^{\text{imposter}}$ are the numbers of cohort scores labeled genuine and imposter, respectively. We calculate all correlation scores and make a list of all $T(x_j)$ scores. Then we construct a search space that includes these correlation scores.

Because (7) exhibits a correlation between cohort scores, it can be extended to the baseline heuristics approach in the second stage of the hybrid heuristic-based cohort selection method. The objective of the proposed cohort selection method is to select the cohort scores corresponding to the two subsets of highest and lowest correlation scores obtained by (7); these two subsets of cohort scores constitute the cohort subset. We separately collect the correlation scores in a FRINGE data structure, or OPEN list, and starting with this initial score we expand the fringe by adding more correlation scores to it. We also maintain another list, which we refer to as the CLOSED list. After calculating the first $T(x_j)$ score in the fringe, we omit this score from the fringe and expand it: the next two correlation scores are removed from the search space but retained in the fringe, and the correlation score that was removed from the fringe is added to the CLOSED list. Because the fringe now contains two scores, we arrange them in decreasing order by sorting, remove the maximum score from the fringe, and add this maximum score to the CLOSED list, where it is maintained in nonincreasing order with the other scores. We repeat this recursive process in each iteration until the search space is empty. After expanding the search space by moving all correlation scores from the fringe to the CLOSED list, we obtain a sorted list. The sorted scores in the CLOSED list are divided into three parts, and the first and last parts are merged to create a single list of correlation scores that exhibit the most discriminating features. We establish the cohort subset by selecting the most promising cohort scores, namely, those corresponding to the retained correlation scores of the CLOSED list.
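Condensed to its effect, the two-stage procedure above sorts the cohort scores by their T-statistic values and keeps the head and tail of the sorted CLOSED list; the sketch below illustrates this (the fraction kept follows the three-part split described above, but the helper and its defaults are our assumptions):

```python
import numpy as np

def hybrid_heuristic_subset(cohort_scores, t_values, keep_frac=1.0 / 3):
    # Sort correlation scores in nonincreasing order (the CLOSED list) and
    # merge the first and last parts: the most and least correlated cohorts.
    order = np.argsort(np.asarray(t_values))[::-1]
    k = max(1, int(len(order) * keep_frac))
    chosen = np.concatenate([order[:k], order[-k:]])
    return np.asarray(cohort_scores)[chosen]
```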

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique; T-norm relies on the property that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decision, we consolidate the normalized scores for the six different face models, depending on the principal components under consideration, which range between one and six.
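T-norm itself reduces to centering and scaling the true match score by the cohort-subset statistics; a minimal sketch:

```python
import numpy as np

def t_norm(true_score, cohort_subset):
    # Normalize the true matching proximity by the mean and standard
    # deviation of the selected cohort scores (T-norm).
    c = np.asarray(cohort_subset, dtype=float)
    return (true_score - c.mean()) / (c.std() + 1e-12)
```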

6. Experimental Evaluation

The rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max fusion rule and the sum fusion rule. We have implemented the proposed method from two perspectives: the Type II perspective indicates that face recognition employs the face images provided in the databases without cropping, and the Type I perspective indicates that the face recognition technique uses a manually localized face area after cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces of these two databases are presented against a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

Figure 7: Face images from the BioID database.

6.1. Databases

6.1.1. BioID Face Database. The face images in the BioID [35] database were recorded in a degraded environment and are primarily employed for face detection; however, we can also utilize this database for face recognition. Because the faces are captured with a variety of background information and illumination, the evaluation of this database is challenging. Here we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray-level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces are captured against a white homogeneous background in an upright frontal position, and all images have scale changes of approximately 10%. Of the 2800 face images, the number of male contributors is equal to the number of female contributors; that is, 100 male and 100 female participants each contributed an equal number of face images, totaling 1400 face images per gender. All images are colorful, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired under uniform illumination against homogeneous backgrounds, with neutral and smiling facial expressions. The database contains faces of ages varying from 18 to 60 years.

Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we have developed a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually random processes; therefore, to address the biases of each random process, we perform the evaluation with a random distribution of the numbers of training samples and probe samples. The distribution, however, depends entirely on the database employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we distribute the face images equally between the training and test sets: the faces contributed by each person are divided into two sets, namely, the training set and the test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person has contributed $m$ faces and that the database size is $p$ ($p$ denotes the total number of face images). Let $q$ denote the total number of subjects/individuals who contributed $m$ face images each. To implement this protocol, we divide the $m$ face images into two equal groups of $m/2$ images and retain one group for the training/reference set and the other for the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the $m/2$ faces in the probe set that correspond to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension $q \times (m/2)$ and imposter scores of dimension $q \times (q - 1) \times m$. The $k$-NN (k-nearest neighbor) approach is employed to generate common matching points between a pair of face images, and we employ the min-max normalization technique to normalize the match scores and map them to the range [0, 1]. In this manner, two sets of match scores of unequal dimensions, corresponding to a matcher, are obtained as the face images are compared within intraclass ($\omega_i$) sets and interclass ($\omega_j$) sets, which we refer to as the genuine and imposter score sets.
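For example, under this protocol the score-set sizes for the FEI database (q = 200 subjects, m = 14 faces each) work out as follows:

```python
q, m = 200, 14                 # subjects and faces per subject (FEI)
genuine = q * (m // 2)         # 200 * 7 = 1,400 genuine scores
imposter = q * (q - 1) * m     # 200 * 199 * 14 = 557,200 imposter scores
print(genuine, imposter)
```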

6.3. Experimental Results and Analysis

6.3.1. On FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face database, which contains neutral and smiling expressions of faces. The neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effects of (a) a cloud-enabled environment, (b) face matching without extracting a face area, (c) face matching with an extracted face area, (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying principal components (one to six) under the conditions mentioned in (a), (b), and (c), and (e) using hybrid heuristic statistics for cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system as GAR versus FAR curves for varying principal components. The boxplots show how the EER varies for different values of principal components.

Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

Figure 9 shows a boxplot for the case in which the face area has not been extracted, and Figure 12 shows another boxplot for the case in which the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the principal component is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the principal component is 1, and the EER varies between 0% and 2.5%; however, an EER of 0.5% is obtained for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. A maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the principal component is set to one; in the remaining cases, when the principal component varies between two and six, low EERs are obtained, the largest of them when the principal component is set to four. The ROC curves in Figure 10 exhibit high recognition accuracies for all cases except when the principal component is set to 1; these ROC curves are shown for the case in which face images are not localized. Thus, we decouple the information about the legitimate face area from explicit information about nonface areas such as the forehead, hair, ear, and chin areas. These areas may provide crucial information, treated as additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying principal components. Figure 11 shows the recognition accuracies for different values of the principal component determined on the FEI database when face localization is not performed, while Figure 14 shows the corresponding accuracies when face localization is performed.

Figure 10: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

  Number of principal components    EER (%)    Recognition accuracy (%)
  1                                 7.07       92.93
  2                                 0.5        99.5
  3                                 0.97       99.03
  4                                 1.01       98.99
  5                                 1          99
  6                                 1          99

Figure 13 shows the ROC curves that are determinedfrom extensive experiments of the proposed algorithm onthe FEI face database when a face image is localized andonly the face part is extracted In this case a maximum EER

9293

995 9903 9899 99 99

88

90

92

94

96

98

100

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 11 Recognition accuracy versus principal component curvewhen face localization task is not performed

[Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.]

In this case, a maximum EER of 6% is attained when the principal component is set to 1; in the other cases, the EERs are as low as those obtained with nonlocalized faces. Table 2 depicts the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show substantial improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas that correspond to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is accurate when the principal components vary between two and six; in the case of principal component 3, an EER of 0% and a recognition accuracy of 100% are attained.

Based on two major considerations, with and without face localization, we investigate the proposed algorithm by fusing face instances under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the "sum" fusion rule.


[Figure 13: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.]

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy        EER (%)    Recognition accuracy (%)
Principal component 1         6          94
Principal component 2         0.5        99.5
Principal component 3         0          100
Principal component 4         0.5        99.5
Principal component 5         0.5        99.5
Principal component 6         1          99.0

However, the effects of localizing and of not localizing the face area are both investigated, and the conventions with these two fusion rules are integrated. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused in terms of matching scores by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained when the "max" fusion rule is applied. In this case, the "sum" fusion rule outperforms the "max" fusion rule.
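For reference, denoting by $s_i$ the normalized matching score produced by the classifier for the $i$th face instance, the two rules reduce to the following (the notation is ours):

$$s_{\text{sum}} = \sum_{i=1}^{6} s_i, \qquad s_{\text{max}} = \max_{1 \le i \le 6} s_i.$$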

[Figure 14: Recognition accuracy versus principal component curve when the face localization task is performed.]

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique is applied, which uses the T-norm normalization technique for match score normalization.

Fusion rule / normalization / heuristic cohort selection     EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                      0.0        100
Max fusion rule + min-max normalization                      1.5        98.5
Sum fusion rule + T-norm (hybrid heuristic)                  0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)                  0.5        99.5

When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules for the fusion-based classifier, the "sum" and "max" fusion rules achieve 99.5% recognition accuracy. In the general context, the proposed cohort selection method degrades recognition accuracy by 0.5% when compared with the "sum" fusion rule-based classifier without the cohort selection method. However, hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum, max), with 99.5% recognition accuracy. Table 3 lists the recognition accuracies, and Figure 16 exhibits the same for the "sum" and "max" fusion rules.

After evaluating the performance of each of the classifiers, in which the face instance is determined by setting the principal component to one value among 1 to 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. Similar to the previous approach, we apply two fusion rules to integrate the matching scores, namely, the "sum" and "max" fusion rules. In addition, we exploit the proposed statistical cohort selection technique, which is known as hybrid heuristic statistics. For cohort score normalization, we apply the T-norm normalization technique.


[Figure 15: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are not cropped and the number of principal components varies between 1 and 6.]

[Figure 16: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the accuracies with it. These accuracies are obtained without face localization.]

This technique maps the cohort scores into a normalized score set that exhibits the characteristics of each of the scores in the cohort subset and enables the correct match to be rapidly obtained by the system. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and achieve 100% recognition accuracy in both cases, when the "sum" and "max" fusion rules are applied.
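The T-norm step follows the standard cohort normalization definition, in which a raw match score is centered and scaled by the statistics of the selected cohort scores; a minimal sketch under that assumption (the names are ours):

```python
import numpy as np

def t_norm(raw_score, cohort_scores):
    """Standard T-norm: normalize a raw match score by cohort mean and std."""
    cohort = np.asarray(cohort_scores, dtype=float)
    mu, sigma = cohort.mean(), cohort.std()
    return (raw_score - mu) / sigma if sigma > 0 else raw_score - mu
```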

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection     EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                      0.0        100
Max fusion rule + min-max normalization                      0.0        100
Sum fusion rule + T-norm (hybrid heuristic)                  0.0        100
Max fusion rule + T-norm (hybrid heuristic)                  0.0        100

[Figure 17: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are cropped and the number of principal components varies between one and six.]

The effect of face localization plays a central role in increasing the recognition accuracy to 100% in all four cases. Due to face localization, however, the numbers of feature points extracted from the face images are found to differ from those of the nonlocalized faces obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

6.3.2. BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized. However, the faces provided in the BioID face database are captured in a variety of environments and illumination conditions.


[Figure 18: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the accuracies without the cohort selection method, and the last two cases show the accuracies with it. These accuracies are obtained after a face is localized.]

[Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.]

In addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching strategy is challenging, because the positions and locations of frontal view images may be tracked in a variety of environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and that yields encouraging results in these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly for principal components in the range of one to six.

[Figure 20: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.]

Table 5: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy        EER (%)    Recognition accuracy (%)
Principal component 1         8.53       91.47
Principal component 2         2.78       97.22
Principal component 3         8.5        91.5
Principal component 4         13.89      86.11
Principal component 5         13.89      86.11
Principal component 6         11.12      88.88

For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint (no localization of the face image) is validated. An EER of 2.78%, the lowest among all six EERs, is obtained. For principal components 4 and 5, we obtain an EER and recognition accuracy of 13.89% and 86.11%, respectively. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 shows the same phenomena, depicting a curve whose points denote the recognition accuracies corresponding to principal components between one and six. The ROC curves determined on the BioID database are shown in Figure 20 for unlocalized faces.


[Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.]

[Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.]

After considering the Type II constraint, we consider the Type I constraint, in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under the Type I constraint.

As shown in Figures 22 and 23, the principal component varies between one and six, and the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100% in recognizing faces. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components and different false acceptance rates (FARs). The principal components (2, 3, and 6) for which the recognition accuracy outperforms the other components yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components.

[Figure 23: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.]

Table 6: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy        EER (%)    Recognition accuracy (%)
Principal component 1         8.5        91.5
Principal component 2         0          100
Principal component 3         0          100
Principal component 4         5.59       94.41
Principal component 5         5.98       94.02
Principal component 6         0          100

The points on the curve marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when the localized face is used.

To validate the Type II constraint for the fusions and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database. We also exploit the hybrid heuristics-based cohort selection together with the fusion rules. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces.


[Figure 24: Recognition accuracy versus principal component when the face localization task is performed.]

Table 7: EER and recognition accuracies of the proposed face matching strategy on the BioID face database when face localization is not performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, a hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection     EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                      5.55       94.45
Max fusion rule + min-max normalization                      0          100
Sum fusion rule + T-norm (hybrid heuristic)                  0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)                  0          100

As shown in Table 7 and Figure 25, for the first two matching strategies, in which the "sum" and "max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, respectively, with EERs of 5.55% and 0%. In this case, the "max" fusion rule outperforms the "sum" fusion rule, which is further illustrated by the ROC curves shown in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, when the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules, respectively.

In the last segment of the experiment, we observe the performance, measured in terms of recognition accuracy and EER, when faces are localized; the matching paradigms listed in Table 7 have thus also been verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies. The combination of the "sum" fusion rule and hybrid heuristics shows only a minimal drop in accuracy of 0.65%, and the remaining combination of the "max" fusion rule and the hybrid heuristics-based cohort selection method achieves an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the "sum" rule, which is attributed to a change in the produced cohort subset for both types of constraints (Type I and Type II).

[Figure 25: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the accuracies without the cohort selection method, and the last two cases show the accuracies with it. These accuracies are obtained when a face is not localized.]

[Figure 26: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are not cropped and the number of principal components varies between one and six.]

In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces are found in unrestricted conditions. Face recognition in the wild is challenging due to the nature of the face acquisition method in an unconstrained environment. Not all face images are frontal; images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth.


Table 8: EERs and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection     EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                      0          100
Max fusion rule + min-max normalization                      0          100
Sum fusion rule + T-norm (hybrid heuristic)                  0.65       99.35
Max fusion rule + T-norm (hybrid heuristic)                  0          100

[Figure 27: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are cropped and the number of principal components varies between one and six.]

As our framework is based on only the first six principal components and SIFT features, it requires the incorporation of various tools: tools to detect and crop the face region and discard background as far as possible, with the detected face regions upsampled or downsampled to form face image vectors of uniform dimension for PCA; and tools to estimate pose and then apply 2D frontalization to compensate for pose variance, leading to fewer principal components to consider.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models.

[Figure 28: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the accuracies without the cohort selection method, and the last two cases show the accuracies with it. These accuracies are obtained after a face is localized.]

These models include some cloud computing-based face recognition algorithms, which are limited in number, and some traditional face recognition systems, which are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases for comparison with other methods; the second perspective, however, does not utilize the concept of cloud computing. To compare the experimental results, we compare the proposed system with two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited in number, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy. Table 9 also lists the number of training samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                      Face database    Number of face images    Number of distinct subjects    Recognition accuracy (%)    Error (%)
Cloud vision [36]           ORL              400                      40                             97.08                       2.92
Mobile cloud [37]           Local DB         50                       5                              85                          15
Facial features [38]        BioID            1521                     23                             92.35                       7.65
2D-PCA + PSO-SVM [39]       BioID            1521                     23                             95.65                       4.35
Proposed Type I             BioID            1521                     23                             100                         0
Proposed Type I             FEI              2800                     20                             100                         0
Proposed Type II            BioID            1521                     23                             97.22                       2.78
Proposed Type II            FEI              2800                     20                             99.5                        0.5
Proposed Type I + fusion    BioID            1521                     23                             100                         0
Proposed Type I + fusion    FEI              2800                     20                             100                         0
Proposed Type II + fusion   BioID            1521                     23                             100                         0
Proposed Type II + fusion   FEI              2800                     20                             100                         0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules to run, as a set of functions of the length of the input. The time complexity is estimated by counting the number of operations performed by the different modules or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each individual module is computed first, and then the overall complexity of the ensemble network is computed by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let $d$ be the number of pixels (height $\times$ width) determined from each grayscale face image, and let the number of face images be $N$. PCA computation has the following steps:

(i) Finding the mean of $N$ samples is $O(Nd)$ ($N$ additions of $d$-dimensional vectors, after which the summation is divided by $N$).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the $d(d+1)/2$ elements, $N$ multiplications and additions are required, leading to $O(Nd^2)$ time complexity. Let the $d \times N$ dimensional mean-centered data matrix be $M$.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of $MM^T$ ($d \times d$), one computes $M^T M$ ($N \times N$), which requires $d$ multiplications and $d$ additions for each of the $N^2$ elements, hence $O(N^2 d)$ time complexity (generally $N \ll d$).

(iv) Eigendecomposition of the $M^T M$ matrix by the SVD method requires $O(N^3)$.

(v) Sorting the eigenvectors (each $d \times 1$ dimensional) in descending order of the eigenvalues requires $O(N^2)$; taking only the first six principal component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two vectors, resulting in a scalar value. Hence, $d$ multiplications and $d$ additions for each projection lead to $O(d)$; six such projections require $O(6d)$.
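The steps above can be made concrete with a short sketch of eigenface computation via the KL trick (an $N \times N$ eigendecomposition instead of $d \times d$); this is illustrative only, not the authors' implementation:

```python
import numpy as np

def pca_eigenfaces(X, npc=6):
    """X: d x N matrix of N vectorized face images; returns mean and npc eigenfaces."""
    d, N = X.shape
    mean = X.mean(axis=1, keepdims=True)              # O(Nd)
    M = X - mean                                      # mean-centered data matrix
    gram = M.T @ M                                    # N x N matrix: O(N^2 d)
    vals, vecs = np.linalg.eigh(gram)                 # eigendecomposition: O(N^3)
    order = np.argsort(vals)[::-1][:npc]              # descending eigenvalues
    eigenfaces = M @ vecs[:, order]                   # map back to d-dimensional space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # unit-norm eigenfaces
    return mean, eigenfaces

# Projecting a probe vector x: one O(d) dot product per eigenface.
# coefficients = eigenfaces.T @ (x - mean.ravel())
```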

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be $M \times N$ pixels, and let the face image be represented as a column vector of dimension $d$ ($M \times N$). A Gaussian kernel of dimension $w \times w$ is used. Each octave has $s+3$ scales, and a total of $L$ octaves are used. The important phases of SIFT are as follows:

(i) Extrema detection:

(a) Compute the scale-space:

(1) In each scale, $w^2$ multiplications and $w^2 - 1$ additions are performed per pixel by the convolution operation, so $O(dw^2)$.
(2) There are $s+3$ scales in each octave, so $O(dw^2(s+3))$.
(3) $L$ is the number of octaves, so $O(Ldw^2(s+3))$.
(4) Overall, $O(dw^2 s)$ is found.

(b) Compute the $s+2$ Difference of Gaussians (DoG) images for each of the $L$ octaves, with scale factor $k = 2^{1/s}$: $O((s+2)d)$ per octave, so for $L$ octaves, $O(sd)$.

(c) Extrema detection: $O(sd)$.

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let $\alpha d$ be the number of surviving pixels, so $O(\alpha d s)$.

(iii) Orientation assignment: $O(sd)$.


(iv) Keypoint descriptor computation: if a $p \times p$ neighborhood of the keypoint is considered, then $O(p^2 d)$ is found.

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by the Euclidean distance requires 128 subtractions, squaring of each of the 128 differences, 127 additions to sum them, and one square root at the end, so the time complexity is linear, $O(n)$ with $n = 128$. Let the $i$th eigenface of the probe face image and of the reference face image have $k_1$ and $k_2$ surviving keypoints, respectively. Each keypoint from $k_1$ is compared to each of the $k_2$ keypoints by the Euclidean distance, so the complexity is $O(k_1 k_2)$ for a single eigenface pair and $O(6n^2)$ for six pairs of eigenfaces ($n$ = number of keypoints). If there are $M$ reference faces in the gallery, then the total complexity is $O(6Mn^2)$.
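A sketch of the brute-force descriptor matching counted above; the ratio test is one common acceptance criterion and is our assumption here, since the paper only specifies $k$-NN comparison of 128-element descriptors:

```python
import numpy as np

def count_matches(desc_probe, desc_ref, ratio=0.8):
    """Brute-force O(k1 * k2) Euclidean matching of k1 x 128 probe descriptors
    against k2 x 128 reference descriptors."""
    matches = 0
    for dp in desc_probe:
        dists = np.sort(np.linalg.norm(desc_ref - dp, axis=1))  # O(k2) distances
        if len(dists) > 1 and dists[0] < ratio * dists[1]:      # nearest clearly best
            matches += 1
    return matches
```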

(d) Time Complexity of Fusion. As the domains of the six individual matchers are different, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtraction operations and one division operation, so it takes constant time, $O(1)$. The $n$ pairs of probe and gallery images in the $i$th principal component require $O(n)$; for six principal components, $O(6n)$. Finally, the sum fusion requires five summations for each pair of probe and reference faces, a constant time $O(1)$; subsequently, for $n$ pairs of probe and reference face images, it requires $O(n)$.
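A sketch of the min-max normalization and the two fusion rules whose operation counts are given above (the names are ours):

```python
import numpy as np

def min_max(scores):
    """Min-max normalization: two subtractions and one division per value."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(score_lists, rule="sum"):
    """Fuse the six per-instance score lists into a single set of match scores."""
    normalized = np.vstack([min_max(s) for s in score_lists])   # 6 x n array
    return normalized.sum(axis=0) if rule == "sum" else normalized.max(axis=0)
```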

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations to be performed: computation of correlations in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions into the CLOSED list according to insertion sort, and finally division of the CLOSED list of correlation values of size $n$ into three disjoint sets. The first two operations take constant time, the third operation takes $O(n^2)$ as it follows the convention of insertion sort, and the last operation takes linear time. Overall, the time required by cohort selection is $O(n^2)$.
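The four operations above can be sketched as follows; the correlation measure and the three-way split are simplified stand-ins for the hybrid heuristic statistics described earlier:

```python
import numpy as np

def cohort_select(probe_scores, candidate_scores):
    """Sketch: correlations -> OPEN list -> insertion-sorted CLOSED list (O(n^2))
    -> three disjoint subsets (linear split)."""
    open_list = [float(np.corrcoef(probe_scores, c)[0, 1])
                 for c in candidate_scores]      # correlations in the search space
    closed = []
    for v in open_list:                          # insertion sort into CLOSED list
        i = 0
        while i < len(closed) and closed[i] > v:
            i += 1
        closed.insert(i, v)
    n = len(closed)
    return closed[:n // 3], closed[n // 3:2 * n // 3], closed[2 * n // 3:]
```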

Now, the overall time complexity $T$ of the ensemble network is calculated as follows:

$$T = \max\bigl(\max(T_1) + \max(T_2) + \max(T_3) + \max(T_4) + \max(T_5)\bigr) = \max\bigl(O(n^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)\bigr) \approx O(n^3), \tag{8}$$

where $T_1$ is the time complexity of PCA computation, $T_2$ is the time complexity of SIFT keypoint extraction, $T_3$ is the time complexity of matching, $T_4$ is the time complexity of fusion, and $T_5$ is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is $O(n^3)$, while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, ranging from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The $k$-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve the total performance of the recognition system in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved either from a remotely placed computer terminal or from mobile phones or tablet PCs. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.
[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.
[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I629-I632, Honolulu, Hawaii, USA, April 2007.
[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.
[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599-616, 2009.
[6] L. Heilig and S. Vob, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266-278, 2014.
[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654-659, September 2012.
[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255-261, Baltimore, Md, USA, October 2014.
[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86-101, Porto, Portugal, 2012.
[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84-91, Seattle, Wash, USA, June 1994.
[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195-200, 2003.
[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140-146, 2012.
[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553-1555, 2004.
[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211-219, 2011.
[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, 1997.
[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63-68, Alghero, Italy, June 2007.
[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028-1037, 2011.
[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346-359, 2008.
[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506-II513, July 2004.
[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1-5, Hong Kong, 2014.
[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58-63, IEEE, Limerick, Ireland, August 2012.
[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109-122, Zurich, Switzerland, September 2014.
[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.
[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450-455, 2005.
[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229-237, 2012.
[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335-2338, Tsukuba, Japan, November 2012.
[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.
[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270-1277, 2012.
[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063-2075, 2014.
[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175-185, 1992.
[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51-60, 2002.
[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902-913, 2010.
[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6-8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90-95, Springer, Berlin, Germany, 2001.
[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151-155, 2014.
[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23-35, 2013.
[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.
[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140-145, Bangkok, Thailand, November 2012.


facilitate the recognition of bulk faces from devices such as CCTV cameras, webcams, mobile phones, and tablet PCs. This paradigm can be employed to handle a large number of people at different times, whereas cloud-enabled services enable enrollment and the matching process to be conducted remotely.

1.1. Cloud Framework. With the advancement of cloud computing [5, 6], many organizations are rapidly adopting IT-enabled services that are hosted by cloud service providers. Because these services are provided over a network, the cost of hosting services is fixed and predictable. Because cloud computing is very convenient and provides on-demand access to a shared pool of configurable computing resources (servers, networks, storage, applications, and services) over a network, this on-demand service can be availed by organizations with minimal resource efforts, relying on reliable cloud service providers who host the cloud infrastructures. Three types of cloud computing models are available, namely, Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as a Service (IaaS); they are collectively known as the SPI model. The SaaS model includes various software and applications that are hosted and run by vendors and service providers and made available to customers over a network. The PaaS model includes delivering operating systems and development tools to customers over a network without the need to download and install them. The IaaS model involves requesting on-demand services of servers, storage, networking equipment, and various support tools over a network.

Cloud-based biometric infrastructures [7, 8] can be developed and hosted at a service provider's location. The on-demand services are available to businesses via network connectivity. The three models (PaaS, SaaS, and IaaS) can subsequently be employed for appropriate physiological or behavioral biometrics applications. Servers and storage can be employed for storing biometric templates, which can be used for verification or identification. Biometric sensors can be installed on business premises with Internet connectivity via various networks and can be connected to the cloud infrastructures to access stored templates for matching and enrollment. In addition, the biometric sensors are employed for enrollment and matching, and the process can run with the help of user interfaces, applications, support tools, networking equipment, storage, servers, and operating systems at the service provider's end, where the biometrics cloud is hosted. Businesses and organizations that want to avail a cloud-based facility for enrollment, authentication, and identification purposes need to have biometric sensors and Internet connectivity. The SPI model can be employed for preprocessing, feature extraction, template generation, and face matching and decisions, which can be modeled as software modules and application programs hosted at a service provider's cloud facility.

A few biometrics authentication systems [7-9] have been successfully employed in cloud computing infrastructures. They have facilitated the use of biometrics cloud concepts to minimize the efforts of resource utilization and bulk matching.

1.2. Studies on Baseline Face Recognition. Because we introduce a cloud-based biometric facility to be integrated with a face recognition system, a brief review of baseline face recognition algorithms is advantageous for developing an efficient cloud-enabled biometric system. Face recognition [1, 2] is a long-standing computer vision problem that has gained the attention of researchers, and appearance-based techniques are employed to analyze the face and reduce dimensionality. Projecting a face onto a sufficiently low-dimensional feature space while retaining the distinctive facial characteristics in a feature vector plays a crucial role in recognizing typical faces. The application of appearance-based approaches in face recognition is discussed in [10-15]. Principal component analysis (PCA), linear discriminant analysis (LDA), kernel PCA, Fisher linear discriminant analysis (FLDA), canonical covariates, and the fusion of PCA and LDA are popular approaches in face recognition.

Feature-based techniques [16-18] have been introduced and successfully applied to represent facial characteristics and encode them into invariant descriptors for face analysis and recognition. Many models, such as EBGM [16], SIFT [17-19], and SURF [20], have been employed in face recognition. Local feature-based descriptors can also be employed for object detection, object recognition, and image retrieval problems. These descriptors are robust to lighting conditions, image locations, and projective transformations and are insensitive to noise and image correlations. Local descriptors are localized and detected at local peaks in a scale-space search; after a number of stage-filtering approaches, interest points that are stable over transformations are preserved. Two requirements must be satisfied when local feature descriptors are employed: first, the algorithm must have the ability to create a distinctive feature description for differentiating one interest point from other interest points; second, the algorithm should be invariant to camera position, subject position, and lighting conditions. A situation may arise in which a high-dimensional feature space is projected onto a low-dimensional feature space and local descriptors vary from one feature space to another; thus, the accuracy may change for the same object when interest points are detected in the scale-space over a number of encountered transformations. Due to scaling and the low number of projected variables, a higher variance among the observed variables should be retained in the high-dimensional data of a pattern. A reduced number of projected variables retain their characteristics even after they are represented in a low-dimensional feature space. We can achieve this description when appearance-based techniques are applied to raw images, without the need for preprocessing techniques to restore true pixels or remove noise due to image-capturing sensors. A number of representations exist from which we can extract invariant interest points. One appearance-based technique is principal component analysis (PCA) [11, 12].

PCA is a simple dimensionality reduction technique that has many potential applications in computer vision.


However, despite a few shortcomings (it is restricted to orthogonal linear combinations and makes implicit assumptions of Gaussian distributions), PCA has proven to be an acclaimed technique due to its simplicity. In this study, PCA is combined with a feature-based technique (the SIFT descriptor) for a number of face column instances to be generated by principal components. Face projections of column vectors range from one to six, in which the interval of one can produce high variances in the observed variables of the corresponding face image after projecting the high-dimensional face matrix onto a low-dimensional feature space. Thus, we can obtain a set of low-dimensional feature spaces that correspond to each column of a single face image. The principal components are decided by a sequence of six integer numbers, ranging from 1 to 6, considered in order, based on which six face instances are generated. Unlike a random sequence, an ordered default sequence is taken from the mathematical definition of the eigenvector, and the arithmetic distance of any point to its predecessor and successor principal component is always one. The SIFT descriptor, which is well suited to these representations, can produce multiple sets of invariant interest points without changing the dimension of each of the keypoint descriptors. This process can change the size of each vector consisting of keypoint descriptors constructed on each projected PCA-characterized face instance. In addition, the SIFT descriptor is robust to partial illumination, projective transformations, image locations, rotation, and scaling. The efficacy of the proposed approach has been tested on frontal view face images with mixed facial expressions; however, efficacy is compromised when the head position of a face image is modified.

1.3. Relevant Face Recognition Approaches. In this section, we introduce some related studies and discuss their usefulness in face recognition. For example, the algorithm proposed in [21] discusses a method that employs the local gradient patch around each SIFT point neighborhood and creates PCA-based local descriptors that are compact and invariant. In contrast, the proposed method does not encode a neighborhood gradient patch of each point; instead, it makes a projected feature representation in the low-dimensional feature space with variable numbers of principal components and extracts SIFT interest points from the reduced face instances. Another study [22] examines the usefulness of the SIFT descriptor and PCA-WT (WT: wavelet transform) in face recognition: an eigenface is extracted from the PCA-wavelets representation, and SIFT points are subsequently detected and encoded into a feature vector; however, the computational time increases due to the complex and layered representation. A comparative study [23] employs PCA for the neighborhood gradient patch representation around each SIFT point and a SURF point for invariant feature detection and encoding. Although PCA reduces the dimensions of the keypoint descriptor and the performances of the SIFT and SURF descriptors are compared, that work is applied not to face recognition but to the image retrieval problem.

The remainder of the manuscript is organized as follows. Section 2 presents a brief outline of the cloud-based face recognition system. Short descriptions of the SIFT descriptor and PCA are given in Section 3. Section 4 describes the framework and methodology of the proposed method. The fusion of matching proximities and heuristics-based cohort selection are presented in Section 5. The evaluation of the proposed technique and comparisons with other face recognition systems are exhibited in Section 6. Section 7 computes the time complexity. Conclusions and remarks are made in Section 8.

2. Outline of Cloud-Based Face Recognition

To develop a cloud-based face recognition system, a cloud infrastructure [5, 6] has been set up with the help of remote servers, and a webcam-enabled client terminal and a tablet PC are connected to the remote servers via Internet connectivity. Two independent IP addresses are provided, one to the client machine and one to the tablet PC. These two IPs help the cloud engine to identify the client machines from the point at which the recognition task is performed. Figure 1 shows the outline of the cloud-enabled face recognition infrastructure, in which we establish three points with three different devices for enrollment and recognition tasks. All other software application modules (i.e., preprocessing, feature extraction, template generation, matching, fusion, and decision) and face databases are placed on servers, and a storage device is maintained in the cloud environment. During authentication or identification, sample face images are captured via cameras installed in both the client machine and the tablet PC, and the captured faces are sent to a remote server. In the remote server, application software is invoked to perform the necessary tasks. After the matching of probe face images with the gallery images stored in the database, matching proximity is generated, and a decision outcome is sent to the client machine over the network. At the client site, the client machine displays the correct decision on the screen, and the entry of malicious users is restricted. Although the proposed system is a cloud-based face recognition system, our main focus lies on the baseline face recognition system. After giving a brief introduction to cloud-based infrastructures for face recognition, and as we use publicly available face databases such as FEI and BioID, we assume that face images are already being captured with the sensor installed in the client machine and are then sent to a remote server for matching and decision.

The proposed approach is divided into the following steps:

(a) As part of the baseline face recognition system, the raw face image is localized and then aligned using the algorithm described in [24]. During the experiments, face images are employed both with and without localizing the face part.

(b) A histogram equalization technique [25], which is considered the most elementary image-enhancement technique, is applied in this step to enhance the contrast of the face image.

(c) In this step, PCA [11] is applied to obtain multiple face instances, which are determined from each column of the original image (they are not the eigenfaces), by varying the number of principal components from one to six at one distance unit.


[Figure 1: Cloud-enabled face recognition infrastructure. Enrollment/recognition requests from the client devices reach the cloud, which hosts PaaS software tools, SaaS applications, IaaS infrastructures, and storage/DB; the face image, eigenface, SIFT points, and matching proximity are processed in the cloud.]

of the original image (they are not the eigenfaces) byvarying principal components from one to six at onedistance unit

(d) From each instance representation, SIFT points [17, 18] are extracted in the scale-space to form an encoded feature vector of keypoint descriptors (K_i); only the keypoint descriptors, rather than the spatial location, scale, and orientation, are considered feature points.

(e) SIFT interest points extracted from the six different face instances (npc = 1, 2, 3, 4, 5, and 6) of a target face form six different feature vectors. They are employed to separately match the feature vectors obtained from a probe face; npc refers to the number of principal components. (Steps (b)-(e) are sketched in code after this list.)

(f) In this step, matching proximities are determined from the different matching modules and are subsequently fused using the "sum" and "max" fusion rules; a decision is made based on the fused matching scores.

(g) To enhance the performance and reduce the computational complexity, we exploit a heuristic-based cohort selection method during matching and apply a T-norm normalization technique to normalize the cohort scores.
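To make steps (b)-(e) concrete, the following minimal Python sketch builds the six PCA-characterized face instances of one face and extracts SIFT descriptors from each. It assumes OpenCV (cv2) and scikit-learn are available; the reconstruction-based reading of step (c), projecting the image's columns onto npc components and back, is our interpretation, and the function name is illustrative.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def pca_face_instances_with_sift(gray_face, npc_values=(1, 2, 3, 4, 5, 6)):
    """Steps (b)-(e): enhance contrast, build one face instance per npc,
    and extract the 128-D SIFT keypoint descriptors K_i from each."""
    enhanced = cv2.equalizeHist(gray_face)            # step (b)
    sift = cv2.SIFT_create()
    feature_vectors = {}
    for npc in npc_values:
        pca = PCA(n_components=npc)                   # step (c): image columns as features
        reduced = pca.fit_transform(enhanced.astype(np.float64))
        instance = pca.inverse_transform(reduced)     # PCA-characterized face instance
        instance = cv2.normalize(instance, None, 0, 255,
                                 cv2.NORM_MINMAX).astype(np.uint8)
        _, descriptors = sift.detectAndCompute(instance, None)  # steps (d)-(e)
        feature_vectors[npc] = descriptors
    return feature_vectors
```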

3. Brief Review of SIFT Descriptor and PCA

In this section, the scale-invariant feature transform (SIFT) and principal component analysis (PCA) are described. The SIFT descriptor and PCA are well-known feature-based and appearance-based techniques, respectively, that are successfully employed in many face recognition systems.

The SIFT descriptor [17–19] has gained significant attention due to its invariant nature and its ability to detect stable interest points around the extrema. It has been proven to be invariant to rotation, scaling, a projective transform, and partial illumination. The SIFT descriptor is robust to image noise and low-level transformations of images. In the proposed approach, the SIFT descriptor can reduce the face matching complexity and computation time while detecting stable interest points on a face image. SIFT points are detected via a four-stage filtering approach, namely, (a) scale-space detection, (b) keypoint localization, (c) orientation assignment, and (d) keypoint descriptor computation. The keypoint descriptors are then employed to generate feature vectors for face matching.
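As a quick illustration of what the four stages yield, the snippet below (a sketch assuming OpenCV's SIFT implementation and an illustrative image path) prints each keypoint's location, scale, and orientation alongside the 128-dimensional descriptor matrix that the matching stage actually consumes.

```python
import cv2

# Illustrative path; any grayscale face image will do
img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries spatial location, scale, and orientation;
# only the descriptor rows are used as feature points in this paper.
for kp in keypoints[:3]:
    print("location:", kp.pt, "scale:", kp.size, "orientation:", kp.angle)
print("descriptor matrix shape:", descriptors.shape)  # (num_keypoints, 128)
```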

The proposed face matching algorithm aims to produce multiple face representations (six face instances), which are determined from each column of the face image. These face instances exhibit distinctive characteristics that are determined by reducing the dimensions of the features that comprise the intensity values. Reduced dimensionality is achieved by applying a simple feature reduction technique known as principal component analysis (PCA). PCA projects a high-dimensional face image onto a low-dimensional feature space, where the face instance of higher variance, such as the eigenvectors of the observed variables, is determined. Details on PCA are provided in [11, 12].

4. Framework and Methodology

The main facial features used for frontal face recognition in the proposed experiment are a set of 128-dimensional vectors of square patches (regions) centred at detected and localized keypoints at multiple scales. Each vector describes the local structure around a keypoint at the computed scale. A keypoint detected in a uniform region cannot be discriminating, because a scale or rotational change does not make the point distinguishable from its neighbors. SIFT-detected keypoints [17, 18] on a frontal face are basically various corner points, such as the corners of the lips, the corners of the eyes, and the nonuniform contour between the nose and cheek, which exhibit intensity changes in two directions. SIFT detects these keypoints by approximating the Laplacian of Gaussian (LoG) in terms of the Difference of Gaussians (DoG). As the Gaussian pyramid builds the image at various scales, the SIFT-extracted keypoints are scale invariant, and the computed descriptors remain discriminating from coarse to fine matching. The keypoints could instead have been detected by the Harris corner detector (not scale invariant) or the Hessian corner detection method, but many of the points so detected might not be repeatable under large scale change. Further, the 128-dimensional feature point descriptor obtained by the SIFT feature extraction method is orientation normalized and hence rotation invariant. Additionally, the SIFT feature descriptor is normalized to unit length to reduce the effect of contrast; the maximum value of each dimension of the vector is thresholded to 0.2, and the vector is normalized once again to make it robust to a certain range of irregular illumination.
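The normalize-clamp-renormalize step just described can be written compactly; this is a minimal sketch of that standard post-processing (the 0.2 threshold comes from the text above, the helper name is ours).

```python
import numpy as np

def normalize_sift_descriptor(d, clamp=0.2):
    """Unit-normalize a 128-D SIFT descriptor, threshold each dimension
    at 0.2 to damp large gradient magnitudes, then renormalize."""
    d = np.asarray(d, dtype=float)
    d = d / (np.linalg.norm(d) + 1e-12)      # unit length removes contrast scaling
    d = np.minimum(d, clamp)                 # clamp for illumination robustness
    return d / (np.linalg.norm(d) + 1e-12)   # renormalize after clamping

# Example with a stand-in descriptor
print(normalize_sift_descriptor(np.abs(np.random.randn(128)))[:5])
```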

The proposed face matching method has been developed with the concept of a varying number of principal components (npc = 1, 2, 3, 4, 5, and 6). These variations yield the following properties when the system computes face instances for matching and recognition.

(a) The projection of a face image onto several face instances facilitates the construction of independent face matchers, which can vary in performance; as SIFT descriptors are applied to extract invariant interest points, each instance-based matcher is verified for producing matching proximities.

(b) An individual matcher exhibits its own strength in recognizing faces, and the number of SIFT interest points extracted from each face instance changes substantially from one projected face to another as an effect of varying the number of principal components.

(c) The performance of the overall system becomes more robust when the individual performances are consolidated into a single matcher by fusion of matching scores.

(d) Let ε_i, i = 1, 2, …, m, be the m eigenvalues arranged in descending order, and let ε_i be associated with eigenvector e_i (the i-th principal eigenface in the face space). Then the percentage of variance accounted for by the i-th principal component is

(ε_i / Σ_{j=1}^{m} ε_j) × 100.

Generally, the first few principal components are sufficient to capture more than 95% of the variance, but that number of components depends on the training set of the image space; it varies with the face dataset used. In our experiments, we observed that taking as few as 6 principal components gives a good result and captures variability very close to the total variability produced during the generation of multiple face instances. (A short sketch of this variance computation is given after this list.)

Let the training face dataset contain n instances, each of uniform size p (h × w pixels); then the face space contains n sample points in p dimensions, and we can derive at most n − 1 eigenvectors, but each eigenvector is still p-dimensional (n ≪ p). Now, to compare two face images, each containing p pixels (i.e., a p-dimensional vector), each face image must be projected onto each of the n − 1 eigenvectors (each eigenvector represents one axis of the new (n − 1)-dimensional coordinate system). So from each p-dimensional face we derive n − 1 scalar values by the dot product of the mean-centred image-space face with each of the n − 1 face-space eigenvectors. In the backward direction, given the n − 1 scalar values, we can reconstruct the original face image by the weighted combination of these eigenfaces plus the mean. In this reconstruction process, the i-th eigenface contributes more than the (i + 1)-th eigenface if they are ordered by decreasing eigenvalues. How accurate the reconstruction is depends on how many principal components (say k, k = 1, 2, …, n − 1) we take into consideration. In practice, k need not equal n − 1 to reconstruct the face satisfactorily. After a specific value of k (say t), the contribution from the (t + 1)-th eigenvector up to the (n − 1)-th is so negligible that it may be discarded without losing significant information; indeed, there are methods for choosing t, such as the Kaiser criterion (discard eigenvectors corresponding to eigenvalues less than 1) [11, 12], the Scree test, and so forth. Sometimes the Kaiser method retains too many eigenvectors, whereas the Scree test retains too few; in essence, the exact value of t is dataset dependent. In Figures 4 and 6, it is clearly shown that as each additional principal component is added, the captured variability increases rapidly within the first 6 components. From the 6th principal component onwards, the variance capture is almost flat but does not reach the total variability (the 100% marked line) until the last principal component. So despite the small contribution of principal components 7 onwards, they are not redundant.

(e) Distinctive and classified characteristics, detected in the reduced low-dimensional face instance, support the integration of local texture information with local shape distortion and illumination changes of neighboring pixels around each keypoint, which comprises a vector of 128 elements of invariant nature.
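Referring to item (d), the per-component and cumulative variance-explained figures can be computed directly from the eigenvalue spectrum; below is a minimal sketch (the eigenvalues are illustrative, not taken from the paper's data).

```python
import numpy as np

def variance_explained(eigenvalues):
    """Percentage of variance per principal component, (eps_i / sum eps) * 100,
    plus the cumulative percentage, with eigenvalues sorted in descending order."""
    eps = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    pct = eps / eps.sum() * 100.0
    return pct, np.cumsum(pct)

pct, cum = variance_explained([48, 20, 12, 8, 5, 3, 2, 1, 0.7, 0.3])
print("per-component %:", np.round(pct, 2))
print("cumulative % after 6 components:", round(cum[5], 2))
```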

The proposed methodology is performed from two different perspectives: the first system is implemented without face detection and localization, and the second system implements a face matcher with a localized and detected face image.

4.1. Face Matching: Type I. During the initial stage of face recognition, we enhance the contrast of a face image by applying a histogram equalization technique. We apply this contrast enhancement technique to increase the count of SIFT points detected in the local scale-space of a face image. The face area is localized and aligned using the algorithm given in [24]. In the subsequent steps, the face area is projected onto the low-dimensional feature space and an approximated face instance is formed.


Figure 2: Six different face instances of a single cropped face image with variations of principal components.

SIFT keypoints are detected and extracted from this approximated face instance, and a feature vector that consists of interest points is created. In this experiment, six different face instances are generated by varying the number of principal components from one to six, and they are derived from a single face image.

The PCA-characterized face instances are shown in Figure 2; they are arranged according to the order of the considered principal components. The same set of PCA-characterized face instances is extracted from a probe face, and feature vectors consisting of SIFT interest points are formed. Matching is performed with the corresponding face instances in terms of the SIFT points obtained from the reference face instances. We apply a k-nearest neighbor (k-NN) approach [26] to establish correspondence and obtain the number of pairs of matching keypoints. Figure 3 depicts matching pairs of SIFT keypoints on two sets of face instances, which correspond to reference and probe faces. Figure 4 shows the amount of variance captured by all principal components; because the first principal component explains approximately 70% of the variance, we expect that additional components are probably needed. The first four principal components explain the total variability in the face image depicted in Figure 2.
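The correspondence step can be sketched with OpenCV's brute-force matcher; here k-NN with k = 2 plus a ratio test stands in for the k-NN approach of [26] (the 0.75 ratio is a common default, not a figure from the paper), and the count of matched pairs serves as the raw matching proximity.

```python
import cv2

def count_matching_keypoints(desc_ref, desc_probe, ratio=0.75):
    """Count corresponding SIFT keypoint pairs between a reference and a probe
    face instance using k-NN (k=2) matching with a ratio test."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_ref, desc_probe, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)   # raw matching proximity of this instance-based matcher
```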

4.2. Face Matching: Type II. The second type of face matching strategy utilizes the outliers that are available around the meaningful facial area to be recognized. This type of face matching examines the effect of outliers and legitimate features employed together for face recognition. Outliers may be located on the forehead above the legitimate and localized face area, around the face area to be considered, outside the meaningful area, on both ears, and on the head. However, the effect of outliers is limited because the legitimate interest points are primarily detected in major salient areas; this can be a useful analysis when face area localization cannot be performed, and sometimes outliers are an effective addition to the face matching process.

As in the Type I face matching strategy, we employ dimensionality reduction, project an entire face onto a low-dimensional feature space using PCA, and construct six different face instances with principal components that vary between one and six. We extract SIFT keypoints from the six multiscale face instances and create a set of feature vectors. The face matching task is performed using the k-NN approach, and matching scores are generated as matching proximities from a pair of reference and probe faces. The matching scores are passed through the fusion module and consolidated to form an integrated vector of matching proximities. Figure 5 demonstrates matching between pairs of face instances of an entire face, corresponding to the reference and probe faces, for a certain number of principal components. Figure 6 shows the amount of variance captured by all principal components; because the first principal component explains less than 50% of the variance, we expect that additional components are needed. The first two principal components explain approximately two-thirds of the total variability in the face image depicted in Figure 5.

5. Fusion of Matching Proximities

5.1. Baseline Approach to Fusion. To fuse [26–28] the matching proximities computed by all matchers (one per number of principal components) into a new vector, we apply two popular fusion rules, namely, "sum" and "max" [26]. Let ms = [ms_ij] be the match scores generated by multiple matchers (j), where i = 1, 2, …, n and j = 1, 2, …, m. Here, n denotes the number of match scores generated by each matcher, and m represents the number of matchers presented to the face matching process. Consider the labels ω_0 and ω_1 to denote two different classes, referred to as the genuine class and the imposter class, respectively. We can assign ms to either label ω_0 or label ω_1 based on the class-conditional probability. The probability of error can be minimized by applying Bayesian decision theory [29] as follows:

Assign ms → ω_i if

P(ω_i | ms) > P(ω_j | ms), for i ≠ j, i, j = 0, 1. (1)

The posterior probability P(ω_i | ms) can be derived from the class-conditional density function p(ms | ω_i) using the Bayes formula:

P(ω_i | ms) = p(ms | ω_i) P(ω_i) / p(ms). (2)

Here, P(ω_i) is the prior probability of the class label ω_i, and p(ms) denotes the probability of encountering ms. Thus, (1) can be rewritten as follows:

Assign ms → ω_i if

LR > τ, where LR = p(ms | ω_i) / p(ms | ω_j), i ≠ j, i, j = 0, 1. (3)

The ratio LR is known as the likelihood ratio, and τ is the predefined threshold. The class-conditional density p(ms | ω_i) can be determined from the training match score vectors using either parametric or nonparametric techniques. The class-conditional probability density function can then be extended to the "sum" and "max" fusion rules. The max fusion rule can be written as follows:

p(ms | ω_i) = max_{j=1,2,…,m} p(ms_j | ω_i), i = 0, 1. (4)


Figure 3: Matching pairs of face instances with variations of principal components for a pair of corresponding reference and probe face images.

Figure 4: Amount of variance accounted for by each principal component of the face image depicted in Figure 2 (variance explained, in percent, versus principal component).

Here, we replace the joint density function by maximizing the marginal density. The marginal density p(ms_j | ω_i), for j = 1, 2, …, m and i = 0, 1 (i refers to either a genuine sample or an imposter sample), can be estimated from the training vectors of the genuine and imposter scores that correspond to each of the m matchers. Therefore, we can rewrite (4) as follows:

FS_max^{ω_0} = max_{j=1,2,…,m} p(ms_j | ω_0),

FS_max^{ω_1} = max_{j=1,2,…,m} p(ms_j | ω_1). (5)

FS_max denotes the fused match scores obtained by fusing the m matchers in terms of exploiting the maximum scores.

We can easily extend the "max" fusion rule to the "sum" fusion rule by assuming that the posterior probability does not significantly deviate from the prior probability.

Therefore, we can write an equation for fusing the marginal densities, known as the "sum" rule, as follows:

FS_sum^{ω_0} = Σ_{j=1}^{m} p(ms_j | ω_0),

FS_sum^{ω_1} = Σ_{j=1}^{m} p(ms_j | ω_1). (6)

We independently apply the "max" and "sum" fusion rules to the genuine and imposter scores corresponding to each of the six matchers, which are determined from the six different face instances for which the principal components vary from one to six. Prior to the fusion of the matching proximities produced by multiple matchers, the proximities need to be normalized, and the data needs to be mapped to the range [0, 1]. In this case, we use the min-max normalization technique [26] to map the proximities to the specified range, and the T-norm cohort selection technique [30, 31] is applied to improve the performance. To generate matching scores, we apply the k-NN approach [32].
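A minimal sketch of this score-level pipeline, min-max normalization followed by the "sum" or "max" rule over the six instance-based matchers (equations (5) and (6) with scores standing in for the marginal densities), is given below; the score values are illustrative.

```python
import numpy as np

def min_max(scores):
    """Min-max normalization [26]: map one matcher's scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(score_lists, rule="sum"):
    """Fuse per-matcher score vectors (one per npc = 1..6) with sum or max."""
    normalized = np.vstack([min_max(s) for s in score_lists])
    return normalized.sum(axis=0) if rule == "sum" else normalized.max(axis=0)

# Six matchers' proximities for the same three comparisons (illustrative)
matchers = [[12, 40, 7], [30, 55, 10], [22, 61, 9],
            [18, 47, 5], [25, 52, 8], [20, 58, 6]]
print("sum rule:", fuse(matchers, "sum"))
print("max rule:", fuse(matchers, "max"))
```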

5.2. Cohort Selection Approach to Fusion. Recent studies suggest that cohort selection [30, 31] and cohort-based score normalization [30] can exhibit robust performance and increase the robustness of biometric systems. To understand the usability of cohort selection, a cohort pool is considered. A cohort pool is the set of matching scores obtained from the nonmatch templates in the database when a probe sample is matched against the reference samples in the database. The matching process generates matching scores; among this set of scores of the corresponding reference samples, one template is identified as the claimed identity. This claimed identity is the true match to the probe sample, and its matching proximity is significant. Apart from the claimed identity, the remainder of the matched scores are known as cohort scores. We refer to the match score determined from the claimed identity as the true matching proximity. However, the cohort scores and the score determined from the claimed identity exhibit similar degradation. To improve the performance of the proposed system, we need to normalize the true matching proximity using the cohort scores.

Figure 5: Face matching strategy, showing matching between pairs of face instances with outliers.

Figure 6: Amount of variance accounted for by each principal component of the face image depicted in Figure 5 (variance explained, in percent, versus principal component).

We can apply simple statistics, such as the mean, standard deviation, and variance, to compute the normalized score of the true reference template using the T-norm cohort normalization technique. We assume that the "most similar cohort scores" and the "most dissimilar cohort scores" contribute to the computation of normalized scores that carry more discriminatory information than the plain matching score. As a result, the false rejection rate may decrease, and the system can successfully identify a subject from a pool of reference templates.

Two types of probe samples exist: genuine probe samples and imposter probe samples. When a genuine probe face is compared with the cohort models, the best-matched cohort model and a few models among the remaining cohort models are expected to be very similar, due to the similarity among the corresponding faces. Matching a genuine probe face with the true cohort model and with the remaining cohort models produces matching scores with the lowest similarity when the true matched template and the remaining templates in the database are dissimilar. The comparison of an imposter face with the reference templates in the database generates matching scores that are independent of the set of cohort models.

Although cohort-based score normalization is considered extra overhead for the proposed system, it can improve performance. Computational complexity increases as the number of cohort-model comparisons grows. To reduce the overhead of integrating cohort models, we select a subset of cohort models that contains the majority of the discriminating information, and we combine this cohort subset with the true match score to obtain a normalized score. This cohort subset is known as an "ordered cohort subset," containing the majority of the discriminatory information. We can select a cohort subset for each true match template in the database to normalize each true match score when a number of probe faces are to be compared. In this context, we propose a novel cohort subset selection method that utilizes heuristic cohort selection statistics. Because the cohort selection strategy is substantially inspired by heuristic-based T-statistics and a baseline heuristic approach, we refer to this method as hybrid heuristics statistics, in which two-stage filtering is performed to generate the majority of the discriminating cohort scores.

5.2.1. Methodology: Hybrid Heuristics Cohort. The proposed statistics begin with a cohort score set χ_i = {x_i1, x_i2, …, x_in}, where i ∈ {genuine scores, imposter scores} and n is the number of cohort scores, consisting of the genuine and imposter scores present in the set χ. Therefore, each score is labeled x_ij, with i ∈ {genuine scores, imposter scores} and j ∈ {1, 2, …, n} sample scores. From the cohort score set, we can calculate the mean and standard deviation for the genuine and imposter class labels. Let μ^genuine and μ^imposter be the mean values, and let δ^genuine and δ^imposter be the standard deviations for the two class labels. Using T-statistics [33], we can determine a set of correlation scores corresponding to the cohort scores:

T(x_j) = |μ^genuine − μ^imposter| / √((δ_j^genuine)² / n^genuine + (δ_j^imposter)² / n^imposter). (7)

In (7), n^genuine and n^imposter are the numbers of cohort scores labeled genuine and imposter, respectively. We calculate all correlation scores and make a list of all T(x_j) scores. Then, we construct a search space that includes these correlation scores.

Because (7) exhibits a correlation between cohort scores, it can be extended to the baseline heuristics approach in the second stage of the hybrid heuristic-based cohort selection method. The objective of the proposed cohort selection method is to select the cohort scores corresponding to the two subsets of highest and lowest correlation scores obtained by (7); these two subsets of cohort scores constitute the cohort subset. We separately collect the correlation scores in a FRINGE data structure, or OPEN list; starting with this initial score, we expand the fringe by adding more correlation scores to it. We also maintain another list, which we refer to as the CLOSED list. After calculating the first T(x_j) score in the fringe, we remove this score from the fringe and expand it: the next two correlation scores are taken from the search space and retained in the fringe, while the correlation score removed from the fringe is added to the CLOSED list. When the fringe contains two scores, we arrange them in decreasing order by sorting and remove the maximum score from the fringe; this maximum score is then added to the CLOSED list and maintained in nonincreasing order with the other scores there. We repeat this recursive process in each iteration until the search space is empty. After moving all correlation scores from the fringe to the CLOSED list, we obtain a sorted list. The sorted scores in the CLOSED list are divided into three parts, and the first and last parts are merged to create a single list of correlation scores that exhibit the most discriminating features. We establish the cohort subset by selecting the most promising cohort scores, which correspond to these correlation scores in the CLOSED list.
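Since the fringe/CLOSED-list expansion just described ultimately produces the correlation scores in nonincreasing order, the selection reduces to a sort followed by taking the head and tail of the list; the sketch below assumes precomputed T(x_j) values from (7) and an equal three-way split, which the text leaves unspecified.

```python
import numpy as np

def hybrid_heuristic_cohort_subset(cohort_scores, corr_scores):
    """Select the cohort subset: sort cohort scores by their T-statistic
    correlation values (7) in nonincreasing order (the fringe/CLOSED-list
    walk yields this order), split the sorted list into three parts, and
    merge the first (most correlated) and last (least correlated) parts."""
    order = np.argsort(corr_scores)[::-1]            # CLOSED-list order
    ranked = np.asarray(cohort_scores, dtype=float)[order]
    k = len(ranked) // 3                             # assumed equal split
    return np.concatenate([ranked[:k], ranked[-k:]])

# Illustrative cohort scores and their correlation values
scores = [0.41, 0.37, 0.52, 0.09, 0.12, 0.15, 0.33, 0.48, 0.27]
corr = [2.1, 0.4, 3.0, 0.2, 1.1, 0.9, 0.5, 2.7, 1.8]
print(hybrid_heuristic_cohort_subset(scores, corr))
```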

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique. T-norm relies on the assumption that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decision, we consolidate the normalized scores across the six different face models, which depend on the number of principal components considered, ranging from one to six.
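For completeness, a minimal T-norm sketch: the true match score is standardized by the mean and standard deviation of the selected cohort subset (values below are illustrative).

```python
import numpy as np

def t_norm(true_score, cohort_subset):
    """T-norm: standardize the claimed-identity score by the cohort
    subset's statistics, under the Gaussian assumption noted above."""
    c = np.asarray(cohort_subset, dtype=float)
    return (true_score - c.mean()) / (c.std() + 1e-12)

print(t_norm(0.82, [0.41, 0.37, 0.52, 0.09, 0.12, 0.15]))
```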

6. Experimental Evaluation

The rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max fusion rule and the sum fusion rule. We have implemented the proposed method from two perspectives: the Type II perspective indicates that face recognition employs the face images provided in the databases without cropping, whereas the Type I perspective indicates that the face recognition technique uses a manually localized face area, obtained by cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces of these two face databases are presented against a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

Figure 7: Face images from the BioID database.

6.1. Databases

6.1.1. BioID Face Database. The face images in the BioID [35] database are recorded in a degraded environment and are primarily employed for face detection; however, we can also utilize this database for face recognition. Because the faces are captured with a variety of background information and illumination, evaluation on this database is challenging. Here, we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray-level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces are captured against a white homogeneous background in an upright frontal position, and all images have scale changes of approximately 10%. Of the 2800 face images, the number of male contributors equals the number of female contributors; that is, 100 male and 100 female participants each contributed an equal number of face images, totaling 1400 face images per group. All images are in color, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired under uniform illumination against homogeneous backgrounds, with neutral and smiling facial expressions. The database contains faces of ages varying from 18 to 60 years.


Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we developed a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually independent random processes. Therefore, to address the biases of each random process, we perform the evaluation with a random distribution of the numbers of training samples and probe samples. However, the distribution depends entirely on the database employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we distribute the face images equally between the training and test sets. The faces contributed by each person are divided into two sets, namely, the training set and the test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person has contributed m faces and that the database size is p (p denotes the total number of face images). Let q denote the total number of subjects/individuals who contributed m face images each. To apply this protocol, we divide the m faces into two equal groups of m/2 images and retain one group as the training/reference set and one as the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the m/2 faces in the probe set that correspond to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension q × (m/2) and imposter scores of dimension q × (q − 1) × m. The k-NN (k-nearest neighbor) approach is employed to generate common matching points between a pair of face images, and we employ the min-max normalization technique to normalize the match scores and map them to the range [0, 1]. In this manner, two sets of match scores of unequal dimensions, corresponding to a matcher, are obtained when face images are compared within intraclass (ω_i) sets and between interclass (ω_j) sets; we refer to these as the genuine and imposter score sets.
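The score-set sizes implied by this protocol follow directly from the stated formulas; the sketch below evaluates them for both databases (for BioID, a uniform m is an approximation, since its 1521 images are not spread evenly over the 23 subjects).

```python
def score_counts(q, m):
    """Numbers of genuine and imposter scores under the pairing protocol:
    genuine = q * (m / 2), imposter = q * (q - 1) * m."""
    return q * (m // 2), q * (q - 1) * m

print("FEI   (q=200, m=14):", score_counts(200, 14))   # (1400, 557200)
print("BioID (q=23,  m=66):", score_counts(23, 66))    # approximate, uniform m assumed
```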

6.3. Experimental Results and Analysis

6.3.1. On the FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face database, which contains faces with neutral and smiling expressions. The neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effects of (a) a cloud-enabled environment, (b) face matching without extracting a face area, (c) face matching with extracting a face area, (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying principal components (one to six) under the conditions mentioned in (a), (b), and (c), and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system as GAR versus FAR curves for varying principal components; the boxplots show how the EER varies for different numbers of principal components.

Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

Figure 9 shows a boxplot for the case in which the face area has not been extracted, and Figure 12 shows another boxplot for the case in which the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the principal component is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the principal component is 1, and the EER varies between 0% and 2.5% otherwise; an EER of 0.5% is determined for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. A maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the principal component is set to one; for the remaining cases, when the principal component varies between two and six, low EERs are obtained, with the maximum among them occurring when the principal component is set to four. The ROC curves in Figure 10, shown for face images that are not localized, exhibit high recognition accuracies for all cases except when the principal component is set to 1.
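The EERs and ROC curves reported throughout this section can be computed from the genuine and imposter score sets in a few lines; this is a plain sketch (with synthetic scores), not the paper's evaluation code.

```python
import numpy as np

def eer_percent(genuine, imposter):
    """Equal error rate: sweep thresholds; EER is where FAR meets FRR.
    Higher scores are assumed to indicate better matches."""
    g, i = np.asarray(genuine), np.asarray(imposter)
    best_gap, best_eer = np.inf, None
    for t in np.unique(np.concatenate([g, i])):
        far = np.mean(i >= t)            # imposters accepted
        frr = np.mean(g < t)             # genuine attempts rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return 100 * best_eer

# Synthetic genuine/imposter scores for illustration only
print(eer_percent(np.random.normal(0.8, 0.1, 500),
                  np.random.normal(0.4, 0.15, 5000)))
```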

Figure 10: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy | EER (%) | Recognition accuracy (%)
Principal component 1 | 7.07 | 92.93
Principal component 2 | 0.5 | 99.5
Principal component 3 | 0.97 | 99.03
Principal component 4 | 1.01 | 98.99
Principal component 5 | 1 | 99
Principal component 6 | 1 | 99

Thus, we decouple the information about the legitimate face area from the explicit information about nonface areas, such as the forehead, hair, ear, and chin areas; the latter may provide crucial additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying principal components. Figure 11 shows the recognition accuracies for different numbers of principal components determined on the FEI database when face localization is not performed; Figure 14 shows the corresponding accuracies when face localization is performed.

Figure 13 shows the ROC curves determined from extensive experiments with the proposed algorithm on the FEI face database when the face image is localized and only the face part is extracted.

Figure 11: Recognition accuracy versus principal component when the face localization task is not performed (92.93%, 99.5%, 99.03%, 98.99%, 99%, and 99% for principal components 1-6).

Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

In this case, a maximum EER of 6% is attained when the principal component is set to 1; in the other cases, the EERs are as low as those of the nonlocalized faces. Table 2 depicts the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show tremendous improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also for the areas corresponding to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is accurate when the principal components vary between two and six; in the case of principal component 3, an EER of 0% and a recognition accuracy of 100% are attained.

Based on two major considerations, with and without face localization, we investigate the proposed algorithm by fusing the face instances obtained under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the "sum" fusion rule.

Figure 13: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy | EER (%) | Recognition accuracy (%)
Principal component 1 | 6 | 94
Principal component 2 | 0.5 | 99.5
Principal component 3 | 0 | 100
Principal component 4 | 0.5 | 99.5
Principal component 5 | 0.5 | 99.5
Principal component 6 | 1 | 99.0

The effects of localizing and of not localizing the face area are both investigated, and the conventions with these two fusion rules are integrated. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused at the match score level by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained with the "max" fusion rule. In this case, the "sum" fusion rule outperforms the "max" fusion rule.

Figure 14: Recognition accuracy versus principal component when the face localization task is performed (94%, 99.5%, 100%, 99.5%, 99.5%, and 99% for principal components 1-6).

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0.0 | 100
Max fusion rule + min-max normalization | 1.5 | 98.5
Sum fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5
Max fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5

When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules of the fusion-based classifier, the "sum" and "max" fusion rules both achieve 99.5% recognition accuracy. In this general context, the proposed cohort selection method degrades recognition accuracy by 0.5% compared with the "sum" fusion rule-based classifier without cohort selection. However, the hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum, max), with 99.5% recognition accuracy. The recognition accuracies are listed in Table 3, and Figure 16 exhibits the same for the "sum" and "max" fusion rules.

After evaluating the performance of each of the classifiers, in which a face instance is determined by setting the principal component to one value between 1 and 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. As in the previous approach, we apply two fusion rules, namely, the "sum" and "max" fusion rules, to integrate the matching scores. In addition, we exploit the proposed statistical cohort selection technique, known as hybrid heuristic statistics. For cohort score normalization, we apply the T-norm normalization technique.

Figure 15: Receiver operating characteristics (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between 1 and 6.

Figure 16: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection (100%, 98.5%, 99.5%, and 99.5%). The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with it. These accuracies are obtained without face localization.

This technique maps the cohort scores into a normalized score set that exhibits the characteristics of each of the scores in the cohort subset and enables the correct match to be rapidly obtained by the system. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy in both cases, when the "sum" and the "max" fusion rules are applied.

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0.0 | 100
Max fusion rule + min-max normalization | 0.0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.0 | 100
Max fusion rule + T-norm (hybrid heuristic) | 0.0 | 100

Figure 17: Receiver operating characteristics (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

The effect of face localization plays a central role in raising the recognition accuracy to one hundred percent in all four cases. Due to face localization, however, the numbers of feature points extracted from the face images differ from those of the nonlocalized faces obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

6.3.2. On the BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering the two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized.

Figure 18: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection (100% in all four cases). The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with it. These accuracies are obtained after a face is localized.

Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

However, the faces provided in the BioID face database are captured in a variety of environments and illumination conditions; in addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching method is challenging, because the positions and locations of frontal view images may be tracked in a variety of environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and that yields encouraging results under these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of one to six.

Figure 20: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.53 | 91.47
Principal component 2 | 2.78 | 97.22
Principal component 3 | 8.5 | 91.5
Principal component 4 | 13.89 | 86.11
Principal component 5 | 13.89 | 86.11
Principal component 6 | 11.12 | 88.88

For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint (no localization of the face image) is applied; an EER of 2.78%, the lowest among all six EERs, is obtained. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11%. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 shows the same phenomena, depicting a curve whose points denote the recognition accuracies corresponding to principal components one through six. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.

Figure 21: Recognition accuracy versus principal component when the face localization task is not performed (91.47%, 97.22%, 91.5%, 86.11%, 86.11%, and 88.88% for principal components 1-6).

Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

Having considered the unlocalized-face case, we now consider the constraint in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under this constraint.

As shown in Figures 22 and 23, as the principal component varies between one and six, the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100%. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components and different false acceptance rates (FARs); the principal components (2, 3, and 6) that outperform the others yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components.

Figure 23: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.5 | 91.5
Principal component 2 | 0 | 100
Principal component 3 | 0 | 100
Principal component 4 | 5.59 | 94.41
Principal component 5 | 5.98 | 94.02
Principal component 6 | 0 | 100

The points marked in red on the curve represent the recognition accuracies. Table 6 shows the recognition accuracies when the localized face is used.

To validate the fusion and hybrid heuristics-based cohort selection under the Type II constraint, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database, together with the hybrid heuristics-based cohort selection. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces.

Figure 24: Recognition accuracy versus principal component when the face localization task is performed (91.5%, 100%, 100%, 94.41%, 94.02%, and 100% for principal components 1-6).

Table 7: EER and recognition accuracies of the proposed face matching strategy on the BioID face database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 5.55 | 94.45
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100

As shown in Table 7 and Figure 25, for the first two matching strategies, in which the "sum" and "max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, respectively, with EERs of 5.55% and 0%. In this case, the "max" fusion rule outperforms the "sum" fusion rule, as further illustrated by the ROC curves shown in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, in which hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules, respectively.

In the last segment of the experiment, we measure the performance in terms of recognition accuracy and EER when faces are localized; the matching paradigms listed in Table 7 have thus also been verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies. The minimal change in accuracy of 0.15% for the combination of the "sum" fusion rule and hybrid heuristics is negligible, and the remaining combination of the "max" fusion rule and the hybrid heuristics-based cohort selection method achieves an accuracy of 100%.

Figure 25: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection (94.45%, 100%, 99.5%, and 100%). The first two cases show recognition accuracies without the cohort selection method, and the last two cases show recognition accuracies with it. These accuracies are obtained when a face is not localized.

Figure 26: Receiver operating characteristics (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

Therefore, we conclude that the "max" fusion rule outperforms the "sum" rule, which is attributed to a change in the produced cohort subset for both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

It would be interesting to see how well the current ensemble framework would serve face recognition in the wild, where faces are found in unrestricted conditions. Face recognition in the wild is challenging due to the nature of face acquisition in an unconstrained environment: not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth.

Table 8: EERs and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0 | 100
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.65 | 99.35
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100

Figure 27: Receiver operating characteristics (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

As our framework is based on only the first six principal components and SIFT features, it would require the incorporation of various tools: tools to detect and crop the face region and to discard background as far as possible, with the detected face regions upsampled or downsampled to form face image vectors of uniform dimension before applying PCA; and tools to estimate pose and then apply 2D frontalization to compensate for that variance, leading to fewer principal components to consider.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face

Figure 28: Recognition accuracy (%) versus face matching strategy when face instances are fused in terms of matching scores by applying the maximum and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show recognition accuracies without applying the cohort selection method, and the last two cases show recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized (plotted accuracies for strategies 1-4: 100%, 100%, 99.35%, and 100%).

recognition algorithms, which are limited in number, and some traditional face recognition systems, which are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases for comparison with other methods; the second perspective, however, does not utilize the concept of cloud computing. To compare the experimental results, we compare the proposed system with two cloud-based face recognition systems: the first system utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy. Table 9 also lists the number of training samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                     | Face database | Number of face images | Number of distinct subjects | Recognition accuracy (%) | Error (%)
Cloud vision [36]          | ORL      | 400  | 40 | 97.08 | 2.92
Mobile cloud [37]          | Local DB | 50   | 5  | 85    | 15
Facial features [38]       | BioID    | 1521 | 23 | 92.35 | 7.65
2D-PCA + PSO-SVM [39]      | BioID    | 1521 | 23 | 95.65 | 4.35
Proposed, Type I           | BioID    | 1521 | 23 | 100   | 0
Proposed, Type I           | FEI      | 2800 | 20 | 100   | 0
Proposed, Type II          | BioID    | 1521 | 23 | 97.22 | 2.78
Proposed, Type II          | FEI      | 2800 | 20 | 99.5  | 0.5
Proposed, Type I + fusion  | BioID    | 1521 | 23 | 100   | 0
Proposed, Type I + fusion  | FEI      | 2800 | 20 | 100   | 0
Proposed, Type II + fusion | BioID    | 1521 | 23 | 100   | 0
Proposed, Type II + fusion | FEI      | 2800 | 20 | 100   | 0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules to run, as a set of functions of the length of the input. The time complexity is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms, which together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each individual module is computed first, and then the overall complexity of the ensemble network is computed by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let d be the number of pixels (height × width) in each grayscale face image, and let the number of face images be N. PCA computation has the following steps:

(i) Finding the mean of N samples is O(Nd) (N additions of d-dimensional vectors, and then the sum is divided by N).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the d(d + 1)/2 elements, N multiplications and additions are required, leading to O(Nd²) time complexity. Let the d × N dimensional (mean-centred) data matrix be M.

(iii) If the Karhunen-Loève (KL) trick is employed, then instead of MMᵀ (d × d), compute MᵀM (N × N), which requires d multiplications and d additions for each of the N² elements; hence O(N²d) time complexity (generally N ≪ d).

(iv) Eigendecomposition of the MᵀM matrix by the SVD method requires O(N³).

(v) Sorting the eigenvectors (each eigenvector d × 1 dimensional) in descending order of the eigenvalues requires O(N²). Then taking only the first 6 principal component eigenvectors requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two d-dimensional vectors, resulting in a scalar value; hence d multiplications and d − 1 additions per projection lead to O(d). Six such projections require O(6d).
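A minimal NumPy sketch of this computation (our illustration; variable names are ours) shows the KL trick: the eigenvectors of the small N × N matrix MᵀM are mapped back to d-dimensional eigenfaces.

```python
import numpy as np

def eigenfaces_kl(X, k=6):
    """X: d x N matrix, one face image per column (d pixels, N images).
    Returns the mean face and the first k eigenfaces via the KL trick."""
    mean = X.mean(axis=1, keepdims=True)
    M = X - mean                        # mean-centred data, d x N
    S = M.T @ M                         # small N x N matrix, O(N^2 d)
    vals, vecs = np.linalg.eigh(S)      # eigendecomposition, O(N^3)
    order = np.argsort(vals)[::-1]      # sort by descending eigenvalue
    U = M @ vecs[:, order[:k]]          # map back to d dimensions
    U /= np.linalg.norm(U, axis=0)      # unit-length eigenfaces
    return mean, U

# Projecting a probe image (d-vector) onto the k eigenfaces is then
# k dot products: coeffs = U.T @ (probe - mean.ravel())
```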

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be M × N pixels, and let the face image be represented as a column vector of dimension d (= M × N). A Gaussian kernel of dimension w × w is used. Each octave has s + 3 scales, and a total of L octaves are used. The important phases of SIFT are as follows:

(i) Extrema detection

(a) Compute the scale-space:

(1) In each scale, w² multiplications and w² − 1 additions are performed by the convolution operation for each pixel, so O(dw²).

(2) There are s + 3 scales in each octave, so O(dw²(s + 3)).

(3) L is the number of octaves, so O(Ldw²(s + 3)).
(4) Overall, O(dw²s) is found.

(b) Compute the s + 2 Difference of Gaussian (DoG) images for each of the L octaves, O((s + 2) · d) per octave with scale factor k = 2^(1/s); so for L octaves, O(sd).

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).


(iv) Keypoint descriptor computation: if a p × p neighborhood of each keypoint is considered, then O(p²d) is found.

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end; so the time is linear, O(n), with n = 128. Let the i-th eigenface instances of the probe face image and a reference face image have k₁ and k₂ surviving keypoints, respectively. Each keypoint from the k₁ set is compared with each of the k₂ keypoints by Euclidean distance, so the cost is O(k₁k₂) for a single eigenface pair, that is, O(6n²) for 6 pairs of eigenfaces (n = number of keypoints). If there are M reference faces in the gallery, the total complexity is O(6Mn²).
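A short sketch of this O(k₁k₂) comparison follows. It is illustrative only: the paper counts matched keypoint pairs via k-NN, and the ratio test used here to accept a pair is our assumption, not a value from the paper.

```python
import numpy as np

def match_score(desc_probe, desc_ref, ratio=0.8):
    """desc_probe: k1 x 128, desc_ref: k2 x 128 SIFT descriptors.
    Returns the number of matched keypoint pairs (the matching score)."""
    # pairwise Euclidean distances, k1 x k2 (the O(k1*k2) step)
    d = np.linalg.norm(desc_probe[:, None, :] - desc_ref[None, :, :], axis=2)
    matches = 0
    for row in d:
        nn = np.argsort(row)[:2]          # two nearest reference keypoints
        if len(nn) > 1 and row[nn[0]] < ratio * row[nn[1]]:
            matches += 1                  # accept as a corresponding pair
    return matches
```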

(d) Time Complexity of Fusion. As the six individual matchers' score domains are different, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtraction operations and one division operation, constant time O(1). The n pairs of probe and gallery images in the i-th principal component require O(n), so six principal components require O(6n). Finally, the sum fusion requires five additions for each pair of probe and reference faces, constant time O(1); subsequently, for n pairs of probe and reference face images, it requires O(n).
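The normalization and fusion steps amount to the following (a minimal sketch; the 6 × n score array and function name are our illustrative choices):

```python
import numpy as np

def fuse(scores):
    """scores: 6 x n raw matching scores (6 matchers, n probe-gallery pairs).
    Returns sum-rule and max-rule fused scores after min-max normalization."""
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    norm = (scores - lo) / (hi - lo + 1e-12)     # map each matcher to [0, 1]
    return norm.sum(axis=0), norm.max(axis=0)    # "sum" and "max" rules
```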

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations: computation of the correlations in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at the proper positions in the CLOSED list according to insertion sort, and, finally, division of the CLOSED list of correlation values of size n into three disjoint sets. The first two operations take constant time, the third operation takes O(n²) as it follows the convention of insertion sort, and the last operation takes linear time. The overall time required by cohort selection is O(n²).

The overall time complexity (T) of the ensemble network is then calculated as follows:

$$T = \max\left(\max(T_1) + \max(T_2) + \max(T_3) + \max(T_4) + \max(T_5)\right) = \max\left(O(N^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)\right) \approx O(N^3), \tag{8}$$

where T₁ is the time complexity of PCA computation, T₂ is the time complexity of SIFT keypoint extraction, T₃ is the time complexity of matching, T₄ is the time complexity of fusion, and T₅ is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(N³), while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which a cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, which range from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate the matching points as matching scores. We investigate and analyze various effects on the face recognition system, which directly or indirectly attempt to improve its total performance in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved either from a remotely placed computer terminal or from mobile phones or tablet PCs. In addition, a cloud-based environment reduces the cost required by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the types of paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629–I-632, Honolulu, Hawaii, USA, April 2007.


[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506–II-513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International MultiConference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6-8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.


[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.


However, despite a few shortcomings (it is restricted to orthogonal linear combinations and makes implicit assumptions of Gaussian distributions), PCA has proven to be a widely acclaimed technique due to its simplicity. In this study, PCA is combined with a feature-based technique (the SIFT descriptor) for a number of face column instances to be generated by principal components. Face projections of column vectors range from one to six, in which the interval of one can produce high variances in the observed variables of the corresponding face image after projecting the high-dimensional face matrix onto a low-dimensional feature space. Thus, we can obtain a set of low-dimensional feature spaces that correspond to each column of a single face image. Principal components are decided by a sequence of six integer numbers, ranging from 1 to 6, considered in order, based on which six face instances are generated. Unlike a random sequence, an ordered default sequence is taken from the mathematical definition of the eigenvector, and the arithmetic distance of any point to its predecessor and successor principal component is always one. The SIFT descriptor, which suits these representations well, can produce multiple sets of invariant interest points without changing the dimension of each keypoint descriptor. This process can change the size of each vector of keypoint descriptors constructed on each projected PCA-characterized face instance. In addition, the SIFT descriptor is robust to partial illumination, projective transforms, image location, rotation, and scaling. The efficacy of the proposed approach has been tested on frontal view face images with mixed facial expressions. However, efficacy is compromised when the head position of a face image is modified.

1.3. Relevant Face Recognition Approaches. In this section, we introduce some related studies and discuss their usefulness in face recognition. For example, the algorithm proposed in [21] discusses a method that employs the local gradient patch around each SIFT point neighborhood and creates PCA-based local descriptors that are compact and invariant. However, the method proposed here does not encode a neighborhood gradient patch of each point; instead, it builds a projected feature representation in the low-dimensional feature space with variable numbers of principal components, and it extracts SIFT interest points from the reduced face instances. Another study [22] examines the usefulness of the SIFT descriptor and PCA-WT (WT: wavelet transform) in face recognition. An eigenface is extracted from the PCA-wavelets representation, and SIFT points are subsequently detected and encoded into a feature vector. However, the computational time increases due to the complex and layered representation. A comparative study [23] employs PCA for neighborhood gradient patch representation around each SIFT point and a SURF point for invariant feature detection and encoding. Although PCA reduces the dimensions of the keypoint descriptor and the performances of the SIFT and SURF descriptors are compared, that work does not address face recognition but applies to the image retrieval problem.

The remainder of the manuscript is organized as follows: Section 2 presents a brief outline of the cloud-based face recognition system. Short descriptions of the SIFT descriptor and PCA are given in Section 3. Section 4 describes the framework and methodology of the proposed method. The fusion of matching proximities and heuristics-based cohort selection are presented in Section 5. The evaluation of the proposed technique and comparisons with other face recognition systems are exhibited in Section 6. Section 7 computes the time complexity. Conclusions and remarks are made in Section 8.

2. Outline of Cloud-Based Face Recognition

To develop a cloud-based face recognition system, a cloud infrastructure [5, 6] has been set up with the help of remote servers, and a webcam-enabled client terminal and a tablet PC are connected to the remote servers via Internet connectivity. Two independent IP addresses are provided to the client machine and the tablet PC. These two IPs help the cloud engine identify the client machines from which the recognition task is performed. Figure 1 shows the outline of the cloud-enabled face recognition infrastructure, in which we establish three points with three different devices for enrollment and recognition tasks. All other software application modules (i.e., preprocessing, feature extraction, template generation, matching, fusion, and decision) and face databases are placed on servers, and a storage device is maintained in the cloud environment. During authentication or identification, sample face images are captured via cameras installed in both the client machine and the tablet PC, and the captured faces are sent to a remote server. In the remote server, application software is invoked to perform the necessary tasks. After the matching of probe face images with the gallery images stored in the database, matching proximity is generated, and a decision outcome is sent to the client machine over the network. At the client site, the client machine displays the correct decision on the screen, and the entry of malicious users is restricted. Although the proposed system is a cloud-based face recognition system, our main focus lies on the baseline face recognition system. Having given a brief introduction to cloud-based infrastructures for face recognition, and given the use of publicly available face databases such as FEI and BioID, we assume that face images have already been captured with the sensor installed in the client machine and are then sent to a remote server for matching and decision.
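From the client's point of view, this interaction reduces to uploading a captured frame and reading back the decision. The sketch below is purely illustrative: the endpoint URL, route, and JSON fields are hypothetical, since the paper does not specify the service API.

```python
import requests

def remote_verify(image_path, subject_id):
    """Send a captured face to the cloud engine and return its decision.
    The URL and payload layout below are hypothetical placeholders."""
    with open(image_path, "rb") as f:
        reply = requests.post(
            "https://cloud-face-engine.example.com/verify",  # hypothetical
            files={"face": f},
            data={"subject_id": subject_id},
            timeout=30,
        )
    reply.raise_for_status()
    return reply.json()["decision"]   # e.g., "accept" or "reject"
```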

The proposed approach is divided into the following steps:

(a) As part of the baseline face recognition system, the raw face image is localized and then aligned using the algorithm described in [24]. During the experiments, face images are employed both with and without localizing the face part.

(b) A histogram equalization technique [25], which is considered the most elementary image-enhancement technique, is applied in this step to enhance the contrast of the face image.

(c) In this step, PCA [11] is applied to obtain multiple face instances, which are determined from each column of the original image (they are not the eigenfaces), by varying the number of principal components from one to six at one distance unit.

Figure 1: Cloud-enabled face recognition infrastructure (IaaS infrastructures, PaaS software tools, and SaaS applications with cloud storage/DB; a face image is reduced to eigenface instances and SIFT points, which yield matching proximities for the enrollment/recognition clients).

(d) From each instance representation, SIFT points [17, 18] are extracted in the scale-space to form an encoded feature vector of keypoint descriptors (K_i); the keypoint descriptors, rather than the spatial location, scale, and orientation, are considered the feature points.

(e) The SIFT interest points extracted from the six different face instances (npc = 1, 2, 3, 4, 5, and 6) of a target face form six different feature vectors. They are employed to separately match the feature vectors obtained from a probe face; npc refers to the number of principal components.

(f) In this step, matching proximities are determined from the different matching modules and are subsequently fused using the "sum" and "max" fusion rules; a decision is made based on the fused matching scores.

(g) To enhance the performance and reduce the computational complexity, we exploit a heuristic-based cohort selection method during matching and apply the T-norm normalization technique to normalize the cohort scores (a minimal end-to-end sketch of these steps follows this list).
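The sketch below strings steps (c)-(d) together under one plausible reading of the instance construction, namely that each face instance is a rank-k reconstruction of the image from its column data (k = 1, ..., 6); the helper names are ours, and the matching/fusion/cohort stages are only indicated in comments.

```python
import numpy as np
import cv2

def face_instances(img, npcs=(1, 2, 3, 4, 5, 6)):
    """One plausible reading of step (c): treat the image columns as
    observations and keep rank-k SVD reconstructions as face instances."""
    X = img.astype(np.float64)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return [(U[:, :k] * s[:k]) @ Vt[:k, :] for k in npcs]

def instance_descriptors(instance):
    """Step (d): SIFT keypoint descriptors of one face instance."""
    sift = cv2.SIFT_create()
    gray = instance.clip(0, 255).astype(np.uint8)
    _, desc = sift.detectAndCompute(gray, None)
    return desc

# Steps (e)-(g), schematically: match each of the six instance descriptor
# sets against the corresponding gallery instance (k-NN), min-max normalize
# the six scores, fuse them with "sum"/"max", apply cohort-based T-norm,
# and threshold the fused score to reach a decision.
```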

3. Brief Review of SIFT Descriptor and PCA

In this section, the scale-invariant feature transform (SIFT) and principal component analysis (PCA) are described. Both the SIFT descriptor and PCA are well-known feature-based and appearance-based techniques that have been successfully employed in many face recognition systems.

The SIFT descriptor [17-19] has gained significant attention due to its invariant nature and its ability to detect stable interest points around the extrema. It has been proven to be invariant to rotation, scaling, projective transforms, and partial illumination. The SIFT descriptor is robust to image noise and low-level transformations of images. In the proposed approach, the SIFT descriptor can reduce the face matching complexity and computation time while detecting stable interest points on a face image. SIFT points are detected via a four-stage filtering approach, namely, (a) scale-space detection, (b) keypoint localization, (c) orientation assignment, and (d) keypoint descriptor computation. The keypoint descriptors are then employed to generate feature vectors for face matching.

The proposed face matching algorithm aims to produce multiple face representations (six face instances) that are determined from each column of the face image. These face instances exhibit distinctive characteristics obtained by reducing the dimensions of the features that comprise the intensity values. Reduced dimensionality is achieved by applying a simple feature reduction technique known as principal component analysis (PCA). PCA projects a high-dimensional face image onto a low-dimensional feature space, where the face instance of higher variance, such as eigenvectors in the observed variables, is determined. Details on PCA are provided in [11, 12].

4. Framework and Methodology

The main facial features used for frontal face recognition in the proposed experiment are a set of 128-dimensional vectors of squared patches (regions) centred at detected and localized keypoints at multiple scales. Each vector describes the local structure around a keypoint under the computed scale. A keypoint detected in a uniform region cannot be discriminating, because scale or rotational change does not make the point distinguishable from its neighbors. SIFT-detected keypoints [17, 18] on a frontal face are basically various corner points, such as the corners of the lips, the corners of the eyes, and the nonuniform contour between the nose and cheek, which exhibit intensity changes in two directions. SIFT detects these keypoints by approximating the Laplacian of Gaussian (LoG) in terms of the Difference of Gaussians (DoG). As the Gaussian pyramid builds images at various scales, the SIFT-extracted keypoints are scale invariant, and the computed descriptors remain discriminating from coarse to fine matching. The keypoints could have been detected by the Harris corner detection method (not scale invariant) or the Hessian corner detection method, but many of the points detected by those methods might not be repeatable under large scale changes. Further, the 128-dimensional feature point descriptor obtained by the SIFT feature extraction method is orientation normalized, and hence rotation invariant. Additionally, the SIFT feature descriptor is normalized to unit length to reduce the effect of contrast. The maximum value of each dimension of the vector is thresholded to 0.2, and the vector is normalized once again to make it robust to a certain range of irregular illumination.

The proposed face matching method has been developed around the concept of a varying number of principal components (npc = 1, 2, 3, 4, 5, and 6). These variations yield the following improvements while the system computes face instances for matching and recognition:

(a) The projection of a face image onto some face instances facilitates the construction of independent face matchers, which can vary in performance; while SIFT descriptors are applied to extract invariant interest points, each instance-based matcher is verified for producing matching proximities.

(b) An individual matcher exhibits its own strength in recognizing faces, and the number of SIFT interest points extracted from each face instance changes substantially from one projected face to another as an effect of varying the number of principal components.

(c) The system becomes more robust when the individual performances are consolidated into a single matcher by fusion of the matching scores.

(d) Let ε_i, i = 1, 2, ..., m, be the m eigenvalues arranged in descending order, and let ε_i be associated with eigenvector e_i (the i-th principal eigenface in the face space). Then the percentage of variance accounted for by the i-th principal component is (ε_i / Σ_{i=1}^m ε_i) × 100. Generally, the first few principal components are sufficient to capture more than 95% of the variance, but that number of components depends on the training set of the image space; it varies with the face dataset used. In our experiments, we observed that taking as few as 6 principal components gives a good result and captures variability very close to the total variability produced during the generation of multiple face instances (a small numeric sketch follows this list).
Let the training face dataset contain n instances, each of uniform size p (h × w pixels); then the face space contains n sample points in p dimensions, and we can derive at most n − 1 eigenvectors, but each eigenvector is still p-dimensional (n ≪ p). Now, to compare two face images, each containing p pixels (i.e., a p-dimensional vector), each face image must be projected onto each of the n − 1 eigenvectors (each eigenvector represents one axis of the new (n − 1)-dimensional coordinate system). So from each p-dimensional face we derive n − 1 scalar values by the dot product of the mean-centred image space face with each of the n − 1 face space eigenvectors. In the backward direction, given the n − 1 scalar values, we can reconstruct the original face image by a weighted combination of these eigenfaces, adding back the mean. In this reconstruction process, the i-th eigenface contributes more than the (i + 1)-th eigenface if they are ordered by decreasing eigenvalues. How accurate the reconstruction is depends on how many principal components (say k, k = 1, 2, ..., n − 1) we take into consideration. In practice, k need not equal n − 1 to satisfactorily reconstruct the face. After a specific value of k (say t), the contribution from the (t + 1)-th eigenvector up to the (n − 1)-th vector is so negligible that it may be discarded without losing significant information; indeed, there are methods such as the Kaiser criterion (discard eigenvectors corresponding to eigenvalues less than 1) [11, 12] and the Scree test. Sometimes the Kaiser method retains too many eigenvectors, whereas the Scree test retains too few; in essence, the exact value of t is dataset dependent. Figures 4 and 6 clearly show that, as one more principal component is added, the captured variability increases rapidly within the first 6 components. Starting from the 6th principal component, the variance capture is almost flat but does not reach the total variability (the 100% marked line) until the last principal component. So, despite the small contribution that principal components 7 onwards have, they are not entirely redundant.

(e) Distinctive and classified characteristics, detected in the reduced low-dimensional face instance, support the integration of local texture information with the local shape distortion and illumination changes of the neighboring pixels around each keypoint, which comprises a vector of 128 elements of invariant nature.
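The variance bookkeeping in item (d) amounts to the following (a small sketch with illustrative names):

```python
import numpy as np

def variance_explained(eigenvalues, k=6):
    """Percentage of total variance captured by the first k principal
    components, for eigenvalues sorted in descending order."""
    vals = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    pct = 100.0 * vals / vals.sum()     # per-component percentage
    return pct[:k], pct[:k].sum()       # e.g., first 6 components
```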

The proposed methodology is evaluated from two different perspectives: the first system is implemented without face detection and localization, and the second system implements the face matcher on a localized and detected face image.

4.1. Face Matching Type I. During the initial stage of face recognition, we enhance the contrast of the face image by applying a histogram equalization technique. We apply this contrast enhancement to increase the count of SIFT points detected in the local scale-space of the face image. The face area is localized and aligned using the algorithm given in [24]. In the subsequent steps, the face area is projected onto the low-dimensional feature space, and an approximated face instance is formed. SIFT keypoints are detected and extracted from this approximated face instance, and a feature vector that consists of the interest points is created. In this experiment, six different face instances, derived from a single face image, are generated by varying the number of principal components from one to six.

Figure 2: Six different face instances of a single cropped face image with variations of the principal components.

The PCA-characterized face instances are shown in Figure 2; they are arranged according to the order of the considered principal components. The same set of PCA-characterized face instances is extracted from a probe face, and feature vectors consisting of SIFT interest points are formed. Matching is performed with the corresponding face instances, in terms of the SIFT points obtained from the reference face instances. We apply a k-nearest neighbor (k-NN) approach [26] to establish correspondence and obtain the number of pairs of matching keypoints. Figure 3 depicts matching pairs of SIFT keypoints on two sets of face instances, which correspond to the reference and probe faces. Figure 4 shows the amount of variance captured by all principal components; because the first principal component explains approximately 70% of the variance, we expect that additional components are probably needed. The first four principal components explain the total variability in the face image depicted in Figure 2.

4.2. Face Matching Type II. The second type of face matching strategy utilizes the outliers that are available around the meaningful facial area to be recognized. This type of face matching examines the effect of outliers and legitimate features employed together for face recognition. Outliers may be located on the forehead, above the legitimate and localized face area, around the face area to be considered, outside the meaningful area, on both ears, or on the head. However, the effect of outliers is limited, because the legitimate interest points are primarily detected in the major salient areas. This analysis may be useful when face area localization cannot be performed; sometimes the outliers are even an effective addition to the face matching process.

As in the Type I face matching strategy, we employ dimensionality reduction, here projecting the entire face onto a low-dimensional feature space using PCA, and construct six different face instances with principal components varying between one and six. We extract SIFT keypoints from the six multiscale face instances and create a set of feature vectors. The face matching task is performed using the k-NN approach, and matching scores are generated as matching proximities from a pair of reference and probe faces. The matching scores are passed through the fusion module and consolidated to form an integrated vector of matching proximities. Figure 5 demonstrates matching between pairs of face instances of an entire face, corresponding to the reference face and the probe face, for a certain number of principal components. Figure 6 shows the amount of variance captured by all principal components; because the first principal component explains less than 50% of the variance, we expect that additional components are needed. The first two principal components explain approximately two-thirds of the total variability in the face image depicted in Figure 5.

5. Fusion of Matching Proximities

5.1. Baseline Approach to Fusion. To fuse [26-28] the matching proximities computed from all matchers (based on principal components) and form a new vector, we apply two popular fusion rules, namely, "sum" and "max" [26]. Let ms = [ms_ij] be the match scores generated by multiple matchers (j), where i = 1, 2, ..., n and j = 1, 2, ..., m. Here, n denotes the number of match scores generated by each matcher, and m represents the number of matchers presented to the face matching process. Consider that the labels ω₀ and ω₁ denote two different classes, referred to as the genuine class and the imposter class, respectively. We can assign ms to either the label ω₀ or the label ω₁ based on the class-conditional probability. The probability of error can be minimized by applying Bayesian decision theory [29] as follows:

$$\text{Assign } \mathrm{ms} \to \omega_i \text{ if } P(\omega_i \mid \mathrm{ms}) > P(\omega_j \mid \mathrm{ms}), \quad i \neq j,\; i, j = 0, 1. \tag{1}$$

The posterior probability P(ω_i | ms) can be derived from the class-conditional density function p(ms | ω_i) using Bayes' formula as follows:

$$P(\omega_i \mid \mathrm{ms}) = \frac{p(\mathrm{ms} \mid \omega_i)\, P(\omega_i)}{p(\mathrm{ms})}. \tag{2}$$

Here, P(ω_i) is the prior probability of the class label ω_i, and p(ms) denotes the probability of encountering ms. Thus, (1) can be rewritten as follows:

$$\text{Assign } \mathrm{ms} \to \omega_i \text{ if } \mathrm{LR} > \tau, \quad \text{where } \mathrm{LR} = \frac{p(\mathrm{ms} \mid \omega_i)}{p(\mathrm{ms} \mid \omega_j)},\; i \neq j,\; i, j = 0, 1. \tag{3}$$

The ratio LR is known as the likelihood ratio, and τ is a predefined threshold. The class-conditional density p(ms | ω_i) can be determined from the training match score vectors using either parametric or nonparametric techniques. Moreover, the class-conditional probability density function can be extended to the "sum" and "max" fusion rules. The max fusion rule can be written as follows:

$$p(\mathrm{ms} \mid \omega_i) = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_i), \quad i = 0, 1. \tag{4}$$


Figure 3: Matching pairs of face instances with variations of the principal components for a pair of corresponding reference and probe face images.

Figure 4: Amount of variance accounted for by each principal component of the face image depicted in Figure 2.

Here, we replace the joint density function by maximizing the marginal densities. The marginal density p(ms_j | ω_i), for j = 1, 2, ..., m and i = 0, 1 (i refers to either a genuine sample or an imposter sample), can be estimated from the training vectors of the genuine and imposter scores corresponding to each of the m matchers. Therefore, we can rewrite (4) as follows:

$$\mathrm{FS}_{\max}^{\omega_0} = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_0), \qquad \mathrm{FS}_{\max}^{\omega_1} = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_1). \tag{5}$$

FS_max denotes the fused match scores obtained by fusing the m matchers in terms of the maximum scores.

We can easily extend the "max" fusion rule to the "sum" fusion rule by assuming that the posterior probability does not deviate significantly from the prior probability. Therefore, we can write the equation for fusing the marginal densities, known as the "sum" rule, as follows:

$$\mathrm{FS}_{\mathrm{sum}}^{\omega_0} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_0), \qquad \mathrm{FS}_{\mathrm{sum}}^{\omega_1} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_1). \tag{6}$$

Independently, we apply the "max" and "sum" fusion rules to the genuine and imposter scores corresponding to each of the six matchers, which are determined from the six different face instances for which the principal components vary from one to six. Prior to the fusion of the matching proximities produced by the multiple matchers, the proximities need to be normalized so that the data are mapped to the range [0, 1]. In this case, we use the min-max normalization technique [26] to map the proximities to the specified range, and the T-norm cohort selection technique [30, 31] is applied to improve the performance. To generate matching scores, we apply the k-NN approach [32].

5.2. Cohort Selection Approach to Fusion. Recent studies suggest that cohort selection [30, 31] and cohort-based score normalization [30] can exhibit robust performance and increase the robustness of biometric systems. To understand the usability of cohort selection, a cohort pool is considered. A cohort pool is the set of matching scores obtained from the nonmatch templates in the database when a probe sample is matched against the reference samples. Among this set of scores of the corresponding reference samples, one template is identified as the claimed identity. This claimed identity is the true match to the probe sample, and its matching proximity is significant. Apart from the claimed identity, the remaining matched scores are known as cohort scores. We refer to the match score determined from the claimed identity as the true matching proximity. However, the cohort scores and the score determined from the claimed identity exhibit similar degradation. To improve the performance of the proposed system, we need to normalize the true matching proximity using the cohort scores.


Figure 5: Face matching strategy, showing matching between pairs of face instances with outliers.

Figure 6: Amount of variance accounted for by each principal component of the face image depicted in Figure 5.

We can apply simple statistics, such as the mean, standard deviation, and variance, to compute the normalized score of the true reference template using the T-norm cohort normalization technique. We assume that the "most similar cohort scores" and the "most dissimilar cohort scores" contribute to the computation of normalized scores that carry more discriminatory information than the raw matching score. As a result, the false rejection rate may decrease, and the system can successfully identify a subject from the pool of reference templates.
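T-norm itself reduces to one line of arithmetic; a minimal sketch (illustrative names):

```python
import numpy as np

def t_norm(raw_score, cohort_scores):
    """Normalize a raw match score by the mean and standard deviation
    of its cohort scores (T-norm)."""
    c = np.asarray(cohort_scores, dtype=float)
    return (raw_score - c.mean()) / (c.std() + 1e-12)
```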

Two types of probe samples exist: genuine probe samples and imposter probe samples. When a genuine probe face is compared with the cohort models, the best-matched cohort model and a few models among the remaining cohort models are expected to be very similar, due to the similarity among the corresponding faces. The matching of a genuine probe face with the true cohort model and with the remaining cohort models produces matching scores with the lowest similarity when the true matched template and the remaining templates in the database are dissimilar. The comparison of an imposter face with the reference templates in the database generates matching scores that are independent of the set of cohort models.

Although cohort-based score normalization is considered extra overhead for the proposed system, it can improve performance. The computational complexity increases with the number of comparisons against the cohort models. To reduce the overhead of integrating a cohort model, we select a subset of cohort models that contains the majority of the discriminating information, and we combine this cohort subset with the true match score to obtain a normalized score. This cohort subset is known as an "ordered cohort subset," which contains the majority of the discriminatory information. We can select a cohort subset for each true match template in the database to normalize each true match score when we have a number of probe faces to compare. In this context, we propose a novel cohort subset selection method that utilizes heuristic cohort selection statistics. Because the cohort selection strategy is substantially inspired by heuristic-based T-statistics and a baseline heuristic approach, we refer to this method as hybrid heuristic statistics, in which two-stage filtering is performed to generate the majority of the discriminating cohort scores.

5.2.1. Methodology: Hybrid Heuristics Cohort. The proposed statistics begin with a cohort score set χ_i = {x_i1, x_i2, ..., x_in}, where i ∈ {genuine scores, imposter scores} and n is the number of cohort scores, consisting of the genuine and imposter scores presented in the set χ. Therefore, each score is labeled with x_ij ∈ {genuine scores, imposter scores}, j ∈ {1, 2, ..., n}. From the cohort score set, we can calculate the mean and standard deviation for the class labels genuine and imposter. Let μ^genuine and μ^imposter be the mean values, and let δ^genuine and δ^imposter be the standard deviations for the two class labels. Using T-statistics [33], we can determine a set of correlation scores that correspond to the cohort scores:

$$T(x_j) = \frac{\left| \mu^{\mathrm{genuine}} - \mu^{\mathrm{imposter}} \right|}{\sqrt{\left(\delta_j^{\mathrm{genuine}}\right)^2 / n^{\mathrm{genuine}} + \left(\delta_j^{\mathrm{imposter}}\right)^2 / n^{\mathrm{imposter}}}}. \tag{7}$$

In (7), n^genuine and n^imposter are the numbers of cohort scores labeled genuine and imposter, respectively. We calculate all correlation scores, make a list of all T(x_j) scores, and then construct a search space that includes these correlation scores.

Because (7) exhibits a correlation between cohort scores, it can be extended to the baseline heuristics approach in the second stage of the hybrid heuristic-based cohort selection method. The objective of the proposed cohort selection method is to select the cohort scores that correspond to the two subsets of highest and lowest correlation scores obtained by (7). These two subsets of cohort scores constitute the cohort subset. We collect the correlation scores in a FRINGE data structure, or OPEN list; starting with this initial score, we expand the fringe by adding more correlation scores to it. We also maintain another list, which we refer to as the CLOSED list. After calculating the first T(x_j) score in the fringe, we omit this score from the fringe and expand it. The next two correlation scores are removed from the search space but retained in the fringe, and the correlation score that was removed from the fringe is added to the CLOSED list. When the fringe contains two scores, we arrange them in decreasing order by sorting, and we remove the maximum score from the fringe. This maximum score is then added to the CLOSED list, which is maintained in nonincreasing order. We repeat this recursive process in each iteration until the search space is empty. After all correlation scores have moved from the fringe to the CLOSED list, we obtain a sorted list. The sorted scores in the CLOSED list are divided into three parts, and the first part and the last part are merged to create a single list of correlation scores that exhibit the most discriminating features. We establish the cohort subset by selecting the most promising cohort scores, which correspond to the retained correlation scores in the CLOSED list.
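Our reading of this two-stage procedure can be condensed as follows. The sketch assumes that "first part and last part" means the top and bottom thirds of the sorted correlation scores, and that t_scores holds the T(x_j) values from (7); the names are ours.

```python
import numpy as np

def select_cohort_subset(cohort_scores, t_scores):
    """Rank cohort scores by their T-statistic correlations (eq. (7)),
    then keep the top and bottom thirds as the ordered cohort subset."""
    order = np.argsort(t_scores)[::-1]   # CLOSED list, nonincreasing
    k = max(1, len(order) // 3)          # size of each retained part
    keep = np.concatenate((order[:k], order[-k:]))
    return np.asarray(cohort_scores)[keep]
```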

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique. T-norm relies on the property that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decision, we consolidate the normalized scores for the six different face models, depending on the number of principal components considered, which ranges between one and six.

6. Experimental Evaluation

The rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max and sum fusion rules. We have implemented the proposed method from two perspectives: the Type II perspective indicates that face recognition employs the face images provided in the databases without cropping, whereas the Type I perspective indicates that the face recognition technique uses a manually localized face area, cropped from the face image with the size of the face area fixed at 140 × 140 pixels. The faces of these two databases are presented against a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

Figure 7: Face images from the BioID database.

6.1. Databases

6.1.1. BioID Face Database. The face images presented in the BioID database [35] were recorded in a degraded environment and are primarily employed for face detection; however, we can also utilize this database for face recognition. Because the faces are captured with a variety of background information and illumination, the evaluation of this database is challenging. Here, we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray-level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces are captured against a white homogeneous background in an upright frontal position, and all images have scale changes of approximately 10%. Of the 2800 face images, the number of male contributors equals the number of female contributors; that is, 100 male and 100 female participants each contributed the same number of face images, totaling 1400 face images each. All images are in color, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired under uniform illumination against homogeneous backgrounds with neutral and smiling facial expressions. The database contains faces of varying ages, from 18 to 60 years.


Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we developed a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually random processes. Therefore, to address the biases of each random process, we perform an evaluation with a random distribution of the numbers of training samples and probe samples. However, the distribution is completely dependent on the database employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we distribute the face images equally among the training and test sets. The faces contributed by each person are divided into two sets, namely, the training set and the test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person contributes $m$ faces and the database size is $p$ ($p$ denotes the total number of face images). Let $q$ denote the total number of subjects/individuals who contributed $m$ face images each. To extend this protocol, we divide the $m$ faces into two equal groups of $m/2$ images and retain one group for the training/reference set and the other for the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the $m/2$ faces in the probe set that correspond to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension $q \times (m/2)$ and imposter scores of dimension $q \times (q-1) \times m$. The $k$-NN ($k$-nearest neighbor) metric is employed to generate common matching points between a pair of face images, and we employ a min-max normalization technique to normalize the match scores and map them to the range $[0, 1]$. In this manner, two sets of match scores of unequal dimensions are obtained for each matcher, where the face images are compared within intraclass ($\omega_i$) and interclass ($\omega_j$) sets, which we refer to as the genuine and imposter score sets.
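For concreteness, the score-set sizes and the min-max mapping can be sketched as follows, using the FEI figures quoted above ($q = 200$ subjects, $m = 14$ faces each); the helper names are ours:

    import numpy as np

    def min_max_normalize(scores):
        """Map raw match scores to [0, 1], as applied to the k-NN
        match scores before fusion."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    # Score-set dimensions for q subjects with m faces each (m/2 split):
    q, m = 200, 14
    n_genuine = q * (m // 2)        # q x (m/2)  = 1400 genuine scores
    n_imposter = q * (q - 1) * m    # q x (q-1) x m imposter scores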

6.3. Experimental Results and Analysis

6.3.1. On the FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face

Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

database, which contains neutral and smiling expressions of faces. The neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effects of (a) a cloud-enabled environment, (b) face matching without extracting the face area, (c) face matching with extraction of the face area, (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying principal components (one to six) under the conditions mentioned in (a), (b), and (c), and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system via GAR versus FAR curves for varying principal components. The boxplots show how the EER varies for different numbers of principal components.

Figure 9 shows a boxplot when the face area has not been extracted, and Figure 12 shows another boxplot when the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the number of principal components is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the principal component is 1, and the EER varies between 0% and 2.5%. However, an EER of 0.5% is determined for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. A maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the number of principal components is set to one. For the remaining cases, when the number of principal components varies between two and six, low EERs are obtained; a maximum EER is obtained when the number of principal components is set to four. However, the ROC curves in Figure 10 exhibit high recognition accuracies for all cases with the exception of the case when the principal component is set to 1. The ROC curves are shown for face images that are not localized. Thus, we decouple the information about the


Figure 10: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategies with varying number of principal components    EER (%)    Recognition accuracy (%)
Principal component 1                                                   7.07       92.93
Principal component 2                                                   0.5        99.5
Principal component 3                                                   0.97       99.03
Principal component 4                                                   1.01       98.99
Principal component 5                                                   1          99
Principal component 6                                                   1          99

legitimate face area from explicit information about nonface areas, such as the forehead, hair, ear, and chin areas. These areas may provide crucial information, which is considered additional information in a decoupled feature vector. These feature points contribute to recognizing faces with varying principal components. Figure 11 shows the recognition accuracies for different numbers of principal components determined on the FEI database when face localization is not performed. On the other hand, Figure 14 shows the recognition accuracies for different numbers of principal components determined on the FEI database when face localization is performed.

Figure 13 shows the ROC curves determined from extensive experiments of the proposed algorithm on the FEI face database when a face image is localized and only the face part is extracted. In this case, a maximum EER

Figure 11: Recognition accuracy versus principal component curve when the face localization task is not performed.

Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

of 6% is attained when the principal component is set to 1; in the other cases, the EERs are as low as the EERs in the cases with nonlocalized faces. Table 2 depicts the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show tremendous improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas that correspond to major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is accurate when the principal components vary between two and six, notably in the case of principal component 3, where an EER of 0% and a recognition accuracy of 100%, respectively, are attained.

Based on two major considerations, namely, with and without face localization, we investigate the proposed algorithm by fusing face instances under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules, the "max" fusion rule and the


Figure 13: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategies with varying number of principal components    EER (%)    Recognition accuracy (%)
Principal component 1                                                   6          94
Principal component 2                                                   0.5        99.5
Principal component 3                                                   0          100
Principal component 4                                                   0.5        99.5
Principal component 5                                                   0.5        99.5
Principal component 6                                                   1          99.0

"sum" fusion rule. However, the effects of localizing the face area and of not localizing the face area are investigated, and the conventions with these two fusion rules are integrated. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused in terms of matching scores by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained when the "max" fusion rule is applied. In this case, the "sum" fusion rule outperforms the "max" fusion rule. When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules of the fusion-based classifier, the "sum" and "max" fusion rules

Figure 14: Recognition accuracy versus principal component curve when the face localization task is performed.

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique is applied, which uses the T-norm normalization technique for match score normalization.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     0.0        100
Max fusion rule + min-max normalization                     1.5        98.5
Sum fusion rule + T-norm (hybrid heuristic)                 0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)                 0.5        99.5

achieve 99.5% recognition accuracy. In the general context, the proposed cohort selection method degrades recognition accuracy by 0.5% when compared with the "sum" fusion rule-based classifier without the cohort selection method. However, hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum, max), with 99.5% recognition accuracy. Table 3 lists the recognition accuracies, and Figure 16 exhibits the same for the "sum" and "max" fusion rules.

After evaluating the performance of each of the classifiers, in which the face instance is determined by setting the number of principal components to one value from 1 to 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. Similar to the previous approach, we apply two fusion rules to integrate the matching scores, namely, the "sum" and "max" fusion rules. In addition, we exploit the proposed statistical cohort selection technique, known as hybrid heuristic statistics. For cohort score normalization, we apply the T-norm normalization technique. This technique


Figure 15: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

Figure 16: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies when the cohort selection method is applied. These accuracies are obtained without face localization.

maps the cohort scores into a normalized score set that exhibits the characteristics of each of the scores in the cohort subset and enables the correct match to be rapidly obtained by the system. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and achieve 100% recognition accuracy in both cases, that is, when the "sum" and "max" fusion rules are applied. The effect of face localization serves a central role in increasing the

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     0.0        100
Max fusion rule + min-max normalization                     0.0        100
Sum fusion rule + T-norm (hybrid heuristic)                 0.0        100
Max fusion rule + T-norm (hybrid heuristic)                 0.0        100

Figure 17: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

recognition accuracy to 100% in the four cases. Due to face localization, however, the numbers of feature points extracted from the face images differ from those of the nonlocalized faces obtained when the cohort selection method is applied. These accuracies are obtained after a face is localized.

6.3.2. On the BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized. However, the faces provided in the BioID face database are captured in a variety


Figure 18: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

of environments and illumination conditions. In addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching technique is challenging, because the positions and locations of frontal-view images may be tracked in a variety of environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and yields encouraging results in these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly for the principal component range of

Figure 20: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategies with varying number of principal components    EER (%)    Recognition accuracy (%)
Principal component 1                                                   8.53       91.47
Principal component 2                                                   2.78       97.22
Principal component 3                                                   8.5        91.5
Principal component 4                                                   13.89      86.11
Principal component 5                                                   13.89      86.11
Principal component 6                                                   11.12      88.88

one to six. For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint (no localization of the face image) is validated. An EER of 2.78%, the lowest among all six EERs, is obtained. For principal components 4 and 5, we achieve an EER and recognition accuracy of 13.89% and 86.11%, respectively. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 shows the same phenomena via a curve whose points denote the recognition accuracies corresponding to principal components between one and six. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.


Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.

Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

After considering the Type II constraint, we consider the Type I constraint, in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results with the Type I constraint.

As shown in Figures 22 and 23, the principal component varies between one and six, whereas the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100% in recognizing faces. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components and different false acceptance rates (FARs). The principal components (2, 3, and 6) for which the recognition accuracy outperforms the other components yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components. The points on the curve that are

Figure 23: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategies with varying number of principal components    EER (%)    Recognition accuracy (%)
Principal component 1                                                   8.5        91.5
Principal component 2                                                   0          100
Principal component 3                                                   0          100
Principal component 4                                                   5.59       94.41
Principal component 5                                                   5.98       94.02
Principal component 6                                                   0          100

marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when localized faces are used.

To validate the Type II constraint for the fusions and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database. We also exploit the hybrid heuristics-based cohort selection together with the fusion rules. Considering the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and cohort selection technique are applied to nonlocalized faces. As shown in Table 7 and Figure 25, for the first two types of matching strategies, in which the "sum" and


Figure 24: Recognition accuracy versus principal component when the face localization task is performed.

Table 7: EER and recognition accuracies of the proposed face matching strategy on the BioID face database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     5.55       94.45
Max fusion rule + min-max normalization                     0          100
Sum fusion rule + T-norm (hybrid heuristic)                 0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)                 0          100

"max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, respectively, with EERs of 5.55% and 0%, respectively. In this case, the "max" fusion rule outperforms the "sum" fusion rule, which is further illustrated by the ROC curves shown in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, in which the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules.

In the last segment of the experiment, we observe the performance, measured in terms of recognition accuracy and EER, when faces are localized. The matching paradigms listed in Table 7 have thus also been verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques show abrupt performance by achieving 99.35% and 100% recognition accuracies. However, the minimal change in accuracy of 0.65% for the combination of the "sum" fusion rule and the hybrid heuristics is unreasonable, and the remaining combination of the "max" fusion rule and the hybrid heuristics-based cohort selection method achieves an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the

Figure 25: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained when a face is not localized.

Figure 26: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

"sum" rule, which is attributed to a change in the produced cohort subset for both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces appear under unrestricted conditions. Face recognition in the wild is challenging due to the nature of face acquisition in unconstrained environments. Not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth. As our framework is based on only


Table 8: EERs and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     0          100
Max fusion rule + min-max normalization                     0          100
Sum fusion rule + T-norm (hybrid heuristic)                 0.65       99.35
Max fusion rule + T-norm (hybrid heuristic)                 0          100

Figure 27: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

the first six principal components and SIFT features, it would require the incorporation of various tools: tools to detect and crop the face region, discard background as far as possible, and upsample or downsample the detected face regions to form uniform-dimensional face image vectors for PCA; and tools to estimate pose and then apply 2D frontalization to compensate for variance, leading to fewer principal components to consider.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face

Figure 28: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

recognition algorithms, which are limited in number, and some traditional face recognition systems, which are not enabled with cloud computing infrastructures. Two different perspectives are considered for the comparisons. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases to compare with other methods; the second perspective does not utilize the concept of cloud computing. To compare the experimental results, we compare the proposed system with two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy. Table 9 also lists the number of training samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based systems or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                      Face database    Number of face images    Number of distinct subjects    Recognition accuracy (%)    Error (%)
Cloud vision [36]           ORL              400                      40                             97.08                       2.92
Mobile cloud [37]           Local DB         50                       5                              85                          15
Facial features [38]        BioID            1521                     23                             92.35                       7.65
2D-PCA + PSO-SVM [39]       BioID            1521                     23                             95.65                       4.35
Proposed Type I             BioID            1521                     23                             100                         0
Proposed Type I             FEI              2800                     200                            100                         0
Proposed Type II            BioID            1521                     23                             97.22                       2.78
Proposed Type II            FEI              2800                     200                            99.5                        0.5
Proposed Type I + fusion    BioID            1521                     23                             100                         0
Proposed Type I + fusion    FEI              2800                     200                            100                         0
Proposed Type II + fusion   BioID            1521                     23                             100                         0
Proposed Type II + fusion   FEI              2800                     200                            100                         0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules as a function of the length of the input. The time complexity is estimated by counting the number of operations performed by the different modules or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each module is computed first, and then the overall complexity of the ensemble network is computed by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let $d$ be the number of pixels (height $\times$ width) of each grayscale face image, and let the number of face images be $N$. PCA computation has the following steps.

(i) Finding the mean of $N$ samples is $O(Nd)$ ($N$ additions of $d$-dimensional vectors, after which the sum is divided by $N$).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. So, for each of the $d(d+1)/2$ elements, $N$ multiplications and additions are required, leading to $O(Nd^2)$ time complexity. Let the $d \times N$ dimensional mean-centered data matrix be $M$.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of $MM^T$ (of dimension $d \times d$) we compute $M^T M$ (of dimension $N \times N$), which requires $d$ multiplications and $d$ additions for each of the $N^2$ elements, hence $O(N^2 d)$ time complexity (generally $N \ll d$).

(iv) Eigendecomposition of the $M^T M$ matrix by the SVD method requires $O(n^3)$.

(v) Sorting the eigenvectors (each $d \times 1$ dimensional) in descending order of the eigenvalues requires $O(N^2)$; taking only the first six principal-component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two vectors, resulting in a scalar value. Hence, $d$ multiplications and $d$ additions for each projection lead to $O(d)$, and six such projections require $O(6d)$.
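A minimal sketch of this computation (assuming an $N \times d$ matrix of vectorized training faces; the names are ours, not the paper's code):

    import numpy as np

    def eigenfaces(X, k=6):
        """Sketch of the PCA steps above via the Karhunen-Loeve trick:
        eigendecompose the small N x N matrix M^T M instead of the
        d x d covariance matrix M M^T."""
        N, d = X.shape
        mean = X.mean(axis=0)
        M = (X - mean).T                      # d x N mean-centered data matrix
        vals, vecs = np.linalg.eigh(M.T @ M)  # N x N eigendecomposition
        order = np.argsort(vals)[::-1][:k]    # top-k eigenvalues, descending
        U = M @ vecs[:, order]                # back-map to d-dim eigenfaces
        U /= np.linalg.norm(U, axis=0)        # unit-normalize each eigenface
        return mean, U

    # Projecting a probe image vector (six dot products for k = 6):
    # weights = (probe - mean) @ U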

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be $M \times N$ pixels, and let the face image be represented as a column vector of dimension $d$ ($M \times N$). A Gaussian kernel of dimension $w \times w$ is used. Each octave has $s+3$ scales, and a total of $L$ octaves is used. The important phases of SIFT are as follows.

(i) Extrema detection:

(a) Computing the scale-space:
(1) In each scale, $w^2$ multiplications and $w^2 - 1$ additions are done by the convolution operation for each pixel, so $O(dw^2)$.
(2) There are $s+3$ scales in each octave, so $O(dw^2(s+3))$.
(3) $L$ is the number of octaves, so $O(Ldw^2(s+3))$.
(4) Overall, $O(dw^2 s)$ is found.

(b) Computing the $s+2$ Difference of Gaussians (DoG) images for each of the $L$ octaves: $O((s+2) \cdot d \cdot 2k)$ with $k = 2^{1/s}$, so for $L$ octaves $O(sd)$.

(c) Extrema detection: $O(sd)$.

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let $\alpha d$ be the number of surviving pixels, so $O(\alpha ds)$.

(iii) Orientation assignment: $O(sd)$.

(iv) Keypoint descriptor computation: if a $p \times p$ neighborhood of the keypoint is considered, then $O(p^2 d)$ is found.

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, so the time complexity is linear, $O(n)$ with $n = 128$. Let the $i$th eigenface of a probe face image and a reference face image have $k_1$ and $k_2$ surviving keypoints, respectively. Each keypoint from the $k_1$ set is compared with each of the $k_2$ keypoints by Euclidean distance, so a single eigenface pair costs $O(k_1 k_2)$, which is $O(6n^2)$ for six pairs of eigenfaces ($n$ = number of keypoints). If there are $M$ reference faces in the gallery, then the total complexity is $O(6Mn^2)$.
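A brute-force sketch of this comparison (the acceptance threshold is an arbitrary placeholder, not a value from the paper):

    import numpy as np

    def count_matches(probe_desc, ref_desc, threshold=250.0):
        """Brute-force Euclidean comparison of 128-element SIFT
        descriptors, O(k1 * k2) per eigenface pair as analyzed above."""
        matches = 0
        for p in probe_desc:                              # k1 probe keypoints
            dists = np.linalg.norm(ref_desc - p, axis=1)  # k2 distances
            if dists.min() < threshold:                   # nearest neighbor accepted
                matches += 1
        return matches                                    # raw matching score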

(d) Time Complexity of Fusion. As the domains of the six individual matchers are different, the min-max normalization technique is used to bring them into a uniform domain. Computing each normalized value requires two subtraction operations and one division operation, so it takes constant time, $O(1)$. The $n$ pairs of probe and gallery images in the $i$th principal component require $O(n)$, so six principal components require $O(6n)$. Finally, the sum fusion requires five summations for each pair of probe and reference faces, a constant time $O(1)$; subsequently, $n$ pairs of probe and reference face images require $O(n)$.
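The normalization-plus-fusion step can be sketched as follows (the score layout, one array per principal-component matcher aligned by face pair, is an assumption of this sketch):

    import numpy as np

    def fuse(score_sets, rule="sum"):
        """Min-max normalize each matcher's score set, then combine the
        six normalized sets with the 'sum' or 'max' fusion rule."""
        norm = []
        for s in score_sets:
            s = np.asarray(s, dtype=float)
            norm.append((s - s.min()) / (s.max() - s.min()))
        stacked = np.vstack(norm)    # 6 x n matrix of normalized scores
        if rule == "sum":
            return stacked.sum(axis=0)   # five additions per face pair
        return stacked.max(axis=0)       # 'max' fusion rule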

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations to be performed: computation of the correlations in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally division of the CLOSED list of $n$ correlation values into three disjoint sets. The first two operations take constant time, the third operation takes $O(n^2)$ as it follows the convention of insertion sort, and the last operation takes linear time. Overall, the time required by cohort selection is $O(n^2)$.

Now the overall time complexity $T$ of the ensemble network is calculated as follows:

$$T = \max\bigl(\max(T_1) + \max(T_2) + \max(T_3) + \max(T_4) + \max(T_5)\bigr) = \max\bigl(O(n^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)\bigr) \approx O(n^3), \qquad (8)$$

where $T_1$ is the time complexity of PCA computation, $T_2$ is the time complexity of SIFT keypoint extraction, $T_3$ is the time complexity of matching, $T_4$ is the time complexity of fusion, and $T_5$ is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is $O(n^3)$, while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed sets of principal components, ranging from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The $k$-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve the total performance of the recognition system in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, a mobile phone, or a tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I629–I632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Vob, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506–II513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and A. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, In press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 2012 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.


Figure 1: Cloud-enabled face recognition infrastructure: SaaS applications, PaaS software tools, and IaaS infrastructures (storage/DB) in the cloud, hosting the face image, eigenface, SIFT points, and matching proximity modules for enrollment and recognition.

of the original image (they are not the eigenfaces) by varying the principal components from one to six at one distance unit.

(d) From each instance representation, SIFT points [17, 18] are extracted in the scale-space to form an encoded feature vector of keypoint descriptors ($K_i$), because the keypoint descriptors, rather than the spatial location, scale, and orientation, are considered feature points.

(e) SIFT interest points extracted from the six different face instances (npc = 1, 2, 3, 4, 5, and 6) of a target face form six different feature vectors. They are employed to separately match the feature vectors obtained from a probe face; npc refers to the number of principal components.

(f) In this step, matching proximities are determined from the different matching modules and are subsequently fused using the "sum" and "max" fusion rules; a decision is made based on the fused matching scores.

(g) To enhance the performance and reduce the computational complexity, we exploit a heuristic-based cohort selection method during matching and apply the T-norm normalization technique to normalize the cohort scores.

3. Brief Review of SIFT Descriptor and PCA

In this section, the scale-invariant feature transform (SIFT) and principal component analysis (PCA) are described. Both the SIFT descriptor and PCA are well-known feature-based and appearance-based techniques that have been successfully employed in many face recognition systems.

The SIFT descriptor [17–19] has gained significant attention due to its invariant nature and its ability to detect stable interest points around the extrema. It has been proven to

be invariant to rotation, scaling, projective transform, and partial illumination. The SIFT descriptor is robust to image noise and low-level transformations of images. In the proposed approach, the SIFT descriptor can reduce the face matching complexity and computation time while detecting stable interest points on a face image. SIFT points are detected via a four-stage filtering approach, namely, (a) scale-space detection, (b) keypoint localization, (c) orientation assignment, and (d) keypoint descriptor computation. The keypoint descriptors are employed to generate feature vectors for face matching.
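As an illustration, this detection-and-matching pipeline can be reproduced with OpenCV's stock SIFT implementation. This is a sketch, not the authors' code; the file names are placeholders, and the ratio test shown is one common k-NN acceptance criterion:

    import cv2

    # Placeholder file names; any pair of grayscale face images works.
    ref = cv2.imread("reference_face.png", cv2.IMREAD_GRAYSCALE)
    probe = cv2.imread("probe_face.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                            # four-stage SIFT pipeline
    kp_r, desc_r = sift.detectAndCompute(ref, None)     # 128-d descriptors
    kp_p, desc_p = sift.detectAndCompute(probe, None)

    # k-NN matching (k = 2) with a ratio test to keep stable matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_p, desc_r, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    print(len(good), "matching keypoint pairs")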

The proposed face matching algorithm aims to produce multiple face representations (six face instances) determined from each column of the face image. These face instances exhibit distinctive characteristics determined by reducing the dimensionality of the features comprising the intensity values. The reduced dimensionality is achieved by applying a simple feature reduction technique known as principal component analysis (PCA). PCA projects a high-dimensional face image onto a low-dimensional feature space in which the face instances of higher variance, such as the eigenvectors of the observed variables, are determined. Details on PCA are provided in [11, 12].

4. Framework and Methodology

The main facial feature used for frontal face recognition in the proposed experiment is a set of 128-dimensional vectors of squared patches (regions) centered at detected and localized keypoints at multiple scales. Each vector describes the local structure around a keypoint under the computed scale. A keypoint detected in a uniform region is not discriminating, because a scale or rotational change does not make the point distinguishable from its neighbors. The SIFT-detected keypoints [17, 18] on a frontal face are basically various corner points, such as corners of the lips, corners of the eyes, and the nonuniform contour


between the nose and cheek, which exhibit intensity changes in two directions. SIFT detects these keypoints by approximating the Laplacian of Gaussian (LoG) in terms of the Difference of Gaussians (DoG). As the Gaussian pyramid builds images at various scales, the SIFT-extracted keypoints are scale invariant, and the computed descriptors remain discriminating from coarse to fine matching. The keypoints could have been detected by the Harris corner detection (not scale invariant) or Hessian corner detection methods, but many of the points detected by those methods might not be repeatable under large scale changes. Further, the 128-dimensional feature point descriptor obtained by the SIFT feature extraction method is orientation normalized and thus rotation invariant. Additionally, the SIFT feature descriptor is normalized to unit length to reduce the effect of contrast; the maximum value of each dimension of the vector is thresholded to 0.2, and the vector is normalized once again to make it robust to a certain range of irregular illumination.
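That normalization sequence can be sketched as follows (a minimal illustration of the steps just described):

    import numpy as np

    def normalize_descriptor(v, cap=0.2):
        """Unit-normalize a 128-d SIFT descriptor, clamp each dimension
        at 0.2, and renormalize, per the contrast and irregular-
        illumination handling described above."""
        v = np.asarray(v, dtype=float)
        v = v / np.linalg.norm(v)
        v = np.minimum(v, cap)       # threshold each dimension at 0.2
        return v / np.linalg.norm(v)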

The proposed face matching method has been developed with the concept of a varying number of principal components (npc = 1, 2, 3, 4, 5, and 6). These variations yield the following improvements as the system computes face instances for matching and recognition.

(a) The projection of a face image onto some face instances facilitates the construction of independent face matchers, which can vary in performance; SIFT descriptors are applied to extract invariant interest points, and each instance-based matcher is verified for producing matching proximities.

(b) An individual matcher exhibits its strength in recognizing faces, and the number of SIFT interest points extracted from each face instance changes substantially from one projected face to another with the effect of varying the number of principal components.

(c) Its performance as a robust system is rectified when the individual performances are consolidated into a single matcher by fusion of matching scores.

(d) Let $\varepsilon_i$, $i = 1, 2, \ldots, m$, be the $m$ eigenvalues arranged in descending order, and let $\varepsilon_i$ be associated with eigenvector $e_i$ (the $i$th principal eigenface in the face space). Then the percentage of variance accounted for by the $i$th principal component is $(\varepsilon_i / \sum_{i=1}^{m} \varepsilon_i) \times 100$ (a short sketch of this computation follows the list below). Generally, the first few principal components are sufficient to capture more than 95% of the variance, but that number of components depends on the training set of the image space; it varies with the face dataset used. In our experiments, we observed that taking as few as only 6 principal components gives a good result and captures variability that is very close to the total variability produced during the generation of multiple face instances.

Let the training face dataset contain $n$ instances, each of uniform size $p$ ($h \times w$ pixels); then the face space contains $n$ sample points in $p$ dimensions, and we can derive at most $n-1$ eigenvectors, but each eigenvector is still $p$-dimensional ($n \ll p$). Now, to compare two face images, each containing $p$ pixels (i.e., a $p$-dimensional vector), each face image must be projected onto each of the $n-1$ eigenvectors (each eigenvector represents one axis of the new $(n-1)$-dimensional coordinate system). So, from each $p$-dimensional face we derive $n-1$ scalar values by the dot product of the mean-centered image space face with each of the $n-1$ face space eigenvectors. In the backward direction, given the $n-1$ scalar values, we can reconstruct the original face image by a weighted combination of these eigenfaces plus the mean. In this reconstruction process, the $i$th eigenface contributes more than the $(i+1)$th eigenface when they are ordered by decreasing eigenvalues. How accurate the reconstruction is depends on how many principal components (say $k$, $k = 1, 2, \ldots, n-1$) we take into consideration. Practically, $k$ need not equal $n-1$ to satisfactorily reconstruct the face. After a specific value of $k$ (say $t$), the contribution from the $(t+1)$th eigenvector up to the $(n-1)$th eigenvector is so negligible that it may be discarded without losing significant information; indeed, there are methods such as the Kaiser criterion (discard eigenvectors corresponding to eigenvalues less than 1) [11, 12] and the Scree test. Sometimes the Kaiser method retains too many eigenvectors, whereas the Scree test retains too few. In essence, the exact value of $t$ is dataset dependent. In Figures 4 and 6 it is clearly shown that, as we continue to add one more principal component, the captured variability increases rapidly within the first 6 components. But beyond the first 6 principal components, the variance capture is almost flat and does not reach the total variability (the 100% marked line) until the last principal component. So, despite the small contribution that principal components 7 onwards have, they cannot be redundant.

(e) Distinctive and classified characteristics detected in the reduced low-dimensional face instance support the integration of local texture information with local shape distortion and the illumination changes of neighboring pixels around each keypoint, which comprises a vector of 128 invariant elements.
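To make item (d) concrete, the following sketch computes the percentage of variance explained by each principal component from a descending-ordered eigenvalue array; the eigenvalues shown are illustrative only, not values from the FEI or BioID experiments.

```python
import numpy as np

def variance_explained(eigvals):
    """Percentage of variance accounted for by each principal
    component: (eps_i / sum_i eps_i) * 100, with eigenvalues
    sorted in descending order."""
    eigvals = np.asarray(eigvals, dtype=float)
    return 100.0 * eigvals / eigvals.sum()

# Illustrative (hypothetical) eigenvalue spectrum
eigvals = np.array([70.0, 12.0, 6.0, 4.0, 2.0, 1.5, 1.0, 0.5])
pct = variance_explained(eigvals)
print(pct[:6].sum())   # share of variability captured by the first six components
```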

The proposed methodology is performed from two different perspectives: the first system is implemented without face detection and localization, and the second system implements a face matcher on a localized and detected face image.

4.1. Face Matching Type I. During the initial stage of face recognition, we enhance the contrast of a face image by applying a histogram equalization technique. We apply this contrast enhancement to increase the count of SIFT points detected in the local scale-space of a face image. The face area is localized and aligned using the algorithm given in [24]. In the subsequent steps, the face area is projected onto the low-dimensional feature space, and an approximated face instance is formed. SIFT keypoints are


Figure 2: Six different face instances of a single cropped face image with variations of principal components.

detected and extracted from this approximated face instance, and a feature vector consisting of interest points is created. In this experiment, six different face instances are generated by varying the number of principal components from one to six, and they are derived from a single face image.

The PCA-characterized face instances are shown in Figure 2; they are arranged according to the order of the considered principal components. The same set of PCA-characterized face instances is extracted from a probe face, and feature vectors consisting of SIFT interest points are formed. Matching is performed with the corresponding face instances in terms of the SIFT points obtained from the reference face instances. We apply a $k$-nearest neighbor ($k$-NN) approach [26] to establish correspondences and obtain the number of pairs of matching keypoints; a sketch of this instance-generation and matching step follows. Figure 3 depicts matching pairs of SIFT keypoints on two sets of face instances, which correspond to reference and probe faces. Figure 4 shows the amount of variance captured by all principal components; because the first principal component explains approximately 70% of the variance, we expect that additional components are probably needed. The first four principal components explain the total variability in the face image depicted in Figure 2.
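A compact sketch of this pipeline is given below, assuming OpenCV with the SIFT module (cv2.SIFT_create, available in opencv-python >= 4.4). The function names are ours, and Lowe's ratio test stands in here for the paper's $k$-NN correspondence step.

```python
import cv2
import numpy as np

def face_instance(face_vec, mean, eigvecs, npc, shape):
    """Reconstruct a PCA-characterized face instance from the
    first `npc` eigenfaces (columns of `eigvecs`)."""
    coeffs = eigvecs[:, :npc].T @ (face_vec - mean)
    recon = mean + eigvecs[:, :npc] @ coeffs
    img = recon.reshape(shape)
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def count_matching_keypoints(img_a, img_b, ratio=0.75):
    """Count matching SIFT keypoint pairs between two face
    instances using k-NN (k = 2) with a ratio test."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)
```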

4.2. Face Matching Type II. The second type of face matching strategy utilizes the outliers available around the meaningful facial area to be recognized. This type of face matching examines the effect of outliers and legitimate features employed together for face recognition. Outliers may be located on the forehead, above the legitimate and localized face area, around the face area to be considered, outside the meaningful area, on both ears, and on the head. However, the effect of outliers is limited because the legitimate interest points are primarily detected in major salient areas. This analysis is useful because face area localization sometimes cannot be performed, and sometimes outliers are an effective addition to the face matching process.

As in the Type I face matching strategy, we employ dimensionality reduction, project the entire face onto a low-dimensional feature space using PCA, and construct six different face instances with principal components varying between one and six. We extract SIFT keypoints from the six multiscale face instances and create a set of feature vectors. The face matching task is performed using the $k$-NN approach, and matching scores are generated as matching proximities from a pair of reference and probe faces. The matching scores are passed through the fusion module and consolidated to form an integrated vector of matching proximities. Figure 5 demonstrates matching between pairs of face instances of an entire face, corresponding to the reference face and probe face, for a certain number of principal components. Figure 6 shows the amount of variance captured by all principal components; because the first principal component explains less than 50% of the variance, we expect that additional components are needed. The first two principal components explain approximately two-thirds of the total variability in the face image depicted in Figure 5.

5. Fusion of Matching Proximities

5.1. Baseline Approach to Fusion. To fuse [26–28] the matching proximities computed from all matchers (based on principal components) and form a new vector, we apply two popular fusion rules, namely, "sum" and "max" [26]. Let $\mathrm{ms} = [\mathrm{ms}_{ij}]$ be the match scores generated by multiple matchers ($j$), where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$. Here, $n$ denotes the number of match scores generated by each matcher, and $m$ represents the number of matchers presented to the face matching process. Consider the labels $\omega_0$ and $\omega_1$ to be two different classes, referred to as the genuine class and the imposter class, respectively. We can assign $\mathrm{ms}$ to either label $\omega_0$ or label $\omega_1$ based on the class-conditional probability. The probability of error can be minimized by applying Bayesian decision theory [29] as follows:

Assign $\mathrm{ms} \to \omega_i$ if
$P(\omega_i \mid \mathrm{ms}) > P(\omega_j \mid \mathrm{ms})$, for $i \neq j$, $i, j = 0, 1$. (1)

The posterior probability $P(\omega_i \mid \mathrm{ms})$ can be derived from the class-conditional density function $p(\mathrm{ms} \mid \omega_i)$ using Bayes' formula as follows:

$P(\omega_i \mid \mathrm{ms}) = \dfrac{p(\mathrm{ms} \mid \omega_i)\, P(\omega_i)}{p(\mathrm{ms})}$. (2)

Here, $P(\omega_i)$ is the prior probability of the class label $\omega_i$, and $p(\mathrm{ms})$ denotes the probability of encountering $\mathrm{ms}$. Thus, (1) can be rewritten as follows:

Assign $\mathrm{ms} \to \omega_i$ if
$\mathrm{LR} > \tau$, where $\mathrm{LR} = \dfrac{p(\mathrm{ms} \mid \omega_i)}{p(\mathrm{ms} \mid \omega_j)}$, $i \neq j$, $i, j = 0, 1$. (3)

The ratio LR is known as the likelihood ratio, and $\tau$ is a predefined threshold. The class-conditional density $p(\mathrm{ms} \mid \omega_i)$ can be determined from the training match score vectors using either parametric or nonparametric techniques. The class-conditional probability density function can then be extended to the "sum" and "max" fusion rules. The max fusion rule can be written as follows:

$p(\mathrm{ms} \mid \omega_i) = \max_{j = 1, 2, \ldots, m} \, p(\mathrm{ms}_j \mid \omega_i), \quad i = 0, 1$. (4)



Figure 3: Matching pairs of face instances with variations of principal components for a pair of corresponding reference and probe face images.


Figure 4: Amount of variance accounted for by each principal component of the face image depicted in Figure 2.

Here, we replace the joint density function by maximizing the marginal densities. The marginal density $p(\mathrm{ms}_j \mid \omega_i)$, for $j = 1, 2, \ldots, m$ and $i = 0, 1$ ($i$ refers to either a genuine sample or an imposter sample), can be estimated from the training vectors of the genuine and imposter scores corresponding to each of the $m$ matchers. Therefore, we can rewrite (4) as follows:

$\mathrm{FS}_{\max}^{\omega_0} = \max_{j = 1, 2, \ldots, m} \, p(\mathrm{ms}_j \mid \omega_0)$,
$\mathrm{FS}_{\max}^{\omega_1} = \max_{j = 1, 2, \ldots, m} \, p(\mathrm{ms}_j \mid \omega_1)$. (5)

$\mathrm{FS}_{\max}$ denotes the fused match scores obtained by fusing the $m$ matchers in terms of the maximum scores.

We can easily extend the "max" fusion rule to the "sum" fusion rule by assuming that the posterior probability does not deviate significantly from the prior probability. Therefore, we can write an equation for fusing the marginal densities, known as the "sum" rule, as follows:

$\mathrm{FS}_{\mathrm{sum}}^{\omega_0} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_0)$,
$\mathrm{FS}_{\mathrm{sum}}^{\omega_1} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_1)$. (6)

We independently apply the "max" and "sum" fusion rules to the genuine and imposter scores corresponding to each of the six matchers, which are determined from six different face instances whose principal components vary from one to six. Prior to the fusion of the matching proximities produced by multiple matchers, the proximities need to be normalized, mapping the data to the range [0, 1]. In this case, we use the min-max normalization technique [26] to map the proximities to the specified range, and the T-norm cohort selection technique [30, 31] is applied to improve performance. To generate matching scores, we apply the $k$-NN approach [32].
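As a minimal sketch (ours, not the paper's code), the min-max mapping and the two fusion rules over the six matchers can be written as follows; `score_matrix` is assumed to hold one row of matching proximities per matcher.

```python
import numpy as np

def min_max(scores):
    """Map raw matching proximities to the range [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(score_matrix, rule="sum"):
    """Fuse normalized scores from m matchers (rows) over n
    comparisons (columns) using the 'sum' or 'max' rule."""
    normed = np.vstack([min_max(row) for row in score_matrix])
    return normed.sum(axis=0) if rule == "sum" else normed.max(axis=0)

# Example: six matchers (npc = 1..6), four probe-reference comparisons
scores = np.random.rand(6, 4)
print(fuse(scores, "sum"), fuse(scores, "max"))
```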

5.2. Cohort Selection Approach to Fusion. Recent studies suggest that cohort selection [30, 31] and cohort-based score normalization [30] can exhibit robust performance and increase the robustness of biometric systems. To understand the usability of cohort selection, consider a cohort pool: a set of matching scores obtained from nonmatch templates in the database when a probe sample is matched against the reference samples in the database. The matching process generates matching scores; among this set of scores of the corresponding reference samples, one template is identified as the claimed identity. This claimed identity is the true match to the probe sample, and its matching proximity is significant. Apart from the claimed identity, the remaining matched scores are known as cohort scores. We refer to the match score determined from the claimed identity as the true matching proximity. However, the cohort scores and the score determined from the claimed identity exhibit similar degradation. To improve the performance of the proposed system, we need to normalize the true matching proximity using the cohort



Figure 5: Face matching strategy showing matching between pairs of face instances with outliers.


Figure 6: Amount of variance accounted for by each principal component of the face image depicted in Figure 5.

scores. We can apply simple statistics, such as the mean, standard deviation, and variance, to compute the normalized score of the true reference template using the T-norm cohort normalization technique. We assume that the "most similar cohort scores" and the "most dissimilar cohort scores" contribute to the computation of normalized scores, which carry more discriminatory information than the raw matching score. As a result, the false rejection rate may decrease, and the system can successfully identify a subject from a pool of reference templates.

Two types of probe samples exist: genuine probe samples and imposter probe samples. When a genuine probe face is compared with the cohort models, the best-matched cohort model and a few of the remaining cohort models are expected to be very similar, owing to the similarity among the corresponding faces. Matching a genuine probe face with the true cohort model and with the remaining cohort models produces matching scores with the lowest similarity when the true matched template and the remaining templates in the database are dissimilar. The comparison of an imposter face with the reference templates in the database generates matching scores that are independent of the set of cohort models.

Although cohort-based score normalization is considered extra overhead for the proposed system, it can improve performance. Computational complexity will increase if the number of comparisons exceeds the number of cohort models. To reduce the overhead of integrating a cohort model, we select a subset of cohort models that contains the majority of the discriminating information, and we combine this cohort subset with the true match score to obtain a normalized score. This cohort subset is known as an "ordered cohort subset," containing the majority of the discriminatory information. We can select a cohort subset for each true match template in the database to normalize each true match score when we have a number of probe faces to compare. In this context, we propose a novel cohort subset selection method that utilizes heuristic cohort selection statistics. Because the cohort selection strategy is substantially inspired by heuristic-based $T$-statistics and a baseline heuristic approach, we refer to this method as hybrid heuristics statistics, in which two-stage filtering is performed to generate the majority of the discriminating cohort scores.

5.2.1. Methodology: Hybrid Heuristics Cohort. The proposed statistics begin with a cohort score set $\chi^i = \{x_{i1}, x_{i2}, \ldots, x_{in}\}$, where $i \in \{\text{genuine scores}, \text{imposter scores}\}$ and $n$ is the number of cohort scores, consisting of the genuine and imposter scores present in the set $\chi$. Each score is therefore labeled $x_{ij} \in \{\text{genuine scores}, \text{imposter scores}\}$, with $j \in \{1, 2, \ldots, n\}$ sample scores. From the cohort score set, we can calculate the mean and standard deviation for the genuine and imposter class labels. Let $\mu^{\text{genuine}}$ and $\mu^{\text{imposter}}$ be the mean values, and let $\delta^{\text{genuine}}$ and $\delta^{\text{imposter}}$ be the standard deviations for the two class labels. Using $T$-statistics [33], we can determine a set of correlation scores corresponding to the cohort scores:

$T(x_j) = \dfrac{\left|\mu^{\text{genuine}} - \mu^{\text{imposter}}\right|}{\sqrt{\left(\delta_j^{\text{genuine}}\right)^2 / n^{\text{genuine}} + \left(\delta_j^{\text{imposter}}\right)^2 / n^{\text{imposter}}}}$. (7)

In (7), $n^{\text{genuine}}$ and $n^{\text{imposter}}$ are the numbers of cohort scores labeled genuine and imposter, respectively. We calculate all correlation scores and make a list of all $T(x_j)$


scores. Then, we construct a search space that includes these correlation scores.

Because (7) exhibits a correlation between cohort scores, it can be extended to the baseline heuristics approach in the second stage of the hybrid heuristic-based cohort selection method. The objective of the proposed cohort selection method is to select the cohort scores corresponding to the two subsets of highest and lowest correlation scores obtained by (7); these two subsets of cohort scores constitute the cohort subset. We initially collect the correlation scores in a FRINGE data structure, or OPEN list; starting with this initial score, we expand the fringe by adding more correlation scores to it. We also maintain another list, which we refer to as the CLOSED list. After calculating the first $T(x_j)$ score in the fringe, we remove this score from the fringe and expand it. The next two correlation scores from the search space are removed but retained in the fringe, and the correlation score removed from the fringe is added to the CLOSED list. Because the fringe contains two scores, we arrange them in decreasing order by sorting them and removing the maximum score from the fringe; this maximum score is then added to the CLOSED list and kept in nonincreasing order with the other scores there. We repeat this recursive process in each iteration until the search space is empty. After expanding the search space by moving all correlation scores from the fringe to the CLOSED list, we obtain a sorted list. The sorted scores in the CLOSED list are divided into three parts; the first part and the last part are merged to create a single list of correlation scores that exhibit the most discriminating features. We establish a cohort subset by selecting the most promising cohort scores, which correspond to these correlation scores on the CLOSED list. A condensed sketch of this selection follows.
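Operationally, the fringe/CLOSED-list bookkeeping amounts to ranking the cohort scores by their T-statistic from (7) and keeping the top and bottom thirds. The following is a condensed sketch under that reading; the per-score statistics are assumed to be precomputed, and the helper names are ours.

```python
import numpy as np

def t_statistic(mu_g, mu_i, sd_g, sd_i, n_g, n_i):
    """T-statistic of (7) for one cohort score."""
    return abs(mu_g - mu_i) / np.sqrt(sd_g**2 / n_g + sd_i**2 / n_i)

def hybrid_heuristic_subset(cohort_scores, t_scores):
    """Rank cohort scores by T-statistic (the sorted CLOSED list),
    split into three parts, and merge the first and last parts:
    the most and least correlated cohorts form the cohort subset."""
    closed = np.argsort(t_scores)[::-1]        # nonincreasing order
    third = max(1, len(closed) // 3)
    keep = np.concatenate([closed[:third], closed[-third:]])
    return np.asarray(cohort_scores)[keep]

# Example with hypothetical cohort scores and precomputed T-statistics
scores, tvals = np.random.rand(9), np.random.rand(9)
print(hybrid_heuristic_subset(scores, tvals))
```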

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique. T-norm relies on the property that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decision, we consolidate the normalized scores for the six different face models, with the considered principal components ranging between one and six.
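A brief T-norm sketch, assuming the selected cohort subset is available:

```python
import numpy as np

def t_norm(raw_score, cohort_subset):
    """T-norm: center the raw match score by the cohort mean and
    scale by the cohort standard deviation."""
    c = np.asarray(cohort_subset, dtype=float)
    return (raw_score - c.mean()) / (c.std() + 1e-12)
```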

6. Experimental Evaluation

A rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max and sum fusion rules. We implement the proposed method from two perspectives: the Type II perspective indicates that face recognition employs the face images provided in the databases without cropping, and the Type I perspective

Figure 7: Face images from the BioID database.

indicates that the face recognition technique uses a manually localized face area after cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces in these two databases appear against a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

6.1. Databases

6.1.1. BioID Face Database. The face images in the BioID [35] database were recorded in a degraded environment and are primarily employed for face detection; however, we can also utilize this database for face recognition. Because the faces are captured with a variety of background information and illumination, evaluation on this database is challenging. Here, we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray-level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces are captured against a white homogeneous background in an upright frontal position, and all images have scale changes of approximately 10%. Of the 2800 face images, the number of male contributors is equal to the number of female contributors; that is, 100 male and 100 female participants each contributed an equal number of face images, totaling 1400 face images per group. All images are in color, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired under uniform illumination against homogeneous backgrounds, with neutral and smiling facial expressions. The database contains faces of ages varying from 18 to 60 years.


Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we developed a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually random processes; therefore, to address the biases of each random process, we perform the evaluation with a random distribution of the numbers of training and probe samples. The distribution, however, depends entirely on the database employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we distribute the face images equally between the training and test sets: the faces contributed by each person are divided into two sets, namely, the training set and the test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person contributes $m$ faces and that the database size is $p$ ($p$ denotes the total number of face images). Let $q$ denote the total number of subjects/individuals who contributed the $m$ faces. Under this protocol, we divide the $m$ faces into two equal groups of $m/2$ images and retain one group for the training/reference set and the other for the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the $m/2$ faces in the probe set corresponding to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension $q \times (m/2)$ and imposter scores of dimension $q \times (q - 1) \times m$. The $k$-NN ($k$-nearest neighbor) metric is employed to generate common matching points between a pair of face images, and we employ the min-max normalization technique to normalize the match scores and map them to the range [0, 1]. In this manner, two sets of match scores of unequal dimensions, corresponding to a matcher, are obtained as the face images are compared within intraclass ($\omega_i$) sets and interclass ($\omega_j$) sets, which we refer to as the genuine and imposter score sets; the score-set sizes implied by this protocol are illustrated below.
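As referenced above, the score-set sizes implied by this protocol can be checked with a few lines; the FEI figures ($q = 200$ subjects, $m = 14$ faces) serve as the worked example.

```python
def score_set_sizes(q, m):
    """Genuine and imposter score-set sizes under the protocol:
    q subjects, m faces each, split into two halves."""
    genuine = q * (m // 2)          # q x (m / 2)
    imposter = q * (q - 1) * m      # q x (q - 1) x m
    return genuine, imposter

print(score_set_sizes(200, 14))     # FEI: (1400, 557200)
```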

6.3. Experimental Results and Analysis

6.3.1. On the FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face database.


Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

The FEI database contains neutral and smiling expressions of faces; the neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We performed several experiments to analyze the effects of (a) a cloud-enabled environment, (b) face matching without extracting the face area, (c) face matching with an extracted face area, (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying principal components (one to six) under the conditions mentioned in (a), (b), and (c), and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system via GAR versus FAR curves for varying principal components, and the boxplots show how the EER varies for different numbers of principal components; a sketch of how the EER is computed from the score sets follows.
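The EER values reported below are the operating points where FAR and FRR coincide on the ROC curve. The following is a minimal sketch of this computation from normalized genuine and imposter score sets (our own helper, not the authors' evaluation code):

```python
import numpy as np

def equal_error_rate(genuine, imposter, steps=1000):
    """Sweep a threshold over [0, 1] and return the operating point
    where FAR (imposters accepted) and FRR (genuines rejected)
    are closest; their average approximates the EER."""
    genuine = np.asarray(genuine, dtype=float)
    imposter = np.asarray(imposter, dtype=float)
    best_gap, eer = 1.0, 1.0
    for t in np.linspace(0.0, 1.0, steps):
        far = np.mean(imposter >= t)
        frr = np.mean(genuine < t)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```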

Figure 9 shows a boxplot for the case in which the face area has not been extracted, and Figure 12 shows another boxplot for the case in which the localized face has been extracted for recognition. In the boxplot of Figure 9, the EER exceeds 7% when the principal component is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the principal component is 1, and the EER varies between 0% and 2.5%; however, an EER of 0.5% is obtained for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. A maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the principal component is set to one; for the remaining cases, with the principal component varying between two and six, low EERs are obtained, the maximum of which occurs when the principal component is set to four. The ROC curves in Figure 10, shown for the case in which face images are not localized, exhibit high recognition accuracies for all cases except when the principal component is set to 1.



Figure 10: Receiver operating characteristic (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy (number of principal components) | EER (%) | Recognition accuracy (%)
Principal component 1 | 7.07 | 92.93
Principal component 2 | 0.5 | 99.5
Principal component 3 | 0.97 | 99.03
Principal component 4 | 1.01 | 98.99
Principal component 5 | 1 | 99
Principal component 6 | 1 | 99

Thus, we decouple the information about the legitimate face area from explicit information about nonface areas, such as the forehead, hair, ear, and chin areas. These areas may provide crucial information, which is treated as additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying principal components. Figure 11 shows the recognition accuracies for different numbers of principal components determined on the FEI database when face localization is not performed; Figure 14 shows the corresponding accuracies when face localization is performed.

Figure 13 shows the ROC curves determined from extensive experiments with the proposed algorithm on the FEI face database when a face image is localized and only the face part is extracted.


Figure 11: Recognition accuracy versus principal component curve when the face localization task is not performed.


Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

In this case, a maximum EER of 6% is attained when the principal component is set to 1; in the other cases, the EERs are as low as those obtained with nonlocalized faces. Table 2 reports the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show substantial improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas corresponding to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is accurate when the principal components vary between two and six, most notably for principal component 3, for which an EER of 0% and a recognition accuracy of 100% are attained.

Based on the two major considerations, with and without face localization, we investigate the proposed algorithm by fusing face instances under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the



Figure 13: Receiver operating characteristic (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy (number of principal components) | EER (%) | Recognition accuracy (%)
Principal component 1 | 6 | 94
Principal component 2 | 0.5 | 99.5
Principal component 3 | 0 | 100
Principal component 4 | 0.5 | 99.5
Principal component 5 | 0.5 | 99.5
Principal component 6 | 1 | 99.0

"sum" fusion rule. We investigate the effect of localizing versus not localizing the face area in combination with these two fusion rules. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused at the match score level by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained with the "max" fusion rule; in this case, the "sum" fusion rule outperforms the "max" fusion rule. When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules of the fused classifier, both the "sum" and "max" fusion rules


Figure 14: Recognition accuracy versus principal component curve when the face localization task is performed.

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0.0 | 100
Max fusion rule + min-max normalization | 1.5 | 98.5
Sum fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5
Max fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5

achieve 99.5% recognition accuracy. In the general context, the proposed cohort selection method degrades recognition accuracy by 0.5% compared with the "sum" fusion rule-based classifier without cohort selection; however, the hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum, max) at 99.5% recognition accuracy. Table 3 lists the recognition accuracies, and Figure 16 shows the same for the "sum" and "max" fusion rules.

After evaluating the performance of each classifier, in which the face instance is determined by setting the principal component to one value from 1 to 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. As in the previous approach, we apply two fusion rules to integrate the matching scores, namely, the "sum" and "max" fusion rules. In addition, we exploit the proposed statistical cohort selection technique, hybrid heuristic statistics, and for cohort score normalization we apply the T-norm normalization technique.



Figure 15: Receiver operating characteristic (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between 1 and 6.


Figure 16: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with the cohort selection method. These accuracies are obtained without face localization.

This technique maps the cohort scores into a normalized score set that reflects the characteristics of each score in the cohort subset and enables the correct match to be obtained rapidly by the system. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy with both the "sum" and "max" fusion rules. The effect of face localization plays a central role in raising the

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0.0 | 100
Max fusion rule + min-max normalization | 0.0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.0 | 100
Max fusion rule + T-norm (hybrid heuristic) | 0.0 | 100


Figure 17: Receiver operating characteristic (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

recognition accuracy to 100% in all four cases. Owing to face localization, however, the numbers of feature points extracted from the face images differ from those of the nonlocalized faces obtained when the cohort selection method is applied. These accuracies are obtained after a face is localized.

6.3.2. On the BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized. The faces provided in the BioID face database are captured in a variety



Figure 18: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with the cohort selection method. These accuracies are obtained after a face is localized.


Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

of environments and illumination conditions. In addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching method is challenging, because the positions and locations of frontal view images may be tracked across a variety of environments with changes in illumination. Thus, we need a robust technique capable of capturing and processing all types of distinct features and yielding encouraging results under these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of


Figure 20: Receiver operating characteristic (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy (number of principal components) | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.53 | 91.47
Principal component 2 | 2.78 | 97.22
Principal component 3 | 8.5 | 91.5
Principal component 4 | 13.89 | 86.11
Principal component 5 | 13.89 | 86.11
Principal component 6 | 11.12 | 88.88

one to six. For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, the highest accuracy achieved when the Type II constraint is validated without localizing the face image; the corresponding EER of 2.78% is the lowest among all six EERs. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11%. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 shows the same phenomenon as a curve whose points denote the recognition accuracies corresponding to principal components between one and six. The ROC curves determined on the BioID database for nonlocalized faces are shown in Figure 20.



Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.


Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

After considering the Type I constraint, we consider the Type II constraint, in which the proposed algorithm is applied to the localized face. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results than with the nonlocalized faces.

As shown in Figures 22 and 23, while the principal component varies between one and six, the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100%. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components at different false acceptance rates (FARs); principal components 2, 3, and 6, for which the recognition accuracy outperforms the other components, yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components; the points on the curve that are


Figure 23: Receiver operating characteristic (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy (number of principal components) | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.5 | 91.5
Principal component 2 | 0 | 100
Principal component 3 | 0 | 100
Principal component 4 | 5.59 | 94.41
Principal component 5 | 5.98 | 94.02
Principal component 6 | 0 | 100

marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when the localized face is used.

To validate the Type II constraint for the fusions and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database, along with the hybrid heuristics-based cohort selection. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces. As shown in Table 7 and Figure 25, for the first two matching strategies, in which the



Figure 24: Recognition accuracy versus principal component when the face localization task is performed.

Table 7: EER and recognition accuracies of the proposed face matching strategy on the BioID face database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 5.55 | 94.45
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100

"sum" and "max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, respectively, with EERs of 5.55% and 0%. In this case, the "max" fusion rule outperforms the "sum" fusion rule, as further illustrated by the ROC curves in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, in which the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules.

In the last segment of the experiment, we measure performance in terms of recognition accuracy and EER when faces are localized; the matching paradigms listed in Table 7 are verified against the Type II constraint as well. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies. The minimal drop in accuracy of 0.65% occurs only for the combination of the "sum" fusion rule and the hybrid heuristics; the remaining combination, the "max" fusion rule with the hybrid heuristics-based cohort selection method, achieves an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the


Figure 25: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with the cohort selection method. These accuracies are obtained when a face is not localized.


Figure 26: Receiver operating characteristic (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

"sum" rule, which is attributed to a change in the produced cohort subset under both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, with the accuracy points marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces are found in unrestricted conditions. Face recognition in the wild is challenging owing to the nature of the face acquisition method in an unconstrained environment: not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth. Because our framework is based on only the


Table 8: EERs and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0 | 100
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.65 | 99.35
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100


Figure 27: Receiver operating characteristic (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

first six principal components and SIFT features, it requires the incorporation of various tools: tools to detect and crop the face region and discard background images as far as possible, with detected face regions upsampled or downsampled to form a uniform-dimensional face image vector before applying PCA, and tools to estimate pose and then apply 2D frontalization to compensate for variance, leading to fewer principal components to consider.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face


Figure 28: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with the cohort selection method. These accuracies are obtained after a face is localized.

recognition algorithms, which are limited in number, and some traditional face recognition systems that are not enabled with cloud computing infrastructures. Two different perspectives are considered for the comparison: the first applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second employs similar face databases for comparison with other methods but does not utilize cloud computing. For the comparison, we consider two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database containing approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy, along with the number of training samples employed during face matching by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method | Face database | Number of face images | Number of distinct subjects | Recognition accuracy (%) | Error (%)
Cloud vision [36] | ORL | 400 | 40 | 97.08 | 2.92
Mobile cloud [37] | Local DB | 50 | 5 | 85 | 15
Facial features [38] | BioID | 1521 | 23 | 92.35 | 7.65
2D-PCA + PSO-SVM [39] | BioID | 1521 | 23 | 95.65 | 4.35
Proposed Type I | BioID | 1521 | 23 | 100 | 0
Proposed Type I | FEI | 2800 | 20 | 100 | 0
Proposed Type II | BioID | 1521 | 23 | 97.22 | 2.78
Proposed Type II | FEI | 2800 | 20 | 99.5 | 0.5
Proposed Type I + fusion | BioID | 1521 | 23 | 100 | 0
Proposed Type I + fusion | FEI | 2800 | 20 | 100 | 0
Proposed Type II + fusion | BioID | 1521 | 23 | 100 | 0
Proposed Type II + fusion | FEI | 2800 | 20 | 100 | 0

7. Time Complexity of the Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules, expressed as a function of the length of the input. The time complexity is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each module is computed first, and then the overall complexity of the ensemble network is computed by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let $d$ be the number of pixels (height × width) in each grayscale face image, and let the number of face images be $N$. PCA computation has the following steps:

(i) Finding the mean of $N$ samples is $O(Nd)$ ($N$ additions of $d$-dimensional vectors, with the summation then divided by $N$).

(ii) Because the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the $d(d + 1)/2$ elements, $N$ multiplications and additions are required, leading to $O(Nd^2)$ time complexity. Let $M$ be the $d \times N$ mean-centred data matrix from which the covariance matrix is formed.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of $MM^T$ ($d \times d$), we compute $M^T M$ ($N \times N$), which requires $d$ multiplications and $d$ additions for each of the $N^2$ elements, hence $O(N^2 d)$ time complexity (generally $N \ll d$).

(iv) Eigen decomposition of the $M^T M$ matrix by the SVD method requires $O(N^3)$.

(v) Sorting the eigenvectors (each $d \times 1$ dimensional) in descending order of the eigenvalues requires $O(N^2)$; taking only the first 6 principal-component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two $d$-dimensional vectors, yielding a scalar value; hence, $d$ multiplications and $d$ additions per projection lead to $O(d)$, and six such projections require $O(6d)$. A sketch of the KL trick and this projection step follows.
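The KL trick of steps (ii)-(iv) and the projection step can be sketched as follows (a schematic under the assumption $N \ll d$, not the authors' code); `X` holds one mean-centred face per column.

```python
import numpy as np

def eigenfaces_kl(X):
    """Eigenfaces via the Karhunen-Loeve trick: decompose the
    N x N matrix X^T X instead of the d x d covariance X X^T,
    then map the eigenvectors back to d dimensions."""
    gram = X.T @ X                        # N x N, O(N^2 d)
    vals, vecs = np.linalg.eigh(gram)     # O(N^3), ascending eigenvalues
    order = np.argsort(vals)[::-1]        # descending, as in step (v)
    vals, vecs = vals[order], vecs[:, order]
    eig = X @ vecs                        # back to d-dimensional eigenfaces
    eig /= np.linalg.norm(eig, axis=0, keepdims=True) + 1e-12
    return vals, eig                      # at most N - 1 meaningful pairs

def project(face_vec, mean, eigenfaces, k=6):
    """O(d) dot product per eigenface; k projections give the
    k-dimensional representation used for the face instances."""
    return eigenfaces[:, :k].T @ (face_vec - mean)
```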

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be $M \times N$ pixels, with the face image represented as a column vector of dimension $d$ ($M \times N$). A Gaussian kernel of dimension $w \times w$ is used; each octave has $s + 3$ scales, and a total of $L$ octaves are used. The important phases of SIFT are as follows.

(i) Extrema detection:

(a) Computing the scale space:

(1) In each scale, $w^2$ multiplications and $w^2 - 1$ additions are performed per pixel by the convolution operation, so $O(dw^2)$.

(2) There are s + 3 scales in each octave, so O(dw²(s + 3)).

(3) L is the number of octaves, so O(Ldw²(s + 3)).
(4) Overall, treating L and the constant 3 as fixed, O(dw²s) is found.

(b) Computing the s + 2 Difference of Gaussians (DoG) images for each octave takes one subtraction per pixel per DoG, that is, O((s + 2)d) per octave, with scale factor k = 2^(1/s); so for L octaves, O(sd).

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).


(iv) Keypoint descriptor computation: if a p × p neighborhood of each keypoint is considered, then O(p²d) is found.

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end; so the per-pair complexity is linear, O(n) with n = 128. Let the i-th eigenface of a probe face image and of a reference face image have k1 and k2 surviving keypoints, respectively. Each keypoint from k1 is compared with each of the k2 keypoints by Euclidean distance, so a single eigenface pair costs O(k1·k2), which is O(6n²) for the six pairs of eigenfaces (n = number of keypoints). If there are M reference faces in the gallery, then the total complexity is O(6Mn²).
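A minimal sketch of this brute-force descriptor matching, assuming NumPy arrays of 128-dimensional descriptors (the ratio-test threshold is an illustrative choice, not a value from the paper):

    import numpy as np

    def match_keypoints(desc_probe, desc_ref, ratio=0.8):
        # desc_probe: (k1, 128), desc_ref: (k2, 128). Every probe descriptor is
        # compared with every reference descriptor: O(k1 * k2) Euclidean
        # distances, each costing O(128) operations.
        matches = 0
        for d in desc_probe:
            dists = np.linalg.norm(desc_ref - d, axis=1)
            if dists.size < 2:
                continue
            nearest, second = np.partition(dists, 1)[:2]
            if nearest < ratio * second:      # Lowe-style ratio test (assumed)
                matches += 1
        return matches                        # used as the raw matching score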

(d) Time Complexity of Fusion. As the domains of the six individual matchers are different, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtractions and one division, that is, constant time O(1). The n pairs of probe and gallery images in the i-th principal component require O(n), so the six principal components require O(6n). Finally, the sum fusion requires five additions for each pair of probe and reference faces, which is constant time O(1); for n pairs of probe and reference face images it requires O(n).

(e) Time Complexity of Cohort Selection. Cohort selection performs four operations: computation of the correlation scores in the search space, insertion of correlation values into the OPEN list, insertion of correlation values at their proper positions into the CLOSED list according to insertion sort, and finally division of the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time, the third operation takes O(n²) as it follows the convention of insertion sort, and the last operation takes linear time. The overall time required by cohort selection is therefore O(n²).

Now the overall time complexity T of the ensemble network is calculated as the dominant term among the modules:

    T = max(T1, T2, T3, T4, T5)
      = max(O(N³), O(dw²s), O(6Mn²), O(n), O(n²)) ≈ O(N³),    (8)

where

T1 is the time complexity of PCA computation,
T2 is the time complexity of SIFT keypoint extraction,
T3 is the time complexity of matching,
T4 is the time complexity of fusion,
T5 is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(N³), while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which a cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, which range from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and to generate the matching points as matching scores. We investigate and analyze various effects on the face recognition system which directly or indirectly attempt to improve the total performance of the recognition system in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition can be achieved from a remotely placed computer terminal, a mobile phone, or a tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented; moreover, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.
[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.
[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629–I-632, Honolulu, Hawaii, USA, April 2007.
[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.
[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.
[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.
[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.
[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.
[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.
[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.
[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.
[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.
[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.
[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.
[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.
[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.
[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.
[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.
[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506–II-513, July 2004.
[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.
[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.
[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.
[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.
[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.
[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.
[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.
[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.
[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.
[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.
[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.
[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.
[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.
[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.
[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.
[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.
[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.
[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.


between nose and cheek, and so forth, which exhibit intensity changes in two directions. SIFT detects these keypoints by approximating the Laplacian of Gaussian (LoG) in terms of the Difference of Gaussian (DoG). As the Gaussian pyramid builds the image at various scales, the SIFT-extracted keypoints are scale invariant, and the computed descriptors remain discriminating from coarse to fine matching. The keypoints could instead have been detected by the Harris corner detection method (not scale invariant) or the Hessian corner detection method, but many of the points so detected might not be repeatable under large scale changes. Further, the 128-dimensional feature point descriptor obtained by the SIFT feature extraction method is orientation normalized and hence rotation invariant. Additionally, the SIFT feature descriptor is normalized to unit length to reduce the effect of contrast; the maximum value of each dimension of the vector is thresholded to 0.2, and the vector is normalized once again to make it robust to a certain range of irregular illumination.
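The descriptor normalization just described can be sketched as follows (a minimal NumPy version; the 1e-12 guard against division by zero is our own addition):

    import numpy as np

    def normalize_descriptor(vec, clip=0.2):
        # Scale the 128-dimensional descriptor to unit length, threshold each
        # dimension at 0.2 to damp large gradient magnitudes, then renormalize.
        v = vec / (np.linalg.norm(vec) + 1e-12)
        v = np.minimum(v, clip)
        return v / (np.linalg.norm(v) + 1e-12)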

The proposed face matching method has been developed with the concept of a varying number of principal components (npc = 1, 2, 3, 4, 5, and 6). These variations yield the following improvements while the system computes face instances for matching and recognition:

(a) The projection of a face image onto different face instances facilitates the construction of independent face matchers, whose performance can vary; SIFT descriptors are applied to extract invariant interest points, and each instance-based matcher is verified for producing matching proximities.

(b) An individual matcher exhibits its own strength in recognizing faces, and the number of SIFT interest points extracted from each face instance changes substantially from one projected face to another as an effect of varying the number of principal components.

(c) The robustness of the overall system improves when the individual performances are consolidated into a single matcher by fusion of matching scores.

(d) Let ε_i, i = 1, 2, ..., m, be the m eigenvalues arranged in descending order, and let ε_i be associated with eigenvector e_i (the i-th principal eigenface in the face space). Then the percentage of variance accounted for by the i-th principal component is (ε_i / Σ_{i=1}^m ε_i) × 100 (a minimal computation of this percentage is sketched just after this list). Generally, the first few principal components are sufficient to capture more than 95% of the variance, but that number of components depends on the training set of the image space and varies with the face dataset used. In our experiments, we observed that taking as few as only 6 principal components gives a good result and captures variability that is very close to the total variability produced during the generation of multiple face instances.
Let the training face dataset contain n instances, each of uniform size p (h × w pixels); then the face space contains n sample points in p dimensions, and we can derive at most n − 1 eigenvectors, but each eigenvector is still p-dimensional (n ≪ p). Now, to compare two face images, each containing p pixels (i.e., a p-dimensional vector), each face image must be projected onto each of the n − 1 eigenvectors (each eigenvector represents one axis of the new (n − 1)-dimensional coordinate system). So from each p-dimensional face we derive n − 1 scalar values by the dot product of the mean-centred image space face with each of the n − 1 face space eigenvectors. In the backward direction, given the n − 1 scalar values, we can reconstruct the original face image by a weighted combination of these eigenfaces plus the mean. In this reconstruction process, the i-th eigenface contributes more than the (i + 1)-th eigenface when they are ordered by decreasing eigenvalues. How accurate the reconstruction is depends on how many principal components (say k, k = 1, 2, ..., n − 1) we take into consideration. Practically, k need not be equal to n − 1 to reconstruct the face satisfactorily: beyond a specific value of k (say t), the contribution from the (t + 1)-th up to the (n − 1)-th eigenvector is so negligible that it may be discarded without losing significant information; indeed, there are methods such as the Kaiser criterion (discard eigenvectors corresponding to eigenvalues less than 1) [11, 12] and the Scree test. Sometimes the Kaiser method retains too many eigenvectors, whereas the Scree test retains too few; in essence, the exact value of t is dataset dependent. Figures 4 and 6 clearly show that, as one more principal component is added, the captured variability increases rapidly within the first 6 components; from the 6th principal component onward, the variance capture is almost flat but does not reach the total variability (the 100% marked line) until the last principal component. So, despite the small contribution of principal components 7 onwards, they are not redundant.

(e) The distinctive and classified characteristics detected in the reduced low-dimensional face instance support the integration of local texture information with local shape distortion and illumination changes of the neighboring pixels around each keypoint, which together comprise a vector of 128 elements of invariant nature.
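The variance percentage of item (d) can be computed as follows (a minimal sketch assuming the eigenvalues are already sorted in descending order):

    import numpy as np

    def variance_explained(eigenvalues):
        # Percentage of variance captured by each principal component,
        # (eps_i / sum_i eps_i) * 100, per item (d) above.
        ev = np.asarray(eigenvalues, dtype=float)
        return 100.0 * ev / ev.sum()

    # Cumulative variance captured by the first six components:
    # np.cumsum(variance_explained(evals))[:6]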

The proposed methodology is evaluated from two different perspectives: the first system is implemented without face detection and localization, and the second system implements a face matcher on a localized and detected face image.

4.1. Face Matching: Type I. During the initial stage of face recognition, we enhance the contrast of a face image by applying a histogram equalization technique. We apply this contrast enhancement to increase the count of SIFT points detected in the local scale-space of the face image. The face area is localized and aligned using the algorithm given in [24]. In the subsequent steps, the face area is projected onto the low-dimensional feature space, and an approximated face instance is formed. SIFT keypoints are


Figure 2: Six different face instances of a single cropped face image with variations of principal components.

detected and extracted from this approximated face instance, and a feature vector consisting of interest points is created. In this experiment, six different face instances are generated by varying the number of principal components from one to six, and they are derived from a single face image.

The PCA-characterized face instances are shown in Figure 2; they are arranged according to the order of the considered principal components. The same set of PCA-characterized face instances is extracted from a probe face, and feature vectors consisting of SIFT interest points are formed. Matching is performed with the corresponding face instances in terms of the SIFT points obtained from the reference face instances. We apply a k-nearest neighbor (k-NN) approach [26] to establish correspondence and obtain the number of pairs of matching keypoints. Figure 3 depicts matching pairs of SIFT keypoints on two sets of face instances corresponding to the reference and probe faces. Figure 4 shows the amount of variance captured by all principal components; because the first principal component explains approximately 70% of the variance, we expect that additional components are probably needed. The first four principal components explain the total variability in the face image depicted in Figure 2.

4.2. Face Matching: Type II. The second type of face matching strategy utilizes the outliers available around the meaningful facial area to be recognized. This type of face matching examines the effect of outliers and legitimate features employed together for face recognition. Outliers may be located on the forehead, above the legitimate and localized face area, around the face area to be considered, outside the meaningful area, on both ears, or on the head. However, the effect of outliers is limited, because the legitimate interest points are primarily detected in major salient areas. This analysis may be useful when face area localization cannot be performed, and sometimes the outliers are an effective addition to the face matching process.

As in the Type I face matching strategy, we employ dimensionality reduction, project the entire face onto a low-dimensional feature space using PCA, and construct six different face instances with principal components varying between one and six. We extract SIFT keypoints from the six multiscale face instances and create a set of feature vectors. The face matching task is performed using the k-NN approach, and matching scores are generated as matching proximities from a pair of reference and probe faces. The matching scores are passed through the fusion module and consolidated to form an integrated vector of matching proximities. Figure 5 demonstrates matching between pairs of face instances of an entire face, corresponding to the reference face and probe

face, for a certain number of principal components. Figure 6 shows the amount of variance captured by all principal components; because the first principal component explains less than 50% of the variance, we expect that additional components are needed. The first two principal components explain approximately two-thirds of the total variability in the face image depicted in Figure 5.

5. Fusion of Matching Proximities

5.1. Baseline Approach to Fusion. To fuse [26–28] the matching proximities computed from all matchers (based on principal components) and form a new vector, we apply two popular fusion rules, namely, "sum" and "max" [26]. Let ms = [ms_ij] be the match scores generated by multiple matchers (j), where i = 1, 2, ..., n and j = 1, 2, ..., m. Here, n denotes the number of match scores generated by each matcher, and m represents the number of matchers presented to the face matching process. Consider that the labels ω_0 and ω_1 denote two different classes, referred to as the genuine class and the imposter class, respectively. We can assign ms to either the label ω_0 or the label ω_1 based on the class-conditional probability. The probability of error can be minimized by applying Bayesian decision theory [29] as follows:

    Assign ms → ω_i if
        P(ω_i | ms) > P(ω_j | ms), for i ≠ j; i, j = 0, 1.    (1)

The posterior probability P(ω_i | ms) can be derived from the class-conditional density function p(ms | ω_i) using the Bayes formula as follows:

    P(ω_i | ms) = p(ms | ω_i) P(ω_i) / p(ms).    (2)

Here, P(ω_i) is the prior probability of the class label ω_i, and p(ms) denotes the probability of encountering ms. Thus, (1) can be rewritten as follows:

    Assign ms → ω_i if
        LR > τ, where LR = p(ms | ω_i) / p(ms | ω_j), i ≠ j; i, j = 0, 1.    (3)

The ratio LR is known as the likelihood ratio, and τ is the predefined threshold. The class-conditional density p(ms | ω_i) can be determined from the training match score vectors using either parametric or nonparametric techniques. The class-conditional probability density function can be extended to the "sum" and "max" fusion rules; the max fusion rule can be written as follows:

    p(ms | ω_i) = max_{j=1,2,...,m} p(ms_j | ω_i), i = 0, 1.    (4)


Figure 3: Matching pairs of face instances with variations of principal components for a pair of corresponding reference and probe face images.

Figure 4: Amount of variance accounted for by each principal component of the face image depicted in Figure 2.

Here, we replace the joint density function by maximizing the marginal density. The marginal density p(ms_j | ω_i), for j = 1, 2, ..., m and i = 0, 1 (i refers to either a genuine sample or an imposter sample), can be estimated from the training vectors of the genuine and imposter scores corresponding to each of the m matchers. Therefore, we can rewrite (4) as follows:

    FS_max^(ω_0) = max_{j=1,2,...,m} p(ms_j | ω_0),
    FS_max^(ω_1) = max_{j=1,2,...,m} p(ms_j | ω_1).    (5)

FS_max denotes the fused match scores obtained by fusing the m matchers in terms of the maximum scores.

We can easily extend the "max" fusion rule to the "sum" fusion rule by assuming that the posterior probability does not deviate significantly from the prior probability.

Therefore, we can write the equation for fusing the marginal densities, known as the "sum" rule, as follows:

    FS_sum^(ω_0) = Σ_{j=1}^{m} p(ms_j | ω_0),
    FS_sum^(ω_1) = Σ_{j=1}^{m} p(ms_j | ω_1).    (6)

We apply the "max" and "sum" fusion rules independently to the genuine and imposter scores corresponding to each of the six matchers, which are determined from the six different face instances whose principal components vary from one to six. Prior to the fusion of the matching proximities produced by the multiple matchers, the proximities need to be normalized, mapping the data to the range [0, 1]. In this case, we use the min-max normalization technique [26] to map the proximities to the specified range, and the T-norm cohort selection technique [30, 31] is applied to improve the performance. To generate matching scores, we apply the k-NN approach [32].
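A compact sketch of this baseline score-level fusion, assuming the six matchers' scores for the same n comparisons are held in NumPy arrays (names are illustrative; the sketch fuses normalized matching scores directly, which is the common score-level simplification of the density-based rules (5) and (6)):

    import numpy as np

    def min_max(scores):
        # Map one matcher's proximities to [0, 1] (min-max normalization).
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    def fuse(score_lists, rule="sum"):
        # score_lists: six arrays, one per principal-component matcher,
        # each holding the scores of the same n comparisons.
        norm = np.vstack([min_max(s) for s in score_lists])   # 6 x n
        return norm.sum(axis=0) if rule == "sum" else norm.max(axis=0)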

5.2. Cohort Selection Approach to Fusion. Recent studies suggest that cohort selection [30, 31] and cohort-based score normalization [30] can exhibit robust performance and increase the robustness of biometric systems. To understand the usability of cohort selection, consider a cohort pool: a set of matching scores obtained from the nonmatch templates in the database when a probe sample is matched with the reference samples. The matching process generates matching scores, and among this set of scores of the corresponding reference samples, one template is identified as the claimed identity. This claimed identity is the true match to the probe sample, and its matching proximity is significant. In addition to the claimed identity, the remaining matched scores are known as cohort scores; we refer to the match score determined from the claimed identity as the true matching proximity. The cohort scores and the score determined from the claimed identity exhibit similar degradation. To improve the performance of the proposed system, we need to normalize the true matching proximity using the cohort


Figure 5: Face matching strategy showing matching between pairs of face instances with outliers.

Figure 6: Amount of variance accounted for by each principal component of the face image depicted in Figure 5.

scores. We can apply simple statistics, such as the mean, standard deviation, and variance, to compute the normalized score of the true reference template using the T-norm cohort normalization technique. We assume that the "most similar cohort scores" and the "most dissimilar cohort scores" contribute to the computation of normalized scores that carry more discriminatory information than the raw matching score. As a result, the false rejection rate may decrease, and the system can successfully identify a subject from a pool of reference templates.

Two types of probe samples exist: genuine probe samples and imposter probe samples. When a genuine probe face is compared with the cohort models, the best-matched cohort model and a few models among the remaining cohort models are expected to be very similar, owing to the similarity among the corresponding faces. Matching a genuine probe face with the true cohort model and with the remaining cohort models produces matching scores of the lowest similarity when the true matched template and the remaining templates in the database are dissimilar. The comparison of an imposter face with the reference templates in the database generates matching scores that are independent of the set of cohort models.

Although cohort-based score normalization is considered extra overhead for the proposed system, it can improve performance. The computational complexity increases with the number of comparisons against the cohort models. To reduce the overhead of integrating the cohort model, we select a subset of cohort models that contains the majority of the discriminating information, and we combine this cohort subset with the true match score to obtain a normalized score. This cohort subset is known as an "ordered cohort subset," containing the majority of the discriminatory information. We can select a cohort subset for each true match template in the database to normalize each true match score when a number of probe faces are to be compared. In this context, we propose a novel cohort subset selection method that utilizes heuristic cohort selection statistics. Because the cohort selection strategy is substantially inspired by heuristic-based T-statistics and a baseline heuristic approach, we refer to this method as hybrid heuristics statistics, in which two-stage filtering is performed to generate the most discriminating cohort scores.

5.2.1. Methodology: Hybrid Heuristics Cohort. The proposed statistics begin with a cohort score set χ = {x_1, x_2, ..., x_n}, where n is the number of cohort scores, consisting of the genuine and imposter scores present in the set χ; each score x_j carries a label in {genuine scores, imposter scores}, j ∈ {1, 2, ..., n}. From the cohort score set, we can calculate the mean and standard deviation for the class labels genuine and imposter. Let μ_genuine and μ_imposter be the mean values, and let δ_genuine and δ_imposter be the standard deviations for the two class labels. Using T-statistics [33], we can determine a set of correlation scores corresponding to the cohort scores:

    T(x_j) = |μ_genuine − μ_imposter| / √((δ_j^genuine)² / n_genuine + (δ_j^imposter)² / n_imposter).    (7)

In (7), n_genuine and n_imposter are the numbers of cohort scores labeled genuine and imposter, respectively. We calculate all correlation scores and make a list of all T(x_j)


scores. Then we construct a search space that includes these correlation scores.

Because (7) exhibits a correlation between cohort scores, it can be extended to the baseline heuristics approach in the second stage of the hybrid heuristic-based cohort selection method. The objective of the proposed cohort selection method is to select the cohort scores corresponding to the two subsets of highest and lowest correlation scores obtained by (7); these two subsets of cohort scores constitute the cohort subset. We first collect the correlation scores in a FRINGE data structure, or OPEN list; starting with this initial score, we expand the fringe by adding more correlation scores to it. We also maintain another list, which we refer to as the CLOSED list. After calculating the first T(x_j) score in the fringe, we remove this score from the fringe and expand it: the next two correlation scores are removed from the search space but retained in the fringe, while the correlation score removed from the fringe is added to the CLOSED list. Because the fringe now contains two scores, we arrange them in decreasing order by sorting and remove the maximum score from the fringe; this maximum score is added to the CLOSED list, which is maintained in nonincreasing order. We repeat this recursive process in each iteration until the search space is empty. After all correlation scores have moved from the fringe to the CLOSED list, we obtain a sorted list. The sorted scores in the CLOSED list are divided into three parts, and the first part and the last part are merged to create a single list of correlation scores exhibiting the most discriminating features. We establish the cohort subset by selecting the most promising cohort scores, namely, those corresponding to the retained correlation scores of the CLOSED list.
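The two-stage selection can be sketched as follows; for brevity, the sketch uses Python's built-in sort in place of the explicit OPEN/CLOSED insertion-sort bookkeeping, which changes the constant factors but not the selected subset (function names and the equal-thirds split size are our reading of the text):

    import numpy as np

    def t_statistic(gen, imp):
        # Correlation score per (7) for one cohort model, from its genuine
        # and imposter cohort score samples.
        gen, imp = np.asarray(gen, float), np.asarray(imp, float)
        num = abs(gen.mean() - imp.mean())
        den = np.sqrt(gen.var(ddof=1) / gen.size + imp.var(ddof=1) / imp.size)
        return num / (den + 1e-12)

    def select_cohort_subset(cohorts):
        # cohorts: list of (genuine_scores, imposter_scores), one per model.
        closed = sorted(((t_statistic(g, i), idx)
                         for idx, (g, i) in enumerate(cohorts)), reverse=True)
        third = max(1, len(closed) // 3)
        # Merge the first and last parts of the sorted CLOSED list: the most
        # similar and most dissimilar cohort models form the cohort subset.
        keep = closed[:third] + closed[-third:]
        return [idx for _, idx in keep]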

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique. T-norm relies on the property that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decision, we consolidate the normalized scores for the six different face models, depending on the principal components considered, which range between one and six.
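A minimal sketch of the T-norm step, assuming the selected cohort subset scores are available as an array:

    import numpy as np

    def t_norm(raw_score, cohort_subset_scores):
        # Normalize the true match score by the mean and standard deviation
        # of the selected cohort scores (T-norm).
        c = np.asarray(cohort_subset_scores, dtype=float)
        return (raw_score - c.mean()) / (c.std(ddof=1) + 1e-12)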

6. Experimental Evaluation

A rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max fusion rule and the sum fusion rule. We implement the proposed method from two perspectives: the Type II perspective indicates that face recognition employs the face images provided in the databases without cropping, and the Type I perspective

Figure 7: Face images from the BioID database.

indicates that the face recognition technique uses a manually localized face area, obtained by cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces of these two databases appear against a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

6.1. Databases

6.1.1. BioID Face Database. The face images in the BioID [35] database were recorded in a degraded environment and are primarily employed for face detection; however, we can also utilize this database for face recognition. Because the faces are captured with a variety of background information and illumination, the evaluation of this database is challenging. Here, we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images were acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces were captured against a white homogeneous background in an upright frontal position, and all images have scale changes of approximately 10%. Of the 2800 face images, the number of male contributors is equal to the number of female contributors; that is, 100 male and 100 female participants each contributed an equal number of face images, totaling 1400 face images each. All images are in color, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. The faces were acquired under uniform illumination against homogeneous backgrounds, with neutral and smiling facial expressions, and the database contains faces of ages varying from 18 to 60 years.


Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we develop a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually random processes; therefore, to address the biases of each random process, we perform the evaluation with a random distribution of the training and probe samples. The distribution, however, depends entirely on the database employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we distribute the face images equally between the training and test sets: the faces contributed by each person are divided into a training set and a test/probe set. The FEI database contains 2800 face images of 200 people. We devise a common protocol for the databases as follows.

Consider that each person contributes m faces and that the database size is p (p denotes the total number of face images), and let q denote the total number of subjects/individuals who each contributed m face images. Under this protocol, we divide the m faces into two equal groups of m/2 images and retain one group as the training/reference set and the other as the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the m/2 probe faces corresponding to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension q × (m/2) and imposter scores of dimension q × (q − 1) × m. The k-NN (k-nearest neighbor) metric is employed to generate common matching points between a pair of face images, and we employ the min-max normalization technique to normalize the match scores and map them to the range [0, 1]. In this manner, two sets of match scores of unequal dimensions, corresponding to a matcher, are obtained as the face images are compared within intraclass (ω_i) and interclass (ω_j) sets, which we refer to as the genuine and imposter score sets.
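The score-set dimensions stated above can be checked with a few lines of arithmetic; for the FEI database (q = 200, m = 14), this gives 1400 genuine and 557,200 imposter scores:

    def protocol_counts(q, m):
        # q subjects, m faces each, split evenly into reference and probe sets.
        genuine = q * (m // 2)        # q x (m/2) genuine scores
        imposter = q * (q - 1) * m    # q x (q - 1) x m imposter scores
        return genuine, imposter

    # FEI: q = 200 subjects, m = 14 faces each
    print(protocol_counts(200, 14))   # -> (1400, 557200)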

6.3. Experimental Results and Analysis

6.3.1. On the FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face

Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

database, which contains faces with neutral and smiling expressions. The neutral faces are used as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effects of (a) a cloud-enabled environment; (b) face matching without extracting the face area; (c) face matching with extraction of the face area; (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying principal components (one to six) under the conditions mentioned in (a), (b), and (c); and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots: the ROC curves reveal the performance of the proposed system as GAR versus FAR curves for varying principal components, and the boxplots show how the EER varies for different numbers of principal components.

Figure 9 shows a boxplot obtained when the face area has not been extracted, and Figure 12 shows another boxplot obtained when the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the principal component is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, the maximum EER is attained when the principal component is 1, and the EER varies between 0% and 2.5%; an EER of 0.5% is determined for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. EERs of approximately 1% are attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the principal component is set to one; in the remaining cases, when the principal component varies between two and six, low EERs are obtained, with the maximum among them occurring when the principal component is set to four. The ROC curves in Figure 10, shown for non-localized face images, exhibit high recognition accuracies for all cases except when the principal component is set to 1. Thus, we decouple the information about the


Figure 10: Receiver operating characteristic (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy | EER (%) | Recognition accuracy (%)
Principal component 1 | 7.07 | 92.93
Principal component 2 | 0.5 | 99.5
Principal component 3 | 0.97 | 99.03
Principal component 4 | 1.01 | 98.99
Principal component 5 | 1 | 99
Principal component 6 | 1 | 99

legitimate face area from the explicit information about nonface areas such as the forehead, hair, ear, and chin. These areas may provide crucial information, which is treated as additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying principal components. Figure 11 shows the recognition accuracies for different values of the principal component determined on the FEI database when face localization is not performed; Figure 14 shows the corresponding accuracies when face localization is performed.

Figure 13 shows the ROC curves determined from extensive experiments with the proposed algorithm on the FEI face database when a face image is localized and only the face part is extracted. In this case, a maximum EER

Figure 11: Recognition accuracy versus principal component when the face localization task is not performed.

Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

of 6% is attained when the principal component is set to 1; in the other cases, the EERs are as low as those obtained with nonlocalized faces. Table 2 depicts the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show substantial improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas corresponding to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is accurate when the principal components vary between two and six; in the case of principal component 3, an EER of 0% and a recognition accuracy of 100% are attained.

Based on the two major considerations, namely, with and without face localization, we investigate the proposed algorithm by fusing the face instances obtained under varying numbers of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the


Figure 13: Receiver operating characteristic (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy | EER (%) | Recognition accuracy (%)
Principal component 1 | 6 | 94
Principal component 2 | 0.5 | 99.5
Principal component 3 | 0 | 100
Principal component 4 | 0.5 | 99.5
Principal component 5 | 0.5 | 99.5
Principal component 6 | 1 | 99.0

"sum" fusion rule. The effects of localizing and not localizing the face area are investigated in combination with these two fusion rules. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused at the matching score level by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained with the "max" fusion rule; in this case, the "sum" fusion rule outperforms the "max" fusion rule. When the hybrid heuristic statistics-based cohort selection method is applied to the fused classifier, both the "sum" and "max" fusion rules

Figure 14: Recognition accuracy versus principal component when the face localization task is performed.

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0.0 | 100
Max fusion rule + min-max normalization | 1.5 | 98.5
Sum fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5
Max fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5

achieve 99.5% recognition accuracy. In the general context, the proposed cohort selection method degrades recognition accuracy by 0.5% when compared with the "sum" fusion rule-based classifier without cohort selection. However, the hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum, max), with 99.5% recognition accuracy. Table 3 lists these recognition accuracies, and Figure 16 shows the same for the "sum" and "max" fusion rules.

After evaluating the performance of each classifier, in which a face instance is determined by setting the principal component to one value between 1 and 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier when the face image is manually localized and the face area is extracted for the recognition task. As in the previous approach, we apply the two fusion rules, "sum" and "max," to integrate the matching scores. In addition, we exploit the proposed statistical cohort selection technique, known as hybrid heuristic statistics, and for cohort score normalization we apply the T-norm normalization technique. This technique


Figure 15: Receiver operating characteristic (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between 1 and 6.

Figure 16: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the accuracies when the cohort selection method is applied. These accuracies are obtained without face localization.

maps the cohort scores into a normalized score set that exhibits the characteristics of each score in the cohort subset and enables the correct match to be obtained rapidly by the system. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy with both the "sum" and "max" fusion rules. The effect of face localization plays a central role in raising the

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0.0 | 100
Max fusion rule + min-max normalization | 0.0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.0 | 100
Max fusion rule + T-norm (hybrid heuristic) | 0.0 | 100

Figure 17: Receiver operating characteristic (ROC) curves of GAR versus FAR for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

recognition accuracy to 100% in all four cases. Due to face localization, however, the numbers of feature points extracted from the face images differ from those of the nonlocalized faces when the cohort selection method is applied. These accuracies are obtained after the faces are localized.

6.3.2. On the BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering the two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces successfully recognized. The faces provided in the BioID face database were captured in a variety


Figure 18: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the accuracies obtained with the cohort selection method. These accuracies are obtained after a face is localized.


Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

of environments and illumination conditions. In addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching method is challenging, because the positions and locations of frontal view images may be tracked in a variety of environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and that yields encouraging results under these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of


Figure 20: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategies with varying number of principal components      EER (%)   Recognition accuracy (%)
Principal component 1      8.53     91.47
Principal component 2      2.78     97.22
Principal component 3      8.5      91.5
Principal component 4      13.89    86.11
Principal component 5      13.89    86.11
Principal component 6      11.12    88.88

one to six. For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint (no localization of the face image) is validated; the corresponding EER of 2.78% is the lowest among all six EERs. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11%. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 depicts the same results as a curve whose points denote the recognition accuracies corresponding to principal components one through six. ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.
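For reproducibility, the EER and accuracy figures reported here can be estimated directly from the genuine and imposter score sets. The following is a minimal NumPy sketch (not the authors' code); the beta-distributed score arrays are hypothetical placeholders standing in for one matcher's normalized outputs.

import numpy as np

def eer_and_accuracy(genuine, imposter):
    # Sweep thresholds over [0, 1]; scores are assumed min-max normalized.
    thresholds = np.linspace(0.0, 1.0, 1001)
    far = np.array([(imposter >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))          # operating point where FAR ~= FRR
    eer = (far[i] + frr[i]) / 2.0
    return 100.0 * eer, 100.0 * (1.0 - eer)   # EER (%), recognition accuracy (%)

# Hypothetical score sets:
genuine = np.random.beta(8, 2, size=500)
imposter = np.random.beta(2, 8, size=5000)
print(eer_and_accuracy(genuine, imposter))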



Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.


Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

After considering the Type II constraint, we consider the Type I constraint, in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under the Type I constraint.

As shown in Figures 22 and 23, the principal component varies between one and six, whereas the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100% in recognizing faces. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components at different false acceptance rates (FARs). The principal components (2, 3, and 6) for which the recognition accuracy outperforms the other components yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components; the points on the curve that are


Figure 23: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategies with varying number of principal components      EER (%)   Recognition accuracy (%)
Principal component 1      8.5      91.5
Principal component 2      0        100
Principal component 3      0        100
Principal component 4      5.59     94.41
Principal component 5      5.98     94.02
Principal component 6      0        100

marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when a localized face is used.

To validate the Type II constraint for fusions and hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database. We also exploit the hybrid heuristics-based cohort selection together with the fusion rules. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and cohort selection technique are applied to nonlocalized faces. As shown in Table 7 and Figure 25, for the first two types of matching strategies, in which the "sum" and



Figure 24: Recognition accuracy versus principal component when the face localization task is performed.

Table 7: EER and recognition accuracies of the proposed face matching strategy for the BioID face database when face localization is not performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, a hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization techniques for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection      EER (%)   Recognition accuracy (%)
Sum fusion rule + min-max normalization                       5.55      94.45
Max fusion rule + min-max normalization                       0         100
Sum fusion rule + T-norm (hybrid heuristic)                   0.5       99.5
Max fusion rule + T-norm (hybrid heuristic)                   0         100

"max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, respectively, with corresponding EERs of 5.55% and 0%. In this case, the "max" fusion rule outperforms the "sum" fusion rule, which is further illustrated in the ROC curves shown in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, when hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules.

In the last segment of the experiment, we observe the performance, measured in terms of recognition accuracy and EER, when faces are localized. The matching paradigms listed in Table 7 have also been verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attained an accuracy of 100%, whereas the cohort-based matching techniques achieved 99.35% and 100% recognition accuracies. The minimal drop in accuracy of 0.65% for the combination of the "sum" fusion rule and hybrid heuristics is negligible, and the remaining combination of the "max" fusion rule and the hybrid heuristics-based cohort selection method achieved an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the


Figure 25: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics to the cohort subset selection. The first two cases show recognition accuracies without applying the cohort selection method, and the last two cases show recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained when a face is not localized.


Figure 26: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

"sum" rule, which is attributed to a change in the produced cohort subset for both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces are found in unrestricted conditions. Face recognition in the wild is challenging due to the nature of face acquisition in unconstrained environments. Not all face images are frontal; images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth. As our framework is based on only


Table 8: EERs and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization techniques for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection      EER (%)   Recognition accuracy (%)
Sum fusion rule + min-max normalization                       0         100
Max fusion rule + min-max normalization                       0         100
Sum fusion rule + T-norm (hybrid heuristic)                   0.65      99.35
Max fusion rule + T-norm (hybrid heuristic)                   0         100


Figure 27: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

the first six principal components and SIFT features, it would require the incorporation of various tools: tools to detect and crop the face region and discard background as far as possible, after which detected face regions would be upsampled or downsampled to form face image vectors of uniform dimension for PCA; and tools to estimate pose and then apply 2D frontalization to compensate for variance, which would allow fewer principal components to be considered.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face


Figure 28: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the maximum and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show recognition accuracies without applying the cohort selection method, and the last two cases show recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

recognition algorithms, which are limited in number, and some traditional face recognition systems, which are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases to compare with other methods; the second perspective does not utilize the concept of cloud computing. To compare the experimental results, we consider two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy. Table 9 also lists the number of training samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based systems or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                      Face database   Number of face images   Number of distinct subjects   Recognition accuracy (%)   Error (%)
Cloud vision [36]           ORL             400                     40                            97.08                      2.92
Mobile cloud [37]           Local DB        50                      5                             85                         15
Facial features [38]        BioID           1521                    23                            92.35                      7.65
2D-PCA + PSO-SVM [39]       BioID           1521                    23                            95.65                      4.35
Proposed Type I             BioID           1521                    23                            100                        0
Proposed Type I             FEI             2800                    20                            100                        0
Proposed Type II            BioID           1521                    23                            97.22                      2.78
Proposed Type II            FEI             2800                    20                            99.5                       0.5
Proposed Type I + fusion    BioID           1521                    23                            100                        0
Proposed Type I + fusion    FEI             2800                    20                            100                        0
Proposed Type II + fusion   BioID           1521                    23                            100                        0
Proposed Type II + fusion   FEI             2800                    20                            100                        0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules to run, as a set of functions of the length of the input. The time complexity is estimated by counting the number of operations performed by the different modules or cascaded algorithms. The ensemble network involves a few cascaded algorithms, which together perform face recognition in a cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each individual module is computed first, and then the overall complexity of the ensemble network is computed by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let $d$ be the number of pixels (height $\times$ width) of each grayscale face image, and let the number of face images be $N$. PCA computation has the following steps:

(i) Finding the mean of $N$ samples is $O(Nd)$ ($N$ additions of $d$-dimensional vectors, and then the summation is divided by $N$).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the $d(d+1)/2$ elements, $N$ multiplications and additions are required, leading to $O(Nd^2)$ time complexity. Let the $d \times N$ dimensional matrix of centered samples be $M$.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of $MM^T$ ($d \times d$ dimension) we compute $M^T M$ ($N \times N$), which requires $d$ multiplications and $d$ additions for each of the $N^2$ elements, hence $O(N^2 d)$ time complexity (generally $N \ll d$).

(iv) Eigendecomposition of the $M^T M$ matrix by the SVD method requires $O(n^3)$.

(v) Sorting the eigenvectors (each $d \times 1$ dimensional) in descending order of the eigenvalues requires $O(N^2)$; taking only the first six principal component eigenvectors then requires constant time.

Projecting the probe image vector on each of the eigenfaces requires a dot product between two vectors, resulting in a scalar value. Hence $d^2$ multiplications and $d^2$ additions for each projection lead to $O(d^2)$, and six such projections require $O(6d^2)$. A minimal sketch of these steps follows.
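The sketch below condenses the steps above into NumPy, under the stated assumptions ($N$ images of $d$ pixels, $N \ll d$); it is an illustration, not the authors' implementation, and the array names are ours.

import numpy as np

def pca_eigenfaces(X, n_components=6):
    # X: N face images as rows, each with d pixels (N x d), N << d.
    mean = X.mean(axis=0)                    # O(Nd)
    M = (X - mean).T                         # d x N matrix of centered faces
    S = M.T @ M                              # N x N surrogate matrix (KL trick), O(N^2 d)
    eigvals, eigvecs = np.linalg.eigh(S)     # eigendecomposition, O(N^3)
    order = np.argsort(eigvals)[::-1]        # sort by descending eigenvalue
    V = eigvecs[:, order[:n_components]]
    U = M @ V                                # map back to d-dimensional eigenfaces
    U /= np.linalg.norm(U, axis=0)           # normalize each eigenface column
    return mean, U

# Usage: mean, U = pca_eigenfaces(gallery); coeffs = (probe - mean) @ U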

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be $M \times N$ pixels, and let the face image be represented as a column vector of dimension $d$ ($= M \times N$). A Gaussian kernel of dimension $w \times w$ is used, each octave has $s + 3$ scales, and a total of $L$ octaves are used. The important phases of SIFT are as follows:

(i) Extrema detection:
(a) Computing the scale space: (1) in each scale, $w^2$ multiplications and $w^2 - 1$ additions are performed by the convolution operation for each pixel, so $O(dw^2)$; (2) there are $s + 3$ scales in each octave, so $O(dw^2(s+3))$; (3) with $L$ octaves, $O(Ldw^2(s+3))$; (4) overall, $O(dw^2 s)$ is found.
(b) Computing the $s + 2$ Difference of Gaussians (DoG) images for each of the $L$ octaves: $O((s+2) \cdot d)$ per octave, with scale factor $k = 2^{1/s}$, so $O(sd)$ for $L$ octaves.
(c) Extrema detection: $O(sd)$.

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let $\alpha d$ be the number of surviving pixels, so $O(\alpha d s)$.

(iii) Orientation assignment: $O(sd)$.


(iv) Keypoint descriptor computation: if a $p \times p$ neighborhood of each keypoint is considered, then $O(p^2 d)$ is found.
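In practice, all of these stages are available as a single call in OpenCV. The following minimal sketch assumes OpenCV 4.4 or later (where SIFT_create is in the main module); the image path is a hypothetical placeholder.

import cv2

img = cv2.imread("face_instance.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
sift = cv2.SIFT_create()                    # builds the scale space internally
keypoints, descriptors = sift.detectAndCompute(img, None)
# descriptors: one 128-element vector per surviving keypoint
print(len(keypoints), descriptors.shape)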

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, so the complexity is linear, $O(n)$ with $n = 128$. Let the $i$th eigenface of the probe face image and of the reference face image have $k_1$ and $k_2$ surviving keypoints, respectively. Each keypoint from $k_1$ is compared with each of the $k_2$ keypoints by Euclidean distance, so the complexity is $O(k_1 k_2)$ for a single eigenface pair and $O(6n^2)$ for six pairs of eigenfaces ($n$ = number of keypoints). If there are $M$ reference faces in the gallery, then the total complexity is $O(6Mn^2)$.
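A minimal sketch of this $k_1 \times k_2$ Euclidean comparison is given below; the distance threshold is an illustrative assumption, not a value reported by the authors.

import numpy as np

def match_score(desc_probe, desc_ref, threshold=250.0):
    # desc_probe: k1 x 128, desc_ref: k2 x 128 SIFT descriptors.
    diff = desc_probe[:, None, :] - desc_ref[None, :, :]   # k1 x k2 x 128
    dist = np.sqrt((diff ** 2).sum(axis=2))                # pairwise Euclidean distances
    nearest = dist.min(axis=1)           # 1-NN distance for each probe keypoint
    return int((nearest < threshold).sum())   # count of matching keypoint pairs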

(d) Time Complexity of Fusion. As the domains of the six individual matchers are different, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtraction operations and one division operation, so it takes constant time, $O(1)$. The $n$ pairs of probe and gallery images in the $i$th principal component require $O(n)$, so six principal components require $O(6n)$. Finally, the sum fusion requires five summations for each pair of probe and reference faces, which is constant time, $O(1)$; for $n$ pairs of probe and reference face images, it requires $O(n)$.

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations: computation of the correlations in the search space, insertion of correlation values into the OPEN list, insertion of correlation values at their proper positions in the CLOSED list according to insertion sort, and division of the CLOSED list of $n$ correlation values into three disjoint sets. The first two operations take constant time, the third operation takes $O(n^2)$ as it follows insertion sort, and the last operation takes linear time. Overall, the time required by cohort selection is $O(n^2)$.

Now the overall time complexity ($T$) of the ensemble network is calculated as follows:

$$T = \max\bigl(\max(T_1) + \max(T_2) + \max(T_3) + \max(T_4) + \max(T_5)\bigr) = \max\bigl(O(n^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)\bigr) \approx O(n^3), \tag{8}$$

where $T_1$ is the time complexity of PCA computation, $T_2$ is the time complexity of SIFT keypoint extraction, $T_3$ is the time complexity of matching, $T_4$ is the time complexity of fusion, and $T_5$ is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is $O(n^3)$, while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed sets of principal components, ranging from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and to generate numbers of matching points as matching scores. We investigate and analyze various effects on the face recognition system which directly or indirectly improve the total performance of the recognition system in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I629–I632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Vob, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506–II513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International MultiConference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 2012 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.


Figure 2: Six different face instances of a single cropped face image with variations of principal components.

detected and extracted from this approximated face instance, and a feature vector that consists of interest points is created. In this experiment, six different face instances are generated by varying the number of principal components from one to six, and they are derived from a single face image.

PCA-characterized face instances are shown in Figure 2; they are arranged according to the order of the considered principal components. The same set of PCA-characterized face instances is extracted from a probe face, and feature vectors that consist of SIFT interest points are formed. Matching is performed with the corresponding face instances in terms of the SIFT points obtained from the reference face instances. We apply a k-nearest neighbor (k-NN) approach [26] to establish correspondence and obtain the number of pairs of matching keypoints. Figure 3 depicts matching pairs of SIFT keypoints on two sets of face instances, which correspond to reference and probe faces. Figure 4 shows the amount of variance captured by all principal components; because the first principal component explains approximately 70% of the variance, we expect that additional components are probably needed. The first four principal components explain the total variability in the face image that is depicted in Figure 2.

4.2. Face Matching: Type II. The second type of face matching strategy utilizes the outliers that are available around the meaningful facial area to be recognized. This type of face matching examines the effect of outliers and legitimate features employed together for face recognition. Outliers may be located on the forehead, above the legitimate and localized face area, around the face area to be considered, outside the meaningful area, on both ears, and on the head. However, the effect of outliers is limited because the legitimate interest points are primarily detected in major salient areas. This analysis may be useful when face area localization cannot be performed, and sometimes outliers are an effective addition to the face matching process.

In the Type I face matching strategy, we employ dimensionality reduction, project an entire face onto a low-dimensional feature space using PCA, and construct six different face instances with principal components that vary between one and six. We extract SIFT keypoints from the six multiscale face instances and create a set of feature vectors. The face matching task is performed using the k-NN approach, and matching scores are generated as matching proximities from a pair of reference and probe faces. The matching scores are passed through the fusion module and consolidated to form an integrated vector of matching proximities. Figure 5 demonstrates matching between pairs of face instances of an entire face, which correspond to the reference face and probe

face, for a certain number of principal components. Figure 6 shows the amount of variance captured by all principal components; because the first principal component explains less than 50% of the variance, we expect that additional components are needed. The first two principal components explain approximately two-thirds of the total variability in the face image depicted in Figure 5.

5. Fusion of Matching Proximities

5.1. Baseline Approach to Fusion. To fuse [26–28] the matching proximities that are computed from all matchers (based on principal components) and form a new vector, we apply two popular fusion rules, namely, "sum" and "max" [26]. Let $\mathrm{ms} = [\mathrm{ms}_{ij}]$ be the match scores generated by multiple matchers ($j$), where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$. Here, $n$ denotes the number of match scores generated by each matcher, and $m$ represents the number of matchers presented to the face matching process. Consider that the labels $\omega_0$ and $\omega_1$ are two different classes, referred to as the genuine class and the imposter class, respectively. We can assign $\mathrm{ms}$ to either the label $\omega_0$ or the label $\omega_1$ based on the class-conditional probability. The probability of error can be minimized by applying Bayesian decision theory [29] as follows:

$$\text{Assign } \mathrm{ms} \to \omega_i \quad \text{if } P(\omega_i \mid \mathrm{ms}) > P(\omega_j \mid \mathrm{ms}), \quad i \neq j,\; i, j = 0, 1. \tag{1}$$

The posterior probability $P(\omega_i \mid \mathrm{ms})$ can be derived from the class-conditional density function $p(\mathrm{ms} \mid \omega_i)$ using the Bayes formula:

$$P(\omega_i \mid \mathrm{ms}) = \frac{p(\mathrm{ms} \mid \omega_i)\, P(\omega_i)}{p(\mathrm{ms})}. \tag{2}$$

Here, $P(\omega_i)$ is the a priori probability of the class label $\omega_i$, and $p(\mathrm{ms})$ denotes the probability of encountering $\mathrm{ms}$. Thus, (1) can be rewritten as follows:

$$\text{Assign } \mathrm{ms} \to \omega_i \quad \text{if } \mathrm{LR} > \tau, \text{ where } \mathrm{LR} = \frac{p(\mathrm{ms} \mid \omega_i)}{p(\mathrm{ms} \mid \omega_j)}, \quad i \neq j,\; i, j = 0, 1. \tag{3}$$

The ratio LR is known as the likelihood ratio, and $\tau$ is the predefined threshold. The class-conditional density $p(\mathrm{ms} \mid \omega_i)$ can be determined from the training match score vectors using either parametric or nonparametric techniques. The class-conditional probability density function can then be extended to the "sum" and "max" fusion rules. The max fusion rule can be written as

$$p(\mathrm{ms} \mid \omega_i) = \max_{j = 1, 2, \ldots, m} \bigl(p(\mathrm{ms}_j \mid \omega_i)\bigr), \quad i = 0, 1. \tag{4}$$
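As a concrete illustration of the decision rule in (3), the sketch below fits a single Gaussian to each class's training scores (one parametric choice among those the text allows) and thresholds the likelihood ratio; the function name and the threshold value are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def lr_decision(ms, genuine_train, imposter_train, tau=1.0):
    # Class-conditional densities p(ms | w0) and p(ms | w1), approximated
    # by Gaussians fitted to the genuine and imposter training score sets.
    p_gen = norm.pdf(ms, loc=genuine_train.mean(), scale=genuine_train.std())
    p_imp = norm.pdf(ms, loc=imposter_train.mean(), scale=imposter_train.std())
    lr = p_gen / p_imp                       # likelihood ratio LR of (3)
    return "genuine" if lr > tau else "imposter"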



Figure 3: Matching pairs of face instances with variations of principal components for a pair of corresponding reference and probe face images.


Figure 4: Amount of variance accounted for by each principal component of the face image depicted in Figure 2.

Here we replace the joint density function by maximizing the marginal density. The marginal density $p(\mathrm{ms}_j \mid \omega_i)$ for $j = 1, 2, \ldots, m$ and $i = 0, 1$ ($i$ refers to either a genuine sample or an imposter sample) can be estimated from the training vectors of the genuine and imposter scores that correspond to each of the $m$ matchers. Therefore, we can rewrite (4) as follows:

$$\mathrm{FS}_{\max}^{\omega_i = 0} = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_{i}=0), \qquad \mathrm{FS}_{\max}^{\omega_i = 1} = \max_{j = 1, 2, \ldots, m} p(\mathrm{ms}_j \mid \omega_{i}=1). \tag{5}$$

$\mathrm{FS}_{\max}$ denotes the fused match scores obtained by fusing the $m$ matchers in terms of exploiting the maximum scores.

We can easily extend the "max" fusion rule to the "sum" fusion rule by assuming that the posterior probability does not significantly deviate from the prior probability. Therefore, we can write an equation for fusing the marginal densities, known as the "sum" rule, as follows:

$$\mathrm{FS}_{\mathrm{sum}}^{\omega_i = 0} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_{i}=0), \qquad \mathrm{FS}_{\mathrm{sum}}^{\omega_i = 1} = \sum_{j=1}^{m} p(\mathrm{ms}_j \mid \omega_{i}=1). \tag{6}$$

Independently, we apply the "max" and "sum" fusion rules to the genuine and imposter scores corresponding to each of the six matchers, which are determined from six different face instances for which the principal components vary from one to six. Prior to the fusion of the matching proximities produced by multiple matchers, the proximities need to be normalized and the data mapped to the range [0, 1]. In this case, we use the min-max normalization technique [26] to map the proximities to the specified range, and the T-norm cohort selection technique [30, 31] is applied to improve the performance. To generate matching scores, we apply the k-NN approach [32]. A minimal sketch of this normalization and fusion step follows.
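The sketch below assumes a 6 x n array of raw scores (one row per face-instance matcher); it is an illustration of min-max normalization followed by the "sum" and "max" rules, not the authors' code.

import numpy as np

def min_max(scores):
    # Map one matcher's scores to [0, 1].
    return (scores - scores.min()) / (scores.max() - scores.min())

def fuse(score_matrix):
    # score_matrix: 6 x n raw scores, one row per principal-component matcher.
    normalized = np.vstack([min_max(row) for row in score_matrix])
    fused_sum = normalized.sum(axis=0)   # "sum" rule, cf. (6)
    fused_max = normalized.max(axis=0)   # "max" rule, cf. (5)
    return fused_sum, fused_max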

5.2. Cohort Selection Approach to Fusion. Recent studies suggest that cohort selection [30, 31] and cohort-based score normalization [30] can exhibit robust performance and increase the robustness of biometric systems. To understand the usability of cohort selection, consider a cohort pool: a set of matching scores obtained from nonmatch templates in the database when a probe sample is matched against the reference samples in the database. The matching process generates matching scores, and among this set of scores of the corresponding reference samples, one template is identified as the claimed identity. This claimed identity is the true match to the probe sample, and its matching proximity is significant. Apart from the claimed identity, the remainder of the matched scores are known as cohort scores. We refer to the match score determined from the claimed identity as the true matching proximity. However, the cohort scores and the score determined from the claimed identity exhibit similar degradation. To improve the performance of the proposed system, we need to normalize the true matching proximity using the cohort



Figure 5: Face matching strategy, showing matching between pairs of face instances with outliers.


Figure 6: Amount of variance accounted for by each principal component of the face image depicted in Figure 5.

scores. We can apply simple statistics, such as the mean, standard deviation, and variance, to compute the normalized score of the true reference template using the T-norm cohort normalization technique. We assume that the "most similar cohort scores" and "most dissimilar cohort scores" contribute to the computation of normalized scores, which carry more discriminatory information than the raw matching score. As a result, the false rejection rate may decrease, and the system can successfully identify a subject from a pool of reference templates.

Two types of probe samples exist: genuine probe samples and imposter probe samples. When a genuine probe face is compared with the cohort models, the best-matched cohort model and a few of the remaining cohort models are expected to be very similar, due to the similarity among the corresponding faces. Matching a genuine probe face with the true cohort model and with the remaining cohort models produces matching scores with the lowest similarity when the true matched template and the remaining templates in the database are dissimilar. The comparison of an imposter face with the reference templates in the database generates matching scores that are independent of the set of cohort models.

Although cohort-based score normalization is considered extra overhead for the proposed system, it can improve performance. Computational complexity increases with the number of comparisons against cohort models. To reduce the overhead of integrating cohort models, we select a subset of cohort models that contains the majority of the discriminating information, and we combine this cohort subset with the true match score to obtain a normalized score. This cohort subset is known as an "ordered cohort subset," which contains the majority of the discriminatory information. We can select a cohort subset for each true match template in the database to normalize each true match score when we have a number of probe faces to compare. In this context, we propose a novel cohort subset selection method that utilizes heuristic cohort selection statistics. Because the cohort selection strategy is substantially inspired by heuristic-based T-statistics and a baseline heuristic approach, we refer to this method as hybrid heuristic statistics, in which two-stage filtering is performed to generate the most discriminating cohort scores.

5.2.1. Methodology: Hybrid Heuristics Cohort. The proposed statistics begin with a cohort score set $\chi_i = \{x_{i1}, x_{i2}, \ldots, x_{in}\}$, where $i \in \{\text{genuine scores}, \text{imposter scores}\}$ and $n$ is the number of cohort scores, consisting of the genuine and imposter scores present in the set $\chi$. Therefore, each score is labeled $x_{ij} \in \{\text{genuine scores}, \text{imposter scores}\}$, with $j \in \{1, 2, \ldots, n\}$ sample scores. From the cohort score set, we can calculate the mean and standard deviation for the genuine and imposter class labels. Let $\mu^{\text{genuine}}$ and $\mu^{\text{imposter}}$ be the mean values, and let $\delta^{\text{genuine}}$ and $\delta^{\text{imposter}}$ be the standard deviations for both class labels. Using T-statistics [33], we can determine a set of correlation scores corresponding to the cohort scores:

$$T(x_j) = \frac{\bigl|\mu^{\text{genuine}} - \mu^{\text{imposter}}\bigr|}{\sqrt{\bigl(\delta_j^{\text{genuine}}\bigr)^2 / n_{\text{genuine}} + \bigl(\delta_j^{\text{imposter}}\bigr)^2 / n_{\text{imposter}}}}. \tag{7}$$

In (7), $n_{\text{genuine}}$ and $n_{\text{imposter}}$ are the numbers of cohort scores labeled genuine and imposter, respectively. We calculate all correlation scores and make a list of all $T(x_j)$


scores. Then we construct a search space that includes these correlation scores.

Because (7) exhibits a correlation between cohort scores, it can be extended to the baseline heuristic approach in the second stage of the hybrid heuristic-based cohort selection method. The objective of the proposed cohort selection method is to select the cohort scores corresponding to the two subsets of highest and lowest correlation scores obtained by (7); these two subsets of cohort scores constitute the cohort subset. We first collect the correlation scores in a FRINGE data structure, or OPEN list; starting from this initial score, we expand the fringe by adding more correlation scores to it. We also maintain another list, which we refer to as the CLOSED list. After calculating the first $T(x_j)$ score in the fringe, we omit this score from the fringe and expand it. The next two correlation scores from the search space are removed but retained in the fringe. The correlation score that was removed from the fringe is added to the CLOSED list. Because the fringe contains two scores, we arrange them in decreasing order by sorting, and we remove the maximum score from the fringe. This maximum score is then added to the CLOSED list and maintained in nonincreasing order with the other scores in the CLOSED list. We repeat this recursive process in each iteration until the search space is empty. After moving all correlation scores from the fringe to the CLOSED list, we obtain a sorted list. The sorted scores in the CLOSED list are divided into three parts; the first part and the last part are merged to create a single list of correlation scores that exhibit the most discriminating features. We establish a cohort subset by determining the most promising cohort scores, which correspond to the retained correlation scores of the CLOSED list.

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique. T-norm reflects the property that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decision, we consolidate the normalized scores for the six different face models, depending on the principal components considered, which range between one and six. A minimal sketch of this selection and normalization procedure follows.
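The sketch below assumes the per-score T-statistics of (7) have already been computed, and it compresses the OPEN/CLOSED expansion into a direct sort (which yields the same nonincreasing CLOSED list); the helper names are illustrative.

import numpy as np

def select_cohort_subset(cohort_scores, t_scores):
    # Indices sorted by T-statistic, nonincreasing: the final CLOSED list.
    closed = np.argsort(t_scores)[::-1]
    third = len(closed) // 3
    keep = np.concatenate([closed[:third], closed[-third:]])  # first + last parts
    return np.asarray(cohort_scores)[keep]

def t_norm(true_score, cohort_subset):
    # T-norm: normalize the true match score by the cohort subset statistics.
    return (true_score - cohort_subset.mean()) / cohort_subset.std()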

6. Experimental Evaluation

The rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we have set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max and sum fusion rules. We have implemented the proposed method from two perspectives: the Type II perspective indicates that face recognition employs the face images provided in the databases without cropping, and the Type I perspective

Figure 7: Face images from the BioID database.

indicates that the face recognition technique uses a manually localized face area, obtained by cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces of these two databases are presented with a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

6.1. Databases

6.1.1. BioID Face Database. The face images in the BioID [35] database are recorded in a degraded environment and are primarily employed for face detection; however, we can also utilize this database for face recognition. Because the faces are captured with a variety of background information and illumination, the evaluation of this database is challenging. Here we analyze the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces are captured against a white homogeneous background in an upright frontal position, and all images have scale changes of approximately 10%. Of the 2800 face images, the number of male contributors is equal to the number of female contributors; that is, 100 male and 100 female participants each contributed an equal number of face images, totaling 1400 face images each. All images are colorful, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired under uniform illumination against homogeneous backgrounds, with neutral and smiling facial expressions. The database contains faces of ages varying from 18 to 60 years.


Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we have developed a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually random processes; therefore, to address the biases of each random process, we perform the evaluation with a random distribution of the number of training samples and probe samples. However, the distribution is completely dependent on the database employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we distribute the face images equally among the training and test sets: the faces contributed by each person are divided into two sets, namely, the training set and the test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person has contributed $m$ faces and that the database size is $p$ ($p$ denotes the total number of face images). Let $q$ denote the total number of subjects/individuals who contributed the $m$ face images. To extend this protocol, we divide the $m$ faces into two equal groups of $m/2$ images and retain one group for the training/reference set and one for the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the $m/2$ faces in the probe set corresponding to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension $q \times (m/2)$ and imposter scores of dimension $q \times (q-1) \times m$. The k-NN (k-nearest neighbor) metric is employed to generate common matching points between a pair of face images, and we employ the min-max normalization technique to normalize the match scores and map them to the range [0, 1]. In this manner, two sets of match scores of unequal dimensions, corresponding to a matcher, are obtained when the face images are compared within intraclass ($\omega_i$) sets and interclass ($\omega_j$) sets, which we refer to as the genuine and imposter score sets. The sketch below illustrates these pairing counts.
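A worked illustration of the score-set sizes implied by this protocol, using the FEI figures stated above ($q$ = 200 subjects, $m$ = 14 images each):

# Score-set sizes implied by the protocol, for the FEI database.
q, m = 200, 14
genuine_scores = q * (m // 2)        # q x (m/2)       -> 1400
imposter_scores = q * (q - 1) * m    # q x (q-1) x m   -> 557200
print(genuine_scores, imposter_scores)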

6.3. Experimental Results and Analysis

6.3.1. On the FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face


Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

database, which contains neutral and smiling expressions of faces. The neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effects of (a) a cloud-enabled environment; (b) face matching without extracting a face area; (c) face matching with extracting a face area; (d) face matching by projecting the face onto a low-dimensional feature space using PCA, with the principal components varying from one to six, under the conditions mentioned in (a), (b), and (c); and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system as GAR versus FAR curves for varying principal components, and the boxplots show how the EER varies for different values of principal components.

Figure 9 shows a boxplot for the case in which the face area has not been extracted, and Figure 12 shows another boxplot for the case in which the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the principal component is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the principal component is 1, and the EER varies between 0% and 2.5%; an EER of 0.5% is determined for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. A maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the principal component is set to one; for the remaining cases, in which the principal component varies between two and six, low EERs are obtained, with the maximum among them when the principal component is set to four. The ROC curves in Figure 10 exhibit high recognition accuracies for all cases except when the principal component is set to 1; these ROC curves are shown for face images that are not localized. Thus, we decouple the information about the


Figure 10: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)   Recognition accuracy (%)
Principal component 1      7.07      92.93
Principal component 2      0.5       99.5
Principal component 3      0.97      99.03
Principal component 4      1.01      98.99
Principal component 5      1         99
Principal component 6      1         99

Thus, we decouple the information about the legitimate face area from explicit information about nonface areas such as the forehead, hair, ear, and chin. These areas may provide crucial information, which is treated as additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying numbers of principal components. Figure 11 shows the recognition accuracies for different numbers of principal components on the FEI database when face localization is not performed; Figure 14 shows the corresponding accuracies when face localization is performed.

Figure 13 shows the ROC curves determined from extensive experiments with the proposed algorithm on the FEI face database when a face image is localized and only the face part is extracted.

Figure 11: Recognition accuracy versus principal component curve when the face localization task is not performed.

Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

In this case, a maximum EER of 6% is attained when the number of principal components is set to 1; in the other cases, the EERs are as low as those obtained with nonlocalized faces. Table 2 depicts the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all of the remaining cases show clear improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas corresponding to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is accurate when the number of principal components varies between two and six; in the case of principal component 3, an EER of 0% and a recognition accuracy of 100% are attained.

Based on two major considerations, with and without face localization, we investigate the proposed algorithm by fusing face instances under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the


Figure 13: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)   Recognition accuracy (%)
Principal component 1      6         94
Principal component 2      0.5       99.5
Principal component 3      0         100
Principal component 4      0.5       99.5
Principal component 5      0.5       99.5
Principal component 6      1         99.0

"sum" fusion rule. The effect of localizing versus not localizing the face area is investigated, and both fusion rules are applied in each setting. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused at the match score level by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained with the "max" fusion rule; in this case, the "sum" fusion rule outperforms the "max" fusion rule. When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules of the fusion-based classifier, the "sum" and "max" fusion rules

Figure 14: Recognition accuracy versus principal component curve when the face localization task is performed.

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection   EER (%)   Recognition accuracy (%)
Sum fusion rule + min-max normalization                0.0       100
Max fusion rule + min-max normalization                1.5       98.5
Sum fusion rule + T-norm (hybrid heuristic)            0.5       99.5
Max fusion rule + T-norm (hybrid heuristic)            0.5       99.5

achieve 99.5% recognition accuracy. Compared with the "sum" fusion rule-based classifier without cohort selection, the proposed cohort selection method thus degrades the recognition accuracy by 0.5%. However, the hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum and max), with 99.5% recognition accuracy in each case. Table 3 lists the recognition accuracies, and Figure 16 plots them for the "sum" and "max" fusion rules.
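For concreteness, the sketch below shows one way the six matchers' score lists could be min-max normalized and then combined under the "sum" and "max" rules. The toy scores and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of match score level fusion across the six PCA-instance
# matchers; scores are aligned by comparison index.
def min_max(scores):
    # Map one matcher's raw scores to [0, 1].
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse(matcher_scores, rule="sum"):
    # matcher_scores: six lists of raw scores, one list per face instance.
    normalized = [min_max(s) for s in matcher_scores]
    combine = sum if rule == "sum" else max
    return [combine(scores) for scores in zip(*normalized)]

six_matchers = [[12, 30, 7], [9, 28, 5], [11, 25, 6],
                [10, 27, 8], [13, 29, 4], [8, 26, 9]]   # toy raw scores
print(fuse(six_matchers, rule="sum"))
print(fuse(six_matchers, rule="max"))
```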

After evaluating each individual classifier, in which the face instance is determined by setting the number of principal components to a value from 1 to 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. As in the previous approach, we apply two fusion rules, namely, the "sum" and "max" fusion rules, to integrate the matching scores. In addition, we exploit the proposed statistical cohort selection technique, known as hybrid heuristic statistics, and for cohort score normalization we apply the T-norm normalization technique.


Figure 15: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between 1 and 6.

Figure 16: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies when the cohort selection method is applied. These accuracies are obtained without face localization.

This technique maps the cohort scores into a normalized score set that exhibits the characteristics of each score in the cohort subset and enables the system to obtain the correct match rapidly. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy with both the "sum" and "max" fusion rules.

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection   EER (%)   Recognition accuracy (%)
Sum fusion rule + min-max normalization                0.0       100
Max fusion rule + min-max normalization                0.0       100
Sum fusion rule + T-norm (hybrid heuristic)            0.0       100
Max fusion rule + T-norm (hybrid heuristic)            0.0       100

Figure 17: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

The effect of face localization plays a central role in raising the recognition accuracy to 100% in all four cases. Due to face localization, however, the number of feature points extracted from the face images differs from that of the nonlocalized faces obtained when the cohort selection method is applied. These accuracies are obtained after a face is localized.
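A minimal sketch of the T-norm step described above, with illustrative values: the true match score is shifted and scaled by the mean and standard deviation of the selected cohort subset's scores.

```python
# Minimal T-norm cohort score normalization sketch; cohort values are toy data.
from statistics import mean, pstdev

def t_norm(true_score, cohort_scores):
    # Normalize the true match score against the cohort score distribution.
    mu, sigma = mean(cohort_scores), pstdev(cohort_scores)
    return (true_score - mu) / sigma if sigma > 0 else 0.0

cohort_subset = [0.21, 0.18, 0.74, 0.69, 0.15]  # most similar + most dissimilar cohorts
print(t_norm(0.82, cohort_subset))
```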

6.3.2. BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database while considering two constraints. Under the first constraint, the face matching strategy is applied when the faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized.


Figure 18: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

However, the faces provided in the BioID face database are captured in a variety of environments and illumination conditions, and the faces in the database show various facial expressions. Evaluating the performance of any face matching technique is therefore challenging, because the positions and locations of frontal view images must be tracked across varied environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and yields encouraging results under these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of one to six.

Figure 20: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)   Recognition accuracy (%)
Principal component 1      8.53      91.47
Principal component 2      2.78      97.22
Principal component 3      8.5       91.5
Principal component 4      13.89     86.11
Principal component 5      13.89     86.11
Principal component 6      11.12     88.88

For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, the highest accuracy achieved when the Type II constraint (no face localization) is validated; the corresponding EER of 2.78% is the lowest among all six EERs. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11%. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 depicts the same results as a curve whose points denote the recognition accuracies corresponding to principal components one to six. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.


Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.

Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

Having considered the constraint without face localization, we now consider the constraint in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under this Type I (localized) constraint.

As shown in Figures 22 and 23, as the principal component varies between one and six, the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100% in recognizing faces. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components at different false acceptance rates (FARs); the principal components (2, 3, and 6) for which the recognition accuracy outperforms the other components yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components.

Figure 23: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)   Recognition accuracy (%)
Principal component 1      8.5       91.5
Principal component 2      0         100
Principal component 3      0         100
Principal component 4      5.59      94.41
Principal component 5      5.98      94.02
Principal component 6      0         100

The points on the curve marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when the localized face is used.

To validate the fusion rules and the hybrid heuristics-based cohort selection under the Type II constraint, we evaluate the performance of the proposed technique on faces that are not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database, together with the hybrid heuristics-based cohort selection. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces.


Figure 24: Recognition accuracy versus principal component when the face localization task is performed.

Table 7: EER and recognition accuracies of the proposed face matching strategy on the BioID face database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection   EER (%)   Recognition accuracy (%)
Sum fusion rule + min-max normalization                5.55      94.45
Max fusion rule + min-max normalization                0         100
Sum fusion rule + T-norm (hybrid heuristic)            0.5       99.5
Max fusion rule + T-norm (hybrid heuristic)            0         100

As shown in Table 7 and Figure 25, for the first two matching strategies, in which the "sum" and "max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, with EERs of 5.55% and 0%, respectively. In this case, the "max" fusion rule outperforms the "sum" fusion rule, which is further illustrated by the ROC curves in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, in which the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules, respectively.

In the last segment of the experiment, we measure the performance in terms of recognition accuracy and EER when the faces are localized; the matching paradigms listed in Table 7 are verified again under this constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies. The small drop in accuracy of 0.65% for the combination of the "sum" fusion rule and the hybrid heuristics is not unreasonable, and the remaining combination, the "max" fusion rule with the hybrid heuristics-based cohort selection method, achieves an accuracy of 100%.

Figure 25: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show recognition accuracies without the cohort selection method, and the last two cases show recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained when a face is not localized.

Figure 26: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

Therefore, we conclude that the "max" fusion rule outperforms the "sum" rule, which is attributed to a change in the produced cohort subset under both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces occur under unrestricted conditions. Face recognition in the wild is challenging because of the nature of face acquisition in unconstrained environments: not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth.


Table 8: EERs and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection   EER (%)   Recognition accuracy (%)
Sum fusion rule + min-max normalization                0         100
Max fusion rule + min-max normalization                0         100
Sum fusion rule + T-norm (hybrid heuristic)            0.65      99.35
Max fusion rule + T-norm (hybrid heuristic)            0         100

Figure 27: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

Because our framework is based on only the first six principal components and SIFT features, it would require the incorporation of additional tools: tools to detect and crop the face region and discard as much background as possible, with the detected face regions upsampled or downsampled to form face image vectors of uniform dimension for PCA; and tools to estimate pose and then apply 2D frontalization to compensate for pose variance, which would allow fewer principal components to be considered.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel and other well-known face recognition models.

Figure 28: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show recognition accuracies without the cohort selection method, and the last two cases show recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

These models include some cloud computing-based face recognition algorithms, which are limited in number, and some traditional face recognition systems that are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives: the first applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second employs similar face databases for comparison with other methods and does not involve cloud computing. For the first perspective, we compare the proposed system with two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are scarce, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy; it also lists the number of samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                        Face database   Number of face images   Number of distinct subjects   Recognition accuracy (%)   Error (%)
Cloud vision [36]             ORL             400                     40                            97.08                      2.92
Mobile cloud [37]             Local DB        50                      5                             85                         15
Facial features [38]          BioID           1521                    23                            92.35                      7.65
2D-PCA + PSO-SVM [39]         BioID           1521                    23                            95.65                      4.35
Proposed, Type I              BioID           1521                    23                            100                        0
Proposed, Type I              FEI             2800                    20                            100                        0
Proposed, Type II             BioID           1521                    23                            97.22                      2.78
Proposed, Type II             FEI             2800                    20                            99.5                       0.5
Proposed, Type I + fusion     BioID           1521                    23                            100                        0
Proposed, Type I + fusion     FEI             2800                    20                            100                        0
Proposed, Type II + fusion    BioID           1521                    23                            100                        0
Proposed, Type II + fusion    FEI             2800                    20                            100                        0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by its modules as a function of the input length and is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each individual module is computed first, and the overall complexity of the ensemble network is then obtained by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let d be the number of pixels (height × width) of each grayscale face image, and let N be the number of face images. PCA computation has the following steps.

(i) Finding the mean of the N samples takes O(Nd) (N additions of d-dimensional vectors, after which the sum is divided by N).

(ii) As the covariance matrix is symmetric, deriving only its upper triangular elements is sufficient. For each of the d(d + 1)/2 elements, N multiplications and N additions are required, leading to O(Nd²) time complexity. Let M denote the d × N mean-centered data matrix.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of MM^T (d × d) we compute M^T M (N × N), which requires d multiplications and d additions for each of its N² elements, hence O(N²d) time complexity (generally N ≪ d).

(iv) Eigendecomposition of the N × N matrix M^T M by the SVD method requires O(N³).

(v) Sorting the eigenvectors (each of dimension d × 1) in descending order of the eigenvalues requires O(N²); taking only the first six principal component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two d-dimensional vectors, yielding a scalar value; hence, d multiplications and d additions per projection lead to O(d), and six such projections require O(6d).
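Steps (ii)-(v) can be condensed into a short NumPy sketch of the KL trick; the toy dimensions and the function name are assumptions for illustration, not the authors' code.

```python
# Minimal NumPy sketch of the Karhunen-Loeve trick: eigendecompose the
# N x N matrix M^T M instead of the d x d covariance matrix, then map
# the eigenvectors back to d dimensions.
import numpy as np

def pca_eigenfaces(faces, n_components=6):
    """faces: N x d matrix, one flattened grayscale face per row."""
    X = faces - faces.mean(axis=0)           # mean-centering, O(Nd)
    M = X.T                                  # d x N mean-centered data matrix
    small = M.T @ M                          # N x N instead of d x d, O(N^2 d)
    vals, vecs = np.linalg.eigh(small)       # eigendecomposition, O(N^3)
    order = np.argsort(vals)[::-1]           # descending eigenvalue order, O(N^2) at worst
    eigenfaces = M @ vecs[:, order[:n_components]]    # back to d x 6
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # unit-length eigenfaces
    return eigenfaces

gallery = np.random.rand(20, 140 * 140)      # toy gallery: N = 20 faces, d = 19600
probe = np.random.rand(140 * 140)            # toy probe face, flattened
W = pca_eigenfaces(gallery)
projection = W.T @ (probe - gallery.mean(axis=0))     # six O(d) dot products
```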

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be M × N pixels, and let the face image be represented as a column vector of dimension d (= M × N). A Gaussian kernel of dimension w × w is used, each octave has s + 3 scales, and a total of L octaves are used. The important phases of SIFT are as follows.

(i) Extrema detection:

(a) Computing the scale space:
(1) In each scale, w² multiplications and w² − 1 additions are performed per pixel by the convolution operation, so O(dw²).
(2) There are s + 3 scales in each octave, so O(dw²(s + 3)).
(3) There are L octaves, so O(Ldw²(s + 3)).
(4) Overall, this is O(dw²s).

(b) Computing the s + 2 Difference of Gaussians (DoG) images per octave, with k = 2^(1/s), takes O((s + 2)d); for the L octaves, this is O(sd).

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).


(iv) Keypoint descriptor computation: if a p × p neighborhood of each keypoint is considered, then this takes O(p²d).
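To make the scale-space costs in (i) concrete, the following sketch builds the Gaussian and DoG pyramids, with SciPy's gaussian_filter standing in for the w × w convolution; the octave and scale counts follow the text, and everything else is an illustrative assumption.

```python
# Minimal sketch of the Gaussian/DoG pyramid whose operation counts are
# itemized above; parameters are toy values, not the authors' settings.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, L=3, s=3, sigma0=1.6):
    k = 2 ** (1.0 / s)                        # scale multiplier, k = 2^(1/s)
    octaves = []
    for _ in range(L):                        # L octaves
        blurred = [gaussian_filter(image, sigma0 * k**i) for i in range(s + 3)]
        # s + 2 Difference-of-Gaussians images, one subtraction per pixel
        octaves.append([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
        image = blurred[s][::2, ::2]          # downsample for the next octave
    return octaves

pyr = dog_pyramid(np.random.rand(128, 128))
print(len(pyr), len(pyr[0]))                  # L octaves, s + 2 DoG images each
```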

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing two such points by Euclidean distance requires 128 subtractions, the squaring of each of the 128 differences, 127 additions to sum them, and one square root at the end, so the time complexity is linear, O(n), where n = 128. Let the ith eigenface of the probe face image and that of the reference face image have k1 and k2 surviving keypoints, respectively. Each keypoint from k1 is compared with each of the k2 keypoints by Euclidean distance, so a single eigenface pair costs O(k1 k2), and the six pairs of eigenfaces cost O(6n²) (n = number of keypoints). If there are M reference faces in the gallery, then the total complexity is O(6Mn²).
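As an illustration of the O(k1·k2) comparison just described, this sketch counts, by brute force, the probe keypoints whose nearest reference descriptor lies within a distance threshold; the threshold value is a hypothetical stand-in for the paper's k-NN acceptance criterion.

```python
# Minimal brute-force Euclidean matching of 128-element SIFT descriptors;
# descriptor arrays and threshold are toy assumptions.
import numpy as np

def count_matches(desc_probe, desc_ref, threshold):
    matches = 0
    for d in desc_probe:                               # k1 iterations
        dists = np.linalg.norm(desc_ref - d, axis=1)   # k2 distances, O(128) each
        if dists.min() < threshold:
            matches += 1
    return matches

k1, k2 = 40, 55                                        # toy keypoint counts
probe_desc = np.random.rand(k1, 128)
ref_desc = np.random.rand(k2, 128)
print(count_matches(probe_desc, ref_desc, threshold=3.0))
```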

(d) Time Complexity of Fusion. As the domains of the six individual matchers differ, the min-max normalization technique is used to bring them into a uniform domain. Computing each normalized value requires two subtractions and one division, so it takes constant time, O(1). The n pairs of probe and gallery images in the ith principal component require O(n), so the six principal components require O(6n). Finally, the sum fusion requires five additions for each pair of probe and reference faces, constant time O(1); subsequently, for n pairs of probe and reference face images, it requires O(n).

(e) Time Complexity of Cohort Selection. Cohort selection performs four operations: computing the correlations in the search space, inserting the correlation values into the OPEN list, inserting the correlation values at their proper positions in the CLOSED list according to insertion sort, and dividing the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time, the third operation takes O(n²) as it follows the convention of insertion sort, and the last operation takes linear time. Overall, the time required by cohort selection is O(n²).
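The sketch below mirrors the operations costed in (e): correlation scores move from an OPEN list into a CLOSED list kept in sorted order (the insertion-sort O(n²) step), and the CLOSED list is split into three parts, of which the first and last are merged into the cohort subset; the concrete values are illustrative.

```python
# Minimal sketch of the OPEN/CLOSED-list cohort subset selection; toy scores.
import bisect

def select_cohort_subset(correlation_scores):
    open_list = list(correlation_scores)   # OPEN list of correlation values
    closed = []                            # CLOSED list, kept sorted as items arrive
    while open_list:
        score = open_list.pop(0)
        bisect.insort(closed, -score)      # negate so the list is non-increasing in score
    closed = [-s for s in closed]
    third = len(closed) // 3               # split into three parts; keep first and last
    return closed[:third] + closed[-third:]

scores = [0.91, 0.12, 0.55, 0.43, 0.88, 0.20, 0.67, 0.05, 0.76]
print(select_cohort_subset(scores))        # most and least correlated cohort scores
```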

The overall time complexity T of the ensemble network is then calculated as

T = T1 + T2 + T3 + T4 + T5
  = O(N³) + O(dw²s) + O(Mn²) + O(n) + O(n²) ≈ O(N³),   (8)

where T1 is the time complexity of the PCA computation, T2 is the time complexity of the SIFT keypoint extraction, T3 is the time complexity of matching, T4 is the time complexity of fusion, and T5 is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(N³), while both the enrollment time and the verification time over the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method, with the number of principal components fixed at values ranging from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve its total performance along several dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment, (b) the effect of combining a texture-based method and a feature-based method, (c) the effect of using match score level fusion rules, and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost required of organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629–I-632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBiS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AutoID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506–II-513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International MultiConference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.

Page 7: Research Article Multithread Face Recognition in Clouddownloads.hindawi.com/journals/js/2016/2575904.pdf · many studies. Among various feature extraction approaches, appearance-based,

Journal of Sensors 7

50 100 150

50

100

150

200

250

50 100 150

50

100

150

200

250

50 100 150

50

100

150

200

250

50 100 150

50

100

150

200

250

50 100 150

50

100

150

200

250

50 100 150

50

100

150

200

250

Figure 3 Matching pairs of face instances with variations of principal components for a pair of corresponding reference and probe faceimages

1 2 3 4 50

102030405060708090

100

Principal component

Varia

nce e

xpla

ined

()

0102030405060708090100

()

Figure 4 Amount of variance accounted by each principal compo-nent of the face image as depicted in Figure 2

Here we replace the joint-density function by maximizingthe marginal density The marginal density 119901(ms119895 | 120596119894) for119895 = 1 2 119898 and 119894 = 0 1 (119894 refers to either a genuine sampleor an imposter sample) can be estimated from the trainingvectors of the genuine and imposter scores that correspondto each119898matcher Therefore we can rewrite (4) as follows

FS120596119894=0max = max⏟⏟⏟⏟⏟⏟⏟119895=12119898119894=01

119901 (ms119895 | 120596119894=0)

FS120596119894=1max = max⏟⏟⏟⏟⏟⏟⏟119895=12119898119894=01

119901 (ms119895 | 120596119894=1) (5)

FSmax denotes the fused match scores that are obtained byfusing the 119898 matchers in terms of exploiting the maximumscores

We can easily extend the ldquomaxrdquo fusion rule to theldquosumrdquo fusion rule by assuming that the posteriori probabilitydoes not significantly deviate from the priori probability

Therefore we can write an equation for fusing the marginaldensities that are known as the ldquosumrdquo rule as follows

FS120596119894=0sum =119898

sum119895=1

119901 (ms119895 | 120596119894=0)

FS120596119894=1sum =119898

sum119895=1

119901 (ms119895 | 120596119894=1) (6)

Independently we apply ldquomaxrdquo and ldquosumrdquo fusion rulesto the genuine and imposter scores that correspond to eachof six matchers which are determined from six different faceinstances for which the principal components vary from oneto six Prior to the fusion of the matching proximities thatare produced by multiple matchers the proximities need tobe normalized and the data needs to be mapped to the rangeof [0-1] In this case we use the min-max normalizationtechnique [26] to map the proximities to the specified rangeand the T-norm cohort selection technique [30 31] is appliedto improve the performance To generatematching scores weapply the 119896-NN approach [32]

52 Cohort Selection Approach to Fusion Recent studiessuggest that cohort selection [30 31] and cohort-basedscore normalization [30] can exhibit robust performance andincrease the robustness of biometric systems To understandthe usability of cohort selection a cohort pool is consideredA cohort pool is a set of matching scores obtained fromnonmatch templates in the database whereas a probe sampleis matched with the reference samples in the database Thematching process generates matching scores among thisset of scores of the corresponding reference samples onetemplate is identified as the claimed identity This claimedidentity is the true match to the probe sample and thematching proximity is significant In addition to the claimedidentity the remainder of the matched scores are known ascohort scores We refer to a match score as a true matchingproximity which is determined from the claimed identityHowever the cohort scores and the score that is determinedfrom the claimed identity exhibit similar degradation Toimprove the performance of the proposed system we needto normalize the true matching proximity using the cohort

8 Journal of Sensors

20 40 60 80 100

120

50

100

150

200

250

30020 40 60 80 10

012

0

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

Figure 5 Face matching strategy which shows matching between pairs of face instances with outliers

1 2 3 4 5 60

10

20

30

40

50

60

70

80

90

Principal component

Varia

nce e

xpla

ined

()

010

20

30

40

50

60

70

80

90(

)

Figure 6 Amount of variance accounted by each principal compo-nent of the face image depicted in Figure 5

scores We can apply simple statistics such as the meanstandard deviation and variance to compute the normalizedscore of the true reference template using the T-norm cohortnormalization technique We assume that ldquomost similarcohort scoresrdquo and ldquomost dissimilar cohort scoresrdquo can con-tribute to computation of the normalized scores which havemore discriminatory information than the normal matchingscore As a result the number of false rejection rates maydecrease and the system can successfully identify a subjectfrom a pool of reference templates

Two types of probe samples exist genuine probe samplesand imposter probe samples When a genuine probe faceis compared with cohort models the best-matched cohortmodel and the few models among the remaining cohortmodels are expected to be very similar due to the simi-larity among the corresponding faces The matching of agenuine probe face with the true cohort model and with theremaining cohort models producedmatching scores with thelowest similarity when the true matched template and theremaining of the templates in the database are dissimilarThecomparison of an imposter face with the reference templatesin the database can generate matching scores which areindependent of the set of cohort models

Although cohort-based score normalization is consideredextra overhead to the proposed system it can improveperformance Computational complexity will increase ifthe number of comparisons exceeds the number of cohortmodels To reduce the overhead of an integrating cohortmodel we need to select a subset of cohort models whichcontains the majority of discriminating information and wecombine this cohort subset with the true match score toobtain a normalized score This cohort subset is known asan ldquoordered cohort subsetrdquo which contains the majority ofdiscriminatory information We can select a cohort subsetfor each true match template in the database to normalizeeach true match score when we have a number of probefaces to compare In this context we propose a novelcohort subset selection method that utilizes heuristic cohortselection statistics Because the cohort selection strategy issubstantially inspired by heuristic-based 119879-statistics and abaseline heuristic approach we refer to this method of hybridheuristics statistics in which two-stage filtering is performedto generate the majority of discriminating cohort scores

521 Methodology Hybrid Heuristics Cohort The proposedstatistics begin with a cohort score set 120594119894 = 1199091198941 1199091198942 119909119894119899where 119894 = genuine scores imposter scores and 119899 is thenumber of cohort scores that consist of the genuine andimposter scores presented in the set 120594 Therefore each scoreis labeled with 119909119894119895 isin genuine scores imposter scores and 119895 isin1 2 119899 sample scores From the cohort scores set we cancalculate the mean and standard deviation for the class labelsgenuine and imposter scores Let 120583genuine and 120583imposter be themean values and let 120575genuine and 120575imposter be the standarddeviations for both class labels Using 119879-statistics [33] wecan determine a set of correlation scores that correspond tocohort scores

119879 (119909119895) =10038161003816100381610038161003816120583

genuine minus 120583imposter10038161003816100381610038161003816radic(120575genuine119895 ) 119899genuine + (120575imposter

119895 ) 119899imposter (7)

In (7) 119899genuine and 119899imposter are the number of cohort scoreswhich are labeled as genuine and imposter respectively Wecalculate all correlation scores and make a list of all 119879(119909119895)

Journal of Sensors 9

scores Then we construct a search space that includes thesecorrelation scores

Because (7) exhibits a correlation between cohort scoresit can be extended to the baseline heuristics approach in thesecond stage of the hybrid heuristic-based cohort selectionmethod The objective of the proposed cohort selectionmethod is to select cohort scores that correspond to thetwo subsets of highest and lowest correlation scores obtainedby (7) These two subsets of cohort scores constitute thecohort subset We separately collect the correlation scores ina FRINGE data structure or OPEN list with this initial scorewe expand the fringe by adding more correlation scores tothe fringe We also maintain another list which we refer toas the CLOSED list After calculating the first 119879(119909119895) score inthe fringe we omit this score from the fringe and expandthis score The next two correlation scores from the searchspace are removed but retained in the fringe The correlationscore which was removed from the fringe is added to theCLOSED list Because the fringe contains two scores wearrange them in decreasing order in the fringe by sortingthem and removing the maximum score from the fringeThis maximum score is now added to the CLOSED listand maintained in nonincreasing order with other scores inthe CLOSED list We repeat this recursive process in eachiteration until the search space is empty After expandingthe search space by moving all correlation scores from thefringe to the CLOSED list we construct a sorted list Thesesorted scores in the CLOSED list are divided into three partsthe first part and last part are merged to create a single listof correlation scores that exhibit the most discriminatingfeatures We establish a cohort subset by determining themost promising cohort scores which correspond to thecorrelation scores on the CLOSED list

To normalize the cohort scores in the cohort subsetwe apply the T-norm cohort normalization technique T-norm describes the property that indicates that the scoredistribution of each subject class follows a Gaussian distri-bution These normalized scores are employed for makingdecisions and assigning the probe face to one of the twoclass labels Prior to making any decisions we consolidatethe normalized scores for six different facemodels dependingon the principal components to be considered which rangebetween one and six

6 Experimental Evaluation

The rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max and sum fusion rules. We implement the proposed method from two perspectives: the Type II perspective, in which face recognition uses the face images provided in the databases without cropping, and the Type I perspective, in which the face recognition technique uses a manually localized face area, obtained by cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces in these two databases are presented against a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

Figure 7. Face images from the BioID database.

6.1. Databases

6.1.1. BioID Face Database. The face images in the BioID [35] database were recorded in a degraded environment and are primarily employed for face detection; however, the database can also be used for face recognition. Because the faces are captured with a variety of background information and illumination, evaluation on this database is challenging. Here we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal-view face images obtained from 23 persons; all faces are gray-level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces were captured against a white homogeneous background in an upright frontal position, and all images exhibit scale changes of approximately 10%. Of the 2800 face images, the number of male contributors equals the number of female contributors; that is, 100 male and 100 female participants contributed equal numbers of face images, totaling 1400 each. All images are in color, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired under uniform illumination against homogeneous backgrounds, with neutral and smiling facial expressions. The database contains faces of subjects of varying ages, from 18 to 60 years.


Figure 8. Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we developed a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually random processes; therefore, to address the biases of each random process, we perform the evaluation with a random distribution of training and probe samples. The distribution, however, depends entirely on the database used for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we distribute the face images equally between the training and test sets: the faces contributed by each person are divided into a training set and a test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person contributes m faces and that the database size is p (p denotes the total number of face images). Let q denote the total number of subjects/individuals who contributed m face images each. To extend this protocol, we divide the m faces into two equal groups of m/2 images, retaining one group for the training/reference set and the other for the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the m/2 faces in the probe set that correspond to the same subject, and each single face is compared with the face images of the remaining subjects. Thus we obtain a genuine match score set of dimension q × (m/2) and an imposter score set of dimension q × (q − 1) × m. The k-NN (k-nearest neighbor) approach is employed to generate common matching points between a pair of face images, and the min-max normalization technique is employed to normalize the match scores and map them to the range [0, 1]. In this manner, two sets of match scores of unequal dimensions are obtained for a matcher, whereby face images are compared within intraclass (ω_i) and interclass (ω_j) sets, which we refer to as the genuine and imposter score sets.
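As a quick sanity check on these dimensions, the sketch below computes the sizes of the two score sets from the expressions above; the FEI numbers in the comment (q = 200, m = 14) are assumptions based on the database description.

```python
def score_set_dimensions(q, m):
    # Sizes of the genuine and imposter match score sets under the pair
    # matching protocol: q subjects, each contributing m faces, with m/2
    # retained for training/reference and m/2 for probing.
    genuine = q * (m // 2)           # q x (m/2) genuine scores
    imposter = q * (q - 1) * m       # q x (q-1) x m imposter scores
    return genuine, imposter

# Example with the FEI split: 200 subjects, 14 faces each.
# score_set_dimensions(200, 14) -> (1400, 557200)
```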

6.3. Experimental Results and Analysis

6.3.1. On FEI Database

Figure 9. Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

The proposed cloud-enabled face recognition system has been evaluated on the FEI face database, which contains faces with neutral and smiling expressions. The neutral faces are used as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effects of (a) a cloud-enabled environment; (b) face matching without extracting the face area; (c) face matching with the face area extracted; (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying numbers of principal components (one to six) under the conditions in (a), (b), and (c); and (e) using hybrid heuristic statistics for cohort subset selection. We depict the experimental results as ROC curves and boxplots: the ROC curves reveal the performance of the proposed system as GAR versus FAR for varying numbers of principal components, and the boxplots show how the EER varies with the number of principal components.

Figure 9 shows a boxplot for the case in which the face area has not been extracted, and Figure 12 shows another boxplot for the case in which the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the number of principal components is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the number of principal components is one, and the EER varies between 0% and 2.5% otherwise. An EER of 0.5% is obtained for principal components 2, 3, 4, and 5 after the face area is extracted for recognition, whereas a maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the number of principal components is set to one. For the remaining cases, in which the number of principal components varies between two and six, low EERs are obtained; the maximum among them occurs when the number of principal components is set to four. The ROC curves in Figure 10, shown for face images that are not localized, exhibit high recognition accuracies for all cases except when the number of principal components is set to one.


Figure 10. Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1. EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy    EER (%)    Recognition accuracy (%)
Principal component 1     7.07       92.93
Principal component 2     0.5        99.5
Principal component 3     0.97       99.03
Principal component 4     1.01       98.99
Principal component 5     1          99
Principal component 6     1          99

Thus, we decouple the information about the legitimate face area from explicit information about nonface areas such as the forehead, hair, ear, and chin. These areas may provide crucial information, which is treated as additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying numbers of principal components. Figure 11 shows the recognition accuracies for different numbers of principal components on the FEI database when face localization is not performed; Figure 14 shows the corresponding accuracies when face localization is performed.

Figure 13 shows the ROC curves obtained from extensive experiments with the proposed algorithm on the FEI face database when the face image is localized and only the face part is extracted.

Figure 11. Recognition accuracy versus principal component curve when the face localization task is not performed.

Figure 12. Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

In this case, a maximum EER of 6% is attained when the number of principal components is set to one; in the other cases, the EERs are as low as those obtained with nonlocalized faces. Table 2 presents the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show substantial improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas corresponding to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is highly accurate when the number of principal components varies between two and six; in the case of principal component 3, an EER of 0% and a recognition accuracy of 100% are attained.

Based on two major considerations, with and without face localization, we investigate the proposed algorithm by fusing face instances under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the "sum" fusion rule.


Figure 13. Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2. EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy    EER (%)    Recognition accuracy (%)
Principal component 1     6          94
Principal component 2     0.5        99.5
Principal component 3     0          100
Principal component 4     0.5        99.5
Principal component 5     0.5        99.5
Principal component 6     1          99.0

The effects of localizing and of not localizing the face area are investigated, and the two fusion rules are integrated into both settings. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without face localization, where the matching scores are generated from a single fused classifier in which all six classifiers are fused at the match score level by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained with the "max" fusion rule; in this case, the "sum" fusion rule outperforms the "max" fusion rule.

Figure 14. Recognition accuracy versus principal component curve when the face localization task is performed.

Table 3. EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 0.0        100
Max fusion rule + min-max normalization                 1.5        98.5
Sum fusion rule + T-norm (hybrid heuristic)             0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)             0.5        99.5

When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules for the fusion-based classifier, the "sum" and "max" fusion rules both achieve 99.5% recognition accuracy. In this general setting, the proposed cohort selection method degrades recognition accuracy by 0.5% compared with the "sum" fusion rule-based classifier without cohort selection; however, the hybrid heuristic statistics render the face matching algorithm stable and consistent across both fusion rules (sum and max), with 99.5% recognition accuracy. Table 3 lists the recognition accuracies, and Figure 16 shows the same for the "sum" and "max" fusion rules.
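A minimal sketch of this match score level fusion follows, assuming each of the six classifiers' score lists covers the same probe-reference pairs; min-max normalization precedes the "sum" or "max" rule, and the names are illustrative.

```python
import numpy as np

def min_max(scores):
    # Map one classifier's match scores to [0, 1] (min-max normalization);
    # assumes the scores are not all identical.
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(scores_per_classifier, rule="sum"):
    # Fuse the six per-component classifiers at the match score level
    # using the "sum" or "max" rule described above (a sketch).
    normed = np.vstack([min_max(s) for s in scores_per_classifier])
    return normed.sum(axis=0) if rule == "sum" else normed.max(axis=0)
```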

After evaluating the performance of each classifier, in which the face instance is determined by setting the number of principal components to one value between 1 and 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. As in the previous approach, we apply two fusion rules, "sum" and "max", to integrate the matching scores. In addition, we exploit the proposed statistical cohort selection technique, hybrid heuristic statistics, and for cohort score normalization we apply the T-norm normalization technique.


Figure 15. Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between 1 and 6.

Figure 16. Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the accuracies with it. These accuracies are obtained without face localization.

This technique maps the cohort scores into a normalized score set that exhibits the characteristics of each score in the cohort subset and enables the system to obtain the correct match rapidly. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy in both cases, with the "sum" and the "max" fusion rules.

Table 4. EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 0.0        100
Max fusion rule + min-max normalization                 0.0        100
Sum fusion rule + T-norm (hybrid heuristic)             0.0        100
Max fusion rule + T-norm (hybrid heuristic)             0.0        100

Figure 17. Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

Face localization plays a central role in raising the recognition accuracy to one hundred percent in all four cases. Due to face localization, however, the number of feature points extracted from the face images differs from that of the nonlocalized faces obtained when the cohort selection method is applied. These accuracies are obtained after the face is localized.

6.3.2. On BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database under the two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized.


Figure 18. Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for cohort subset selection. The first two cases show recognition accuracies without the cohort selection method, and the last two cases show those obtained with it. These accuracies are obtained after a face is localized.

Figure 19. Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

The faces provided in the BioID face database are captured in a variety of environments and illumination conditions, and they show various facial expressions. Evaluating any face matching technique is therefore challenging, because the positions and locations of the frontal-view images must be tracked across varied environments and illumination changes. Thus, a robust technique is needed that can capture and process all types of distinctive features and still yield encouraging results under these conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly as the number of principal components ranges from one to six.

Figure 20. Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5. EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy    EER (%)    Recognition accuracy (%)
Principal component 1     8.53       91.47
Principal component 2     2.78       97.22
Principal component 3     8.5        91.5
Principal component 4     13.89      86.11
Principal component 5     13.89      86.11
Principal component 6     11.12      88.88

For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, the highest accuracy achieved when the Type II constraint (no face localization) is validated, with an EER of 2.78%, the lowest among all six EERs. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11%. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 plots the recognition accuracies corresponding to principal components one through six. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.


Figure 21. Recognition accuracy versus principal component when the face localization task is not performed.

Figure 22. Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

After considering the Type II constraint, we consider the Type I constraint, in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under the Type I constraint.

As shown in Figures 22 and 23, as the number of principal components varies between one and six, the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100%. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for varying numbers of principal components at different false acceptance rates (FARs); the principal components (2, 3, and 6) that outperform the others yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for varying numbers of principal components, with the points on the curve marked in red.

Figure 23. Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6. EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy    EER (%)    Recognition accuracy (%)
Principal component 1     8.5        91.5
Principal component 2     0          100
Principal component 3     0          100
Principal component 4     5.59       94.41
Principal component 5     5.98       94.02
Principal component 6     0          100

Table 6 shows the recognition accuracies obtained when the localized face is used.

To validate the Type II constraint for the fusion rules and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique on faces that are not localized. In this experiment, we exploit the same fusion rules, "sum" and "max", as introduced for the FEI database, together with the hybrid heuristics-based cohort selection. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces.


Figure 24. Recognition accuracy versus principal component when the face localization task is performed.

Table 7. EER and recognition accuracies of the proposed face matching strategy on the BioID face database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 5.55       94.45
Max fusion rule + min-max normalization                 0          100
Sum fusion rule + T-norm (hybrid heuristic)             0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)             0          100

As shown in Table 7 and Figure 25, for the first two matching strategies, in which the "sum" and "max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, with EERs of 5.55% and 0%, respectively. In this case, the "max" fusion rule outperforms the "sum" fusion rule, as further illustrated by the ROC curves in Figure 26. For the next two matching strategies, in which hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules, we achieve 99.5% and 100% recognition accuracies.

In the last segment of the experiment, we measure performance in terms of recognition accuracy and EER when faces are localized; the matching paradigms listed in Table 7 are thus also verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques also perform strongly, achieving 99.35% and 100% recognition accuracies. The drop in accuracy of 0.65% for the combination of the "sum" fusion rule and hybrid heuristics is negligible, and the remaining combination of the "max" fusion rule and hybrid heuristics-based cohort selection achieves an accuracy of 100%.

Figure 25. Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for cohort subset selection. The first two cases show recognition accuracies without the cohort selection method, and the last two cases show those obtained with it. These accuracies are obtained when a face is not localized.

Figure 26. Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

Therefore, we conclude that the "max" fusion rule outperforms the "sum" rule, which is attributed to a change in the produced cohort subset under both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, with the accuracy points marked in blue.

It would be interesting to see how well the current ensemble framework would perform for face recognition in the wild, where faces appear under unrestricted conditions. Face recognition in the wild is challenging because faces are acquired in unconstrained environments: not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth.


Table 8. EERs and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 0          100
Max fusion rule + min-max normalization                 0          100
Sum fusion rule + T-norm (hybrid heuristic)             0.65       99.35
Max fusion rule + T-norm (hybrid heuristic)             0          100

Figure 27. Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

As our framework is based on only the first six principal components and SIFT features, it would require the incorporation of various additional tools: tools to detect and crop the face region, discard background as far as possible, and upsample or downsample the detected face regions to a uniform-dimensional face image vector so that PCA can be applied; and tools to estimate pose and then apply 2D frontalization to compensate for pose variance, which would reduce the number of principal components to consider.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models.

Figure 28. Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for cohort subset selection. The first two cases show recognition accuracies without the cohort selection method, and the last two cases show those obtained with it. These accuracies are obtained after a face is localized.

These models include some cloud computing-based face recognition algorithms, which are limited in number, and some traditional face recognition systems that are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives: the first considers systems that integrate cloud computing facilities with face recognition, whereas the second employs similar face databases for comparison with other methods but does not involve cloud computing. For the first perspective, we compare the proposed system with two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second performs face recognition for social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database containing approximately 50 face images. Table 9 reports the recognition accuracies of the proposed system and of the two systems in [36, 37], together with the numbers of samples used during matching. Because the proposed system utilizes two well-known face databases, BioID and FEI, and two different face matching paradigms, Type I and Type II, the best recognition accuracies are selected for comparison; the Type I paradigm refers to the strategy in which face images are localized, and the Type II paradigm to the strategy in which they are not. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, whether or not they are cloud-based.


Table 9. Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                       Face database   Number of face images   Number of distinct subjects   Recognition accuracy (%)   Error (%)
Cloud vision [36]            ORL             400                     40                            97.08                      2.92
Mobile cloud [37]            Local DB        50                      5                             85                         15
Facial features [38]         BioID           1521                    23                            92.35                      7.65
2D-PCA + PSO-SVM [39]        BioID           1521                    23                            95.65                      4.35
Proposed, Type I             BioID           1521                    23                            100                        0
Proposed, Type I             FEI             2800                    20                            100                        0
Proposed, Type II            BioID           1521                    23                            97.22                      2.78
Proposed, Type II            FEI             2800                    20                            99.5                       0.5
Proposed, Type I + fusion    BioID           1521                    23                            100                        0
Proposed, Type I + fusion    FEI             2800                    20                            100                        0
Proposed, Type II + fusion   BioID           1521                    23                            100                        0
Proposed, Type II + fusion   FEI             2800                    20                            100                        0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules as a function of the input length. It is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each module is computed first, and the overall complexity of the ensemble network is then obtained by summing them.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let d be the number of pixels (height × width) of each grayscale face image, and let N be the number of face images. PCA computation has the following steps.

(i) Finding the mean of N samples is O(Nd) (N additions of d-dimensional vectors, after which the sum is divided by N).

(ii) Because the covariance matrix is symmetric, deriving only the upper-triangular elements is sufficient. For each of the d(d + 1)/2 elements, N multiplications and additions are required, leading to O(Nd²) time complexity. Let the d × N dimensional mean-centered data matrix be M.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of MMᵀ (d × d) we compute MᵀM (N × N), which requires d multiplications and d additions for each of the N² elements, hence O(N²d) time complexity (generally N ≪ d).

(iv) Eigendecomposition of the N × N matrix MᵀM by the SVD method requires O(N³).

(v) Sorting the eigenvectors (each d × 1 dimensional) in descending order of the eigenvalues requires O(N²); taking only the first six principal component eigenvectors then requires constant time.

Projecting the probe image vector onto an eigenface requires a dot product between two d-dimensional vectors, resulting in a scalar value, that is, d multiplications and d − 1 additions per projection, hence O(d); the six projections together require O(6d). A sketch of the computation in steps (ii)-(v) follows.
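The KL trick counted above can be sketched as follows; the six-component truncation mirrors the face models used in this paper, while the function and variable names are illustrative assumptions.

```python
import numpy as np

def eigenfaces_kl(X, n_components=6):
    # PCA via the Karhunen-Loeve trick: for N images of d pixels with
    # N << d, eigendecompose the N x N matrix A^T A instead of the d x d
    # covariance A A^T, then map the eigenvectors back through A.
    # X is a d x N matrix with one face image per column (a sketch).
    mean = X.mean(axis=1, keepdims=True)      # O(Nd)
    A = X - mean                              # mean-centered data, d x N
    small = A.T @ A                           # N x N matrix, O(N^2 d)
    vals, vecs = np.linalg.eigh(small)        # eigendecomposition, O(N^3)
    order = np.argsort(vals)[::-1]            # sort by decreasing eigenvalue
    U = A @ vecs[:, order[:n_components]]     # back-project to d x 6 eigenfaces
    return U / np.linalg.norm(U, axis=0), mean
```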

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be M × N pixels, and let the face image be represented as a column vector of dimension d (= M × N). A Gaussian kernel of dimension w × w is used. Each octave has s + 3 scales, and a total of L octaves are used. The important phases of SIFT are as follows.

(i) Extrema detection:

(a) Computing the scale space:
(1) In each scale, the convolution performs w² multiplications and w² − 1 additions per pixel, so O(dw²).
(2) Each octave has s + 3 scales, so O(dw²(s + 3)).
(3) There are L octaves, so O(Ldw²(s + 3)).
(4) Overall, O(dw²s).

(b) Computing the s + 2 Difference of Gaussians (DoG) images for each octave, with scale factor k = 2^(1/s), takes O((s + 2)d) per octave, so O(sd) over the L octaves.

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).


(iv) Keypoint descriptor computation: if a p × p neighborhood of each keypoint is considered, then O(p²d).

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, that is, linear time O(n) with n = 128. Let the ith eigenface of a probe face image and of a reference face image have k1 and k2 surviving keypoints, respectively. Each keypoint of the k1 set is compared with each of the k2 keypoints by Euclidean distance, so a single eigenface pair costs O(k1k2), and the six pairs of eigenfaces cost O(6n²) (with n the number of keypoints). If there are M reference faces in the gallery, the total complexity is O(6Mn²).
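The k1 × k2 descriptor comparison counted above can be sketched as follows; the names are illustrative, and the subsequent k-NN selection of common matching points is omitted.

```python
import numpy as np

def descriptor_distances(probe_desc, ref_desc):
    # Brute-force Euclidean comparison of all k1 x k2 pairs of 128-element
    # SIFT descriptors, matching the O(k1 * k2) operation count above.
    # probe_desc: k1 x 128 array; ref_desc: k2 x 128 array.
    diff = probe_desc[:, None, :] - ref_desc[None, :, :]   # k1 x k2 x 128
    return np.sqrt((diff ** 2).sum(axis=2))                # k1 x k2 distances
```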

(d) Time Complexity of Fusion. Because the domains of the six individual matchers differ, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtractions and one division, that is, constant time O(1). The n pairs of probe and gallery images in the ith principal component require O(n), so the six principal components require O(6n). Finally, the sum fusion performs five additions for each pair of probe and reference faces, constant time O(1); for n pairs it requires O(n).

(e) Time Complexity of Cohort Selection. Cohort selection performs four operations: computing the correlations in the search space, inserting correlation values into the OPEN list, inserting correlation values at their proper positions in the CLOSED list according to insertion sort, and finally dividing the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time per value, the third takes O(n²) following insertion sort, and the last takes linear time. Overall, cohort selection requires O(n²).

Now the overall time complexity T of the ensemble network is calculated as follows:

$$T = \max\left(T_1, T_2, T_3, T_4, T_5\right) = \max\left(O(N^3),\ O(dw^2 s),\ O(Mn^2),\ O(n),\ O(n^2)\right) \approx O(N^3), \tag{8}$$

where T1 is the time complexity of PCA computation, T2 is the time complexity of SIFT keypoint extraction, T3 is the time complexity of matching, T4 is the time complexity of fusion, and T5 is the time complexity of cohort selection.

Therefore, assuming the number of enrolled face images N dominates the remaining factors, the overall time complexity of the ensemble network is O(N³), while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure is successfully integrated with a face recognition system, has been proposed. The face recognition system uses a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed sets of principal components, ranging from one to six. The SIFT operator is then applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve its total performance along several dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method with a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition can be performed from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that wish to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms presented, and the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.
[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.
[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629–I-632, Honolulu, Hawaii, USA, April 2007.
[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.
[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.
[6] L. Heilig and S. Voss, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.
[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.
[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.
[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.
[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.
[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.
[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.
[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.
[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.
[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.
[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.
[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.
[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.
[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506–II-513, July 2004.
[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.
[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.
[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.
[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.
[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.
[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.
[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.
[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.
[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.
[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.
[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.
[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.
[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.
[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.
[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.
[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.
[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.
[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 2012 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.


Page 8: Research Article Multithread Face Recognition in Clouddownloads.hindawi.com/journals/js/2016/2575904.pdf · many studies. Among various feature extraction approaches, appearance-based,

8 Journal of Sensors

20 40 60 80 100

120

50

100

150

200

250

30020 40 60 80 10

012

0

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

20 40 60 80 100

120

50

100

150

200

250

300

Figure 5 Face matching strategy which shows matching between pairs of face instances with outliers

1 2 3 4 5 60

10

20

30

40

50

60

70

80

90

Principal component

Varia

nce e

xpla

ined

()

010

20

30

40

50

60

70

80

90(

)

Figure 6 Amount of variance accounted by each principal compo-nent of the face image depicted in Figure 5

scores We can apply simple statistics such as the meanstandard deviation and variance to compute the normalizedscore of the true reference template using the T-norm cohortnormalization technique We assume that ldquomost similarcohort scoresrdquo and ldquomost dissimilar cohort scoresrdquo can con-tribute to computation of the normalized scores which havemore discriminatory information than the normal matchingscore As a result the number of false rejection rates maydecrease and the system can successfully identify a subjectfrom a pool of reference templates

Two types of probe samples exist genuine probe samplesand imposter probe samples When a genuine probe faceis compared with cohort models the best-matched cohortmodel and the few models among the remaining cohortmodels are expected to be very similar due to the simi-larity among the corresponding faces The matching of agenuine probe face with the true cohort model and with theremaining cohort models producedmatching scores with thelowest similarity when the true matched template and theremaining of the templates in the database are dissimilarThecomparison of an imposter face with the reference templatesin the database can generate matching scores which areindependent of the set of cohort models

Although cohort-based score normalization is consideredextra overhead to the proposed system it can improveperformance Computational complexity will increase ifthe number of comparisons exceeds the number of cohortmodels To reduce the overhead of an integrating cohortmodel we need to select a subset of cohort models whichcontains the majority of discriminating information and wecombine this cohort subset with the true match score toobtain a normalized score This cohort subset is known asan ldquoordered cohort subsetrdquo which contains the majority ofdiscriminatory information We can select a cohort subsetfor each true match template in the database to normalizeeach true match score when we have a number of probefaces to compare In this context we propose a novelcohort subset selection method that utilizes heuristic cohortselection statistics Because the cohort selection strategy issubstantially inspired by heuristic-based 119879-statistics and abaseline heuristic approach we refer to this method of hybridheuristics statistics in which two-stage filtering is performedto generate the majority of discriminating cohort scores

521 Methodology Hybrid Heuristics Cohort The proposedstatistics begin with a cohort score set 120594119894 = 1199091198941 1199091198942 119909119894119899where 119894 = genuine scores imposter scores and 119899 is thenumber of cohort scores that consist of the genuine andimposter scores presented in the set 120594 Therefore each scoreis labeled with 119909119894119895 isin genuine scores imposter scores and 119895 isin1 2 119899 sample scores From the cohort scores set we cancalculate the mean and standard deviation for the class labelsgenuine and imposter scores Let 120583genuine and 120583imposter be themean values and let 120575genuine and 120575imposter be the standarddeviations for both class labels Using 119879-statistics [33] wecan determine a set of correlation scores that correspond tocohort scores

$$T(x_j) = \frac{\left|\mu_{\text{genuine}} - \mu_{\text{imposter}}\right|}{\sqrt{(\delta^{j}_{\text{genuine}})^{2}/n_{\text{genuine}} + (\delta^{j}_{\text{imposter}})^{2}/n_{\text{imposter}}}} \quad (7)$$

In (7), $n_{\text{genuine}}$ and $n_{\text{imposter}}$ are the numbers of cohort scores which are labeled as genuine and imposter, respectively. We calculate all correlation scores, make a list of all $T(x_j)$ scores, and then construct a search space that includes these correlation scores.
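For concreteness, a minimal NumPy sketch of (7) follows; it assumes the cohort scores for one candidate model are already split into two arrays by class label, and the function name and the use of the sample variance are our own choices, not from the paper.

import numpy as np

def t_statistic(genuine, imposter):
    # T(x_j) of (7): absolute difference of the class means over the pooled
    # standard error of the genuine and imposter cohort score samples.
    mu_g, mu_i = genuine.mean(), imposter.mean()
    var_g, var_i = genuine.var(ddof=1), imposter.var(ddof=1)
    return abs(mu_g - mu_i) / np.sqrt(var_g / genuine.size + var_i / imposter.size)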

Because (7) exhibits a correlation between cohort scores, it can be extended to the baseline heuristics approach in the second stage of the hybrid heuristic-based cohort selection method. The objective of the proposed cohort selection method is to select the cohort scores that correspond to the two subsets of highest and lowest correlation scores obtained by (7). These two subsets of cohort scores constitute the cohort subset. We separately collect the correlation scores in a FRINGE data structure, or OPEN list; with this initial score, we expand the fringe by adding more correlation scores to the fringe. We also maintain another list, which we refer to as the CLOSED list. After calculating the first $T(x_j)$ score in the fringe, we omit this score from the fringe and expand it. The next two correlation scores from the search space are removed but retained in the fringe. The correlation score which was removed from the fringe is added to the CLOSED list. Because the fringe now contains two scores, we arrange them in decreasing order by sorting them and removing the maximum score from the fringe. This maximum score is then added to the CLOSED list and maintained in nonincreasing order with the other scores in the CLOSED list. We repeat this recursive process in each iteration until the search space is empty. After expanding the search space by moving all correlation scores from the fringe to the CLOSED list, we obtain a sorted list. The sorted scores in the CLOSED list are divided into three parts; the first part and the last part are merged to create a single list of correlation scores that exhibit the most discriminating features. We establish the cohort subset by determining the most promising cohort scores, which correspond to these correlation scores on the CLOSED list.
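The net effect of the fringe/CLOSED-list expansion is a CLOSED list sorted in nonincreasing order of $T(x_j)$, from which the outer portions are kept. A compact sketch of that outcome, reusing t_statistic from above (the list layout and an equal three-way split are our assumptions):

def select_cohort_subset(cohort_models):
    # cohort_models: list of (model_id, genuine_scores, imposter_scores),
    # one entry per cohort model in the search space.
    scored = [(t_statistic(g, i), mid) for mid, g, i in cohort_models]
    # The recursive fringe-to-CLOSED expansion described above amounts to
    # producing a CLOSED list sorted in nonincreasing order of T(x_j).
    closed = sorted(scored, key=lambda pair: pair[0], reverse=True)
    # Divide the CLOSED list into three parts and merge the first and last,
    # which hold the most and least correlated (most discriminating) scores.
    k = len(closed) // 3
    return [mid for _, mid in closed[:k] + closed[-k:]]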

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique. T-norm relies on the property that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decisions, we consolidate the normalized scores for the six different face models, depending on the number of principal components considered, which ranges between one and six.
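For reference, the T-norm computation itself is the standard one: under the Gaussian assumption above, a raw match score $s$ for a claimed identity is adjusted with the mean and standard deviation of its selected cohort scores (our rendering of the standard formula; the paper does not restate it):

$$s_{\text{norm}} = \frac{s - \mu_{\text{cohort}}}{\sigma_{\text{cohort}}}.$$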

6. Experimental Evaluation

The rigorous evaluation of the proposed cloud-enabled face matching technique is conducted with two well-known face databases, namely, FEI [34] and BioID [35]. The face images are presented in the databases with changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we have set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max fusion rule and the sum fusion rule. However, we have implemented the proposed method by considering two perspectives: the Type II perspective indicates that face recognition employs the face images provided in the databases without being cropped, and the Type I perspective indicates that the face recognition technique uses a manually localized face area after cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces of these two face databases are presented with a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching techniques.

Figure 7: Face images from the BioID database.
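As an illustration of the Type I preprocessing step, the sketch below localizes a face and fixes the face area to 140 × 140 pixels. The paper localizes faces manually, so the OpenCV Haar-cascade detector used here is only a hedged stand-in, and the function name is ours.

import cv2

def crop_face_140(image_path):
    # Read in grayscale, detect one face, and crop/resize it to the
    # 140 x 140 face area used by the Type I perspective.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # fall back to manual localization
    x, y, w, h = faces[0]
    return cv2.resize(img[y:y + h, x:x + w], (140, 140))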

6.1. Databases

6.1.1. BioID Face Database. The face images presented in the BioID [35] database are recorded in a degraded environment and are primarily employed for face detection. However, we can also utilize this database for face recognition. Because the faces are captured with a variety of background information and illumination, the evaluation of this database is challenging. Here, we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, who each contributed 14 face images. The faces are captured against a white homogeneous background in an upright frontal position, and all images have scale changes of approximately 10%. Of the 2800 face images, the number of male contributors is equivalent to the number of female contributors; that is, 100 male participants and 100 female participants have each contributed an equal number of face images, totaling 1400 face images per group. All images are in color, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired against uniform illumination and homogeneous backgrounds with neutral and smiling facial expressions. The database contains faces of varying ages, from 18 years to 60 years.


Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we have developed a uniform framework for examining the proposed face matching technique with the established viability constraints. We assume that all classifiers are mutually random processes. Therefore, to address the biases of each random process, we perform an evaluation with a random distribution of the numbers of training samples and probe samples. However, the distribution is completely dependent on the database that is employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we equally distribute the face images among the training and test sets. The faces contributed by each person are divided into two sets, namely, the training set and the test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person has contributed $m$ faces and the database size is $p$ ($p$ denotes the total number of face images). We consider that $q$ denotes the total number of subjects/individuals who each contributed $m$ face images. To extend this protocol, we divide the $m$ faces into two equal groups of $m/2$ face images and retain one group for the training/reference set and the other for the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the $m/2$ faces in the probe set which correspond to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension $q \times (m/2)$ and imposter scores of dimension $q \times (q - 1) \times m$. The $k$-NN ($k$-nearest neighbor) metric is employed to generate common matching points between a pair of face images, and we employ the min-max normalization technique to normalize the match scores and map them to the range $[0, 1]$. In this manner, two sets of match scores of unequal dimensions, which correspond to a matcher, are obtained, whereas the face images are compared within intraclass ($\omega_i$) sets and interclass ($\omega_j$) sets, which we refer to as the genuine and imposter score sets.
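As a sanity check of these dimensions, applying the stated formulas to the FEI database (our arithmetic, not reported by the authors): with $q = 200$ subjects and $m = 14$ faces each, the genuine scores number $q \times (m/2) = 200 \times 7 = 1400$, and the imposter scores number $q \times (q - 1) \times m = 200 \times 199 \times 14 = 557200$. Min-max normalization then maps each raw score $s$ to $s' = (s - \min)/(\max - \min) \in [0, 1]$.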

6.3. Experimental Results and Analysis

6.3.1. On FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face database, which contains neutral and smiling expressions of faces. The neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effect of (a) a cloud-enabled environment, (b) face matching without extracting a face area, (c) face matching with extracting a face area, (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying principal components (one to six) under the conditions mentioned in (a), (b), and (c), and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system by GAR versus FAR curves for varying principal components. The boxplots show how the EER varies for different values of the principal components.

Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

Figure 9 shows a boxplot when the face area has not been extracted, and Figure 12 shows another boxplot when the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the principal component is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the principal component is 1, and the EER varies between 0% and 2.5%. However, an EER of 0.5% is determined for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. A maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only in the case when the principal component is set to one. For the remaining cases, when the principal component varies between two and six, low EERs are obtained; among these, the maximum EER is obtained when the principal component is set to four. However, the ROC curves in Figure 10 exhibit high recognition accuracies for all cases with the exception of the case when the principal component is set to 1. These ROC curves are shown for face images that are not localized.


Figure 10: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy       EER (%)    Recognition accuracy (%)
Principal component 1        7.07       92.93
Principal component 2        0.5        99.5
Principal component 3        0.97       99.03
Principal component 4        1.01       98.99
Principal component 5        1          99
Principal component 6        1          99

Thus, we decouple the information about the legitimate face area from explicit information about nonface areas such as the forehead, hair, ear, and chin areas. These areas may provide crucial information, which is considered additional information in a decoupled feature vector. These feature points contribute to recognizing faces with varying principal components. Figure 11 shows the recognition accuracies for different values of the principal component determined on the FEI database when face localization is not performed. On the other hand, Figure 14 shows the recognition accuracies for different values of the principal component determined on the FEI database when face localization is performed.

Figure 11: Recognition accuracy versus principal component curve when the face localization task is not performed.

Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

Figure 13 shows the ROC curves determined from extensive experiments of the proposed algorithm on the FEI face database when a face image is localized and only the face part is extracted. In this case, a maximum EER of 6% is attained when the principal component is set to 1; in the other cases, the EERs are as low as the EERs in the cases with nonlocalized faces. Table 2 depicts the efficacy of the proposed face matching strategy as recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show tremendous improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also for the areas that correspond to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is accurate when the principal components vary between two and six, notably in the case of principal component 3, for which an EER and recognition accuracy of 0% and 100%, respectively, are attained.

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy       EER (%)    Recognition accuracy (%)
Principal component 1        6          94
Principal component 2        0.5        99.5
Principal component 3        0          100
Principal component 4        0.5        99.5
Principal component 5        0.5        99.5
Principal component 6        1          99.0

Figure 13: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Based on two major considerations, with and without face localization, we investigate the proposed algorithm by fusing face instances under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules, the "max" fusion rule and the "sum" fusion rule. The effects of localizing and of not localizing the face area are investigated, and the conventions with these two fusion rules are integrated. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused in terms of matching scores by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained when the "max" fusion rule is applied. In this case, the "sum" fusion rule outperforms the "max" fusion rule. When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules for the fusion-based classifier, the "sum" and "max" fusion rules both achieve 99.5% recognition accuracy. In the general context, the proposed cohort selection method degrades recognition accuracy by 0.5% when compared with the "sum" fusion rule-based classifier without the cohort selection method. However, hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum, max), with 99.5% recognition accuracy. Table 3 lists the recognition accuracies, and Figure 16 exhibits the same for the "sum" and "max" fusion rules.

Figure 14: Recognition accuracy versus principal component curve when the face localization task is performed.

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 0.0        100
Max fusion rule + min-max normalization                 1.5        98.5
Sum fusion rule + T-norm (hybrid heuristic)             0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)             0.5        99.5

After evaluating the performance of each of the classifiers, in which the face instance is determined by setting the value of the principal component to one value among 1 to 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. Similar to the previous approach, we apply two fusion rules to integrate the matching scores, namely, the "sum" and "max" fusion rules. In addition, we exploit the proposed statistical cohort selection technique, known as hybrid heuristic statistics. For cohort score normalization, we apply the T-norm normalization technique. This technique maps the cohort scores into a normalized score set that exhibits the characteristics of each of the scores in the cohort subset and enables the correct match to be rapidly obtained by the system. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy in both cases, when the "sum" and the "max" fusion rules are applied. The effect of face localization plays a central role in raising the recognition accuracy to 100% in all four cases.

Figure 15: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between 1 and 6.

Figure 16: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies when the cohort selection method is applied. These accuracies are obtained without face localization.

Due to face localization, however, the numbers of feature points extracted from the face images are found to be dissimilar to those of the nonlocalized faces obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 0.0        100
Max fusion rule + min-max normalization                 0.0        100
Sum fusion rule + T-norm (hybrid heuristic)             0.0        100
Max fusion rule + T-norm (hybrid heuristic)             0.0        100

Figure 17: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

Figure 18: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

6.3.2. On BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and the recognition performance is measured in terms of the number of probe faces that are successfully recognized. However, the faces provided in the BioID face database are captured in a variety of environments and illumination conditions. In addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching strategy is challenging, because the positions and locations of frontal view images may be tracked in a variety of environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and yields encouraging results in these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of one to six. For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint is validated for not localizing the face image. An EER of 2.78%, the lowest among all six EERs, is obtained. For principal components 4 and 5, we achieve an EER and recognition accuracy of 13.89% and 86.11%, respectively. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 shows the same phenomena, depicting a curve whose points denote the recognition accuracies corresponding to principal components between one and six. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.

Figure 20: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy       EER (%)    Recognition accuracy (%)
Principal component 1        8.53       91.47
Principal component 2        2.78       97.22
Principal component 3        8.5        91.5
Principal component 4        13.89      86.11
Principal component 5        13.89      86.11
Principal component 6        11.12      88.88

Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.

Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

After considering the Type II constraint, we consider the Type I constraint, in which we obtain the localized face on which the proposed algorithm is applied. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results with the Type I constraint.

As shown in Figures 22 and 23, the principal component varies between one and six, whereas the recognition accuracy varies with much better results. However, an EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100% in recognizing faces. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components and different false acceptance rates (FARs). The principal components (2, 3, and 6) for which the recognition accuracy outperforms the other components yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components; the points on the curve marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when the localized face is used.

Figure 23: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy       EER (%)    Recognition accuracy (%)
Principal component 1        8.5        91.5
Principal component 2        0          100
Principal component 3        0          100
Principal component 4        5.59       94.41
Principal component 5        5.98       94.02
Principal component 6        0          100

To validate the Type II constraint for the fusions and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database. We also exploit the hybrid heuristics-based cohort selection together with the fusion rules. Considering the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces. As shown in Table 7 and Figure 25, for the first two types of matching strategies, in which the "sum" and "max" fusion rules are applied to fuse the six classifiers, we achieve a 94.45% recognition accuracy and a 100% recognition accuracy, respectively, with EERs of 5.55% and 0%, respectively. In this case, the "max" fusion rule outperforms the "sum" fusion rule, which is further illustrated in the ROC curves shown in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, when the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules.

Figure 24: Recognition accuracy versus principal component when the face localization task is performed.

Table 7: EER and recognition accuracies of the proposed face matching strategy on the BioID face database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 5.55       94.45
Max fusion rule + min-max normalization                 0          100
Sum fusion rule + T-norm (hybrid heuristic)             0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)             0          100

In the last segment of the experiment, we observe the performance, measured in terms of recognition accuracy and EER, when faces are localized. The matching paradigms listed in Table 7 have also been verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies. The minimal drop in accuracy of 0.65% for the combination of the "sum" fusion rule and the hybrid heuristics is not unreasonable, and the remaining combination of the "max" fusion rule and the hybrid heuristics-based cohort selection method achieves an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the "sum" rule, which is attributed to a change in the produced cohort subset for both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

Figure 25: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained when a face is not localized.

Figure 26: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces appear in unrestricted conditions. Face recognition in the wild is challenging due to the nature of the face acquisition method in an unconstrained environment. Not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth. As our framework is based on only the first six principal components and SIFT features, it would require the incorporation of various tools: tools to detect and crop the face region and discard background as far as possible, after which the detected face regions would be upsampled or downsampled to form face image vectors of uniform dimension before applying PCA, and tools to estimate pose and then apply 2D frontalization to compensate for this variance, leading to a smaller number of principal components to consider.

Table 8: EERs and recognition accuracies of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule/normalization/heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                 0          100
Max fusion rule + min-max normalization                 0          100
Sum fusion rule + T-norm (hybrid heuristic)             0.65       99.35
Max fusion rule + T-norm (hybrid heuristic)             0          100

Figure 27: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

Figure 28: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face recognition algorithms, which are limited in number, and some traditional face recognition systems, which are not enabled with cloud computing infrastructures. Two different perspectives are considered for the comparisons. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases for comparison with other methods; the second perspective does not utilize the concept of cloud computing. To compare the experimental results, we consider two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy. Table 9 also lists the number of training samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based systems or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                       Face database    Number of face images    Number of distinct subjects    Recognition accuracy (%)    Error (%)
Cloud vision [36]            ORL              400                      40                             97.08                       2.92
Mobile cloud [37]            Local DB         50                       5                              85                          15
Facial features [38]         BioID            1521                     23                             92.35                       7.65
2D-PCA + PSO-SVM [39]        BioID            1521                     23                             95.65                       4.35
Proposed Type I              BioID            1521                     23                             100                         0
Proposed Type I              FEI              2800                     200                            100                         0
Proposed Type II             BioID            1521                     23                             97.22                       2.78
Proposed Type II             FEI              2800                     200                            99.5                        0.5
Proposed Type I + fusion     BioID            1521                     23                             100                         0
Proposed Type I + fusion     FEI              2800                     200                            100                         0
Proposed Type II + fusion    BioID            1521                     23                             100                         0
Proposed Type II + fusion    FEI              2800                     200                            100                         0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules, expressed as a set of functions of the length of the input. The time complexity is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms which together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each individual module is computed first, and then the overall complexity of the ensemble network is computed by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let $d$ be the number of pixels (height × width) of each grayscale face image, and let the number of face images be $N$. PCA computation has the following steps:

(i) Finding the mean of $N$ samples is $O(Nd)$ ($N$ additions of $d$-dimensional vectors, after which the sum is divided by $N$).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the $d(d+1)/2$ elements, $N$ multiplications and additions are required, leading to $O(Nd^2)$ time complexity. Let the $d \times N$ dimensional mean-centered data matrix be $M$.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of $MM^T$ ($d \times d$) we compute $M^T M$ ($N \times N$), which requires $d$ multiplications and $d$ additions for each of the $N^2$ elements, hence $O(N^2 d)$ time complexity (generally $N \ll d$).

(iv) Eigendecomposition of the $M^T M$ matrix by the SVD method requires $O(N^3)$.

(v) Sorting the eigenvectors (each $d \times 1$ dimensional) in descending order of the eigenvalues requires $O(N^2)$; taking only the first 6 principal component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two $d$-dimensional vectors, resulting in a scalar value. Hence, $d$ multiplications and $d$ additions for each projection lead to $O(d)$, and six such projections require $O(6d)$.
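A minimal NumPy rendering of steps (i)-(v) with the KL trick follows; the function name and the unit-norm convention for the eigenfaces are our own choices.

import numpy as np

def eigenfaces(X, k=6):
    # X: d x N matrix of N vectorized face images (d pixels each, d >> N).
    mean = X.mean(axis=1, keepdims=True)     # O(Nd) mean face
    M = X - mean                             # centered data matrix, d x N
    S = M.T @ M                              # N x N surrogate (KL trick), O(N^2 d)
    vals, vecs = np.linalg.eigh(S)           # O(N^3) eigendecomposition
    order = np.argsort(vals)[::-1][:k]       # first k principal components
    U = M @ vecs[:, order]                   # lift eigenvectors back to pixel space
    return U / np.linalg.norm(U, axis=0)     # d x k unit-norm eigenfaces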

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be $M \times N$ pixels, and let the face image be represented as a column vector of dimension $d$ ($M \times N$). A Gaussian kernel of dimension $w \times w$ is used. Each octave has $s + 3$ scales, and a total of $L$ octaves are used. The important phases of SIFT are as follows:

(i) Extrema detection:

(a) Computing the scale space:

(1) In each scale, $w^2$ multiplications and $w^2 - 1$ additions are performed per pixel by the convolution operation, so $O(dw^2)$.
(2) There are $s + 3$ scales in each octave, so $O(dw^2(s + 3))$.
(3) With $L$ octaves, $O(Ldw^2(s + 3))$.
(4) Overall, $O(dw^2 s)$ is found.

(b) Computing the $s + 2$ Difference of Gaussians (DoG) images for each octave, with scale factor $k = 2^{1/s}$, costs $O((s + 2)d)$ per octave; so for $L$ octaves, $O(sd)$.

(c) Extrema detection: $O(sd)$.

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let $\alpha d$ be the number of surviving pixels, so $O(\alpha ds)$.

(iii) Orientation assignment: $O(sd)$.

(iv) Keypoint descriptor computation: if a $p \times p$ neighborhood of the keypoint is considered, then $O(p^2 d)$ is found.
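The paper does not name a SIFT implementation; as a hedged stand-in, the sketch below uses OpenCV's, with the number of scales per octave made explicit (it matches the $s = 3$ OpenCV default).

import cv2

sift = cv2.SIFT_create(nOctaveLayers=3)  # s = 3 scales per octave

def extract_keypoints(face_instance):
    # face_instance: one 8-bit grayscale PCA-characterized face image
    keypoints, descriptors = sift.detectAndCompute(face_instance, None)
    return keypoints, descriptors  # each descriptor row has 128 elements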

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, so the complexity is linear, $O(n)$ with $n = 128$. Let the $i$th eigenface of the probe face image and of the reference face image have $k_1$ and $k_2$ surviving keypoints, respectively. Each keypoint from the $k_1$ set is compared with each of the $k_2$ keypoints by Euclidean distance, so the cost is $O(k_1 k_2)$ for a single eigenface pair and $O(6n^2)$ for 6 pairs of eigenfaces ($n$ = number of keypoints). If there are $M$ reference faces in the gallery, then the total complexity is $O(6Mn^2)$.
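A sketch of this exhaustive descriptor comparison is shown below. It counts correspondences with Lowe's ratio test, which is a conventional substitute for the paper's k-NN common-point generation rather than its exact rule; the 0.75 threshold is our assumption, and at least two reference keypoints are assumed.

import numpy as np

def match_score(desc_probe, desc_ref):
    # Pairwise Euclidean distances between the two 128-D descriptor sets:
    # k1 x k2 comparisons, matching the O(k1 k2) cost derived above.
    d = np.linalg.norm(desc_probe[:, None, :] - desc_ref[None, :, :], axis=2)
    nearest = np.sort(d, axis=1)
    # Count probe keypoints whose nearest reference descriptor passes the
    # ratio test against the second-nearest one.
    return int(np.sum(nearest[:, 0] < 0.75 * nearest[:, 1]))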

(d) Time Complexity of Fusion. As the domains of the six individual matchers are different, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtractions and one division, so it takes constant time, $O(1)$. The $n$ pairs of probe and gallery images in the $i$th principal component require $O(n)$, so six principal components require $O(6n)$. Finally, the sum fusion requires five additions for each pair of probe and reference faces, which is constant time, $O(1)$; for $n$ pairs of probe and reference face images, it requires $O(n)$.
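A short sketch of this normalization-then-fusion step follows; the function name and the array layout (six aligned score arrays) are our assumptions.

import numpy as np

def fuse(scores_per_component, rule="sum"):
    # scores_per_component: six 1-D arrays of raw match scores, one per
    # principal-component classifier, aligned over the same face pairs.
    normed = []
    for s in scores_per_component:
        lo, hi = s.min(), s.max()
        normed.append((s - lo) / (hi - lo))  # min-max normalization, O(n)
    stacked = np.vstack(normed)              # 6 x n matrix of normalized scores
    # "sum" adds the six scores per pair; "max" keeps the largest one.
    return stacked.sum(axis=0) if rule == "sum" else stacked.max(axis=0)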

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations: computation of the correlation scores in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally division of the CLOSED list of $n$ correlation values into three disjoint sets. The first two operations take constant time per value, the third operation takes $O(n^2)$ as it follows the convention of insertion sort, and the last operation takes linear time. Overall, the time required by cohort selection is $O(n^2)$.

Now the overall time complexity $T$ of the ensemble network is calculated as follows:

$$T = \max(\max(T_1) + \max(T_2) + \max(T_3) + \max(T_4) + \max(T_5)) = \max(O(N^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)) \approx O(N^3), \quad (8)$$

where

$T_1$ is the time complexity of PCA computation,
$T_2$ is the time complexity of SIFT keypoint extraction,
$T_3$ is the time complexity of matching,
$T_4$ is the time complexity of fusion,
$T_5$ is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is $O(N^3)$, while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, which range from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system which directly or indirectly attempt to improve its total performance in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using the hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, a mobile phone, or a tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.
[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.
[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I629-I632, Honolulu, Hawaii, USA, April 2007.
[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.
[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599-616, 2009.
[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266-278, 2014.
[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654-659, September 2012.
[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255-261, Baltimore, Md, USA, October 2014.
[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86-101, Porto, Portugal, 2012.
[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84-91, Seattle, Wash, USA, June 1994.
[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195-200, 2003.
[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140-146, 2012.
[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553-1555, 2004.
[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211-219, 2011.
[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, 1997.
[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63-68, Alghero, Italy, June 2007.
[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028-1037, 2011.
[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346-359, 2008.
[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506-II513, July 2004.
[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1-5, Hong Kong, 2014.
[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58-63, IEEE, Limerick, Ireland, August 2012.
[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109-122, Zurich, Switzerland, September 2014.
[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.
[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450-455, 2005.
[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229-237, 2012.
[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335-2338, Tsukuba, Japan, November 2012.
[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.
[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270-1277, 2012.
[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063-2075, 2014.
[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175-185, 1992.
[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51-60, 2002.
[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902-913, 2010.
[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6-8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90-95, Springer, Berlin, Germany, 2001.
[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151-155, 2014.
[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23-35, 2013.
[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.
[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 2012 10th International Conference on ICT and Knowledge Engineering, pp. 140-145, Bangkok, Thailand, November 2012.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 9: Research Article Multithread Face Recognition in Clouddownloads.hindawi.com/journals/js/2016/2575904.pdf · many studies. Among various feature extraction approaches, appearance-based,

Journal of Sensors 9

scores Then we construct a search space that includes thesecorrelation scores

Because (7) exhibits a correlation between cohort scoresit can be extended to the baseline heuristics approach in thesecond stage of the hybrid heuristic-based cohort selectionmethod The objective of the proposed cohort selectionmethod is to select cohort scores that correspond to thetwo subsets of highest and lowest correlation scores obtainedby (7) These two subsets of cohort scores constitute thecohort subset We separately collect the correlation scores ina FRINGE data structure or OPEN list with this initial scorewe expand the fringe by adding more correlation scores tothe fringe We also maintain another list which we refer toas the CLOSED list After calculating the first 119879(119909119895) score inthe fringe we omit this score from the fringe and expandthis score The next two correlation scores from the searchspace are removed but retained in the fringe The correlationscore which was removed from the fringe is added to theCLOSED list Because the fringe contains two scores wearrange them in decreasing order in the fringe by sortingthem and removing the maximum score from the fringeThis maximum score is now added to the CLOSED listand maintained in nonincreasing order with other scores inthe CLOSED list We repeat this recursive process in eachiteration until the search space is empty After expandingthe search space by moving all correlation scores from thefringe to the CLOSED list we construct a sorted list Thesesorted scores in the CLOSED list are divided into three partsthe first part and last part are merged to create a single listof correlation scores that exhibit the most discriminatingfeatures We establish a cohort subset by determining themost promising cohort scores which correspond to thecorrelation scores on the CLOSED list

To normalize the cohort scores in the cohort subset, we apply the T-norm cohort normalization technique. T-norm relies on the property that the score distribution of each subject class follows a Gaussian distribution. These normalized scores are employed for making decisions and assigning the probe face to one of the two class labels. Prior to making any decision, we consolidate the normalized scores of the six different face models, which depend on the number of principal components considered, ranging between one and six.
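A standard formulation of T-norm (assumed here, since the text does not reproduce the formula) centers each raw match score by the cohort mean and scales it by the cohort standard deviation:

```python
import numpy as np

def t_norm(raw_score, cohort_scores):
    # Center by the cohort mean and scale by the cohort standard deviation;
    # the epsilon guards against a zero-spread cohort subset.
    cohort = np.asarray(cohort_scores, dtype=float)
    return (raw_score - cohort.mean()) / (cohort.std() + 1e-12)
```

The six face models' normalized scores are then consolidated before the final accept/reject decision.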

6. Experimental Evaluation

A rigorous evaluation of the proposed cloud-enabled face matching technique is conducted on two well-known face databases, namely, FEI [34] and BioID [35]. The face images in these databases exhibit changes in illumination, nonuniform and uniform backgrounds, and facial expressions. For the experiments, we set up a simple protocol of face pair matching and apply two different fusion rules, namely, the max fusion rule and the sum fusion rule. We implement the proposed method from two perspectives: the Type II perspective indicates that face recognition employs the face images as provided in the databases, without cropping, and the Type I perspective

Figure 7: Face images from the BioID database.

indicates that the face recognition technique uses a manually localized face area, obtained by cropping the face images and fixing the size of the face area to 140 × 140 pixels. The faces in these two databases are presented against a variety of backgrounds; therefore, a uniform and robust framework should be designed to examine the proposed face matching technique.
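The Type I preprocessing step amounts to a crop followed by a resize; a minimal OpenCV sketch (the bounding box parameters are assumed inputs from manual localization) is:

```python
import cv2

def crop_face(image, x, y, w, h, size=140):
    # Crop a manually localized face region and fix it to 140 x 140
    # pixels, as in the Type I matching paradigm. (x, y, w, h) is the
    # assumed bounding box of the face area.
    face = image[y:y + h, x:x + w]
    return cv2.resize(face, (size, size))
```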

6.1. Databases

6.1.1. BioID Face Database. The face images in the BioID database [35] are recorded in a degraded environment and are primarily employed for face detection; however, the database can also be utilized for face recognition. Because the faces are captured with a variety of background information and illumination, evaluation on this database is challenging. Here, we analyze both the Type I and Type II face evaluation frameworks. The database consists of 1521 frontal view face images obtained from 23 persons; all faces are gray level images with a resolution of 384 × 286 pixels. Sample face images from the BioID database are shown in Figure 7. The face images are acquired against a variety of backgrounds, facial expressions, head position changes, and illumination changes.

6.1.2. FEI Face Database. The FEI database [34] is a Brazilian face database of images taken between June 2005 and March 2006. The database consists of 2800 face images of 200 people, each of whom contributed 14 face images. The faces are captured against a white homogeneous background in an upright frontal position, and all images exhibit scale changes of approximately 10%. Of the 2800 face images, the number of male contributors is equal to the number of female contributors; that is, 100 male and 100 female participants each contributed an equal number of face images, totaling 1400 face images each. All images are in color, and the size of each image is 640 × 480 pixels. Face images from the FEI database are shown in Figure 8. Faces are acquired under uniform illumination against homogeneous backgrounds, with neutral and smiling facial expressions. The database contains faces of subjects of varying ages, from 18 to 60 years.


Figure 8: Face images from the FEI database.

6.2. Experimental Protocol. In this experiment, we have developed a uniform framework for examining the proposed face matching technique under the established viability constraints. We assume that all classifiers are mutually random processes. Therefore, to address the biases of each random process, we perform an evaluation with a random distribution of the numbers of training samples and probe samples. However, the distribution is completely dependent on the database employed for the evaluation. Because the BioID face database contains 1521 faces of 23 individuals, we equally distribute the face images among the training and test sets. The faces contributed by each person are divided into two sets, namely, the training set and the test/probe set. The FEI database contains 2800 face images of 200 people. We devise a protocol for all databases as follows.

Consider that each person has contributed m faces and that the database size is p (p denotes the total number of face images). Let q denote the total number of subjects/individuals who contributed m face images each. To extend this protocol, we divide the m faces into two equal groups of m/2 images and retain one group for the training/reference set and the other for the probe set. To obtain the genuine and imposter match scores, each face in the training set is compared with the m/2 faces in the probe set that correspond to the same subject, and each single face is compared with the face images of the remaining subjects. Thus, we obtain genuine match scores of dimension q × (m/2) and imposter scores of dimension q × (q − 1) × m. The k-NN (k-nearest neighbor) metric is employed to generate common matching points between a pair of face images, and a min-max normalization technique is employed to normalize the match scores and map them to the range [0, 1]. In this manner, two sets of match scores of unequal dimensions, corresponding to a matcher, are obtained, where the face images are compared within intraclass (ω_i) sets and interclass (ω_j) sets, which we refer to as the genuine and imposter score sets.
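As a quick sanity check of these dimensions and of the min-max mapping, consider the following sketch (the values of q and m are hypothetical, chosen only to approximate a BioID-like split):

```python
import numpy as np

def min_max_normalize(scores):
    # Map raw match scores onto [0, 1], as in the protocol above.
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Dimensions of the genuine and imposter score sets for illustrative
# values: q subjects, m faces per subject (assumed numbers).
q, m = 23, 66
genuine_dim = q * (m // 2)        # q x (m/2)
imposter_dim = q * (q - 1) * m    # q x (q - 1) x m
print(genuine_dim, imposter_dim)  # 759 33396
```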

6.3. Experimental Results and Analysis

6.3.1. On FEI Database. The proposed cloud-enabled face recognition system has been evaluated using the FEI face

Figure 9: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

database, which contains faces with neutral and smiling expressions. The neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effect of (a) a cloud-enabled environment; (b) face matching without extracting a face area; (c) face matching with extracting a face area; (d) face matching by projecting the face onto a low-dimensional feature space using PCA with varying numbers of principal components (one to six) under the conditions mentioned in (a), (b), and (c); and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system through GAR versus FAR curves for varying numbers of principal components. The boxplots show how the EER varies for different numbers of principal components.

Figure 9 shows a boxplot for the case in which the face area has not been extracted, and Figure 12 shows another boxplot for the case in which the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the number of principal components is set to one, and the EER varies between 0% and 1% for the remaining numbers of principal components. In the second boxplot, a maximum EER of 10% is attained when the number of principal components is 1, and the EER varies between 0% and 2.5% otherwise. An EER of 0.5% is obtained for principal components 2, 3, 4, and 5 after the face area is extracted for recognition, whereas a maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the number of principal components is set to one. For the remaining cases, when the number of principal components varies between two and six, low EERs are obtained; the maximum among these EERs occurs when the number of principal components is set to four. The ROC curves in Figure 10, shown for the case in which face images are not localized, exhibit high recognition accuracies for all cases except when the number of principal components is set to 1. Thus, we decouple the information about the


Figure 10: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracy of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)    Recognition accuracy (%)
Principal component 1      7.07       92.93
Principal component 2      0.5        99.5
Principal component 3      0.97       99.03
Principal component 4      1.01       98.99
Principal component 5      1          99
Principal component 6      1          99

legitimate face area from explicit information about nonface areas, such as the forehead, hair, ear, and chin areas. These areas may provide crucial information, which is treated as additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying numbers of principal components. Figure 11 shows the recognition accuracies for different numbers of principal components on the FEI database when face localization is not performed. Figure 14, on the other hand, shows the recognition accuracies for different numbers of principal components on the FEI database when face localization is performed.

Figure 13 shows the ROC curves determined from extensive experiments with the proposed algorithm on the FEI face database when each face image is localized and only the face part is extracted. In this case, a maximum EER

Figure 11: Recognition accuracy versus principal component curve when the face localization task is not performed.

Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

of 6% is attained when the number of principal components is set to 1; in the other cases, the EERs are as low as those obtained with nonlocalized faces. Table 2 reports the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of the single principal component case, all remaining cases show substantial improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas that correspond to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is highly accurate when the number of principal components varies between two and six; in particular, for principal component 3, an EER of 0% and a recognition accuracy of 100% are attained.

Based on two major considerations, with and without face localization, we investigate the proposed algorithm by fusing face instances under varying numbers of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the


Figure 13: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2: EER and recognition accuracy of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)    Recognition accuracy (%)
Principal component 1      6          94
Principal component 2      0.5        99.5
Principal component 3      0          100
Principal component 4      0.5        99.5
Principal component 5      0.5        99.5
Principal component 6      1          99.0

"sum" fusion rule. The effect of localizing versus not localizing the face area is investigated, and these conventions are combined with the two fusion rules. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without performing face localization; the matching scores are generated by a single fused classifier in which all six classifiers are fused at the match score level by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained with the "max" fusion rule; in this case, the "sum" fusion rule outperforms the "max" fusion rule. When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules of the fusion-based classifier, the "sum" and "max" fusion rules

Figure 14: Recognition accuracy versus principal component curve when the face localization task is performed.

Table 3: EER and recognition accuracy of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     0.0        100
Max fusion rule + min-max normalization                     1.5        98.5
Sum fusion rule + T-norm (hybrid heuristic)                 0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)                 0.5        99.5

achieve 99.5% recognition accuracy. In this general setting, the proposed cohort selection method therefore reduces recognition accuracy by 0.5% compared with the "sum" fusion rule-based classifier without cohort selection. However, the hybrid heuristic statistics render the face matching algorithm stable and consistent across both fusion rules (sum and max), with 99.5% recognition accuracy for each. The recognition accuracies are listed in Table 3, and Figure 16 shows the same results for the "sum" and "max" fusion rules.
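The two match score level fusion rules can be sketched as follows (a minimal illustration under the assumption that the six classifiers' scores are already min-max normalized):

```python
import numpy as np

def fuse_scores(score_matrix, rule="sum"):
    # score_matrix: (n_pairs, 6) min-max-normalized match scores,
    # one column per principal-component classifier.
    s = np.asarray(score_matrix, dtype=float)
    if rule == "sum":
        return s.sum(axis=1)  # "sum" fusion rule
    if rule == "max":
        return s.max(axis=1)  # "max" fusion rule
    raise ValueError("rule must be 'sum' or 'max'")
```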

After evaluating the performance of each individual classifier, in which a face instance is determined by setting the number of principal components to one value between 1 and 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier, this time with the face image manually localized and the face area extracted for the recognition task. As in the previous approach, we apply two fusion rules to integrate the matching scores, namely, the "sum" and "max" fusion rules. In addition, we exploit the proposed statistical cohort selection technique, known as hybrid heuristic statistics. For cohort score normalization, we apply the T-norm normalization technique. This technique


Figure 15: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between 1 and 6.

Figure 16: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with the cohort selection method. These accuracies are obtained without face localization.

maps the cohort scores into a normalized score set that reflects the characteristics of each score in the cohort subset and enables the correct match to be obtained rapidly by the system. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy in both cases, that is, with the "sum" and the "max" fusion rules. The effect of face localization plays a central role in increasing the

Table 4: EER and recognition accuracy of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     0.0        100
Max fusion rule + min-max normalization                     0.0        100
Sum fusion rule + T-norm (hybrid heuristic)                 0.0        100
Max fusion rule + T-norm (hybrid heuristic)                 0.0        100

Figure 17: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

recognition accuracy to 100% in all four cases. Due to face localization, however, the numbers of feature points extracted from the face images differ from those of the nonlocalized faces obtained when the cohort selection method is applied. These accuracies are obtained after the faces are localized.

6.3.2. On BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering the two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized. The faces in the BioID face database are captured in a variety


Figure 18: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with the cohort selection method. These accuracies are obtained after the faces are localized.

Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

of environments and illumination conditions. In addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching technique is challenging, because the positions and locations of frontal view images may be tracked across a variety of environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and yields encouraging results under these environments and variable lighting conditions. The face images of the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of

Figure 20: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracy of the proposed face matching strategy on the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)    Recognition accuracy (%)
Principal component 1      8.53       91.47
Principal component 2      2.78       97.22
Principal component 3      8.5        91.5
Principal component 4      13.89      86.11
Principal component 5      13.89      86.11
Principal component 6      11.12      88.88

one to six. For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint (no face localization) is validated; an EER of 2.78%, the lowest among all six EERs, is obtained. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11%. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 depicts the same results as a curve whose points denote the recognition accuracies corresponding to principal components one through six. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.


Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.

Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

After considering the constraint without localization, we consider the constraint in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under the Type I constraint.

As shown in Figures 22 and 23, as the number of principal components varies between one and six, the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100%. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for varying numbers of principal components at different false acceptance rates (FARs); the principal components for which recognition outperforms the others (2, 3, and 6) yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for varying numbers of principal components; the points on the curve that are

Figure 23: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracy of the proposed face matching strategy on the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy     EER (%)    Recognition accuracy (%)
Principal component 1      8.5        91.5
Principal component 2      0          100
Principal component 3      0          100
Principal component 4      5.59       94.41
Principal component 5      5.98       94.02
Principal component 6      0          100

marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies obtained when the localized faces are used.

To validate the fusion rules and the hybrid heuristics-based cohort selection on this database, we evaluate the performance of the proposed technique by first analyzing the effect of faces that are not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database, together with the hybrid heuristics-based cohort selection. The results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces. As shown in Table 7 and Figure 25, for the first two matching strategies, in which the "sum" and


Figure 24: Recognition accuracy versus principal component when the face localization task is performed.

Table 7: EER and recognition accuracy of the proposed face matching strategy on the BioID face database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     5.55       94.45
Max fusion rule + min-max normalization                     0          100
Sum fusion rule + T-norm (hybrid heuristic)                 0.5        99.5
Max fusion rule + T-norm (hybrid heuristic)                 0          100

"max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, with EERs of 5.55% and 0%, respectively. In this case, the "max" fusion rule outperforms the "sum" fusion rule, as further illustrated by the ROC curves in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, in which the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules, respectively.

In the last segment of the experiment, we measure the performance in terms of recognition accuracy and EER when the faces are localized; the matching paradigms listed in Table 7 are verified again under this constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies, respectively. The small drop in accuracy for the combination of the "sum" fusion rule and the hybrid heuristics is negligible, and the remaining combination of the "max" fusion rule and the hybrid heuristics-based cohort selection method achieves an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the

Figure 25: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the recognition accuracies with the cohort selection method. These accuracies are obtained when the faces are not localized.

Figure 26: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

"sum" rule, which is attributed to a change in the produced cohort subset under both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, with the accuracy points marked in blue.

It would be interesting to see how the current ensemble framework performs for face recognition in the wild, where faces are found in unrestricted conditions. Face recognition in the wild is challenging due to the nature of face acquisition in unconstrained environments: not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth. As our framework is based on only


Table 8: EER and recognition accuracy of the proposed face matching strategy on the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection    EER (%)    Recognition accuracy (%)
Sum fusion rule + min-max normalization                     0          100
Max fusion rule + min-max normalization                     0          100
Sum fusion rule + T-norm (hybrid heuristic)                 0.65       99.35
Max fusion rule + T-norm (hybrid heuristic)                 0          100

Figure 27: Receiver operating characteristics (ROC) curves of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

the first six principal components and SIFT features, it would require the incorporation of additional tools: tools to detect and crop the face region, discarding as much background as possible, with the detected face regions upsampled or downsampled to form face image vectors of uniform dimension before applying PCA; and tools to estimate pose and then apply 2D frontalization to compensate for pose variance, which would allow fewer principal components to be considered.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face

Figure 28: Recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show recognition accuracies without the cohort selection method, and the last two cases show recognition accuracies with the cohort selection method. These accuracies are obtained after the faces are localized.

recognition algorithms, which are limited in number, and some traditional face recognition systems that are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives: the first perspective considers cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases for comparison with other methods but does not involve cloud computing. For the first perspective, we compare the proposed system with two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second performs face recognition for social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy, along with the number of face images employed during matching by the different systems. Because the proposed system utilizes two well-known face databases, namely, BioID and FEI, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36-39].

Method                       Face database    Face images    Distinct subjects    Recognition accuracy (%)    Error (%)
Cloud vision [36]            ORL              400            40                   97.08                       2.92
Mobile cloud [37]            Local DB         50             5                    85                          15
Facial features [38]         BioID            1521           23                   92.35                       7.65
2D-PCA + PSO-SVM [39]        BioID            1521           23                   95.65                       4.35
Proposed, Type I             BioID            1521           23                   100                         0
Proposed, Type I             FEI              2800           200                  100                         0
Proposed, Type II            BioID            1521           23                   97.22                       2.78
Proposed, Type II            FEI              2800           200                  99.5                        0.5
Proposed, Type I + fusion    BioID            1521           23                   100                         0
Proposed, Type I + fusion    FEI              2800           200                  100                         0
Proposed, Type II + fusion   BioID            1521           23                   100                         0
Proposed, Type II + fusion   FEI              2800           200                  100                         0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules as a function of the length of the input. It is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each individual module is computed first, and the overall complexity of the ensemble network is then obtained by summing these together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let d be the number of pixels (height × width) in each grayscale face image, and let N be the number of face images. PCA computation has the following steps.

(i) Finding the mean of N samples takes O(Nd) (N additions of d-dimensional vectors, followed by division by N).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the d(d + 1)/2 elements, N multiplications and additions are required, leading to O(Nd²) time complexity. Let M be the d × N mean-centered data matrix.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of MM^T (of dimension d × d), we compute M^T M (of dimension N × N), which requires d multiplications and d additions for each of the N² elements, hence O(N²d) time complexity (generally N ≪ d).

(iv) Eigendecomposition of the M^T M matrix by the SVD method requires O(N³).

(v) Sorting the eigenvectors (each d × 1 dimensional) in descending order of the eigenvalues requires O(N²); taking only the first six principal component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two d-dimensional vectors, resulting in a scalar value. Hence, d multiplications and d − 1 additions per projection lead to O(d), and the six projections require O(6d).
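Steps (i)-(v) can be condensed into a short NumPy sketch (an illustrative reconstruction under the stated assumptions, not the authors' implementation):

```python
import numpy as np

def eigenfaces_kl(faces, n_components=6):
    # faces: (d, N) matrix whose columns are vectorized face images,
    # with N << d. The KL trick eigendecomposes the N x N matrix
    # M^T M instead of the d x d covariance M M^T.
    mean = faces.mean(axis=1, keepdims=True)          # O(Nd)
    M = faces - mean                                  # mean-centered data
    evals, evecs = np.linalg.eigh(M.T @ M)            # O(N^2 d) + O(N^3)
    order = np.argsort(evals)[::-1][:n_components]    # descending sort
    U = M @ evecs[:, order]                           # map back to d-space
    U /= np.linalg.norm(U, axis=0, keepdims=True)     # unit-norm eigenfaces
    return mean, U
```

Projecting a probe vector is then `U.T @ (probe - mean.ravel())`, one O(d) dot product per eigenface.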

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be M × N pixels, and let the face image be represented as a column vector of dimension d (= M × N). A Gaussian kernel of dimension w × w is used, each octave has s + 3 scales, and a total of L octaves are used. The important phases of SIFT are as follows (a short extraction sketch follows the list).

(i) Extrema detection:

(a) Computing the scale space:

(1) In each scale, w² multiplications and w² − 1 additions are performed per pixel by the convolution operation, so O(dw²).

(2) There are s + 3 scales in each octave, so O(dw²(s + 3)).

(3) With L octaves, O(Ldw²(s + 3)). (4) Overall, O(dw²s) is found.

(b) Computing the s + 2 Difference of Gaussians (DoG) images for each octave takes one subtraction per pixel, O((s + 2)d) per octave, so O(sd) over the L octaves.

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).


(iv) Keypoint descriptor computation: if a p × p neighborhood of the keypoint is considered, then O(p²d).
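For reference, this whole pipeline is what an off-the-shelf SIFT implementation performs internally; a sketch using OpenCV (our choice of library for illustration, not necessarily the authors') is:

```python
import cv2

def extract_sift_features(gray_image):
    # OpenCV's SIFT (cv2.SIFT_create, available in OpenCV >= 4.4)
    # returns keypoints plus one 128-element descriptor per keypoint.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors
```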

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, so the complexity is linear, O(n), where n = 128. Let the ith eigenface of the probe face image and of the reference face image have k1 and k2 surviving keypoints, respectively. Each probe keypoint is compared with each of the k2 reference keypoints by Euclidean distance, so a single eigenface pair takes O(k1 k2), and the 6 pairs of eigenfaces take O(6n²) (n = number of keypoints). If there are M reference faces in the gallery, then the total complexity is O(6Mn²).
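A direct sketch of this all-pairs comparison for one eigenface pair (the distance threshold is a hypothetical, tunable value):

```python
import numpy as np

def count_matches(desc_probe, desc_gallery, threshold=250.0):
    # desc_probe: (k1, 128), desc_gallery: (k2, 128). All k1 x k2
    # Euclidean distances are computed, matching the O(k1 k2) analysis.
    diff = desc_probe[:, None, :] - desc_gallery[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))        # (k1, k2) distances
    return int((dist.min(axis=1) < threshold).sum())
```

The quadratic cost is visible directly in the (k1, k2) distance matrix.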

(d) Time Complexity of Fusion. As the six individual matchers operate in different score domains, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtractions and one division, which is constant time, O(1); the n pairs of probe and gallery images under the ith principal component require O(n), and the six principal components require O(6n). Finally, the sum fusion requires five additions per pair of probe and reference faces, which is constant time, O(1); for n pairs of probe and reference face images, it requires O(n).

(e) Time Complexity of Cohort Selection. Cohort selection performs four operations: computation of the correlations in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list following insertion sort, and finally division of the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time per value, the third operation takes O(n²), as it follows the convention of insertion sort, and the last operation takes linear time. Overall, the time required by cohort selection is O(n²).

The overall time complexity T of the ensemble network is then calculated as follows:

T = max(max(T1) + max(T2) + max(T3) + max(T4) + max(T5))
  = max(O(n³) + O(dw²s) + O(Mn²) + O(n) + O(n²)) ≈ O(n³),   (8)

where T1 is the time complexity of PCA computation, T2 is the time complexity of SIFT keypoint extraction, T3 is the time complexity of matching, T4 is the time complexity of fusion, and T5 is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(n³), while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which a cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed settings of principal components, ranging from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate numbers of matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve its total performance along various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented, and the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629–I-632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506–II-513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International MultiConference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 10: Research Article Multithread Face Recognition in Clouddownloads.hindawi.com/journals/js/2016/2575904.pdf · many studies. Among various feature extraction approaches, appearance-based,

10 Journal of Sensors

Figure 8 Face images from FEI database

62 Experimental Protocol In this experiment we havedeveloped a uniform framework for examining the pro-posed face matching technique with the established viabil-ity constraints We assume that all classifiers are mutuallyrandom processes Therefore to address the biases of eachrandom process we perform an evaluation with a randomdistribution of the number of training samples and probesamples However distribution is completely dependent onthe database that is employed for the evaluation Because theBioID face database contains 1521 faces of 23 individuals weequally distribute the face images among the training and testsetsThe faces that are contributed by each person are dividedinto two sets namely the training set and the testprobe setThe FEI database contains 2800 face images of 200 peopleWedevise a protocol for all databases as follows

Consider that each person has contributed 119898 number offaces and the database size is 119901 (119901 denotes the total numberof face images) We consider that 119902 denotes the total numberof subjectsindividuals who contributed 119898 number of faceimages To extend this protocol we divide 119898 into two equalgroups of face images 1198982 and retain each group for thetrainingreference and probe sets To obtain the genuineand imposter match scores each face in the training set iscompared with 1198982 number of faces in the probe set whichcorresponds to a subject and each single face is comparedwith the face images of the remaining subjects Thus weobtain genuine match scores of 119902 times (1198982) dimension andimposter scores of 119902 times (119902 minus 1) times 119898 dimension The 119896-NN (119896-nearest neighbor approach) metric is employed togenerate commonmatching points between a pair of two faceimages and we employ a min-max normalization techniqueto normalize thematch scores andmap the scores to the rangeof [0-1] In this manner two sets of match scores of unequaldimensions which correspond to a matcher are obtainedwhereas the face images are compared within intraclass (120596119894)sets and interclass (120596119895) sets which we refer to as genuine andimposter score sets

63 Experimental Results and Analysis

631 On FEI Database The proposed cloud-enabled facerecognition system has been evaluated using the FEI face

1 2 3 4 5 6

0

1

2

3

4

5

6

7

Equa

l err

or ra

te (

)

Number of principal components

Figure 9 Boxplot of principal components versus EER when facesare not cropped and the number of principal components variesbetween one and six

The FEI database contains faces with neutral and smiling expressions. The neutral faces are utilized as target/training faces, and the smiling faces are used as probe faces. We perform several experiments to analyze the effects of (a) a cloud-enabled environment; (b) face matching without extracting the face area; (c) face matching with extracting the face area; (d) face matching by projecting the face onto a low-dimensional feature space using PCA, with the number of principal components varying from one to six, under the conditions mentioned in (a), (b), and (c); and (e) using hybrid heuristic statistics for the cohort subset selection. We depict the experimental results as ROC curves and boxplots. The ROC curves reveal the performance of the proposed system as GAR versus FAR curves for varying numbers of principal components, and the boxplots show how the EER varies with the number of principal components.

Figure 9 shows a boxplot for the case in which the face area has not been extracted, and Figure 12 shows another boxplot for the case in which the localized face has been extracted for recognition. In the boxplot in Figure 9, the EER exceeds 7% when the principal component is set to one, and the EER varies between 0% and 1% for the remaining principal components. In the second boxplot, a maximum EER of 10% is attained when the principal component is 1, and the EER varies between 0% and 2.5%; however, an EER of 0.5% is obtained for principal components 2, 3, 4, and 5 after the face area is extracted for recognition. A maximum EER of 1% is attained for principal components 3, 4, 5, and 6 when face localization is not performed. As shown in Table 1, face recognition performance deteriorates only when the principal component is set to one. For the remaining cases, when the principal component varies between two and six, low EERs are obtained, with the largest occurring when the principal component is set to four. The ROC curves in Figure 10 exhibit high recognition accuracies for all cases except when the principal component is set to 1; these ROC curves correspond to face images that are not localized.


Figure 10: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 1: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy  | EER (%) | Recognition accuracy (%)
Principal component 1   | 7.07    | 92.93
Principal component 2   | 0.5     | 99.5
Principal component 3   | 0.97    | 99.03
Principal component 4   | 1.01    | 98.99
Principal component 5   | 1       | 99
Principal component 6   | 1       | 99

Thus, we decouple the information about the legitimate face area from explicit information about nonface areas such as the forehead, hair, ear, and chin. These areas may provide crucial information, which is treated as additional information in a decoupled feature vector, and these feature points contribute to recognizing faces under varying numbers of principal components. Figure 11 shows the recognition accuracies for different numbers of principal components on the FEI database when face localization is not performed; Figure 14, in contrast, shows the corresponding accuracies when face localization is performed.

Figure 13 shows the ROC curves determined from extensive experiments with the proposed algorithm on the FEI face database when the face image is localized and only the face region is extracted.

Figure 11: Recognition accuracy versus principal component curve when the face localization task is not performed.

Figure 12: Boxplot of principal components versus EER when faces are cropped and the number of principal components varies between one and six.

In this case, a maximum EER of 6% is attained when the principal component is set to 1; in the other cases, the EERs are as low as those obtained with nonlocalized faces. Table 2 reports the efficacy of the proposed face matching strategy in terms of recognition accuracies and EERs. With the exception of principal component 1, all remaining cases show substantial improvements over the results listed in Table 1. By integrating feature-based and appearance-based approaches, we have made the algorithm robust not only to facial expressions but also to the areas that correspond to the major salient face regions (both eyes, nose, and mouth), which have a significant impact on face recognition performance. The ROC curves in Figure 13 also show that the algorithm is highly accurate when the number of principal components varies between two and six; in particular, for principal component 3, an EER of 0% and a recognition accuracy of 100% are attained.

Based on two major considerations, with and without face localization, we investigate the proposed algorithm by fusing face instances under a varying number of principal components. To demonstrate the robustness of the system, we apply two fusion rules: the "max" fusion rule and the "sum" fusion rule.


Figure 13: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 2: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy  | EER (%) | Recognition accuracy (%)
Principal component 1   | 6       | 94
Principal component 2   | 0.5     | 99.5
Principal component 3   | 0       | 100
Principal component 4   | 0.5     | 99.5
Principal component 5   | 0.5     | 99.5
Principal component 6   | 1       | 99.0

The effects of localizing and of not localizing the face area are both investigated, and the two fusion rules are integrated into each setting. Figure 15 shows the ROC curves obtained by fusing the face instances of principal components 1 to 6 without face localization; the matching scores are generated from a single fused classifier in which all six classifiers are fused at the match score level by applying the "max" and "sum" fusion rules. When we apply the "sum" fusion rule, we obtain 100% recognition accuracy, whereas 98.5% accuracy is obtained when the "max" fusion rule is applied. In this case, the "sum" fusion rule outperforms the "max" fusion rule.

Figure 14: Recognition accuracy versus principal component curve when the face localization task is performed.

Table 3: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses T-norm normalization for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization                  | 0.0     | 100
Max fusion rule + min-max normalization                  | 1.5     | 98.5
Sum fusion rule + T-norm (hybrid heuristic)              | 0.5     | 99.5
Max fusion rule + T-norm (hybrid heuristic)              | 0.5     | 99.5

When the hybrid heuristic statistics-based cohort selection method is applied to the fusion rules of the fusion-based classifier, the "sum" and "max" fusion rules both achieve 99.5% recognition accuracy. Compared with the "sum" fusion rule-based classifier without cohort selection, the proposed cohort selection method thus degrades recognition accuracy by 0.5%. However, hybrid heuristic statistics render the face matching algorithm stable and consistent for both fusion rules (sum and max), at 99.5% recognition accuracy. Table 3 lists these recognition accuracies, and Figure 16 exhibits the same for the "sum" and "max" fusion rules.
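A minimal sketch of this match score level fusion, assuming the six per-component classifiers produce aligned score arrays, is given below; the layout and names are illustrative assumptions rather than the authors' implementation.

import numpy as np

def fuse_scores(score_matrix, rule="sum"):
    # score_matrix: shape (6, n); row i holds the match scores produced by
    # the classifier built from the ith PCA-characterized face instance.
    s = np.asarray(score_matrix, dtype=float)
    # Min-max normalize each classifier's scores into [0, 1] so that all six
    # matchers share a uniform domain before fusion.
    s = (s - s.min(axis=1, keepdims=True)) / np.ptp(s, axis=1, keepdims=True)
    if rule == "sum":
        return s.sum(axis=0)  # "sum" fusion rule
    if rule == "max":
        return s.max(axis=0)  # "max" fusion rule
    raise ValueError("rule must be 'sum' or 'max'")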

After evaluating the performance of each individual classifier, in which the face instance is determined by setting the principal component to one value among 1 to 6, and the fusion of face instances without face localization, we evaluate the efficacy of the proposed algorithm by fusing all six face instances in terms of the matching scores obtained from each classifier. In this case, the face image is manually localized, and the face area is extracted for the recognition task. As in the previous approach, we apply two fusion rules, namely, the "sum" and "max" rules, to integrate the matching scores. In addition, we exploit the proposed statistical cohort selection technique, known as hybrid heuristic statistics. For cohort score normalization, we apply the T-norm normalization technique.


Figure 15: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are not cropped and the number of principal components varies between 1 and 6.

Figure 16: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two show the accuracies obtained with it. These accuracies are obtained without face localization.

This technique maps the cohort scores into a normalized score set that exhibits the characteristics of each score in the cohort subset and enables the system to obtain the correct match rapidly. As shown in Table 4 and Figures 17 and 18, when the "sum" and "max" fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy in both cases, that is, when the "sum" and the "max" fusion rules are applied.
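A minimal sketch of the T-norm step, assuming the standard formulation in which a raw score is centered and scaled by the cohort subset's statistics, follows; this is an illustrative reading of the technique, not the authors' code.

import numpy as np

def t_norm(raw_score, cohort_scores):
    # T-norm: center and scale a raw match score by the first- and
    # second-order statistics of the selected cohort subset's scores.
    c = np.asarray(cohort_scores, dtype=float)
    return (raw_score - c.mean()) / c.std()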

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses T-norm normalization for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization                  | 0.0     | 100
Max fusion rule + min-max normalization                  | 0.0     | 100
Sum fusion rule + T-norm (hybrid heuristic)              | 0.0     | 100
Max fusion rule + T-norm (hybrid heuristic)              | 0.0     | 100

Figure 17: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are cropped and the number of principal components varies between one and six.

The effect of face localization plays a central role in raising the recognition accuracy to 100% in all four cases. Due to face localization, however, the number of feature points extracted from the face images differs from that of the nonlocalized faces obtained when the cohort selection method is applied. These accuracies are obtained after a face is localized.

6.3.2. BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database, considering two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized.


Figure 18: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two show the accuracies obtained with it. These accuracies are obtained after a face is localized.

Figure 19: Boxplot of principal components versus EER when faces are not cropped and the number of principal components varies between one and six.

However, the faces provided in the BioID face database were captured in a variety of environments and illumination conditions; in addition, the faces in the database show various facial expressions. Evaluating the performance of any face matching method is therefore challenging, because the positions and locations of frontal-view faces must be tracked across a variety of environments with changes in illumination. Thus, a robust technique is needed that can capture and process all types of distinctive features and yield encouraging results under these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of one to six.

Figure 20: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy  | EER (%) | Recognition accuracy (%)
Principal component 1   | 8.53    | 91.47
Principal component 2   | 2.78    | 97.22
Principal component 3   | 8.5     | 91.5
Principal component 4   | 13.89   | 86.11
Principal component 5   | 13.89   | 86.11
Principal component 6   | 11.12   | 88.88

For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, the highest accuracy achieved when the Type II constraint (no face localization) is validated, and the corresponding EER of 2.78% is the lowest among all six EERs. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11%. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 plots the same, with points denoting the recognition accuracies for each number of principal components. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.


Figure 21: Recognition accuracy versus principal component when the face localization task is not performed.

Figure 22: Boxplot of principal components versus EER when faces are localized and the number of principal components varies between one and six.

After considering the Type II constraint, we next consider the Type I constraint, in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under the Type I constraint.

As shown in Figures 22 and 23, as the principal component varies between one and six, the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100%. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for varying numbers of principal components and different false acceptance rates (FARs); the principal components that outperform the others (2, 3, and 6) yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for varying numbers of principal components.

Figure 23: Receiver operating characteristics (ROC) curves of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy  | EER (%) | Recognition accuracy (%)
Principal component 1   | 8.5     | 91.5
Principal component 2   | 0       | 100
Principal component 3   | 0       | 100
Principal component 4   | 5.59    | 94.41
Principal component 5   | 5.98    | 94.02
Principal component 6   | 0       | 100

The points on the curve marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when localized faces are used.

To validate the Type II constraint for the fusions and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of faces that are not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database, together with the hybrid heuristics-based cohort selection. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces.


Figure 24: Recognition accuracy versus principal component when the face localization task is performed.

Table 7: EER and recognition accuracies of the proposed face matching strategy for the BioID face database when face localization is not performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization                  | 5.55    | 94.45
Max fusion rule + min-max normalization                  | 0       | 100
Sum fusion rule + T-norm (hybrid heuristic)              | 0.5     | 99.5
Max fusion rule + T-norm (hybrid heuristic)              | 0       | 100

As shown in Table 7 and Figure 25, for the first two matching strategies, in which the "sum" and "max" fusion rules are applied to fuse the six classifiers, we achieve recognition accuracies of 94.45% and 100%, with EERs of 5.55% and 0%, respectively. In this case, the "max" fusion rule outperforms the "sum" fusion rule, as further illustrated by the ROC curves in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, in which the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules, respectively.

In the last segment of the experiment, we measure performance in terms of recognition accuracy and EER when faces are localized; the matching paradigms listed in Table 7 are thus also verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies. The minimal change in accuracy, 0.15% for the combination of the "sum" fusion rule and hybrid heuristics, is negligible, and the remaining combination, the "max" fusion rule with the hybrid heuristics-based cohort selection method, achieves an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the "sum" rule, which is attributed to a change in the produced cohort subset under both types of constraints (Type I and Type II).

Figure 25: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two show the accuracies obtained with it. These accuracies are obtained when a face is not localized.

Figure 26: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are not cropped and the number of principal components varies between one and six.

In Figure 28, the recognition accuracies are plotted against the matching strategies, with the accuracy points marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces are found in unrestricted conditions. Face recognition in the wild is challenging due to the nature of face acquisition in unconstrained environments: not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth.


Table 8: EERs and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization                  | 0       | 100
Max fusion rule + min-max normalization                  | 0       | 100
Sum fusion rule + T-norm (hybrid heuristic)              | 0.65    | 99.35
Max fusion rule + T-norm (hybrid heuristic)              | 0       | 100

Figure 27: Receiver operating characteristics (ROC) curves of GAR versus FAR for the sum and max fusion rules when faces are cropped and the number of principal components varies between one and six.

Because our framework is based on only the first six principal components and SIFT features, it would require the incorporation of additional tools: tools to detect and crop the face region and discard as much background as possible, with the detected face regions upsampled or downsampled to form face image vectors of uniform dimension before applying PCA; and tools to estimate pose and then apply 2D frontalization to compensate for pose variance, which would reduce the number of principal components that need to be considered.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models.

Figure 28: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for the cohort subset selection. The first two cases show the recognition accuracies without the cohort selection method, and the last two show the accuracies obtained with it. These accuracies are obtained after a face is localized.

These models include some cloud computing-based face recognition algorithms, which are limited in number, and some traditional face recognition systems that are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases for comparison with other methods; the second perspective does not utilize the concept of cloud computing. To compare the experimental results, we consider two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited in number, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and of the two systems exploited in [36, 37] in terms of recognition accuracy; it also lists the number of training samples employed during face matching by the different systems. Because the proposed system utilizes two well-known face databases, namely, BioID and FEI, and two face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                     | Face database | Number of face images | Number of distinct subjects | Recognition accuracy (%) | Error (%)
Cloud vision [36]          | ORL           | 400                   | 40                          | 97.08                    | 2.92
Mobile cloud [37]          | Local DB      | 50                    | 5                           | 85                       | 15
Facial features [38]       | BioID         | 1521                  | 23                          | 92.35                    | 7.65
2D-PCA + PSO-SVM [39]      | BioID         | 1521                  | 23                          | 95.65                    | 4.35
Proposed Type I            | BioID         | 1521                  | 23                          | 100                      | 0
Proposed Type I            | FEI           | 2800                  | 20                          | 100                      | 0
Proposed Type II           | BioID         | 1521                  | 23                          | 97.22                    | 2.78
Proposed Type II           | FEI           | 2800                  | 20                          | 99.5                     | 0.5
Proposed Type I + fusion   | BioID         | 1521                  | 23                          | 100                      | 0
Proposed Type I + fusion   | FEI           | 2800                  | 20                          | 100                      | 0
Proposed Type II + fusion  | BioID         | 1521                  | 23                          | 100                      | 0
Proposed Type II + fusion  | FEI           | 2800                  | 20                          | 100                      | 0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by its modules as a function of the length of the input. It is estimated by counting the number of operations performed by the cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each module is computed first, and the overall complexity of the ensemble network is then obtained by summing them.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let d be the number of pixels (height × width) in each grayscale face image, and let N be the number of face images. PCA computation has the following steps.

(i) Finding the mean of the N samples takes O(Nd) (N additions of d-dimensional vectors, after which the sum is divided by N).

(ii) Because the covariance matrix is symmetric, deriving only its upper triangular elements is sufficient. Each of the d(d + 1)/2 elements requires N multiplications and additions, leading to O(Nd^2) time complexity. Let the d × N dimensional mean-centered data matrix be M.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of MM^T (of dimension d × d), we compute M^T M (of dimension N × N), which requires d multiplications and d additions for each of the N^2 elements, hence O(N^2 d) time complexity (generally N ≪ d).

(iv) Eigendecomposition of the M^T M matrix by the SVD method requires O(N^3).

(v) Sorting the eigenvectors (each of dimension d × 1) in descending order of the eigenvalues requires O(N^2); taking only the first six principal component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two vectors, yielding a scalar value; hence, d multiplications and d additions per projection lead to O(d), and six such projections require O(6d). A minimal sketch of these steps is given below.
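The following Python/NumPy sketch implements the eigenface computation via the KL trick described in steps (i)-(v); the function names and normalization details are illustrative assumptions, not the authors' code.

import numpy as np

def eigenfaces(X, n_components=6):
    # X: d x N matrix with one vectorized grayscale face per column.
    d, N = X.shape
    mu = X.mean(axis=1, keepdims=True)              # step (i): O(Nd)
    M = X - mu                                      # mean-centered data matrix
    # Step (iii): KL trick; eigendecompose the small N x N matrix M^T M
    # instead of the d x d covariance M M^T: O(N^2 d) + O(N^3).
    evals, V = np.linalg.eigh(M.T @ M)
    order = np.argsort(evals)[::-1][:n_components]  # step (v): sort descending
    U = M @ V[:, order]                             # back to d-dimensional eigenfaces
    U /= np.linalg.norm(U, axis=0)                  # unit-norm eigenfaces
    return mu, U

def project(face, mu, U):
    # One dot product per eigenface: O(d) each, O(6d) for six projections.
    return U.T @ (face.reshape(-1, 1) - mu)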

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be M × N pixels, and let the face image be represented as a column vector of dimension d (= M × N). A Gaussian kernel of dimension w × w is used, each octave has s + 3 scales, and a total of L octaves are used. The important phases of SIFT are as follows.

(i) Extrema detection:

(a) Computing the scale space:

(1) In each scale, w^2 multiplications and w^2 − 1 additions are performed per pixel by the convolution operation, so O(dw^2).
(2) There are s + 3 scales in each octave, so O(dw^2(s + 3)).
(3) With L octaves, O(Ldw^2(s + 3)).
(4) Overall, O(dw^2 s) is found.

(b) Computing the s + 2 Difference of Gaussians (DoG) images per octave takes O((s + 2)d), with scale factor k = 2^(1/s); over the L octaves, O(sd).

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).


(iv) Keypoint descriptor computation: if a p × p neighborhood of each keypoint is considered, then O(p^2 d) is found. A short extraction sketch using an off-the-shelf implementation follows this list.
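For reference, the phases above correspond to what OpenCV's SIFT implementation (cv2.SIFT_create, available in OpenCV 4.4 and later) performs internally; this usage sketch is an illustration, not the authors' pipeline, and the file path is hypothetical.

import cv2

# Keypoint extraction for one face instance. nOctaveLayers corresponds to s,
# while contrastThreshold and edgeThreshold drive the elimination of
# low-contrast and edge points in the keypoint localization phase.
sift = cv2.SIFT_create(nOctaveLayers=3, contrastThreshold=0.04, edgeThreshold=10)
img = cv2.imread("face_instance.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
keypoints, descriptors = sift.detectAndCompute(img, None)    # descriptors: k x 128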

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, so the complexity is linear, O(n) with n = 128. Let the ith eigenface of the probe face image and of the reference face image have k1 and k2 surviving keypoints, respectively. Each keypoint from k1 is compared with each of the k2 keypoints by Euclidean distance, so a single eigenface pair costs O(k1 k2), and the six pairs of eigenfaces cost O(6n^2) (with n now denoting the number of keypoints). If there are M reference faces in the gallery, the total complexity is O(6Mn^2).
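A minimal sketch of this brute-force comparison is shown below; the k = 2 nearest-neighbor acceptance via a ratio test is an assumption, since the paper states only that k-NN generates the common matching points.

import numpy as np

def match_score(desc_probe, desc_ref, ratio=0.8):
    # desc_probe: k1 x 128 and desc_ref: k2 x 128 SIFT descriptors (k2 >= 2).
    # Brute-force Euclidean distances: O(k1 * k2) comparisons, each O(n)
    # with n = 128, as analyzed above.
    dists = np.linalg.norm(desc_probe[:, None, :] - desc_ref[None, :, :], axis=2)
    # k-NN acceptance with k = 2: keep a probe keypoint when its nearest
    # reference descriptor is sufficiently closer than the second nearest.
    two_nearest = np.partition(dists, 1, axis=1)
    matches = two_nearest[:, 0] < ratio * two_nearest[:, 1]
    return int(matches.sum())  # number of common matching points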

(d) Time Complexity of Fusion. As the domains of the six individual matchers differ, the min-max normalization technique is used to bring them into a uniform domain. Computing each normalized value requires two subtraction operations and one division operation, that is, constant time O(1). The n pairs of probe and gallery images for the ith principal component require O(n), so six principal components require O(6n). Finally, the sum fusion requires five additions for each pair of probe and reference faces, a constant time O(1); consequently, for n pairs of probe and reference face images, it requires O(n).

(e) Time Complexity of Cohort Selection. Cohort selection performs four operations: computing the correlations in the search space, inserting the correlation values into the OPEN list, inserting the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally dividing the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time per value, the third takes O(n^2) as it follows the convention of insertion sort, and the last takes linear time. Overall, cohort selection requires O(n^2).
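The sketch below mirrors the four operations enumerated above; since the paper does not fix the data layout, the list-based representation and the equal three-way split are assumptions.

def select_cohort_subsets(correlations):
    # Operation 1: the correlation values over the search space are assumed
    # to be precomputed and passed in.
    # Operation 2: insert them into the OPEN list, O(1) per value.
    open_list = list(correlations)
    # Operation 3: insert each value at its proper position in the CLOSED
    # list, following insertion sort, O(n^2) overall.
    closed = []
    for v in open_list:
        i = len(closed)
        while i > 0 and closed[i - 1] > v:
            i -= 1
        closed.insert(i, v)
    # Operation 4: divide the CLOSED list into three disjoint sets, O(n).
    n = len(closed)
    return closed[: n // 3], closed[n // 3 : 2 * n // 3], closed[2 * n // 3 :]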

Now, the overall time complexity T of the ensemble network is calculated as follows:

T = max(max(T1) + max(T2) + max(T3) + max(T4) + max(T5))
  = max(O(n^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)) ≈ O(n^3),    (8)

where T1 is the time complexity of the PCA computation, T2 is the time complexity of the SIFT keypoint extraction, T3 is the time complexity of matching, T4 is the time complexity of fusion, and T5 is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(n^3), assuming that both the enrollment time and the verification time through the cloud network are constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which a cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed sets of principal components, ranging from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system which directly or indirectly improve its total performance in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629-I-632, Honolulu, Hawaii, USA, April 2007.


[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599-616, 2009.

[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266-278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654-659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255-261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86-101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84-91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195-200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140-146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553-1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211-219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63-68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028-1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346-359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506-II-513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International MultiConference of Engineers and Computer Scientists, pp. 1-5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58-63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109-122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450-455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229-237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335-2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270-1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063-2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175-185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51-60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902-913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6-8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90-95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151-155, 2014.


[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23-35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 2012 10th International Conference on ICT and Knowledge Engineering, pp. 140-145, Bangkok, Thailand, November 2012.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 11: Research Article Multithread Face Recognition in Clouddownloads.hindawi.com/journals/js/2016/2575904.pdf · many studies. Among various feature extraction approaches, appearance-based,

Journal of Sensors 11

0 10 20 30 40 50 60 70 80 90 10080

82

84

86

88

90

92

94

96

98

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 10 Receiver operating characteristics (ROC) curve of GARversus FAR is shown when faces are not cropped and the number ofprincipal components varies between one and six

Table 1 EER and recognition accuracies of the proposed facematching strategy on FEI database when face localization is notperformed and the number of principal components varies betweenone and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 707 9293Principal component 2 05 995Principal component 3 097 9903Principal component 4 101 9899Principal component 5 1 99Principal component 6 1 99

legitimate face area with explicit information about nonfaceareas such as the forehead hair ear and chin areas Theseareas may provide crucial information which is consideredadditional information in a decoupled feature vector Thesefeature points contribute to recognizing faces with varyingprincipal components Figure 11 shows recognition accuraciesfor different values of principal component determined onFEI database while face localization does not perform Onthe other hand Figure 14 shows recognition accuracies fordifferent values of principal component determined on FEIdatabase while face localization does perform

Figure 13 shows the ROC curves that are determinedfrom extensive experiments of the proposed algorithm onthe FEI face database when a face image is localized andonly the face part is extracted In this case a maximum EER

9293

995 9903 9899 99 99

88

90

92

94

96

98

100

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 11 Recognition accuracy versus principal component curvewhen face localization task is not performed

1 2 3 4 5 6

0

1

2

3

4

5

6

Equa

l err

or ra

te (

)

Number of principal components

Figure 12 Boxplot of principal components versus EER whenfaces are cropped and the number of principal components variesbetween one and six

of 6 is attained when the principal component is set to1 in other cases the EERs are as low as the EERs in caseswith nonlocalized faces Table 2 depicts the efficacy of theproposed face matching strategy as recognition accuraciesand EERs With the exception of principal component 1 allremaining cases have shown tremendous improvements overthe results listed in Table 1 By integrating feature-based andappearance-based approaches we have made the algorithmrobust not only for facial expressions but also for the areasthat correspond to major salient face regions (both eyesnose and mouth) which have a significant impact on facerecognition performance The ROC curves in Figure 13 alsoshow that the algorithm is inaccurate when the principalcomponents vary between two and six even in the caseof principal component 3 when an EER and recognitionaccuracy of 0 and 100 respectively are attained

Based on two major considerationsmdashwith and withoutface localizationsmdashwe investigate the proposed algorithm byfusing face instances under a varying number of principalcomponents To demonstrate the robustness of the systemwe apply two fusion rulesmdashthe ldquomaxrdquo fusion rule and the

12 Journal of Sensors

0 10 20 30 40 50 60 70 80 90 10080

82

84

86

88

90

92

94

96

98

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 13 Receiver operating characteristics (ROC) curve of GARversus FAR when faces are cropped and the number of principalcomponents varies between one and six

Table 2 EER and recognition accuracies of the proposed facematching strategy on the FEI database when face localization isperformed and the number of principal components varies betweenone and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 6 94Principal component 2 05 995Principal component 3 0 100Principal component 4 05 995Principal component 5 05 995Principal component 6 1 990

ldquosumrdquo fusion rule However the effect of localizing the facearea and not localizing the face area is investigated andthe conventions with these two fusion rules are integratedFigure 15 shows the ROC curves that are obtained by fusingthe face instances of principal components 1 to 6 withoutperforming face localization and the matching scores aregenerated from a single fused classifier in which all sixclassifiers are fused in terms of matching scores by applyingthe ldquomaxrdquo and ldquosumrdquo fusion rules When we apply the ldquosumrdquofusion rule we obtain 100 recognition accuracy whereas985 accuracy is obtained when the ldquomaxrdquo fusion rule isapplied In this case the ldquosumrdquo fusion rule outperformsthe ldquomaxrdquo fusion rule When a hybrid heuristic statistics-based cohort selection method is applied to fusion rules fora fusion-based classifier the ldquosumrdquo and ldquomaxrdquo fusion rules

94

995 100 995 995 99

919293949596979899

100101

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 14 Recognition accuracy versus principal component curvewhen face localization task is performed

Table 3 The table lists EER and recognition accuracies of theproposed face matching strategy on the FEI database when facelocalization is not performed and principal components are fusedusing the sum and max fusion rules to form a single set of matchingscores In addition the hybrid heuristic statistics-based cohortselection technique is applied which uses T-norm normalizationtechniques for match score normalization

Fusionrulenormalizationheuristiccohort selection

EER () Recognitionaccuracy ()

Sum fusion rule + min-maxnormalization 00 100

Max fusion rule + min-maxnormalization 15 985

Sum fusion rule + T-norm (hybridheuristic) 05 995

Max fusion rule + T-norm (hybridheuristic) 05 995

achieve 995 recognition accuracy In the general contextthe proposed cohort selection method degrades recognitionaccuracy by 05 when it is compared with the ldquosumrdquo fusionrule-based classifier without applying the cohort selectionmethod However hybrid heuristic statistics render the facematching algorithm stable and consistent for both the fusionrules (sum max) with 995 recognition accuracy In Table 3the recognition accuracies are shown and in Figure 16 thesame has been exhibited for ldquosumrdquo and ldquomaxrdquo fusion rules

After evaluating the performance of each of the classifiersin which the face instance is determined by setting the valueof the principal component to one value among 1 to 6 andthe fusion of face instances without face localization weevaluate the efficacy of the proposed algorithm by fusing allsix face instances in terms of the matching scores obtainedfrom each classifier In this case the face image is manuallylocalized and the face area is extracted for the recognitiontask Similar to the previous approach we apply two fusionrules to integrate the matching scores namely ldquosumrdquo andldquomaxrdquo fusion rules In addition we exploit the proposedstatistical cohort selection technique which is known ashybrid heuristic statistics For cohort score normalization weapply the T-norm normalization technique This technique

Journal of Sensors 13

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Max fusion ruleSum fusion rule

Figure 15 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule andthe max fusion rule when faces are not cropped and the number ofprincipal components varies between 1 and 6

100

985

995 995

975

98

985

99

995

100

1005

1 2 3 4

Accu

racy

()

Matching strategy

Figure 16 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying max and sum fusion rules with and without applyinghybrid heuristic statistics for the cohort subset selection The firsttwo cases show the recognition accuracies without applying thecohort selectionmethod and the last two cases show the recognitionaccuracies when the cohort selection method is applied Theseaccuracies are obtained without face localization

maps the cohort scores into a normalized score set that exhibits the characteristics of each score in the cohort subset and enables the correct match to be obtained rapidly by the system. As shown in Table 4 and Figures 17 and 18, when the “sum” and “max” fusion rules are applied with the min-max normalization technique to the fused match scores, we obtain 100% recognition accuracy. In the next step, we exploit the hybrid heuristic-based cohort selection technique and again achieve 100% recognition accuracy in both cases, that is, when the “sum” and the “max” fusion rules are applied. The effect of face localization serves a central role in increasing the

Table 4: EER and recognition accuracies of the proposed face matching strategy on the FEI database when face localization is performed and the principal components are fused using the sum and max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which uses T-norm normalization for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0.0 | 100
Max fusion rule + min-max normalization | 0.0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.0 | 100
Max fusion rule + T-norm (hybrid heuristic) | 0.0 | 100

Figure 17: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

recognition accuracy to 100% in all four cases. Due to face localization, however, the numbers of feature points extracted from the face images differ from those of the nonlocalized faces obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.
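The T-norm cohort normalization used above can be sketched in a few lines, assuming the common definition in which a raw score is centered and scaled by the cohort's mean and standard deviation; the helper name is illustrative, and the cohort scores are assumed to come from the selected cohort subset.

```python
import numpy as np

def t_norm(raw_score, cohort_scores):
    """T-norm: normalize a raw match score by its cohort statistics."""
    cohort = np.asarray(cohort_scores, dtype=float)
    mu, sigma = cohort.mean(), cohort.std()
    # Guard against a degenerate cohort with zero spread.
    return (raw_score - mu) / sigma if sigma > 0 else raw_score - mu
```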

6.3.2. BioID Database. In this section, we evaluate the performance of the proposed face matching strategy on the BioID face database considering two constraints. Under the first constraint, the face matching strategy is applied when faces are not localized, and recognition performance is measured in terms of the number of probe faces that are successfully recognized. However, the faces provided in the BioID face database are captured in a variety


Figure 18: Recognition accuracy (%) versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for cohort subset selection (100% in all four cases). The first two cases show the recognition accuracies without the cohort selection method, and the last two cases show the accuracies obtained by applying it. These accuracies are obtained after a face is localized.

Figure 19: Boxplot of EER (%) versus principal components when faces are not cropped and the number of principal components varies between one and six.

of environments and illumination conditions. In addition, the faces in the database show various facial expressions. Therefore, evaluating the performance of any face matching method is challenging because the positions and locations of frontal view images may be tracked in a variety of environments with changes in illumination. Thus, we need a robust technique that can capture and process all types of distinct features and that yields encouraging results under these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly for the principal component range of

Figure 20: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are not cropped and the number of principal components varies between one and six.

Table 5: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy (principal component) | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.53 | 91.47
Principal component 2 | 2.78 | 97.22
Principal component 3 | 8.5 | 91.5
Principal component 4 | 13.89 | 86.11
Principal component 5 | 13.89 | 86.11
Principal component 6 | 11.12 | 88.88

one to six. For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint (no localization of the face image) is validated. An EER of 2.78%, the lowest among all six EERs, is obtained. For principal components 4 and 5, we obtain an EER of 13.89% and a recognition accuracy of 86.11% each. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 depicts the same phenomena with a curve whose points denote the recognition accuracies corresponding to principal components one through six. The ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.


Figure 21: Recognition accuracy (%) versus principal component when the face localization task is not performed (91.47%, 97.22%, 91.5%, 86.11%, 86.11%, and 88.88% for principal components 1–6).

Figure 22: Boxplot of EER (%) versus principal components when faces are localized and the number of principal components varies between one and six.

After considering the Type II constraint, we consider the Type I constraint, in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under the Type I constraint.

As shown in Figures 22 and 23, the principal component varies between one and six, and the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100% in recognizing faces. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components and different false acceptance rates (FARs). The principal components (2, 3, and 6) for which the recognition accuracy outperforms the other components yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components. The points on the curve that are

Figure 23: Receiver operating characteristics (ROC) curve of GAR versus FAR when faces are cropped and the number of principal components varies between one and six.

Table 6: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy (principal component) | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.5 | 91.5
Principal component 2 | 0 | 100
Principal component 3 | 0 | 100
Principal component 4 | 5.59 | 94.41
Principal component 5 | 5.98 | 94.02
Principal component 6 | 0 | 100

marked in red represent the recognition accuracies. Table 6 shows the recognition accuracies when the localized face is used.

To validate the Type II constraint for the fusion rules and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the “sum” and “max” fusion rules, as introduced for the FEI database. We also exploit the hybrid heuristics-based cohort selection together with the fusion rules. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces. As shown in Table 7 and Figure 25, for the first two matching strategies, in which the “sum” and


Figure 24: Recognition accuracy (%) versus principal component when the face localization task is performed (91.5%, 100%, 100%, 94.41%, 94.02%, and 100% for principal components 1–6).

Table 7: EER and recognition accuracies of the proposed face matching strategy for the BioID face database when face localization is not performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, a hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 5.55 | 94.45
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100

“max” fusion rules are applied to fuse the six classifiers, we achieve 94.45% and 100% recognition accuracies, respectively, with corresponding EERs of 5.55% and 0%. In this case, the “max” fusion rule outperforms the “sum” fusion rule, which is further illustrated by the ROC curves shown in Figure 26. We achieve 99.5% and 100% recognition accuracies for the next two matching strategies, in which the hybrid heuristics-based cohort selection is applied with the “sum” and “max” fusion rules, respectively.

In the last segment of the experiment, we measure performance in terms of recognition accuracy and EER when faces are localized; the matching paradigms listed in Table 7 are thus also verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the “sum” and “max” fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies, respectively. The minimal drop in accuracy of 0.65% for the combination of the “sum” fusion rule and hybrid heuristics is negligible, and the remaining combination of the “max” fusion rule and the hybrid heuristics-based cohort selection method achieves an accuracy of 100%. Therefore, we conclude that the “max” fusion rule outperforms the

Figure 25: Recognition accuracy (%) versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for cohort subset selection (94.45%, 100%, 99.5%, and 100%). The first two cases show recognition accuracies without the cohort selection method, and the last two cases show those obtained by applying it. These accuracies are obtained when a face is not localized.

Figure 26: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

“sum” rule, which is attributed to a change in the produced cohort subset for both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces are found in unrestricted conditions. Face recognition in the wild is challenging due to the nature of face acquisition in unconstrained environments. Not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth. As our framework is based on only


Table 8: EERs and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs T-norm normalization for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0 | 100
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.65 | 99.35
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100

Figure 27: Receiver operating characteristics (ROC) curve of GAR versus FAR for two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

the first six principal components and SIFT features, it requires the incorporation of various tools: tools to detect and crop the face region and discard background as far as possible, after which the detected face regions must be upsampled or downsampled to form face image vectors of uniform dimension before PCA is applied; and tools to estimate pose and then apply 2D frontalization to compensate for pose variance, which would allow fewer principal components to be considered.

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face

Figure 28: Recognition accuracy (%) versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without hybrid heuristic statistics for cohort subset selection (100%, 100%, 99.35%, and 100%). The first two cases show recognition accuracies without the cohort selection method, and the last two show those obtained by applying it. These accuracies are obtained after a face is localized.

recognition algorithms, which are limited in number, and some traditional face recognition systems, which are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases to compare with other methods; the second perspective, however, does not involve cloud computing. To compare the experimental results, we compare the proposed system with two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and the two systems exploited in [36, 37] in terms of recognition accuracy. Table 9 also lists the number of training samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based systems or systems without cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36–39].

Method | Face database | Number of face images | Number of distinct subjects | Recognition accuracy (%) | Error (%)
Cloud vision [36] | ORL | 400 | 40 | 97.08 | 2.92
Mobile cloud [37] | Local DB | 50 | 5 | 85 | 15
Facial features [38] | BioID | 1521 | 23 | 92.35 | 7.65
2D-PCA + PSO-SVM [39] | BioID | 1521 | 23 | 95.65 | 4.35
Proposed Type I | BioID | 1521 | 23 | 100 | 0
Proposed Type I | FEI | 2800 | 20 | 100 | 0
Proposed Type II | BioID | 1521 | 23 | 97.22 | 2.78
Proposed Type II | FEI | 2800 | 20 | 99.5 | 0.5
Proposed Type I + fusion | BioID | 1521 | 23 | 100 | 0
Proposed Type I + fusion | FEI | 2800 | 20 | 100 | 0
Proposed Type II + fusion | BioID | 1521 | 23 | 100 | 0
Proposed Type II + fusion | FEI | 2800 | 20 | 100 | 0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by its modules as a function of the length of the input. It is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the “sum” and “max” fusion rules, and heuristic-based cohort selection. In this section, the time complexity of each module is computed first, and then the overall complexity of the ensemble network is obtained by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let $d$ be the number of pixels (height $\times$ width) of each grayscale face image, and let $N$ be the number of face images. PCA computation has the following steps:

(i) Finding the mean of $N$ samples is $O(Nd)$ ($N$ additions of $d$-dimensional vectors, after which the sum is divided by $N$).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the $d(d+1)/2$ elements, $N$ multiplications and additions are required, leading to $O(Nd^2)$ time complexity. Let the $d \times N$ dimensional (centered) data matrix be $M$.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of $MM^T$ (of dimension $d \times d$) we compute $M^T M$ (of dimension $N \times N$), which requires $d$ multiplications and $d$ additions for each of the $N^2$ elements, hence $O(N^2 d)$ time complexity (generally $N \ll d$).

(iv) Eigendecomposition of the $M^T M$ matrix by the SVD method requires $O(N^3)$.

(v) Sorting the eigenvectors (each $d \times 1$ dimensional) in descending order of their eigenvalues requires $O(N^2)$; taking only the first six principal-component eigenvectors then requires constant time.

Projecting the probe image vector onto each eigenface requires a dot product between two $d$-dimensional vectors, that is, $d$ multiplications and $d$ additions per projection, or $O(d)$; six such projections require $O(6d)$.
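The steps above can be mirrored in a short sketch. Assuming the gallery images are flattened into a $d \times N$ matrix X, the comments mark where each counted cost arises; this illustrates the KL trick, not the paper's exact implementation.

```python
import numpy as np

def eigenfaces(X, n_components=6):
    """Eigenface computation via the Karhunen-Loeve trick (X is d x N)."""
    mean = X.mean(axis=1, keepdims=True)           # O(Nd): mean of N samples
    A = X - mean                                   # centered d x N matrix
    S = A.T @ A                                    # N x N Gram matrix: O(N^2 d)
    vals, vecs = np.linalg.eigh(S)                 # eigendecomposition: O(N^3)
    order = np.argsort(vals)[::-1][:n_components]  # sort eigenvalues descending
    U = A @ vecs[:, order]                         # back-project to d dimensions
    U /= np.linalg.norm(U, axis=0)                 # unit-norm eigenfaces
    return mean, U

# Projecting a flattened probe face x (shape (d,)) onto the six eigenfaces
# is one dot product per eigenface:  coeffs = U.T @ (x - mean.ravel())
```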

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be $M \times N$ pixels, and let the face image be represented as a column vector of dimension $d$ ($= M \times N$). A Gaussian kernel of dimension $w \times w$ is used, each octave has $s + 3$ scales, and a total of $L$ octaves is used. The important phases of SIFT are as follows:

(i) Extrema detection:

(a) Computing the scale space:
(1) In each scale, $w^2$ multiplications and $w^2 - 1$ additions are performed per pixel by the convolution operation, so $O(dw^2)$.
(2) There are $s + 3$ scales in each octave, so $O(dw^2(s + 3))$.
(3) There are $L$ octaves, so $O(Ldw^2(s + 3))$.
(4) Overall, $O(dw^2 s)$ is found.

(b) Computing the $s + 2$ Difference of Gaussians (DoG) images for each octave costs $O((s + 2)d)$, with $k = 2^{1/s}$; for $L$ octaves, $O(sd)$.

(c) Extrema detection: $O(sd)$.

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let $\alpha d$ be the number of surviving pixels, so $O(\alpha ds)$.

(iii) Orientation assignment: $O(sd)$.


(iv) Keypoint descriptor computation: if a $p \times p$ neighborhood around each keypoint is considered, then $O(p^2 d)$ is found.
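For reference, the extraction itself is available off the shelf; a sketch using OpenCV's SIFT (exposed as cv2.SIFT_create in OpenCV 4.4+) is shown below. The rescaling to 8-bit is an assumption made here because PCA-reconstructed face instances are typically float-valued.

```python
import cv2
import numpy as np

def sift_features(face_instance):
    """Extract SIFT keypoints and 128-D descriptors from one face instance."""
    # Rescale a float-valued face instance to an 8-bit grayscale image.
    img = cv2.normalize(face_instance, None, 0, 255, cv2.NORM_MINMAX)
    img = img.astype(np.uint8)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors  # descriptors: (k, 128) float32 array
```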

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, that is, linear time $O(n)$ with $n = 128$. Let the $i$th eigenface of the probe face image and of the reference face image have $k_1$ and $k_2$ surviving keypoints, respectively. Each keypoint from the $k_1$ set is compared with each of the $k_2$ keypoints by Euclidean distance, so the cost is $O(k_1 k_2)$ for a single eigenface pair and $O(6n^2)$ for six pairs of eigenfaces ($n$ = number of keypoints). If there are $M$ reference faces in the gallery, then the total complexity is $O(6Mn^2)$.
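The $O(k_1 k_2)$ pairwise comparison can be written directly, as in the sketch below; the distance threshold is purely illustrative (the paper derives matching scores from its own k-NN matching of keypoints).

```python
import numpy as np

def match_score(desc_probe, desc_gallery, dist_threshold=250.0):
    """Count probe descriptors whose nearest gallery descriptor is close.

    desc_probe: (k1, 128); desc_gallery: (k2, 128). Building the full
    pairwise Euclidean distance matrix is the O(k1 * k2) step counted above.
    """
    diff = desc_probe[:, None, :] - desc_gallery[None, :, :]  # (k1, k2, 128)
    dist = np.sqrt((diff ** 2).sum(axis=2))                   # (k1, k2)
    return int((dist.min(axis=1) < dist_threshold).sum())
```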

(d) Time Complexity of Fusion. As the score domains of the six individual matchers differ, the min-max normalization technique is used to bring them into a uniform domain. Each normalized value requires two subtractions and one division, that is, constant time $O(1)$. The $n$ pairs of probe and gallery images in the $i$th principal component require $O(n)$; for six principal components, $O(6n)$. Finally, the sum fusion requires five additions for each pair of probe and reference faces, which is constant time $O(1)$; subsequently, $n$ pairs of probe and reference face images require $O(n)$.

(e) Time Complexity of Cohort Selection. Cohort selection performs four operations: computation of correlation in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally division of the CLOSED list of $n$ correlation values into three disjoint sets. The first two operations take constant time, the third takes $O(n^2)$ as it follows the convention of insertion sort, and the last takes linear time. Overall, the time required by cohort selection is $O(n^2)$.
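The four counted operations can be sketched as follows, assuming each candidate cohort template is summarized by a score vector; which of the three resulting subsets forms the final cohort follows the paper's hybrid heuristic and is not restated here.

```python
import numpy as np

def cohort_subsets(probe_scores, candidate_score_vecs):
    """Correlate, collect (OPEN), insertion-sort (CLOSED), then split in three."""
    open_list = [float(np.corrcoef(probe_scores, c)[0, 1])
                 for c in candidate_score_vecs]  # correlation + OPEN list
    closed = []                                  # CLOSED list via insertion sort: O(n^2)
    for v in open_list:
        i = 0
        while i < len(closed) and closed[i] < v:
            i += 1
        closed.insert(i, v)
    n = len(closed)                              # final three-way split: O(n)
    return closed[: n // 3], closed[n // 3: 2 * n // 3], closed[2 * n // 3:]
```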

Now the overall time complexity $T$ of the ensemble network is calculated as

$$T = \max(T_1) + \max(T_2) + \max(T_3) + \max(T_4) + \max(T_5) = O(N^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2) \approx O(N^3), \tag{8}$$

where $T_1$ is the time complexity of PCA computation, $T_2$ is the time complexity of SIFT keypoint extraction, $T_3$ is the time complexity of matching, $T_4$ is the time complexity of fusion, and $T_5$ is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is $O(N^3)$, while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which a cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, which range from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve the total performance of the recognition system in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I629–I632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Vob, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506–II513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 12: Research Article Multithread Face Recognition in Clouddownloads.hindawi.com/journals/js/2016/2575904.pdf · many studies. Among various feature extraction approaches, appearance-based,

12 Journal of Sensors

0 10 20 30 40 50 60 70 80 90 10080

82

84

86

88

90

92

94

96

98

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 13 Receiver operating characteristics (ROC) curve of GARversus FAR when faces are cropped and the number of principalcomponents varies between one and six

Table 2 EER and recognition accuracies of the proposed facematching strategy on the FEI database when face localization isperformed and the number of principal components varies betweenone and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 6 94Principal component 2 05 995Principal component 3 0 100Principal component 4 05 995Principal component 5 05 995Principal component 6 1 990

ldquosumrdquo fusion rule However the effect of localizing the facearea and not localizing the face area is investigated andthe conventions with these two fusion rules are integratedFigure 15 shows the ROC curves that are obtained by fusingthe face instances of principal components 1 to 6 withoutperforming face localization and the matching scores aregenerated from a single fused classifier in which all sixclassifiers are fused in terms of matching scores by applyingthe ldquomaxrdquo and ldquosumrdquo fusion rules When we apply the ldquosumrdquofusion rule we obtain 100 recognition accuracy whereas985 accuracy is obtained when the ldquomaxrdquo fusion rule isapplied In this case the ldquosumrdquo fusion rule outperformsthe ldquomaxrdquo fusion rule When a hybrid heuristic statistics-based cohort selection method is applied to fusion rules fora fusion-based classifier the ldquosumrdquo and ldquomaxrdquo fusion rules

94

995 100 995 995 99

919293949596979899

100101

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 14 Recognition accuracy versus principal component curvewhen face localization task is performed

Table 3 The table lists EER and recognition accuracies of theproposed face matching strategy on the FEI database when facelocalization is not performed and principal components are fusedusing the sum and max fusion rules to form a single set of matchingscores In addition the hybrid heuristic statistics-based cohortselection technique is applied which uses T-norm normalizationtechniques for match score normalization

Fusionrulenormalizationheuristiccohort selection

EER () Recognitionaccuracy ()

Sum fusion rule + min-maxnormalization 00 100

Max fusion rule + min-maxnormalization 15 985

Sum fusion rule + T-norm (hybridheuristic) 05 995

Max fusion rule + T-norm (hybridheuristic) 05 995

achieve 995 recognition accuracy In the general contextthe proposed cohort selection method degrades recognitionaccuracy by 05 when it is compared with the ldquosumrdquo fusionrule-based classifier without applying the cohort selectionmethod However hybrid heuristic statistics render the facematching algorithm stable and consistent for both the fusionrules (sum max) with 995 recognition accuracy In Table 3the recognition accuracies are shown and in Figure 16 thesame has been exhibited for ldquosumrdquo and ldquomaxrdquo fusion rules

After evaluating the performance of each of the classifiersin which the face instance is determined by setting the valueof the principal component to one value among 1 to 6 andthe fusion of face instances without face localization weevaluate the efficacy of the proposed algorithm by fusing allsix face instances in terms of the matching scores obtainedfrom each classifier In this case the face image is manuallylocalized and the face area is extracted for the recognitiontask Similar to the previous approach we apply two fusionrules to integrate the matching scores namely ldquosumrdquo andldquomaxrdquo fusion rules In addition we exploit the proposedstatistical cohort selection technique which is known ashybrid heuristic statistics For cohort score normalization weapply the T-norm normalization technique This technique

Journal of Sensors 13

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Max fusion ruleSum fusion rule

Figure 15 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule andthe max fusion rule when faces are not cropped and the number ofprincipal components varies between 1 and 6

100

985

995 995

975

98

985

99

995

100

1005

1 2 3 4

Accu

racy

()

Matching strategy

Figure 16 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying max and sum fusion rules with and without applyinghybrid heuristic statistics for the cohort subset selection The firsttwo cases show the recognition accuracies without applying thecohort selectionmethod and the last two cases show the recognitionaccuracies when the cohort selection method is applied Theseaccuracies are obtained without face localization

maps the cohort scores into a normalized score set thatexhibits the characteristics of each of the scores in the cohortsubset and enables the correct match to be rapidly obtainedby the system As shown in Table 4 and Figures 17 and 18when the ldquosumrdquo and ldquomaxrdquo fusion rules are applied with themin-max normalization technique to the fused match scoreswe obtain 100 recognition accuracy In the next step weexploit the hybrid heuristic-based cohort selection techniqueand achieve 100 recognition accuracy in both cases whenthe ldquosumrdquo and ldquomaxrdquo fusion rules are applied The effectof face localization serves a central role in increasing the

Table 4 EER and recognition accuracies of the proposed facematching strategy on the FEI database when face localization isperformed and the principal components are fused using the sumand max fusion rules to form a single set of matching scoresIn addition the hybrid heuristic statistics-based cohort selectiontechnique which uses T-norm normalization techniques for matchscore normalization is applied

Fusionrulenormalizationheuristiccohort selection

EER () Recognitionaccuracy ()

Sum fusion rule + min-maxnormalization 00 100

Max fusion rule + min-maxnormalization 00 100

Sum fusion rule + T-norm (hybridheuristic) 00 100

Max fusion rule + T-norm (hybridheuristic) 00 100

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Sum fusion ruleMax fusion rule

Figure 17 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule and themax fusion rule when faces are cropped and the number of principalcomponents varies between one and six

recognition accuracy to cent percent in four cases Due to facelocalization however the numbers of feature points that areextracted from face images are determined to be dissimilar tothe nonlocalized face that is obtained by applying the cohortselection method These accuracies are obtained after a faceis localized

632 BioID Database In this section we evaluate the per-formance of the proposed face matching strategy for theBioID face database considering two constraints Consider-ing the first constraint the face matching strategy is appliedwhen faces are not localized and recognition performanceis measured in terms of the number of probe faces thatare successfully recognized However the faces that areprovided in the BioID face database are captured in a variety

14 Journal of Sensors

100 100 100 100

0

20

40

60

80

100

120

1 2 3 4

Accu

racy

()

Matching strategy

Figure 18 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying the maximum and sum fusion rules with and withoutapplying hybrid heuristic statistics for the cohort subset selectionThe first two cases show the recognition accuracies without applyingthe cohort selection method and the last two cases show therecognition accuracies that are obtained by applying the cohortselection method These accuracies are obtained after a face islocalized

1 2 3 4 5 6

0

2

4

6

8

10

12

14

Equa

l err

or ra

tes (

)

Principal components

Figure 19 Boxplot of principal components versus EER when facesare not cropped and the number of principal components variesbetween one and six

of environments and illumination conditions In additionthe faces in the database show various facial expressionsTherefore evaluating the performance of any face matchingis challenging because the positions and locations of frontalview imagesmay be tracked in a variety of environments withchanges in illumination Thus we need a robust techniquethat has the capability of capturing and processing all typesof distinct features and yields encouraging results in theseenvironments and variable lighting conditions Face imagesfrom the BioID database reflect these characteristics with avariety of background information and illumination changes

As shown in Figures 19 and 21 the recognition accuraciessignificantly vary for the principal component range of

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 20 Receiver operating characteristics (ROC) curve of GARversus FAR when faces are not cropped and the number of principalcomponents varies between one and six

Table 5 EER and recognition accuracies of the proposed facematching strategy for the BioID database when face localizationis not performed and the number of principal components variesbetween one and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 853 9147Principal component 2 278 9722Principal component 3 85 915Principal component 4 1389 8611Principal component 5 1389 8611Principal component 6 1112 8888

one to six For principal component 2 the proposed facematching paradigm yields a recognition accuracy of 9722which is the highest accuracy achieved when the Type IIconstraint is validated for not localizing the face image AnEER of 278 which is the lowest among all six EERs isobtained For principal components 4 and 5 we achievean EER and recognition accuracy of 1389 and 8611respectively Table 5 lists the EERs and recognition accuraciesfor all six principal components and Figure 21 shows thesame phenomena which depicts a curve with some pointsthat denote the recognition accuracies that correspond toprincipal components between one and six ROC curvesdetermined on BioID database are shown in Figure 20 forunlocalized face

Journal of Sensors 15

9147

9722

915

8611 8611

8888

80

85

90

95

100

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 21 Recognition accuracy versus principal component whenface localization task is not performed

1 2 3 4 5 6

0

1

2

3

4

5

6

7

8

Equa

l err

or ra

te (

)

Principal components

Figure 22 Boxplot of principal components versus EER whenfaces are localized and the number of principal components variesbetween one and six

After considering the Type I constraint we consider theType II constraint in which we obtain the localized face onwhich the proposed algorithm is applied Because the facearea is localized and localization is primarily performed ondegraded face images we may achieve better results with theType I constraint

As shown in Figures 22 and 23 the principal componentvaries between one and six whereas the recognition accuracyvaries with much better results However an EER of 85is obtained for principal component 1 and EERs of 559and 598 are obtained for principal components 4 and 5respectively For the remainder of the principal components(2 3 and 6) we achieve a recognition accuracy of 100in recognizing faces The ROC curves in Figure 23 showthe genuine acceptance rates (GARs) for varying numberof principal components and different false acceptance rates(FARs) The principal components (2 3 and 6) for whichthe recognition accuracy outperforms other componentsyield a recognition accuracy of 100 Figure 24 depicts therecognition accuracies that are obtained for a varying numberof principal components The points on the curve that are

0 10 20 30 40 50 60 70 80 90 10080

82

84

86

88

90

92

94

96

98

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 23 Receiver operating characteristics (ROC) curve of GARversus FAR when faces are cropped and the number of principalcomponents varies between one and six

Table 6 EER and recognition accuracies of the proposed facematching strategy for the BioID database when face localization isperformed and the number of principal components varies betweenone and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 85 915Principal component 2 0 100Principal component 3 0 100Principal component 4 559 9441Principal component 5 598 9402Principal component 6 0 100

marked in red represent the recognition accuracies Table 6shows the recognition accuracies when localized face is used

To validate the Type II constraint for fusions and hybridheuristics-based cohort selection we evaluate the perfor-mance of the proposed technique by analyzing the effect ofa face that is not localized In this experiment however weexploit the same fusion rules namely the ldquosumrdquo and ldquomaxrdquofusion rules as they are introduced to the FEI database Wealso exploit the hybrid heuristics-based cohort selection andthe fusion rules Considering the Type II constraint resultswhich are obtained on the BioID database is satisfactorywhen the fusion rules and cohort selection technique areapplied to nonlocalized faces As shown in Table 7 andfrom Figure 25 it has been observed that for the firsttwo types of matching strategies in which the ldquosumrdquo and

16 Journal of Sensors

915

100 100

9441 9402

100

86889092949698

100102

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 24 Recognition accuracy versus principal component whenface localization task is performed

Table 7 EER and recognition accuracies of the proposed facematching strategy for the BioID face database when face localizationis performed and the principal components are fused using the sumand the max fusion rules to form a single set of matching scoresIn addition a hybrid heuristic statistics-based cohort selectiontechnique which employs T-norm normalization techniques formatch score normalization is applied

Fusion rulenormalizationheuristiccohort selection EER () Recognition

accuracy ()Sum fusion rule + min-maxnormalization 555 9445

Max fusion rule + min-maxnormalization 0 100

Sum fusion rule + T-norm (hybridheuristic) 05 995

Max fusion rule + T-norm (hybridheuristic) 0 100

ldquomaxrdquo fusion rules are applied to fuse the six classifierswe can achieve a 9445 recognition accuracy and a 100recognition accuracy respectively whereas EERs of 555and 0 respectively are obtained In this case the ldquomaxrdquofusion rule outperforms the ldquosumrdquo fusion rule which isfurther illustrated in the ROC curves shown in Figure 26 Weachieve 995 and 100 recognition accuracies for the nexttwomatching strategies when hybrid heuristics-based cohortselection is applied with the ldquosumrdquo and ldquomaxrdquo fusion rules

In the last segment of the experiment we observe theperformance to bemeasured in terms of recognition accuracyand EER when faces are localized However the matchingparadigms that are listed in Table 7 have also been verifiedagainst the Type II constraint As shown in Table 8 and Fig-ure 27 the first two matching techniques which employ theldquosumrdquo and ldquomaxrdquo fusion rules attained an accuracy of 100whereas cohort-based matching techniques show abruptperformance by achieving 9935 and 100 recognitionaccuracies However a minimal change in accuracy of 015for the combination of the ldquosumrdquo fusion rule and hybridheuristics is unreasonable and the remaining combination ofthe ldquomaxrdquo fusion rule and the hybrid heuristics-based cohortselection method achieved an accuracy of 100 Thereforewe conclude that the ldquomaxrdquo fusion rule outperforms the

9445

100 995 100

919293949596979899

100101

1 2 3 4

Accu

racy

()

Matching strategy

Figure 25 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying themax and sum fusion rules with andwithout applyinghybrid heuristic statistics to the cohort subset selection The firsttwo cases show recognition accuracies without applying the cohortselectionmethod and the last two cases show recognition accuraciesthat are obtained by applying the cohort selection method Theseaccuracies are obtained when a face is not localized

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

False acceptance rate = FAR ()

Sum fusion ruleMax fusion rule

Gen

uine

acce

ptan

ce ra

te=

1minusFR

R (

)

Figure 26 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule andthe max fusion rule when faces are not cropped and the number ofprincipal components varies between one and six

ldquosumrdquo rule which is attributed to a change in the producedcohort subset for both types of constraints (Type I and TypeII) In Figure 28 the recognition accuracies are plotted againstthe matching strategies and the accuracy points are markedin blue

It would be interesting to see how the current ensembleframework would be useful for face recognition in thewild while faces are found in unrestricted conditions Facerecognition in the wild is challenging due to its nature offace acquisition method in unconstrained environment Allthe face images are not frontal Images of the same subjectmay vary in pose profile occlusion multiple backgroundface color and so forth As our framework is based on only

Journal of Sensors 17

Table 8 EERs and recognition accuracies of the proposed facematching strategy for the BioID database when face localization isperformed and the principal components are fused using the sumand the max fusion rules to form a single set of matching scoresIn addition the hybrid heuristic statistics-based cohort selectiontechnique which employs T-norm normalization techniques formatch score normalization is applied

Fusion rule / normalization / cohort selection      EER (%)   Recognition accuracy (%)
Sum fusion rule + min-max normalization             0         100
Max fusion rule + min-max normalization             0         100
Sum fusion rule + T-norm (hybrid heuristic)         0.65      99.35
Max fusion rule + T-norm (hybrid heuristic)         0         100

Figure 27: Receiver operating characteristic (ROC) curves of GAR (genuine acceptance rate = 100 − FRR, %) versus FAR (false acceptance rate, %) for the sum and max fusion rules, when faces are cropped and the number of principal components varies between one and six.
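For reference, the EERs and the ROC points (GAR = 100 − FRR plotted against FAR) reported throughout this section can be derived from two score sets, genuine and impostor, in the following way. This is a generic sketch, not the authors' code, and it assumes that higher scores indicate genuine matches.

```python
import numpy as np

def roc_curve_points(genuine, impostor):
    # Sweep a decision threshold over all observed scores.
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() * 100 for t in thresholds])  # FAR (%)
    frr = np.array([(genuine < t).mean() * 100 for t in thresholds])    # FRR (%)
    return far, 100.0 - frr  # GAR = 100 - FRR, as plotted on the y-axis

def equal_error_rate(genuine, impostor):
    # EER: the operating point where FAR and FRR (approximately) coincide.
    far, gar = roc_curve_points(genuine, impostor)
    frr = 100.0 - gar
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2.0
```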

As our framework is based on only the first six principal components and SIFT features, it requires the incorporation of various tools: tools to detect and crop the face region and to discard background as far as possible, where the detected face regions are upsampled or downsampled to form face image vectors of uniform dimension before applying PCA; and tools to estimate pose and then apply 2D frontalization to compensate for pose variance, which would reduce the number of principal components to consider.
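A sketch of what such a detection-and-resampling tool could look like with OpenCV is given below; the Haar cascade detector and the 64 × 64 target size are illustrative assumptions, not choices made in this paper, and the pose-estimation/frontalization stage is omitted.

```python
import cv2
import numpy as np

# Frontal-face Haar cascade shipped with OpenCV; other detectors would also do.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def uniform_face_vector(image_path, size=(64, 64)):
    # Detect the face, crop it, and resample it to a fixed size so that every
    # image yields a d x 1 column vector of the same dimension for PCA.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no detection; the caller decides how to proceed
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    face = cv2.resize(gray[y:y + h, x:x + w], size)     # up/downsampling
    return face.astype(np.float64).reshape(-1, 1)
```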

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel and other well-known face recognition models.

Figure 28: Recognition accuracy (%) versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules, with and without hybrid heuristic statistics for cohort subset selection. The first two cases show recognition accuracies without the cohort selection method (100% and 100%), and the last two cases show accuracies obtained with it (99.35% and 100%). These accuracies are obtained after a face is localized.

These models include some cloud computing-based face recognition algorithms, which are limited in number, and some traditional face recognition systems, which are not enabled with cloud computing infrastructures. Comparisons are performed from two different perspectives. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases to compare with other methods; the second perspective does not involve cloud computing. We compare the proposed system with two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance of the proposed system and of the two systems exploited in [36, 37] in terms of recognition accuracy. Table 9 also lists the number of training samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36–39].

Method                        Face database   Face images   Distinct subjects   Recognition accuracy (%)   Error (%)
Cloud vision [36]             ORL             400           40                  97.08                      2.92
Mobile cloud [37]             Local DB        50            5                   85                         15
Facial features [38]          BioID           1521          23                  92.35                      7.65
2D-PCA + PSO-SVM [39]         BioID           1521          23                  95.65                      4.35
Proposed, Type I              BioID           1521          23                  100                        0
Proposed, Type I              FEI             2800          20                  100                        0
Proposed, Type II             BioID           1521          23                  97.22                      2.78
Proposed, Type II             FEI             2800          20                  99.5                       0.5
Proposed, Type I + fusion     BioID           1521          23                  100                        0
Proposed, Type I + fusion     FEI             2800          20                  100                        0
Proposed, Type II + fusion    BioID           1521          23                  100                        0
Proposed, Type II + fusion    FEI             2800          20                  100                        0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules, as a function of the length of the input. It is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristic-based cohort selection. In this section the time complexity of each module is computed first, and the overall complexity of the ensemble network is then obtained by summing them together.

(a) Time Complexity of PCA Computation. For the PCA algorithm the computational bottleneck is deriving the covariance matrix. Let d be the number of pixels (height × width) of each grayscale face image, and let the number of face images be N. PCA computation has the following steps:

(i) Finding the mean of N samples is O(Nd) (N additions of d-dimensional vectors, after which the sum is divided by N).

(ii) As the covariance matrix is symmetric, deriving only the upper triangular elements is sufficient. For each of the d(d + 1)/2 elements, N multiplications and additions are required, leading to O(Nd²) time complexity. Let the d × N dimensional mean-centered data matrix be M.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of MMᵀ (of dimension d × d) we compute MᵀM (N × N), which requires d multiplications and d additions for each of the N² elements, hence O(N²d) time complexity (generally N ≪ d).

(iv) Eigen decomposition of the N × N matrix MᵀM by the SVD method requires O(N³).

(v) Sorting the eigenvectors (each of dimension d × 1) in descending order of the eigenvalues requires O(N²); taking only the first 6 principal component eigenvectors then requires constant time.

Projecting the probe image vector on each of the eigenfaces requires a dot product between two vectors, resulting in a scalar value. Hence d² multiplications and d² additions for each projection lead to O(d²), and six such projections require O(6d²).
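These steps translate almost directly into code. The following numpy sketch (with hypothetical names, purely illustrative) implements the KL trick described above: the N × N matrix MᵀM is decomposed instead of the d × d covariance matrix, and its eigenvectors are lifted back to d dimensions.

```python
import numpy as np

def eigenfaces(X, k=6):
    # X: (d, N) matrix with one face image per column, N << d.
    mean = X.mean(axis=1, keepdims=True)     # mean of N samples: O(Nd)
    M = X - mean                             # mean-centered data matrix
    small = M.T @ M                          # N x N instead of d x d: O(N^2 d)
    vals, vecs = np.linalg.eigh(small)       # eigen decomposition: O(N^3)
    order = np.argsort(vals)[::-1][:k]       # sort by descending eigenvalue
    U = M @ vecs[:, order]                   # lift back to d-dimensional space
    U /= np.linalg.norm(U, axis=0)           # unit-norm eigenfaces
    return mean, U

def project(face_vector, mean, U):
    # One dot product per eigenface; k = 6 projections for a probe image.
    return U.T @ (face_vector - mean)
```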

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be M × N pixels, and let the face image be represented as a column vector of dimension d (= M × N). A Gaussian kernel of dimension w × w is used, each octave has s + 3 scales, and a total of L octaves are used. The important phases of SIFT are as follows:

(i) Extrema detection:

(a) Computing the scale space:

(1) In each scale, w² multiplications and w² − 1 additions are performed by the convolution operation for each pixel, so O(dw²).
(2) There are s + 3 scales in each octave, so O(dw²(s + 3)).
(3) L is the number of octaves, so O(Ldw²(s + 3)).
(4) Overall, O(dw²s).

(b) Computing the s + 2 Difference of Gaussians (DoG) images for each octave costs O((s + 2) × d), with scale factor k = 2^(1/s); so for L octaves, O(sd).

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).


(iv) Keypoint descriptor computation: if a p × p neighborhood of the keypoint is considered, then O(p²d).
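In practice these phases are available off the shelf; for instance, OpenCV's SIFT implementation maps its parameters onto the quantities used above. The sketch below assumes opencv-python 4.4 or later, where cv2.SIFT_create is exposed; the threshold values shown are OpenCV's defaults, not parameters taken from this paper.

```python
import cv2

def sift_keypoints(gray_image, s=3):
    # nOctaveLayers corresponds to s (each octave then holds s + 3
    # Gaussian-blurred images); the two thresholds implement the
    # low-contrast and edge-response elimination of step (ii).
    sift = cv2.SIFT_create(nOctaveLayers=s,
                           contrastThreshold=0.04,
                           edgeThreshold=10)
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors  # descriptors: (number of keypoints, 128)
```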

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, so the complexity is linear, O(n) with n = 128. Let the ith eigenface of the probe face image and of the reference face image have k₁ and k₂ surviving keypoints, respectively. Each keypoint from k₁ is compared to each of the k₂ keypoints by Euclidean distance, so the cost is O(k₁k₂) for a single eigenface pair and O(6n²) for the 6 pairs of eigenfaces (n = number of keypoints). If there are M reference faces in the gallery, then the total complexity is O(6Mn²).
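A brute-force version of this comparison, with the number of close descriptor pairs used as the match score (a nearest-neighbor variant of the k-NN matching used in the paper), can be sketched as follows; the distance threshold is an illustrative assumption, not a value reported by the authors.

```python
import numpy as np

def match_score(probe_desc, ref_desc, threshold=250.0):
    # probe_desc: (k1, 128) and ref_desc: (k2, 128) SIFT descriptors.
    # All k1 * k2 Euclidean distances are computed, and the number of probe
    # keypoints whose nearest reference keypoint lies below the (assumed)
    # distance threshold is returned as the matching score.
    diff = probe_desc[:, None, :] - ref_desc[None, :, :]  # (k1, k2, 128)
    dist = np.sqrt((diff ** 2).sum(axis=-1))              # (k1, k2) distances
    nearest = dist.min(axis=1)                            # best match per probe keypoint
    return int((nearest < threshold).sum())
```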

(d) Time Complexity of Fusion. As the domains of the six individual matchers differ, the min-max normalization technique is used to bring them into a uniform domain. Computing each normalized value takes two subtractions and one division, that is, constant time O(1); the n pairs of probe and gallery images in the ith principal component require O(n), so six principal components require O(6n). Finally, the sum fusion requires five additions for each pair of probe and reference faces, a constant time O(1); for n pairs of probe and reference face images it requires O(n).

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations: computation of correlations in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally division of the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time, the third operation takes O(n²) as it follows the convention of insertion sort, and the last operation takes linear time. The overall time required by cohort selection is therefore O(n²).
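The four operations can be sketched as below. This is an illustrative reconstruction from the description in this section, with hypothetical names (the exact heuristic used to form the search space is defined earlier in the paper); the T-norm step shows how a selected cohort subset would then normalize a raw score.

```python
import numpy as np

def select_cohort_subsets(cohort_scores, probe_scores):
    # cohort_scores: (C, n) matrix, one score vector per cohort model;
    # probe_scores: (n,) score vector for the claimed identity.
    # OPEN list: correlation of every cohort model with the claimed identity.
    open_list = [float(np.corrcoef(c, probe_scores)[0, 1]) for c in cohort_scores]
    # CLOSED list: the same values kept ordered by insertion sort (O(n^2)).
    closed = []
    for value in open_list:
        pos = 0
        while pos < len(closed) and closed[pos] < value:
            pos += 1
        closed.insert(pos, value)
    # Divide the sorted list into three disjoint sets (linear time).
    third = len(closed) // 3
    return closed[:third], closed[third:2 * third], closed[2 * third:]

def t_norm(raw_score, cohort_subset):
    # T-norm: center and scale a raw score by the selected cohort statistics.
    mu, sigma = float(np.mean(cohort_subset)), float(np.std(cohort_subset))
    return (raw_score - mu) / sigma if sigma > 0 else raw_score
```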

Now the overall time complexity T of the ensemble network is calculated as follows:

T = max(max(T₁) + max(T₂) + max(T₃) + max(T₄) + max(T₅))
  = max(O(N³) + O(dw²s) + O(Mn²) + O(n) + O(n²)) ≈ O(N³),  (8)

where T₁ is the time complexity of PCA computation, T₂ of SIFT keypoint extraction, T₃ of matching, T₄ of fusion, and T₅ of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(N³), while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method with a fixed set of principal components ranging from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and to generate the number of matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve its total performance in various dimensions. To achieve robust performance we have analyzed the following: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented, and the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I629–I632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506–II513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and A. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 2012 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 13: Research Article Multithread Face Recognition in Clouddownloads.hindawi.com/journals/js/2016/2575904.pdf · many studies. Among various feature extraction approaches, appearance-based,

Journal of Sensors 13

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Max fusion ruleSum fusion rule

Figure 15 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule andthe max fusion rule when faces are not cropped and the number ofprincipal components varies between 1 and 6

100

985

995 995

975

98

985

99

995

100

1005

1 2 3 4

Accu

racy

()

Matching strategy

Figure 16 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying max and sum fusion rules with and without applyinghybrid heuristic statistics for the cohort subset selection The firsttwo cases show the recognition accuracies without applying thecohort selectionmethod and the last two cases show the recognitionaccuracies when the cohort selection method is applied Theseaccuracies are obtained without face localization

maps the cohort scores into a normalized score set thatexhibits the characteristics of each of the scores in the cohortsubset and enables the correct match to be rapidly obtainedby the system As shown in Table 4 and Figures 17 and 18when the ldquosumrdquo and ldquomaxrdquo fusion rules are applied with themin-max normalization technique to the fused match scoreswe obtain 100 recognition accuracy In the next step weexploit the hybrid heuristic-based cohort selection techniqueand achieve 100 recognition accuracy in both cases whenthe ldquosumrdquo and ldquomaxrdquo fusion rules are applied The effectof face localization serves a central role in increasing the

Table 4 EER and recognition accuracies of the proposed facematching strategy on the FEI database when face localization isperformed and the principal components are fused using the sumand max fusion rules to form a single set of matching scoresIn addition the hybrid heuristic statistics-based cohort selectiontechnique which uses T-norm normalization techniques for matchscore normalization is applied

Fusionrulenormalizationheuristiccohort selection

EER () Recognitionaccuracy ()

Sum fusion rule + min-maxnormalization 00 100

Max fusion rule + min-maxnormalization 00 100

Sum fusion rule + T-norm (hybridheuristic) 00 100

Max fusion rule + T-norm (hybridheuristic) 00 100

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Sum fusion ruleMax fusion rule

Figure 17 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule and themax fusion rule when faces are cropped and the number of principalcomponents varies between one and six

recognition accuracy to cent percent in four cases Due to facelocalization however the numbers of feature points that areextracted from face images are determined to be dissimilar tothe nonlocalized face that is obtained by applying the cohortselection method These accuracies are obtained after a faceis localized

632 BioID Database In this section we evaluate the per-formance of the proposed face matching strategy for theBioID face database considering two constraints Consider-ing the first constraint the face matching strategy is appliedwhen faces are not localized and recognition performanceis measured in terms of the number of probe faces thatare successfully recognized However the faces that areprovided in the BioID face database are captured in a variety

14 Journal of Sensors

100 100 100 100

0

20

40

60

80

100

120

1 2 3 4

Accu

racy

()

Matching strategy

Figure 18 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying the maximum and sum fusion rules with and withoutapplying hybrid heuristic statistics for the cohort subset selectionThe first two cases show the recognition accuracies without applyingthe cohort selection method and the last two cases show therecognition accuracies that are obtained by applying the cohortselection method These accuracies are obtained after a face islocalized

1 2 3 4 5 6

0

2

4

6

8

10

12

14

Equa

l err

or ra

tes (

)

Principal components

Figure 19 Boxplot of principal components versus EER when facesare not cropped and the number of principal components variesbetween one and six

of environments and illumination conditions In additionthe faces in the database show various facial expressionsTherefore evaluating the performance of any face matchingis challenging because the positions and locations of frontalview imagesmay be tracked in a variety of environments withchanges in illumination Thus we need a robust techniquethat has the capability of capturing and processing all typesof distinct features and yields encouraging results in theseenvironments and variable lighting conditions Face imagesfrom the BioID database reflect these characteristics with avariety of background information and illumination changes

As shown in Figures 19 and 21 the recognition accuraciessignificantly vary for the principal component range of

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 20 Receiver operating characteristics (ROC) curve of GARversus FAR when faces are not cropped and the number of principalcomponents varies between one and six

Table 5 EER and recognition accuracies of the proposed facematching strategy for the BioID database when face localizationis not performed and the number of principal components variesbetween one and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 853 9147Principal component 2 278 9722Principal component 3 85 915Principal component 4 1389 8611Principal component 5 1389 8611Principal component 6 1112 8888

one to six For principal component 2 the proposed facematching paradigm yields a recognition accuracy of 9722which is the highest accuracy achieved when the Type IIconstraint is validated for not localizing the face image AnEER of 278 which is the lowest among all six EERs isobtained For principal components 4 and 5 we achievean EER and recognition accuracy of 1389 and 8611respectively Table 5 lists the EERs and recognition accuraciesfor all six principal components and Figure 21 shows thesame phenomena which depicts a curve with some pointsthat denote the recognition accuracies that correspond toprincipal components between one and six ROC curvesdetermined on BioID database are shown in Figure 20 forunlocalized face

Journal of Sensors 15

9147

9722

915

8611 8611

8888

80

85

90

95

100

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 21 Recognition accuracy versus principal component whenface localization task is not performed

1 2 3 4 5 6

0

1

2

3

4

5

6

7

8

Equa

l err

or ra

te (

)

Principal components

Figure 22 Boxplot of principal components versus EER whenfaces are localized and the number of principal components variesbetween one and six

After considering the Type I constraint we consider theType II constraint in which we obtain the localized face onwhich the proposed algorithm is applied Because the facearea is localized and localization is primarily performed ondegraded face images we may achieve better results with theType I constraint

As shown in Figures 22 and 23 the principal componentvaries between one and six whereas the recognition accuracyvaries with much better results However an EER of 85is obtained for principal component 1 and EERs of 559and 598 are obtained for principal components 4 and 5respectively For the remainder of the principal components(2 3 and 6) we achieve a recognition accuracy of 100in recognizing faces The ROC curves in Figure 23 showthe genuine acceptance rates (GARs) for varying numberof principal components and different false acceptance rates(FARs) The principal components (2 3 and 6) for whichthe recognition accuracy outperforms other componentsyield a recognition accuracy of 100 Figure 24 depicts therecognition accuracies that are obtained for a varying numberof principal components The points on the curve that are

0 10 20 30 40 50 60 70 80 90 10080

82

84

86

88

90

92

94

96

98

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 23 Receiver operating characteristics (ROC) curve of GARversus FAR when faces are cropped and the number of principalcomponents varies between one and six

Table 6 EER and recognition accuracies of the proposed facematching strategy for the BioID database when face localization isperformed and the number of principal components varies betweenone and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 85 915Principal component 2 0 100Principal component 3 0 100Principal component 4 559 9441Principal component 5 598 9402Principal component 6 0 100

marked in red represent the recognition accuracies Table 6shows the recognition accuracies when localized face is used

To validate the Type II constraint for fusions and hybridheuristics-based cohort selection we evaluate the perfor-mance of the proposed technique by analyzing the effect ofa face that is not localized In this experiment however weexploit the same fusion rules namely the ldquosumrdquo and ldquomaxrdquofusion rules as they are introduced to the FEI database Wealso exploit the hybrid heuristics-based cohort selection andthe fusion rules Considering the Type II constraint resultswhich are obtained on the BioID database is satisfactorywhen the fusion rules and cohort selection technique areapplied to nonlocalized faces As shown in Table 7 andfrom Figure 25 it has been observed that for the firsttwo types of matching strategies in which the ldquosumrdquo and

16 Journal of Sensors

915

100 100

9441 9402

100

86889092949698

100102

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 24 Recognition accuracy versus principal component whenface localization task is performed

Table 7 EER and recognition accuracies of the proposed facematching strategy for the BioID face database when face localizationis performed and the principal components are fused using the sumand the max fusion rules to form a single set of matching scoresIn addition a hybrid heuristic statistics-based cohort selectiontechnique which employs T-norm normalization techniques formatch score normalization is applied

Fusion rulenormalizationheuristiccohort selection EER () Recognition

accuracy ()Sum fusion rule + min-maxnormalization 555 9445

Max fusion rule + min-maxnormalization 0 100

Sum fusion rule + T-norm (hybridheuristic) 05 995

Max fusion rule + T-norm (hybridheuristic) 0 100

ldquomaxrdquo fusion rules are applied to fuse the six classifierswe can achieve a 9445 recognition accuracy and a 100recognition accuracy respectively whereas EERs of 555and 0 respectively are obtained In this case the ldquomaxrdquofusion rule outperforms the ldquosumrdquo fusion rule which isfurther illustrated in the ROC curves shown in Figure 26 Weachieve 995 and 100 recognition accuracies for the nexttwomatching strategies when hybrid heuristics-based cohortselection is applied with the ldquosumrdquo and ldquomaxrdquo fusion rules

In the last segment of the experiment we observe theperformance to bemeasured in terms of recognition accuracyand EER when faces are localized However the matchingparadigms that are listed in Table 7 have also been verifiedagainst the Type II constraint As shown in Table 8 and Fig-ure 27 the first two matching techniques which employ theldquosumrdquo and ldquomaxrdquo fusion rules attained an accuracy of 100whereas cohort-based matching techniques show abruptperformance by achieving 9935 and 100 recognitionaccuracies However a minimal change in accuracy of 015for the combination of the ldquosumrdquo fusion rule and hybridheuristics is unreasonable and the remaining combination ofthe ldquomaxrdquo fusion rule and the hybrid heuristics-based cohortselection method achieved an accuracy of 100 Thereforewe conclude that the ldquomaxrdquo fusion rule outperforms the

9445

100 995 100

919293949596979899

100101

1 2 3 4

Accu

racy

()

Matching strategy

Figure 25 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying themax and sum fusion rules with andwithout applyinghybrid heuristic statistics to the cohort subset selection The firsttwo cases show recognition accuracies without applying the cohortselectionmethod and the last two cases show recognition accuraciesthat are obtained by applying the cohort selection method Theseaccuracies are obtained when a face is not localized

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

False acceptance rate = FAR ()

Sum fusion ruleMax fusion rule

Gen

uine

acce

ptan

ce ra

te=

1minusFR

R (

)

Figure 26 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule andthe max fusion rule when faces are not cropped and the number ofprincipal components varies between one and six

ldquosumrdquo rule which is attributed to a change in the producedcohort subset for both types of constraints (Type I and TypeII) In Figure 28 the recognition accuracies are plotted againstthe matching strategies and the accuracy points are markedin blue

It would be interesting to see how the current ensembleframework would be useful for face recognition in thewild while faces are found in unrestricted conditions Facerecognition in the wild is challenging due to its nature offace acquisition method in unconstrained environment Allthe face images are not frontal Images of the same subjectmay vary in pose profile occlusion multiple backgroundface color and so forth As our framework is based on only

Journal of Sensors 17

Table 8 EERs and recognition accuracies of the proposed facematching strategy for the BioID database when face localization isperformed and the principal components are fused using the sumand the max fusion rules to form a single set of matching scoresIn addition the hybrid heuristic statistics-based cohort selectiontechnique which employs T-norm normalization techniques formatch score normalization is applied

Fusion rulenormalizationheuristiccohort selection EER () Recognition

accuracy ()Sum fusion rule + min-maxnormalization 0 100

Max fusion rule + min-maxnormalization 0 100

Sum fusion rule + T-norm (hybridheuristic) 065 9935

Max fusion rule + T-norm (hybridheuristic) 0 100

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

False acceptance rate = FAR ()

Gen

uine

acce

ptan

ce ra

te=

1minusFR

R (

)

Sum fusion ruleMax fusion rule

Figure 27 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely sum fusion rule and maxfusion rule when faces are cropped and the number of principalcomponents varies between one and six

first six principal components and SIFT features it requiresincorporation of various tools Tools to detect and crop faceregion discard background images as far as possible anddetected face regions are to be upsampled or downsampledto make uniform dimensional face image vector to applyPCA Tools to estimate pose and then apply 2D frontalizationto compensate variance lead to less number of principalcomponents to consider

64 Comparison with Other Face Recognition Systems Thissection reports a comparative study of the experimentalresults of the proposed cloud-enabled face recognition pro-tomodel with other well-known face recognition modelsThese models include some cloud computing-based face

100 100

9935

100

1 2 3 4Matching strategy

99

992

994

996

998

100

1002

Accu

racy

()

Figure 28 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying the maximum and sum fusion rules with and withoutapplying hybrid heuristic statistics for the cohort subset selectionThe first two cases show recognition accuracies without applying thecohort selection method and the last two cases show recognitionaccuracies that are obtained by applying the cohort selectionmethod These accuracies are obtained after a face is localized

recognition algorithms which are limited in number andsome traditional face recognition systems which are notenabled with cloud computing infrastructures Two differ-ent perspectives are considered for which comparisons areperformed The first perspective applies the concept of cloudcomputing facilities which are integratedwith a face recogni-tion system whereas the second perspective employs similarface databases to compare with other methods Howeverthe second perspective does not utilize the concept of cloudcomputing To compare the experimental results we comparethe proposed systems with two cloud-based face recognitionsystems the first system utilizes eigenface in cloud vision[36] and the second face recognition system utilizes socialmedia with mobile cloud computing facilities [37] Becausecloud-based face recognition models are limited we pre-sented the results of only these two systems The systemdescribed in [36] employs the ORL face database whichcontains 400 face images of 40 individuals whereas theother system [37] employs a local face database that containsapproximately 50 face images Table 9 shows the performanceof the proposed system and the two systems that are exploitedin [36 37] in terms of recognition accuracy Table 9 also liststhe number of training samples that are employed duringthe matching of face images by different systems Becausethe proposed system utilizes two well-known face databasesnamely BioID and FEI databases and two different facematching paradigms namely Type I and Type II the bestrecognition accuracies are selected for comparisonThe TypeI paradigm refers to the face matching strategy in which faceimages are localized and the Type II paradigm refers to thematching strategy in which face images are not localizedTable 9 also shows the results of two face recognition systems[38 39] that do not employ cloud-enabled infrastructureThe comparative study indicates that the proposed systemoutperforms other methods regardless of whether they arecloud-based systems or do not use cloud infrastructures

18 Journal of Sensors

Table 9 Comparison of the proposed cloud-based face recognition system with the systems described in [36 37]

Method Face database Number of face images Number of distinct subjects Recognition accuracy () Error ()Cloud vision [36] ORL 400 40 9708 292Mobile cloud [37] Local DB 50 5 85 15Facial features [38] BioID 1521 23 9235 7652D-PCA + PSO-SVM [39] BioID 1521 23 9565 435Proposed Type I BioID 1521 23 100 0Proposed Type I FEI 2800 20 100 0Proposed Type II BioID 1521 23 9722 278Proposed Type II FEI 2800 20 995 05Proposed Type I + fusion BioID 1521 23 100 0Proposed Type I + fusion FEI 2800 20 100 0Proposed Type II + fusion BioID 1521 23 100 0Proposed Type II + fusion FEI 2800 20 100 0

7 Time Complexity of Ensemble Network

The time complexity of the proposed ensemble networkquantifies the amount of time taken collectively by differentmodules to run as a set of functions of the length of the inputThe time complexity is estimated by counting the numberof operations performed by different modules or cascadedalgorithms The ensemble network involves a few cascadedalgorithms which together perform face recognition in cloudenvironment and the algorithms are PCA computation SIFTfeatures extraction fromeach face instancesmatching fusionof matching scores using ldquosumrdquo and ldquomaxrdquo fusion rulesand heuristic-based cohort selection In this section timecomplexity of individual modules is computed first and thenoverall complexity of the ensemble network is computed bysumming them together

(a) Time Complexity of PCA Computation For PCA algo-rithm the computation bottleneck is to derive covariancematrix Let 119889 be the number of pixels (height times width)determined from each grayscale face image and let thenumber of face images be 119873 PCA computation has thefollowing steps

(i) Finding mean of119873 sample is119874(119873119889) (119873 addition of 119889dimensional vectors and then summation is dividedby119873)

(ii) As covariance matrix is symmetric deriving onlyupper triangular matrix elements is sufficient So foreach of the 119889(119889 + 1)2 elements 119873 multiplicationsand additions are required leading to 119874(1198731198892) timecomplexity Let the 119889 times 119873 dimensional covariancematrix be119872

(iii) If Karhunen-Loeve (KL) trick is employed theninstead of 119872119872119879 (119889 times 119889 dimension) compute119872119879 119872(119873times119873)which requires119889multiplications and119889additions for each of the1198732 elements hence 119874(1198732119889)time complexity (generally119873 ≪ 119889)

(iv) Eigen decomposition of 119872119879119872 matrix by SVDmethod requires 119874(1198993)

(v) Sorting the eigenvectors (each eigenvector 119889 times 1dimensional) in descending order of the eigenvaluesrequires 119874(1198732) Then taking only first 6 principalcomponent eigenvectors requires constant time

Projecting the probe image vector on each of the eigenfacesrequires a dot product between two vectors resulting in scalarvalue Hence 1198892 multiplication and 1198892 addition for eachprojection lead to119874(1198892) Six such projections require119874(61198892)

(b) Time Complexity of SIFT Keypoints Extraction Let thedimension of each face image be 119872 times 119873 pixels and let theface image be represented in column vector of dimension119889(119872times119873) Gaussian kernel of dimension119908times119908 is used Eachoctave has 119904+3 scales and total 119871 number of octaves has beenused The important phases of SIFT are as follows

(i) Extrema detection

(a) Compute scale

(1) In each scale 1199082 multiplication and 1199082 minus 1addition is done by convolution operationfor each pixel so 119874(1198891199082)

(2) 119904+3 scales are in each octave so119874(1198891199082(119904+3))

(3) 119871 is number of octaves so119874(1198711198891199082(119904 + 3))(4) Overall 119874(1198891199082119904) is found

(b) Compute 119904 + 2 number of Difference of Gaus-sians (DoG) for each of the 119871 number of octaves119874((119904+2)1199091198892119896) 119896 = 21119904 so for119871 octaves119874(119904119889)

(c) Extrema detection 119874(119904119889)

(ii) Keypoint localization after eliminating low contrastpoint and points along edge let120572119889 be number of pixelssurvived so 119874(120572119889119904)

(iii) Orientation assignment 119874(119904119889)

Journal of Sensors 19

(iv) Keypoint descriptor computation if 119901 times 119901 neighbor-hood of the keypoint is considered then 119874(1199012119889) isfound

(c) Time Complexity of Matching Each keypoint is repre-sented by a feature descriptor of 128 elements To compareany two such points by Euclidean distance requires 128subtractions square of each of the previous 128 subtractedelements 127 additions to add them up and one square rootat the end So linear time complexity is 119874(119899) where 119899 =128 Let 119894th eigenface of probe face image and reference faceimage have 1198961 and 1198962 number of survived keypoints So eachkeypoint from 1198961 will be compared to each of 1198962 keypoints byEuclidean distance So 119874(11989611198962) for a single eigenface pair is119874(61198992) for 6 pairs of eigenfaces (119899 = number of keypoints)If there are119872 numbers of reference faces in the gallery thentotal complexity would be 119874(61198721198992)

(d) Time Complexity of FusionAs the six individual matchersdomains are different therefore to bring them into uniformdomain min-max normalization technique is used Foreach normalized value computation has two subtractionoperations andone division operation So it requires constanttime 119874(1) The 119899 pair of probe and gallery images in 119894thprincipal component requires119874(119899) So for six principal com-ponents it requires 119874(6119899) Finally the sum fusion requiresfive summations for each pair of probe and reference face Soit is a constant time 119874(1) Subsequently for 119899 pairs of probeand reference face images it requires 119874(119899)

(e) Time Complexity of Cohort Selection Cohort selectionrequires four operations to be performed computation ofcorrelation in the search space insertion of correlation valuesinto the OPEN list insertion of correlation values at theproper positions into the CLOSED list according to insertionsort and finally division of the CLOSED list of correlationvalues of size 119899 into three disjoint sets First two operationstake constant time and the third operation takes 119874(1198992)complexity as it follows the convention of insertion sortFinally the last operation takes linear time Overall timerequired by cohort selection is 119874(1198992)

Now the overall time complexity (119879) of ensemble net-work would be calculated as follows

119879 = max (max (1198791) +max (1198792) +max (1198793) +max (1198794)

+max (1198795)) = max (119874 (1198993) + 119874 (1198891199082119904)

+ 119874 (1198721198992) + 119874 (119899) + 119874 (1198992)) asymp 119874 (1198993)

(8)

where

1198791 is time complexity of PCA computation1198792 is time complexity of SIFT keypoints extractions1198793 is time complexity of matching1198794 is time complexity of fusion1198795 is time complexity of cohort selection

Therefore the overall time complexity of ensemble networkwould be 119874(1198993) while both enrollment time and verificationtime through cloud network is assumed to be constant

8 Conclusion

In this paper a robust and efficient cloud engine-enabledface recognition system in which cloud infrastructure hasbeen successfully integrated with a face recognition systemhas been proposed The face recognition system utilizes abaseline methodology in which face instances are computedby applying a principal component analysis- (PCA-) basedtexture analysis method to establish six fixed points ofprincipal components which range from one to sixThe SIFToperator is applied to extract a set of invariant points fromeach face instance which correspond to the gallery and probeface images In this methodology two types of constraints areemployed to validate the proposed matching technique theType I constraint and the Type II constraint which denoteface matching with face localization and face matchingwithout face localization respectively The 119896-NN method isemployed to compute a pair of faces and generate matchingpoints asmatching scoresWe investigate and analyze variouseffects on the face recognition system which directly orindirectly attempt to improve the total performance of therecognition system in various dimensions To achieve robustperformance we have analyzed the following effects (a) effectof cloud environment (b) effect of combining a texture-based method and a feature-based method (c) effect of usingmatch score level fusion rules and (d) effect of using a hybridheuristics-based cohort selectionmethod After investigatingthese aspects we have determined that these crucial andnecessary paradigms render the system significantly moreefficient than the baseline methodology whereas remoterecognition is achieved either from a remotely placed com-puter terminal or mobile phones or tablet PCs In additiona cloud-based environment reduces the cost required byorganizations who would like to implement this integratedsystem The experimental results demonstrate high accu-racies and low EERs for the types of paradigms that wepresented In addition the proposed method outperformsother methods

Competing Interests

The authors declare that they have no competing interests

References

[1] S Z Li and A K Jain Eds Handbook of Face RecognitionSpringer Berlin Germany 2nd edition 2011

[2] H Wechsler Reliable Face Recognition Methods System DesignImplementation and Evaluation Springer New York NY USA2007

[3] S Yan H Wang X Tang and T Huang ldquoExploring featuredescriptors for face recognitionrdquo in Proceedings of the IEEEInternational Conference on Acoustics Speech and Signal Pro-cessing (ICASSP rsquo07) pp I629ndashI632 Honolulu Hawaii USAApril 2007

20 Journal of Sensors

[4] M A Carreira-Perpinan ldquoA review of dimension reductiontechniquesrdquo Tech Rep CS-96-09 University of Sheffield 1997

[5] R Buyya C S Yeo S Venugopal J Broberg and I BrandicldquoCloud computing and emerging IT platforms vision hypeand reality for delivering computing as the 5th utilityrdquo FutureGeneration Computer Systems vol 25 no 6 pp 599ndash616 2009

[6] L Heilig and S Vob ldquoA scientometric analysis of cloud com-puting literaturerdquo IEEE Transactions on Cloud Computing vol2 no 3 pp 266ndash278 2014

[7] M Stojmenovic ldquoMobile cloud computing for biometric appli-cationsrdquo in Proceedings of the 15th International Conference onNetwork-Based Information Systems (NBIS rsquo12) pp 654ndash659September 2012



Figure 18: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the maximum and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection (all four matching strategies attain 100% accuracy). The first two cases show the recognition accuracies without applying the cohort selection method, and the last two cases show the recognition accuracies that are obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

Figure 19: Boxplot of principal components versus EER (%) when faces are not cropped and the number of principal components varies between one and six.

of environments and illumination conditions. In addition, the faces in the database show various facial expressions. Evaluating the performance of any face matching technique is therefore challenging, because the positions and locations of frontal-view images may be tracked in a variety of environments with changes in illumination. Thus, we need a robust technique that is capable of capturing and processing all types of distinct features and that yields encouraging results under these environments and variable lighting conditions. Face images from the BioID database reflect these characteristics, with a variety of background information and illumination changes.

As shown in Figures 19 and 21, the recognition accuracies vary significantly over the principal component range of one to six.

Figure 20: Receiver operating characteristics (ROC) curves of GAR (= 100 − FRR, %) versus FAR (%) when faces are not cropped and the number of principal components varies between one and six, with one curve per component count.

Table 5: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is not performed and the number of principal components varies between one and six.

Face matching strategy (number of principal components) | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.53 | 91.47
Principal component 2 | 2.78 | 97.22
Principal component 3 | 8.5 | 91.5
Principal component 4 | 13.89 | 86.11
Principal component 5 | 13.89 | 86.11
Principal component 6 | 11.12 | 88.88

For principal component 2, the proposed face matching paradigm yields a recognition accuracy of 97.22%, which is the highest accuracy achieved when the Type II constraint (no face localization) is validated. An EER of 2.78%, the lowest among all six EERs, is obtained. For principal components 4 and 5, we achieve an EER of 13.89% and a recognition accuracy of 86.11% in each case. Table 5 lists the EERs and recognition accuracies for all six principal components, and Figure 21 depicts the same results as a curve whose points denote the recognition accuracies corresponding to principal components one through six. ROC curves determined on the BioID database for unlocalized faces are shown in Figure 20.
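Note that in Tables 5–8 the reported recognition accuracy is simply 100% minus the EER; both numbers describe the operating point at which FAR equals FRR. The following is a minimal sketch of how such an operating point can be derived from raw genuine and impostor score sets; the synthetic score arrays and the higher-score-is-better convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def eer_and_accuracy(genuine, impostor):
    """Return (EER %, accuracy %) by sweeping a decision threshold.
    Assumes higher scores indicate better matches (illustrative convention)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_far, best_frr = 100.0, 0.0
    for t in np.unique(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t) * 100  # impostor pairs wrongly accepted
        frr = np.mean(genuine < t) * 100    # genuine pairs wrongly rejected
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    eer = (best_far + best_frr) / 2         # operating point where FAR = FRR
    return eer, 100.0 - eer                 # accuracy reported as 100 - EER

# toy usage with synthetic matching scores
rng = np.random.default_rng(0)
print(eer_and_accuracy(rng.normal(0.8, 0.1, 200), rng.normal(0.4, 0.1, 2000)))
```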


Figure 21: Recognition accuracy versus principal component when the face localization task is not performed (accuracies of 91.47%, 97.22%, 91.5%, 86.11%, 86.11%, and 88.88% for principal components one through six).

Figure 22: Boxplot of principal components versus EER (%) when faces are localized and the number of principal components varies between one and six.

After considering the Type II constraint, we consider the Type I constraint, in which the localized face is obtained and the proposed algorithm is applied to it. Because the face area is localized, and localization is primarily performed on degraded face images, we may achieve better results under the Type I constraint.

As shown in Figures 22 and 23, as the number of principal components varies between one and six, the recognition accuracy varies with much better results. An EER of 8.5% is obtained for principal component 1, and EERs of 5.59% and 5.98% are obtained for principal components 4 and 5, respectively. For the remaining principal components (2, 3, and 6), we achieve a recognition accuracy of 100%. The ROC curves in Figure 23 show the genuine acceptance rates (GARs) for a varying number of principal components and different false acceptance rates (FARs); the principal components for which the recognition accuracy outperforms the others (2, 3, and 6) yield a recognition accuracy of 100%. Figure 24 depicts the recognition accuracies obtained for a varying number of principal components.

Figure 23: Receiver operating characteristics (ROC) curves of GAR (= 100 − FRR, %) versus FAR (%) when faces are cropped and the number of principal components varies between one and six, with one curve per component count.

Table 6: EER and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the number of principal components varies between one and six.

Face matching strategy (number of principal components) | EER (%) | Recognition accuracy (%)
Principal component 1 | 8.5 | 91.5
Principal component 2 | 0 | 100
Principal component 3 | 0 | 100
Principal component 4 | 5.59 | 94.41
Principal component 5 | 5.98 | 94.02
Principal component 6 | 0 | 100

The points on the curve marked in red represent the recognition accuracies, and Table 6 shows the recognition accuracies when localized faces are used.

To validate the Type II constraint for the fusions and the hybrid heuristics-based cohort selection, we evaluate the performance of the proposed technique by analyzing the effect of a face that is not localized. In this experiment, we exploit the same fusion rules, namely, the "sum" and "max" fusion rules, as introduced for the FEI database, together with the hybrid heuristics-based cohort selection. Under the Type II constraint, the results obtained on the BioID database are satisfactory when the fusion rules and the cohort selection technique are applied to nonlocalized faces. As shown in Table 7 and Figure 25, for the first two matching strategies the "sum" and "max" fusion rules are applied to fuse the six classifiers.

Figure 24: Recognition accuracy versus principal component when the face localization task is performed (accuracies of 91.5%, 100%, 100%, 94.41%, 94.02%, and 100% for principal components one through six).

Table 7: EER and recognition accuracies of the proposed face matching strategy for the BioID face database when face localization is not performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, a hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 5.55 | 94.45
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.5 | 99.5
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100

We achieve recognition accuracies of 94.45% and 100%, respectively, with EERs of 5.55% and 0%. In this case, the "max" fusion rule outperforms the "sum" fusion rule, which is further illustrated by the ROC curves shown in Figure 26. For the next two matching strategies, in which the hybrid heuristics-based cohort selection is applied with the "sum" and "max" fusion rules, we achieve 99.5% and 100% recognition accuracies.
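To make the two rules concrete, the sketch below applies min-max normalization to each matcher's score set and then combines the six sets with the "sum" and "max" rules; the toy score values are hypothetical, and this is not the authors' code.

```python
import numpy as np

def min_max_normalize(scores):
    """Map one matcher's scores to [0, 1] so the six score domains become comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())  # assumes max > min

# rows: six PCA-instance matchers; columns: probe-gallery pairs (toy values)
raw = np.array([[12.0, 40.0,  7.0],
                [ 3.0,  9.0,  1.0],
                [22.0, 31.0,  5.0],
                [10.0, 28.0,  2.0],
                [ 8.0, 15.0,  4.0],
                [17.0, 26.0,  6.0]])
norm = np.vstack([min_max_normalize(row) for row in raw])

sum_fused = norm.sum(axis=0)  # "sum" fusion rule
max_fused = norm.max(axis=0)  # "max" fusion rule
print(sum_fused, max_fused)
```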

In the last segment of the experiment, we measure the performance in terms of recognition accuracy and EER when faces are localized; that is, the matching paradigms listed in Table 7 are also verified against the Type I constraint. As shown in Table 8 and Figure 27, the first two matching techniques, which employ the "sum" and "max" fusion rules, attain an accuracy of 100%, whereas the cohort-based matching techniques achieve 99.35% and 100% recognition accuracies. The change in accuracy of 0.15% for the combination of the "sum" fusion rule and hybrid heuristics (99.5% without localization versus 99.35% with it) is minimal, and the remaining combination, the "max" fusion rule with hybrid heuristics-based cohort selection, achieves an accuracy of 100%. Therefore, we conclude that the "max" fusion rule outperforms the "sum" rule.

Figure 25: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the max and sum fusion rules with and without applying hybrid heuristic statistics to the cohort subset selection (accuracies of 94.45%, 100%, 99.5%, and 100% for matching strategies 1–4). The first two cases show recognition accuracies without applying the cohort selection method, and the last two cases show recognition accuracies that are obtained by applying the cohort selection method. These accuracies are obtained when a face is not localized.

Figure 26: Receiver operating characteristics (ROC) curves of GAR (= 100 − FRR, %) versus FAR (%) for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are not cropped and the number of principal components varies between one and six.

This outcome is attributed to a change in the produced cohort subset for both types of constraints (Type I and Type II). In Figure 28, the recognition accuracies are plotted against the matching strategies, and the accuracy points are marked in blue.

It would be interesting to see how the current ensemble framework would perform for face recognition in the wild, where faces are acquired under unrestricted conditions. Face recognition in the wild is challenging because of the unconstrained nature of face acquisition: not all face images are frontal, and images of the same subject may vary in pose, profile, occlusion, background, face color, and so forth.

Table 8: EERs and recognition accuracies of the proposed face matching strategy for the BioID database when face localization is performed and the principal components are fused using the sum and the max fusion rules to form a single set of matching scores. In addition, the hybrid heuristic statistics-based cohort selection technique, which employs the T-norm normalization technique for match score normalization, is applied.

Fusion rule / normalization / heuristic cohort selection | EER (%) | Recognition accuracy (%)
Sum fusion rule + min-max normalization | 0 | 100
Max fusion rule + min-max normalization | 0 | 100
Sum fusion rule + T-norm (hybrid heuristic) | 0.65 | 99.35
Max fusion rule + T-norm (hybrid heuristic) | 0 | 100

Figure 27: Receiver operating characteristics (ROC) curves of GAR (= 100 − FRR, %) versus FAR (%) for the two fusion rules, namely, the sum fusion rule and the max fusion rule, when faces are cropped and the number of principal components varies between one and six.

As our framework is based on only the first six principal components and SIFT features, it requires the incorporation of various tools: tools to detect and crop the face region and discard the background as far as possible, with the detected face regions upsampled or downsampled into face image vectors of uniform dimension for PCA; and tools to estimate pose and then apply 2D frontalization, which compensates for pose variance and leads to fewer principal components to consider.
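As one possible realization of the detection-and-cropping tool mentioned above, the sketch below uses OpenCV's stock Haar cascade detector and a fixed resize so that every face yields a vector of uniform length for PCA; the detector choice and the 64 × 64 target size are illustrative assumptions, not components specified in this paper.

```python
import cv2

def localize_and_vectorize(path, size=(64, 64)):
    """Detect the largest face, crop it, and resize it so that every
    image yields a d-dimensional vector of the same length for PCA."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                      # no face detected
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep largest box
    return cv2.resize(img[y:y + h, x:x + w], size).flatten()
```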

6.4. Comparison with Other Face Recognition Systems. This section reports a comparative study of the experimental results of the proposed cloud-enabled face recognition protomodel against other well-known face recognition models. These models include some cloud computing-based face recognition algorithms, which are limited in number, and some traditional face recognition systems that are not enabled with cloud computing infrastructures.

Figure 28: Curve of recognition accuracy versus face matching strategy when face instances are fused in terms of matching scores by applying the maximum and sum fusion rules with and without applying hybrid heuristic statistics for the cohort subset selection (accuracies of 100%, 100%, 99.35%, and 100% for matching strategies 1–4). The first two cases show recognition accuracies without applying the cohort selection method, and the last two cases show recognition accuracies that are obtained by applying the cohort selection method. These accuracies are obtained after a face is localized.

Comparisons are performed from two different perspectives. The first perspective applies the concept of cloud computing facilities integrated with a face recognition system, whereas the second perspective employs similar face databases to compare with other methods; the second perspective, however, does not utilize the concept of cloud computing. To compare the experimental results, we consider two cloud-based face recognition systems: the first utilizes eigenfaces in cloud vision [36], and the second utilizes social media with mobile cloud computing facilities [37]. Because cloud-based face recognition models are limited, we present the results of only these two systems. The system described in [36] employs the ORL face database, which contains 400 face images of 40 individuals, whereas the other system [37] employs a local face database that contains approximately 50 face images. Table 9 shows the performance, in terms of recognition accuracy, of the proposed system and the two systems exploited in [36, 37]; it also lists the number of face samples employed during the matching of face images by the different systems. Because the proposed system utilizes two well-known face databases, namely, the BioID and FEI databases, and two different face matching paradigms, namely, Type I and Type II, the best recognition accuracies are selected for comparison. The Type I paradigm refers to the face matching strategy in which face images are localized, and the Type II paradigm refers to the matching strategy in which face images are not localized. Table 9 also shows the results of two face recognition systems [38, 39] that do not employ cloud-enabled infrastructure. The comparative study indicates that the proposed system outperforms the other methods, regardless of whether they are cloud-based or do not use cloud infrastructures.

Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method | Face database | Number of face images | Number of distinct subjects | Recognition accuracy (%) | Error (%)
Cloud vision [36] | ORL | 400 | 40 | 97.08 | 2.92
Mobile cloud [37] | Local DB | 50 | 5 | 85 | 15
Facial features [38] | BioID | 1521 | 23 | 92.35 | 7.65
2D-PCA + PSO-SVM [39] | BioID | 1521 | 23 | 95.65 | 4.35
Proposed, Type I | BioID | 1521 | 23 | 100 | 0
Proposed, Type I | FEI | 2800 | 20 | 100 | 0
Proposed, Type II | BioID | 1521 | 23 | 97.22 | 2.78
Proposed, Type II | FEI | 2800 | 20 | 99.5 | 0.5
Proposed, Type I + fusion | BioID | 1521 | 23 | 100 | 0
Proposed, Type I + fusion | FEI | 2800 | 20 | 100 | 0
Proposed, Type II + fusion | BioID | 1521 | 23 | 100 | 0
Proposed, Type II + fusion | FEI | 2800 | 20 | 100 | 0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by the different modules, expressed as a function of the length of the input. It is estimated by counting the number of operations performed by the different modules, or cascaded algorithms. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristics-based cohort selection. In this section, the time complexity of each module is computed first, and the overall complexity of the ensemble network is then obtained by summing them.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let $d$ be the number of pixels (height $\times$ width) of each grayscale face image, and let $N$ be the number of face images. PCA computation has the following steps.

(i) Finding the mean of the $N$ samples is $O(Nd)$ ($N$ additions of $d$-dimensional vectors, after which the summation is divided by $N$).

(ii) As the covariance matrix is symmetric, deriving only its upper triangular elements is sufficient. For each of the $d(d+1)/2$ elements, $N$ multiplications and additions are required, leading to $O(Nd^2)$ time complexity. Let the $d \times N$ dimensional mean-centered data matrix be $M$.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of $MM^T$ ($d \times d$), compute $M^T M$ ($N \times N$), which requires $d$ multiplications and $d$ additions for each of the $N^2$ elements, hence $O(N^2 d)$ time complexity (generally $N \ll d$).

(iv) Eigendecomposition of the $M^T M$ matrix by the SVD method requires $O(n^3)$.

(v) Sorting the eigenvectors (each $d \times 1$ dimensional) in descending order of the eigenvalues requires $O(N^2)$; taking only the first six principal component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two $d$-dimensional vectors, which yields a scalar value. Hence, $d$ multiplications and $d-1$ additions per projection lead to $O(d)$, and six such projections require $O(6d)$.
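The steps above can be condensed into a short sketch of eigenface computation with the KL trick, eigendecomposing the small $N \times N$ matrix instead of the $d \times d$ covariance; this is a generic illustration under the stated dimensions, not the authors' code.

```python
import numpy as np

def eigenfaces(X, n_components=6):
    """X: d x N matrix with one face image vector per column."""
    mean = X.mean(axis=1, keepdims=True)           # O(Nd)
    M = X - mean                                   # mean-centered data matrix
    S = M.T @ M                                    # N x N via the KL trick, O(N^2 d)
    vals, vecs = np.linalg.eigh(S)                 # eigendecomposition
    order = np.argsort(vals)[::-1][:n_components]  # descending eigenvalues
    U = M @ vecs[:, order]                         # lift to d-dimensional eigenfaces
    return mean, U / np.linalg.norm(U, axis=0)

def project(face, mean, U):
    """One dot product per eigenface: O(d) per projection."""
    return U.T @ (face - mean.ravel())
```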

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be $M \times N$ pixels, and let the face image be represented as a column vector of dimension $d$ ($= M \times N$). A Gaussian kernel of dimension $w \times w$ is used. Each octave has $s+3$ scales, and a total of $L$ octaves are used. The important phases of SIFT are as follows.

(i) Extrema detection:

(a) Compute the scale space:

(1) In each scale, $w^2$ multiplications and $w^2 - 1$ additions are performed per pixel by the convolution operation, so $O(dw^2)$.
(2) There are $s+3$ scales in each octave, so $O(dw^2(s+3))$.
(3) $L$ is the number of octaves, so $O(Ldw^2(s+3))$.
(4) Overall, $O(dw^2 s)$.

(b) Compute the $s+2$ Difference of Gaussians (DoG) images for each of the $L$ octaves, $O((s+2) \times d)$ per octave (with scale factor $k = 2^{1/s}$); for $L$ octaves this is $O(sd)$.

(c) Extrema detection: $O(sd)$.

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let $\alpha d$ be the number of surviving pixels, so $O(\alpha d s)$.

(iii) Orientation assignment: $O(sd)$.

(iv) Keypoint descriptor computation: if a $p \times p$ neighborhood of each keypoint is considered, then $O(p^2 d)$.
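In practice, all four phases are available off the shelf; for instance, OpenCV's SIFT implementation (an assumed tooling choice, requiring OpenCV >= 4.4) runs the scale-space, localization, orientation, and descriptor stages in a single call and returns the 128-element descriptors used in the matching step below.

```python
import cv2

def sift_keypoints(gray_image):
    """Extract SIFT keypoints and their 128-D descriptors from a grayscale image."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors  # descriptors: (num_keypoints, 128) array
```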

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing two such points by Euclidean distance requires 128 subtractions, squaring each of the 128 differences, 127 additions to sum them, and one square root at the end, so each comparison is linear, $O(n)$ with $n = 128$. Let the $i$th eigenface of the probe face image and that of the reference face image have $k_1$ and $k_2$ surviving keypoints, respectively. Each keypoint from the first set is compared with each of the $k_2$ keypoints by Euclidean distance, so a single eigenface pair costs $O(k_1 k_2)$, and six pairs of eigenfaces cost $O(6n^2)$ ($n$ = number of keypoints). If there are $M$ reference faces in the gallery, the total complexity is $O(6Mn^2)$.
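A direct transcription of this step is sketched below: every probe descriptor is compared with every gallery descriptor by Euclidean distance, giving the $O(k_1 k_2)$ cost derived above, and the number of close pairs serves as the matching score. The distance threshold is an illustrative assumption, not a value specified in the paper.

```python
import numpy as np

def match_score(desc_probe, desc_gallery, threshold=250.0):
    """Count probe keypoints whose nearest gallery descriptor (128-D
    Euclidean distance) lies below a threshold; O(k1 * k2) comparisons."""
    diff = desc_probe[:, None, :] - desc_gallery[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))       # (k1, k2) distance matrix
    return int((dists.min(axis=1) < threshold).sum())
```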

(d) Time Complexity of Fusion. As the domains of the six individual matchers differ, the min-max normalization technique is used to bring them into a uniform domain. Computing each normalized value takes two subtractions and one division, that is, constant time $O(1)$. The $n$ pairs of probe and gallery images in the $i$th principal component require $O(n)$, so six principal components require $O(6n)$. Finally, the sum fusion requires five additions for each pair of probe and reference faces, constant time $O(1)$; for $n$ pairs of probe and reference face images, it requires $O(n)$.

(e) Time Complexity of Cohort Selection. Cohort selection performs four operations: computation of correlation in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally division of the CLOSED list of $n$ correlation values into three disjoint sets. The first two operations take constant time, the third takes $O(n^2)$ as it follows the convention of insertion sort, and the last takes linear time. Overall, cohort selection requires $O(n^2)$.
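The four operations can be sketched as follows; the correlation measure, the OPEN/CLOSED list names from the text, and the equal three-way split are illustrative choices, and the T-norm step shows the normalization referenced in Tables 7 and 8.

```python
import numpy as np

def select_cohort(probe_scores, cohort_score_list):
    """Correlate the probe score vector with each cohort candidate, insertion-sort
    the values (O(n^2)) into a CLOSED list, and split it into three disjoint sets."""
    open_list = [np.corrcoef(probe_scores, c)[0, 1] for c in cohort_score_list]
    closed = []
    for v in open_list:                 # insertion sort into the CLOSED list
        i = len(closed)
        while i > 0 and closed[i - 1] > v:
            i -= 1
        closed.insert(i, v)
    n = len(closed)
    return closed[:n // 3], closed[n // 3:2 * n // 3], closed[2 * n // 3:]

def t_norm(score, cohort_scores):
    """T-norm: center a raw score on the cohort mean, scale by cohort std."""
    return (score - np.mean(cohort_scores)) / np.std(cohort_scores)
```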

Now the overall time complexity $T$ of the ensemble network is calculated as follows:

$$T = \max\bigl(\max(T_1) + \max(T_2) + \max(T_3) + \max(T_4) + \max(T_5)\bigr) = \max\bigl(O(n^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)\bigr) \approx O(n^3), \tag{8}$$

where

$T_1$ is the time complexity of PCA computation,
$T_2$ is the time complexity of SIFT keypoint extraction,
$T_3$ is the time complexity of matching,
$T_4$ is the time complexity of fusion, and
$T_5$ is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is $O(n^3)$, while both the enrollment time and the verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, which range from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve its total performance in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, a mobile phone, or a tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I629–I632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBIS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AUTOID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506–II513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International Multi Conference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 2012 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.


Journal of Sensors 15

9147

9722

915

8611 8611

8888

80

85

90

95

100

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 21 Recognition accuracy versus principal component whenface localization task is not performed

1 2 3 4 5 6

0

1

2

3

4

5

6

7

8

Equa

l err

or ra

te (

)

Principal components

Figure 22 Boxplot of principal components versus EER whenfaces are localized and the number of principal components variesbetween one and six

After considering the Type I constraint we consider theType II constraint in which we obtain the localized face onwhich the proposed algorithm is applied Because the facearea is localized and localization is primarily performed ondegraded face images we may achieve better results with theType I constraint

As shown in Figures 22 and 23 the principal componentvaries between one and six whereas the recognition accuracyvaries with much better results However an EER of 85is obtained for principal component 1 and EERs of 559and 598 are obtained for principal components 4 and 5respectively For the remainder of the principal components(2 3 and 6) we achieve a recognition accuracy of 100in recognizing faces The ROC curves in Figure 23 showthe genuine acceptance rates (GARs) for varying numberof principal components and different false acceptance rates(FARs) The principal components (2 3 and 6) for whichthe recognition accuracy outperforms other componentsyield a recognition accuracy of 100 Figure 24 depicts therecognition accuracies that are obtained for a varying numberof principal components The points on the curve that are

0 10 20 30 40 50 60 70 80 90 10080

82

84

86

88

90

92

94

96

98

100

Gen

uine

acce

ptan

cerate

=100minusFRR

()

False acceptance rate = FAR ()

Number of principal components rarr 1

Number of principal components rarr 2

Number of principal components rarr 3

Number of principal components rarr 4

Number of principal components rarr 5

Number of principal components rarr 6

Figure 23 Receiver operating characteristics (ROC) curve of GARversus FAR when faces are cropped and the number of principalcomponents varies between one and six

Table 6 EER and recognition accuracies of the proposed facematching strategy for the BioID database when face localization isperformed and the number of principal components varies betweenone and six

Face matching strategies withvarying number of principalcomponents

EER () Recognitionaccuracy ()

Principal component 1 85 915Principal component 2 0 100Principal component 3 0 100Principal component 4 559 9441Principal component 5 598 9402Principal component 6 0 100

marked in red represent the recognition accuracies Table 6shows the recognition accuracies when localized face is used

To validate the Type II constraint for fusions and hybridheuristics-based cohort selection we evaluate the perfor-mance of the proposed technique by analyzing the effect ofa face that is not localized In this experiment however weexploit the same fusion rules namely the ldquosumrdquo and ldquomaxrdquofusion rules as they are introduced to the FEI database Wealso exploit the hybrid heuristics-based cohort selection andthe fusion rules Considering the Type II constraint resultswhich are obtained on the BioID database is satisfactorywhen the fusion rules and cohort selection technique areapplied to nonlocalized faces As shown in Table 7 andfrom Figure 25 it has been observed that for the firsttwo types of matching strategies in which the ldquosumrdquo and

16 Journal of Sensors

915

100 100

9441 9402

100

86889092949698

100102

1 2 3 4 5 6

Accu

racy

()

Principal component

Figure 24 Recognition accuracy versus principal component whenface localization task is performed

Table 7 EER and recognition accuracies of the proposed facematching strategy for the BioID face database when face localizationis performed and the principal components are fused using the sumand the max fusion rules to form a single set of matching scoresIn addition a hybrid heuristic statistics-based cohort selectiontechnique which employs T-norm normalization techniques formatch score normalization is applied

Fusion rulenormalizationheuristiccohort selection EER () Recognition

accuracy ()Sum fusion rule + min-maxnormalization 555 9445

Max fusion rule + min-maxnormalization 0 100

Sum fusion rule + T-norm (hybridheuristic) 05 995

Max fusion rule + T-norm (hybridheuristic) 0 100

ldquomaxrdquo fusion rules are applied to fuse the six classifierswe can achieve a 9445 recognition accuracy and a 100recognition accuracy respectively whereas EERs of 555and 0 respectively are obtained In this case the ldquomaxrdquofusion rule outperforms the ldquosumrdquo fusion rule which isfurther illustrated in the ROC curves shown in Figure 26 Weachieve 995 and 100 recognition accuracies for the nexttwomatching strategies when hybrid heuristics-based cohortselection is applied with the ldquosumrdquo and ldquomaxrdquo fusion rules

In the last segment of the experiment we observe theperformance to bemeasured in terms of recognition accuracyand EER when faces are localized However the matchingparadigms that are listed in Table 7 have also been verifiedagainst the Type II constraint As shown in Table 8 and Fig-ure 27 the first two matching techniques which employ theldquosumrdquo and ldquomaxrdquo fusion rules attained an accuracy of 100whereas cohort-based matching techniques show abruptperformance by achieving 9935 and 100 recognitionaccuracies However a minimal change in accuracy of 015for the combination of the ldquosumrdquo fusion rule and hybridheuristics is unreasonable and the remaining combination ofthe ldquomaxrdquo fusion rule and the hybrid heuristics-based cohortselection method achieved an accuracy of 100 Thereforewe conclude that the ldquomaxrdquo fusion rule outperforms the

9445

100 995 100

919293949596979899

100101

1 2 3 4

Accu

racy

()

Matching strategy

Figure 25 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying themax and sum fusion rules with andwithout applyinghybrid heuristic statistics to the cohort subset selection The firsttwo cases show recognition accuracies without applying the cohortselectionmethod and the last two cases show recognition accuraciesthat are obtained by applying the cohort selection method Theseaccuracies are obtained when a face is not localized

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

False acceptance rate = FAR ()

Sum fusion ruleMax fusion rule

Gen

uine

acce

ptan

ce ra

te=

1minusFR

R (

)

Figure 26 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely the sum fusion rule andthe max fusion rule when faces are not cropped and the number ofprincipal components varies between one and six

ldquosumrdquo rule which is attributed to a change in the producedcohort subset for both types of constraints (Type I and TypeII) In Figure 28 the recognition accuracies are plotted againstthe matching strategies and the accuracy points are markedin blue

It would be interesting to see how the current ensembleframework would be useful for face recognition in thewild while faces are found in unrestricted conditions Facerecognition in the wild is challenging due to its nature offace acquisition method in unconstrained environment Allthe face images are not frontal Images of the same subjectmay vary in pose profile occlusion multiple backgroundface color and so forth As our framework is based on only

Journal of Sensors 17

Table 8 EERs and recognition accuracies of the proposed facematching strategy for the BioID database when face localization isperformed and the principal components are fused using the sumand the max fusion rules to form a single set of matching scoresIn addition the hybrid heuristic statistics-based cohort selectiontechnique which employs T-norm normalization techniques formatch score normalization is applied

Fusion rulenormalizationheuristiccohort selection EER () Recognition

accuracy ()Sum fusion rule + min-maxnormalization 0 100

Max fusion rule + min-maxnormalization 0 100

Sum fusion rule + T-norm (hybridheuristic) 065 9935

Max fusion rule + T-norm (hybridheuristic) 0 100

0 10 20 30 40 50 60 70 80 90 1000

10

20

30

40

50

60

70

80

90

100

False acceptance rate = FAR ()

Gen

uine

acce

ptan

ce ra

te=

1minusFR

R (

)

Sum fusion ruleMax fusion rule

Figure 27 Receiver operating characteristics (ROC) curve of GARversus FAR for two fusion rules namely sum fusion rule and maxfusion rule when faces are cropped and the number of principalcomponents varies between one and six

first six principal components and SIFT features it requiresincorporation of various tools Tools to detect and crop faceregion discard background images as far as possible anddetected face regions are to be upsampled or downsampledto make uniform dimensional face image vector to applyPCA Tools to estimate pose and then apply 2D frontalizationto compensate variance lead to less number of principalcomponents to consider

64 Comparison with Other Face Recognition Systems Thissection reports a comparative study of the experimentalresults of the proposed cloud-enabled face recognition pro-tomodel with other well-known face recognition modelsThese models include some cloud computing-based face

100 100

9935

100

1 2 3 4Matching strategy

99

992

994

996

998

100

1002

Accu

racy

()

Figure 28 Curve of recognition accuracy versus face matchingstrategy when face instances are fused in terms of matching scoresby applying the maximum and sum fusion rules with and withoutapplying hybrid heuristic statistics for the cohort subset selectionThe first two cases show recognition accuracies without applying thecohort selection method and the last two cases show recognitionaccuracies that are obtained by applying the cohort selectionmethod These accuracies are obtained after a face is localized

recognition algorithms which are limited in number andsome traditional face recognition systems which are notenabled with cloud computing infrastructures Two differ-ent perspectives are considered for which comparisons areperformed The first perspective applies the concept of cloudcomputing facilities which are integratedwith a face recogni-tion system whereas the second perspective employs similarface databases to compare with other methods Howeverthe second perspective does not utilize the concept of cloudcomputing To compare the experimental results we comparethe proposed systems with two cloud-based face recognitionsystems the first system utilizes eigenface in cloud vision[36] and the second face recognition system utilizes socialmedia with mobile cloud computing facilities [37] Becausecloud-based face recognition models are limited we pre-sented the results of only these two systems The systemdescribed in [36] employs the ORL face database whichcontains 400 face images of 40 individuals whereas theother system [37] employs a local face database that containsapproximately 50 face images Table 9 shows the performanceof the proposed system and the two systems that are exploitedin [36 37] in terms of recognition accuracy Table 9 also liststhe number of training samples that are employed duringthe matching of face images by different systems Becausethe proposed system utilizes two well-known face databasesnamely BioID and FEI databases and two different facematching paradigms namely Type I and Type II the bestrecognition accuracies are selected for comparisonThe TypeI paradigm refers to the face matching strategy in which faceimages are localized and the Type II paradigm refers to thematching strategy in which face images are not localizedTable 9 also shows the results of two face recognition systems[38 39] that do not employ cloud-enabled infrastructureThe comparative study indicates that the proposed systemoutperforms other methods regardless of whether they arecloud-based systems or do not use cloud infrastructures

18 Journal of Sensors

Table 9 Comparison of the proposed cloud-based face recognition system with the systems described in [36 37]

Method Face database Number of face images Number of distinct subjects Recognition accuracy () Error ()Cloud vision [36] ORL 400 40 9708 292Mobile cloud [37] Local DB 50 5 85 15Facial features [38] BioID 1521 23 9235 7652D-PCA + PSO-SVM [39] BioID 1521 23 9565 435Proposed Type I BioID 1521 23 100 0Proposed Type I FEI 2800 20 100 0Proposed Type II BioID 1521 23 9722 278Proposed Type II FEI 2800 20 995 05Proposed Type I + fusion BioID 1521 23 100 0Proposed Type I + fusion FEI 2800 20 100 0Proposed Type II + fusion BioID 1521 23 100 0Proposed Type II + fusion FEI 2800 20 100 0

7 Time Complexity of Ensemble Network

The time complexity of the proposed ensemble networkquantifies the amount of time taken collectively by differentmodules to run as a set of functions of the length of the inputThe time complexity is estimated by counting the numberof operations performed by different modules or cascadedalgorithms The ensemble network involves a few cascadedalgorithms which together perform face recognition in cloudenvironment and the algorithms are PCA computation SIFTfeatures extraction fromeach face instancesmatching fusionof matching scores using ldquosumrdquo and ldquomaxrdquo fusion rulesand heuristic-based cohort selection In this section timecomplexity of individual modules is computed first and thenoverall complexity of the ensemble network is computed bysumming them together

(a) Time Complexity of PCA Computation For PCA algo-rithm the computation bottleneck is to derive covariancematrix Let 119889 be the number of pixels (height times width)determined from each grayscale face image and let thenumber of face images be 119873 PCA computation has thefollowing steps

(i) Finding mean of119873 sample is119874(119873119889) (119873 addition of 119889dimensional vectors and then summation is dividedby119873)

(ii) As covariance matrix is symmetric deriving onlyupper triangular matrix elements is sufficient So foreach of the 119889(119889 + 1)2 elements 119873 multiplicationsand additions are required leading to 119874(1198731198892) timecomplexity Let the 119889 times 119873 dimensional covariancematrix be119872

(iii) If Karhunen-Loeve (KL) trick is employed theninstead of 119872119872119879 (119889 times 119889 dimension) compute119872119879 119872(119873times119873)which requires119889multiplications and119889additions for each of the1198732 elements hence 119874(1198732119889)time complexity (generally119873 ≪ 119889)

(iv) Eigen decomposition of 119872119879119872 matrix by SVDmethod requires 119874(1198993)

(v) Sorting the eigenvectors (each eigenvector 119889 times 1dimensional) in descending order of the eigenvaluesrequires 119874(1198732) Then taking only first 6 principalcomponent eigenvectors requires constant time

Projecting the probe image vector on each of the eigenfacesrequires a dot product between two vectors resulting in scalarvalue Hence 1198892 multiplication and 1198892 addition for eachprojection lead to119874(1198892) Six such projections require119874(61198892)

(b) Time Complexity of SIFT Keypoints Extraction Let thedimension of each face image be 119872 times 119873 pixels and let theface image be represented in column vector of dimension119889(119872times119873) Gaussian kernel of dimension119908times119908 is used Eachoctave has 119904+3 scales and total 119871 number of octaves has beenused The important phases of SIFT are as follows

(i) Extrema detection

(a) Compute scale

(1) In each scale 1199082 multiplication and 1199082 minus 1addition is done by convolution operationfor each pixel so 119874(1198891199082)

(2) 119904+3 scales are in each octave so119874(1198891199082(119904+3))

(3) 119871 is number of octaves so119874(1198711198891199082(119904 + 3))(4) Overall 119874(1198891199082119904) is found

(b) Compute 119904 + 2 number of Difference of Gaus-sians (DoG) for each of the 119871 number of octaves119874((119904+2)1199091198892119896) 119896 = 21119904 so for119871 octaves119874(119904119889)

(c) Extrema detection 119874(119904119889)

(ii) Keypoint localization after eliminating low contrastpoint and points along edge let120572119889 be number of pixelssurvived so 119874(120572119889119904)

(iii) Orientation assignment 119874(119904119889)

Journal of Sensors 19

(iv) Keypoint descriptor computation if 119901 times 119901 neighbor-hood of the keypoint is considered then 119874(1199012119889) isfound

(c) Time Complexity of Matching Each keypoint is repre-sented by a feature descriptor of 128 elements To compareany two such points by Euclidean distance requires 128subtractions square of each of the previous 128 subtractedelements 127 additions to add them up and one square rootat the end So linear time complexity is 119874(119899) where 119899 =128 Let 119894th eigenface of probe face image and reference faceimage have 1198961 and 1198962 number of survived keypoints So eachkeypoint from 1198961 will be compared to each of 1198962 keypoints byEuclidean distance So 119874(11989611198962) for a single eigenface pair is119874(61198992) for 6 pairs of eigenfaces (119899 = number of keypoints)If there are119872 numbers of reference faces in the gallery thentotal complexity would be 119874(61198721198992)

(d) Time Complexity of Fusion. As the six individual matchers have different score domains, the min-max normalization technique is used to bring them into a uniform domain. Computing each normalized value involves two subtraction operations and one division operation, so it requires constant time, O(1). The n pairs of probe and gallery images in the ith principal component require O(n); so the six principal components require O(6n). Finally, sum fusion requires five additions for each pair of probe and reference faces, so it is constant time, O(1); consequently, for n pairs of probe and reference face images it requires O(n).
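A minimal sketch of min-max normalization followed by the two fusion rules, assuming the six matchers' raw scores are stacked as columns of an n × 6 array (the layout is our choice):

```python
import numpy as np

def fuse_scores(score_matrix, rule="sum"):
    """Min-max normalize six matchers' scores, then fuse them per probe pair.

    score_matrix: (n, 6) raw scores, one column per principal-component
    matcher. Each normalized value costs two subtractions and one
    division, O(1); all columns together cost O(6n).
    """
    lo = score_matrix.min(axis=0)
    hi = score_matrix.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    norm = (score_matrix - lo) / span        # min-max normalization
    if rule == "sum":
        return norm.sum(axis=1)              # five additions per row: O(n) overall
    return norm.max(axis=1)                  # "max" fusion rule
```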

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations to be performed: computation of the correlations in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally division of the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time, and the third operation takes O(n^2), as it follows the convention of insertion sort. Finally, the last operation takes linear time. Overall, the time required by cohort selection is O(n^2).
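The following sketch mirrors these four operations. Pearson correlation and an even three-way split of the CLOSED list are our reading of the description above, and the insertion sort is kept explicit to match the stated O(n^2) cost.

```python
import numpy as np

def select_cohorts(cohort_scores, probe_scores):
    """Split cohort candidates into three disjoint sets by correlation.

    cohort_scores: list of score vectors, one per cohort candidate;
    probe_scores: the reference score vector. Correlations go onto an
    OPEN list, are moved into a CLOSED list kept ordered by insertion
    sort (O(n^2)), and the CLOSED list is divided into three sets (O(n)).
    """
    open_list = [(np.corrcoef(c, probe_scores)[0, 1], i)
                 for i, c in enumerate(cohort_scores)]
    closed = []
    for item in open_list:                    # explicit insertion sort, O(n^2)
        pos = 0
        while pos < len(closed) and closed[pos][0] > item[0]:
            pos += 1
        closed.insert(pos, item)
    third = max(1, len(closed) // 3)
    # Three disjoint sets: most, moderately, and least correlated cohorts.
    return closed[:third], closed[third:2 * third], closed[2 * third:]
```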

The overall time complexity (T) of the ensemble network is then calculated as follows:

T = max (max(T1) + max(T2) + max(T3) + max(T4) + max(T5))
  = max (O(n^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)) ≈ O(n^3), (8)

where

T1 is the time complexity of PCA computation;
T2 is the time complexity of SIFT keypoint extraction;
T3 is the time complexity of matching;
T4 is the time complexity of fusion;
T5 is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(n^3), while both enrollment time and verification time through the cloud network are assumed to be constant.

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, which range from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate matching points as matching scores. We investigate and analyze various effects on the face recognition system which directly or indirectly attempt to improve its total performance in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment; (b) the effect of combining a texture-based method and a feature-based method; (c) the effect of using match score level fusion rules; and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition is achieved from a remotely placed computer terminal, mobile phone, or tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.
[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.
[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629–I-632, Honolulu, Hawaii, USA, April 2007.
[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.
[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.
[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.
[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBiS '12), pp. 654–659, September 2012.
[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.
[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.
[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.
[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.
[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.
[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.
[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.
[16] L. Wiskott, J.-M. Fellous, N. Krüger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.
[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AutoID '07), pp. 63–68, Alghero, Italy, June 2007.
[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.
[20] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[21] Y. Ke and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506–II-513, July 2004.
[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International MultiConference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.
[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.
[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.
[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.
[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.
[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.
[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.
[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.
[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.
[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.
[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.
[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.
[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.
[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.
[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.
[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.
[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.
[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.




[23] R E G Valenzuela W R Schwartz and H Pedrini ldquoDimen-sionality reduction through PCA over SIFT and SURF descrip-torsrdquo in Proceedings of the 11th IEEE International Conferenceon Cybernetic Intelligent Systems (CIS rsquo12) pp 58ndash63 IEEELimerick Ireland August 2012

[24] D Chen S Ren Y Wei X Cao and J Sun ldquoJoint cascade facedetection and alignmentrdquo in Proceedings of the 13th EuropeanConference on Computer Vision pp 109ndash122 Zurich Switzer-land September 2014

[25] R C Gonzalez andAWoodsDigital Image Processing PrenticeHall 2008

[26] R Snelick U Uludag A Mink M Indovina and A JainldquoLarge-scale evaluation ofmultimodal biometric authenticationusing state-of-the-art systemsrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 27 no 3 pp 450ndash4552005

[27] Q D Tran P Kantartzis and P Liatsis ldquoImproving fusionwith optimal weight selection in face recognitionrdquo IntegratedComputer-Aided Engineering vol 19 no 3 pp 229ndash237 2012

[28] N Poh J Kittler and F Alkoot ldquoA discriminative paramet-ric approach to video-based score-level fusion for biometricauthenticationrdquo in Proceedings of the 21st International Confer-ence on Pattern Recognition (ICPR rsquo12) pp 2335ndash2338 TsukubaJapan November 2012

[29] B Abidi and M A Abidi Face Biometrics for Personal Identifi-cationMulti-SensoryMulti-Modal Systems Springer NewYorkNY USA 2007

[30] A Merati N Poh and J Kittler ldquoUser-specific cohort selectionand score normalization for biometric systemsrdquo IEEE Trans-actions on Information Forensics and Security vol 7 no 4 pp1270ndash1277 2012

[31] M Tistarelli Y Sun and N Poh ldquoOn the use of discriminativecohort score normalization for unconstrained face recognitionrdquoIEEE Transactions on Information Forensics and Security vol 9no 12 pp 2063ndash2075 2014

[32] N S Altman ldquoAn introduction to kernel and nearest-neighbornonparametric regressionrdquo The American Statistician vol 46no 3 pp 175ndash185 1992

[33] H Liu J Li and L Wong ldquoA comparative study on featureselection and classification methods using gene expressionprofiles and proteomic patternsrdquo Genome Informatics vol 13pp 51ndash60 2002

[34] C E Thomaz and G A Giraldi ldquoA new ranking method forprincipal components analysis and its application to face imageanalysisrdquo Image and Vision Computing vol 28 no 6 pp 902ndash913 2010

[35] O Jesorsky K J Kirchberg and R W Frischholz ldquoRobustface detection using the hausdorff distancerdquo in Audio- andVideo-Based Biometric Person Authentication Third Interna-tional Conference AVBPA 2001 Halmstad Sweden June 6ndash82001 Proceedings J Bigun and F Smeraldi Eds vol 2091 ofLecture Notes in Computer Science pp 90ndash95 Springer BerlinGermany 2001

[36] M K Suguna M R Shrihari and M R Mahesh ldquoImplemen-tation of face recognition in cloud vision using eigen facesrdquoInternational Journal of Engineering Research and Applicationsvol 4 no 7 pp 151ndash155 2014

Journal of Sensors 21

[37] P Indrawan S Budiyatno N M Ridho and R F Sari ldquoFacerecognition for social media with mobile cloud computingrdquoInternational Journal on Cloud Computing Services and Archi-tecture vol 3 no 1 pp 23ndash35 2013

[38] S K Paul M S Uddin and S Bouakaz ldquoFace recognition usingfacial featuresrdquo SOP Transactions on Signal Processing In press

[39] S Valuvanathorn S Nitsuwat andM L Huang ldquoMulti-featureface recognition based on PSO-SVMrdquo in Proceedings of the2012 10th International Conference on ICT and Knowledge Engi-neering ICT and Knowledge Engineering pp 140ndash145 BangkokThailand November 2012


Table 9: Comparison of the proposed cloud-based face recognition system with the systems described in [36, 37].

Method                    | Face database | Face images | Distinct subjects | Recognition accuracy (%) | Error (%)
Cloud vision [36]         | ORL           | 400         | 40                | 97.08                    | 2.92
Mobile cloud [37]         | Local DB      | 50          | 5                 | 85                       | 15
Facial features [38]      | BioID         | 1521        | 23                | 92.35                    | 7.65
2D-PCA + PSO-SVM [39]     | BioID         | 1521        | 23                | 95.65                    | 4.35
Proposed Type I           | BioID         | 1521        | 23                | 100                      | 0
Proposed Type I           | FEI           | 2800        | 20                | 100                      | 0
Proposed Type II          | BioID         | 1521        | 23                | 97.22                    | 2.78
Proposed Type II          | FEI           | 2800        | 20                | 99.5                     | 0.5
Proposed Type I + fusion  | BioID         | 1521        | 23                | 100                      | 0
Proposed Type I + fusion  | FEI           | 2800        | 20                | 100                      | 0
Proposed Type II + fusion | BioID         | 1521        | 23                | 100                      | 0
Proposed Type II + fusion | FEI           | 2800        | 20                | 100                      | 0

7. Time Complexity of Ensemble Network

The time complexity of the proposed ensemble network quantifies the amount of time taken collectively by its modules as a function of the input length. It is estimated by counting the number of operations performed by the different modules, or cascaded algorithms, of the network. The ensemble network involves a few cascaded algorithms that together perform face recognition in the cloud environment: PCA computation, SIFT feature extraction from each face instance, matching, fusion of matching scores using the "sum" and "max" fusion rules, and heuristics-based cohort selection. In this section, the time complexity of each module is computed first; the overall complexity of the ensemble network is then obtained by combining them.

(a) Time Complexity of PCA Computation. For the PCA algorithm, the computational bottleneck is deriving the covariance matrix. Let d be the number of pixels (height × width) in each grayscale face image, and let N be the number of face images. PCA computation has the following steps:

(i) Finding the mean of the N samples is O(Nd) (N additions of d-dimensional vectors, after which the sum is divided by N).

(ii) As the covariance matrix is symmetric, deriving only its upper triangular elements is sufficient. For each of the d(d + 1)/2 elements, N multiplications and additions are required, leading to O(Nd^2) time complexity. Let M denote the d × N mean-centered data matrix.

(iii) If the Karhunen-Loeve (KL) trick is employed, then instead of MM^T (of dimension d × d), one computes M^T M (of dimension N × N), which requires d multiplications and d additions for each of the N^2 elements, hence O(N^2 d) time complexity (generally N ≪ d).

(iv) Eigendecomposition of the M^T M matrix by the SVD method requires O(n^3), where n is the order of the (N × N) matrix.

(v) Sorting the eigenvectors (each of dimension d × 1) in descending order of the eigenvalues requires O(N^2); taking only the first six principal-component eigenvectors then requires constant time.

Projecting the probe image vector onto each of the eigenfaces requires a dot product between two d-dimensional vectors, resulting in a scalar value; hence d multiplications and d additions per projection lead to O(d), and six such projections require O(6d) (a code sketch follows).
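To make this accounting concrete, the following is a minimal NumPy sketch of the covariance computation under the KL trick; the function names and the choice of numpy.linalg.eigh (rather than a full SVD) are illustrative assumptions, not the authors' implementation.

import numpy as np

def pca_eigenfaces(X, n_components=6):
    # X: d x N matrix with one flattened face image per column (d >> N).
    d, N = X.shape
    mean = X.mean(axis=1, keepdims=True)              # O(Nd)
    A = X - mean                                      # mean-centered data, d x N
    S = A.T @ A                                       # KL trick: N x N instead of d x d, O(N^2 d)
    eigvals, eigvecs = np.linalg.eigh(S)              # eigendecomposition of the small matrix
    order = np.argsort(eigvals)[::-1][:n_components]  # sort eigenvalues, keep the top six
    U = A @ eigvecs[:, order]                         # map back to d-dimensional eigenfaces
    U /= np.linalg.norm(U, axis=0, keepdims=True)
    return mean, U

def project(probe, mean, U):
    # One dot product of two d-dimensional vectors per eigenface, O(d) each.
    return U.T @ (probe - mean)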

(b) Time Complexity of SIFT Keypoint Extraction. Let the dimension of each face image be M × N pixels, and let the face image be represented as a column vector of dimension d (= M × N). A Gaussian kernel of dimension w × w is used, each octave has s + 3 scales, and a total of L octaves are used. The important phases of SIFT are as follows (a sketch of the scale-space construction follows the list):

(i) Extrema detection:

(a) Compute the scale space:

(1) In each scale, w^2 multiplications and w^2 − 1 additions are performed by the convolution operation for each pixel, so O(dw^2).

(2) There are s + 3 scales in each octave, so O(dw^2(s + 3)).

(3) With L octaves, this gives O(Ldw^2(s + 3)). (4) Since the image size is quartered at each successive octave, the total work over all octaves is a geometric sum, overall O(dw^2 s).

(b) Compute the s + 2 Difference of Gaussians (DoG) images for each of the L octaves, with scale factor k = 2^(1/s): O((s + 2)d) per octave, so O(sd) for the L octaves.

(c) Extrema detection: O(sd).

(ii) Keypoint localization: after eliminating low-contrast points and points along edges, let αd be the number of surviving pixels, so O(αds).

(iii) Orientation assignment: O(sd).

(iv) Keypoint descriptor computation: if a p × p neighborhood of the keypoint is considered, then O(p^2 d).
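The dominant O(dw^2 s) term stems from the repeated Gaussian convolutions. Below is a rough sketch of the scale-space and DoG construction in the notation above; the parameter defaults (four octaves, s = 3, sigma0 = 1.6) are conventional SIFT values assumed here for illustration only.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, num_octaves=4, s=3, sigma0=1.6):
    # Build L octaves of s+3 Gaussian scales; adjacent scales differ by k = 2^(1/s).
    k = 2.0 ** (1.0 / s)
    base = image.astype(np.float64)
    pyramid = []
    for _ in range(num_octaves):
        # One Gaussian blur per scale; the analysis above charges O(d w^2)
        # per scale for a w x w kernel.
        scales = [gaussian_filter(base, sigma0 * k ** i) for i in range(s + 3)]
        # s+2 DoG images per octave: one subtraction per pixel, O(d).
        dogs = [b - a for a, b in zip(scales, scales[1:])]
        pyramid.append(dogs)
        # Halve the resolution for the next octave, so later octaves cost less.
        base = scales[s][::2, ::2]
    return pyramid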

(c) Time Complexity of Matching. Each keypoint is represented by a feature descriptor of 128 elements. Comparing any two such points by Euclidean distance requires 128 subtractions, the squaring of each of the 128 differences, 127 additions to sum them, and one square root at the end, so the time is linear, O(n), with n = 128. Let the ith eigenface of the probe face image and of a reference face image have k1 and k2 surviving keypoints, respectively. Each keypoint from k1 is compared with each of the k2 keypoints by Euclidean distance, so the cost is O(k1 k2) for a single eigenface pair, that is, O(6n^2) over the six pairs of eigenfaces (with n now denoting the number of keypoints). If there are M reference faces in the gallery, the total complexity is O(6Mn^2).
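A brute-force sketch of the pairwise comparison just described; the thresholded nearest-neighbour scoring and the threshold value are simplifying assumptions, and the paper's k-NN matcher may differ in detail.

import numpy as np
from scipy.spatial.distance import cdist

def match_score(probe_desc, ref_desc, threshold=250.0):
    # probe_desc: k1 x 128 and ref_desc: k2 x 128 SIFT descriptors.
    # cdist evaluates all k1*k2 Euclidean distances, each over n = 128
    # elements, which is exactly the O(k1 k2) comparison described above.
    dists = cdist(probe_desc, ref_desc)
    # Count probe keypoints whose nearest reference keypoint lies within
    # the distance threshold and report the count as the matching score.
    return int(np.sum(dists.min(axis=1) < threshold))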

(d) Time Complexity of Fusion. As the six individual matchers operate in different score domains, the min-max normalization technique is used to bring them into a uniform domain. Computing each normalized value takes two subtractions and one division, constant time O(1); the n pairs of probe and gallery images in the ith principal component therefore require O(n), and the six principal components require O(6n). Finally, the sum fusion requires five additions for each pair of probe and reference faces, constant time O(1), so for n pairs of probe and reference face images it requires O(n).
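A compact sketch of the min-max normalization and the two fusion rules; the 6 x n score layout is an assumption matching the six matchers described above.

import numpy as np

def min_max_normalize(scores):
    # Two subtractions and one division per score: O(1) each, O(n) per matcher.
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def fuse(score_matrix, rule="sum"):
    # score_matrix: 6 x n, one row of raw scores per PCA-characterized matcher.
    normalized = np.array([min_max_normalize(row) for row in score_matrix])
    if rule == "sum":
        return normalized.sum(axis=0)  # five additions per probe-gallery pair
    return normalized.max(axis=0)      # the "max" fusion rule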

(e) Time Complexity of Cohort Selection. Cohort selection requires four operations: computation of the correlations in the search space, insertion of the correlation values into the OPEN list, insertion of the correlation values at their proper positions in the CLOSED list according to insertion sort, and finally division of the CLOSED list of n correlation values into three disjoint sets. The first two operations take constant time per value, the third takes O(n^2) as it follows the convention of insertion sort, and the last takes linear time. The overall time required by cohort selection is therefore O(n^2).
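In outline, the four operations might look as follows; the use of Pearson correlation and an equal three-way split are assumptions based on the description above, not the authors' exact procedure.

import numpy as np

def select_cohorts(probe_scores, candidate_scores):
    # (1)-(2) Compute a correlation value per cohort candidate in the search
    # space and append it to the OPEN list: constant time per insertion.
    open_list = [float(np.corrcoef(probe_scores, c)[0, 1]) for c in candidate_scores]
    # (3) Insertion sort into the CLOSED list: O(n^2) in the worst case.
    closed = []
    for value in open_list:
        pos = len(closed)
        while pos > 0 and closed[pos - 1] > value:
            pos -= 1
        closed.insert(pos, value)
    # (4) Split the sorted CLOSED list into three disjoint sets: O(n).
    n = len(closed)
    return closed[: n // 3], closed[n // 3 : 2 * n // 3], closed[2 * n // 3 :]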

Now the overall time complexity T of the ensemble network is calculated as follows:

T = max(max(T1) + max(T2) + max(T3) + max(T4) + max(T5))
  = max(O(n^3) + O(dw^2 s) + O(Mn^2) + O(n) + O(n^2)) ≈ O(n^3), (8)

where

T1 is the time complexity of PCA computation,
T2 is the time complexity of SIFT keypoint extraction,
T3 is the time complexity of matching,
T4 is the time complexity of fusion, and
T5 is the time complexity of cohort selection.

Therefore, the overall time complexity of the ensemble network is O(n^3), while both the enrollment time and the verification time through the cloud network are assumed to be constant.
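As a toy numerical check of the dominance claim in (8), the snippet below compares the per-module operation counts with constants dropped; all sizes used in the example call are hypothetical.

def dominant_term(n, d, w, s, M):
    # Per-module operation counts from the analysis above, constants dropped.
    terms = {
        "PCA, O(n^3)": n ** 3,
        "SIFT, O(d w^2 s)": d * w ** 2 * s,
        "matching, O(M n^2)": M * n ** 2,
        "fusion, O(n)": n,
        "cohort selection, O(n^2)": n ** 2,
    }
    return max(terms, key=terms.get)

# With hypothetical sizes n=1000, d=10000 (a 100 x 100 image), w=5, s=3,
# and M=100 gallery faces, the cubic term dominates, consistent with (8).
print(dominant_term(1000, 10000, 5, 3, 100))  # -> 'PCA, O(n^3)'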

8. Conclusion

In this paper, a robust and efficient cloud engine-enabled face recognition system, in which a cloud infrastructure has been successfully integrated with a face recognition system, has been proposed. The face recognition system utilizes a baseline methodology in which face instances are computed by applying a principal component analysis- (PCA-) based texture analysis method to establish six fixed points of principal components, which range from one to six. The SIFT operator is applied to extract a set of invariant points from each face instance corresponding to the gallery and probe face images. In this methodology, two types of constraints are employed to validate the proposed matching technique: the Type I constraint and the Type II constraint, which denote face matching with face localization and face matching without face localization, respectively. The k-NN method is employed to compare a pair of faces and generate the matching points as matching scores. We investigate and analyze various effects on the face recognition system that directly or indirectly improve its total performance in various dimensions. To achieve robust performance, we have analyzed the following effects: (a) the effect of the cloud environment, (b) the effect of combining a texture-based method and a feature-based method, (c) the effect of using match score level fusion rules, and (d) the effect of using a hybrid heuristics-based cohort selection method. After investigating these aspects, we have determined that these crucial and necessary paradigms render the system significantly more efficient than the baseline methodology, while remote recognition can be performed from a remotely placed computer terminal, a mobile phone, or a tablet PC. In addition, a cloud-based environment reduces the cost incurred by organizations that would like to implement this integrated system. The experimental results demonstrate high accuracies and low EERs for the paradigms that we presented. In addition, the proposed method outperforms other methods.

Competing Interests

The authors declare that they have no competing interests.

References

[1] S. Z. Li and A. K. Jain, Eds., Handbook of Face Recognition, Springer, Berlin, Germany, 2nd edition, 2011.

[2] H. Wechsler, Reliable Face Recognition Methods: System Design, Implementation and Evaluation, Springer, New York, NY, USA, 2007.

[3] S. Yan, H. Wang, X. Tang, and T. Huang, "Exploring feature descriptors for face recognition," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), pp. I-629–I-632, Honolulu, Hawaii, USA, April 2007.

[4] M. A. Carreira-Perpinan, "A review of dimension reduction techniques," Tech. Rep. CS-96-09, University of Sheffield, 1997.

[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.

[6] L. Heilig and S. Voß, "A scientometric analysis of cloud computing literature," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 266–278, 2014.

[7] M. Stojmenovic, "Mobile cloud computing for biometric applications," in Proceedings of the 15th International Conference on Network-Based Information Systems (NBiS '12), pp. 654–659, September 2012.

[8] A. S. Bommagani, M. C. Valenti, and A. Ross, "A framework for secure cloud-empowered mobile biometrics," in Proceedings of the 33rd Annual IEEE Military Communications Conference (MILCOM '14), pp. 255–261, Baltimore, Md, USA, October 2014.

[9] K.-S. Wong and M. H. Kim, "Secure biometric-based authentication for cloud computing," in Proceedings of the 2nd International Conference on Cloud Computing and Services Science (CLOSER '12), pp. 86–101, Porto, Portugal, 2012.

[10] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[11] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 84–91, Seattle, Wash, USA, June 1994.

[12] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 195–200, 2003.

[13] Y. Wen, L. He, and P. Shi, "Face recognition using difference vector plus KPCA," Digital Signal Processing, vol. 22, no. 1, pp. 140–146, 2012.

[14] S. Chen, J. Liu, and Z.-H. Zhou, "Making FLDA applicable to face recognition with one sample per person," Pattern Recognition, vol. 37, no. 7, pp. 1553–1555, 2004.

[15] D. R. Kisku, H. Mehrotra, P. Gupta, and J. K. Sing, "Robust multi-camera view face recognition," International Journal of Computers and Applications, vol. 33, no. 3, pp. 211–219, 2011.

[16] L. Wiskott, J.-M. Fellous, N. Kruger, and C. D. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775–779, 1997.

[17] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[18] D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, "Face identification by SIFT-based complete graph topology," in Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies (AutoID '07), pp. 63–68, Alghero, Italy, June 2007.

[19] Z. Li, U. Park, and A. K. Jain, "A discriminative model for age invariant face recognition," IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1028–1037, 2011.

[20] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346–359, 2008.

[21] K. Yan and R. Sukthankar, "PCA-SIFT: a more distinctive representation for local image descriptors," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-506–II-513, July 2004.

[22] I. A.-A. Abdul-Jabbar, J. Tan, and Z. Hou, "Adaptive PCA-SIFT matching approach for face recognition application," in Proceedings of the International MultiConference of Engineers and Computer Scientists, pp. 1–5, Hong Kong, 2014.

[23] R. E. G. Valenzuela, W. R. Schwartz, and H. Pedrini, "Dimensionality reduction through PCA over SIFT and SURF descriptors," in Proceedings of the 11th IEEE International Conference on Cybernetic Intelligent Systems (CIS '12), pp. 58–63, IEEE, Limerick, Ireland, August 2012.

[24] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proceedings of the 13th European Conference on Computer Vision, pp. 109–122, Zurich, Switzerland, September 2014.

[25] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2008.

[26] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 450–455, 2005.

[27] Q. D. Tran, P. Kantartzis, and P. Liatsis, "Improving fusion with optimal weight selection in face recognition," Integrated Computer-Aided Engineering, vol. 19, no. 3, pp. 229–237, 2012.

[28] N. Poh, J. Kittler, and F. Alkoot, "A discriminative parametric approach to video-based score-level fusion for biometric authentication," in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2335–2338, Tsukuba, Japan, November 2012.

[29] B. Abidi and M. A. Abidi, Face Biometrics for Personal Identification: Multi-Sensory Multi-Modal Systems, Springer, New York, NY, USA, 2007.

[30] A. Merati, N. Poh, and J. Kittler, "User-specific cohort selection and score normalization for biometric systems," IEEE Transactions on Information Forensics and Security, vol. 7, no. 4, pp. 1270–1277, 2012.

[31] M. Tistarelli, Y. Sun, and N. Poh, "On the use of discriminative cohort score normalization for unconstrained face recognition," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2063–2075, 2014.

[32] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician, vol. 46, no. 3, pp. 175–185, 1992.

[33] H. Liu, J. Li, and L. Wong, "A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns," Genome Informatics, vol. 13, pp. 51–60, 2002.

[34] C. E. Thomaz and G. A. Giraldi, "A new ranking method for principal components analysis and its application to face image analysis," Image and Vision Computing, vol. 28, no. 6, pp. 902–913, 2010.

[35] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Audio- and Video-Based Biometric Person Authentication: Third International Conference, AVBPA 2001, Halmstad, Sweden, June 6–8, 2001, Proceedings, J. Bigun and F. Smeraldi, Eds., vol. 2091 of Lecture Notes in Computer Science, pp. 90–95, Springer, Berlin, Germany, 2001.

[36] M. K. Suguna, M. R. Shrihari, and M. R. Mahesh, "Implementation of face recognition in cloud vision using eigen faces," International Journal of Engineering Research and Applications, vol. 4, no. 7, pp. 151–155, 2014.

[37] P. Indrawan, S. Budiyatno, N. M. Ridho, and R. F. Sari, "Face recognition for social media with mobile cloud computing," International Journal on Cloud Computing: Services and Architecture, vol. 3, no. 1, pp. 23–35, 2013.

[38] S. K. Paul, M. S. Uddin, and S. Bouakaz, "Face recognition using facial features," SOP Transactions on Signal Processing, in press.

[39] S. Valuvanathorn, S. Nitsuwat, and M. L. Huang, "Multi-feature face recognition based on PSO-SVM," in Proceedings of the 10th International Conference on ICT and Knowledge Engineering, pp. 140–145, Bangkok, Thailand, November 2012.
