
Research Article

Recognition of Reconstructed Frontal Face Images Using FFT-PCA/SVD Algorithm

Louis Asiedu, Felix O. Mettle, and Joseph A. Mensah

Department of Statistics & Actuarial Science, School of Physical and Mathematical Sciences, University of Ghana, Legon, Accra, Ghana

    Correspondence should be addressed to Louis Asiedu; [email protected]

    Received 15 July 2020; Accepted 19 September 2020; Published 5 October 2020

    Academic Editor: Jong Hae Kim

Copyright © 2020 Louis Asiedu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Face recognition has gained prominence among the various biometric-based methods (such as fingerprint and iris) due to its noninvasive characteristics. Modern face recognition modules/algorithms have been successful in many application areas (access control, entertainment/leisure, security systems based on biometric data, and user-friendly human-machine interfaces). In spite of these achievements, the performance of current face recognition algorithms/modules is still inhibited by varying environmental constraints such as occlusions, expressions, varying poses, illumination, and ageing. This study assessed the performance of the face recognition algorithm based on Principal Component Analysis with singular value decomposition, with the Fast Fourier Transform used for preprocessing (FFT-PCA/SVD), on left and right reconstructed face images. The study found that the average recognition rates for the FFT-PCA/SVD algorithm were 95% and 90% when the left and right reconstructed face images are used as test images, respectively. The result of the paired sample t-test revealed that the average recognition distances for the left and right reconstructed face images are not significantly different when FFT-PCA/SVD is used for recognition. FFT-PCA/SVD is recommended as a viable algorithm for recognition of left and right reconstructed face images.

    1. Introduction

Recognizing people using face images has gained prominence among the various biometric-based methods (such as fingerprint and iris) due to its comparative advantage of being nonintrusive and requiring less cooperation from the subjects under study. This task is easily carried out by humans. The design of machine-based face recognition systems (that mimic humans' recognition prowess) and of underlying algorithms that give optimal classification or recognition rates, however, has been and continues to be challenging, especially when face images are obtained under uncontrolled environments (poor illumination conditions, varying poses, expressions, and occlusions) [1]. Thus, there is a growing interest in this field of research.

In the case of partially occluded faces (resulting from the wearing of masks and sunglasses, blockage by external objects, images captured at an angle, etc.), occlusion-insensitive, local matching, and reconstruction methods have been used for recognition [2]. Performing face recognition using half-face images can be considered a special case of partially occluded faces where either the left or right face is occluded and segmented, and the remaining (nonoccluded) half face is used for recognition [3]. Bilateral symmetry is a property of many natural objects, including the human face [4]. Leveraging this property, the performances of holistic face representation-based algorithms have been evaluated on the left, right, and average half faces based on symmetry scores [5, 6].

Singh and Nandi [6] applied PCA on the full faces and on the left and right half faces and measured the performance of their algorithm in terms of the recognition rate and accuracy. Their results showed no difference in recognition rates between the left and right half faces but achieved higher accuracy for the left half face. They also reported no difference in accuracy rates between the full face and the half faces. However, the recognition rate for half faces was half that for the full faces.

Harguess and Aggarwal [5] evaluated the recognition rate of full faces and average half faces using eigenfaces and the nearest neighbor as a classifier. They reported a significantly higher recognition rate using the average half face for both men and women compared to the full face. Asiedu et al. [7] applied the PCA/SVD algorithm on full faces under varying facial expressions. They concluded that the algorithm was most consistent and efficient under varying expressions and achieved appreciable performance with an average recognition rate of 92.86%.

Avuglah et al. [8] also applied the FFT-PCA/SVD algorithm to face images under angular constraints and statistically evaluated the performance of the algorithm. They found that the algorithm perfectly recognizes head tilts of 24° or less and concluded that the algorithm has an appreciable performance, with an average recognition rate of 92.5%, in recognition of face images under angular constraints. The question of whether this algorithm performs well on partial and reconstructed frontal face images, however, has not been explored. This paper, therefore, intends to assess the performance of the FFT-PCA/SVD algorithm on partial and reconstructed frontal face images based on bilateral symmetry.

    2. Materials and Methods

2.1. Source of Data. Frontal face images from the Massachusetts Institute of Technology (MIT) (2003-2005) and Japanese Female Facial Expressions (JAFFE) databases were used to benchmark the face recognition algorithm. The train-image database contains twenty frontal face images. Ten of these images were 0° straight-pose images from the MIT (2003-2005) database, and ten were neutral-pose face images from JAFFE. The images captured into the train-image database are denoted as train images and are used to train the algorithm.

Twenty images reconstructed from the half-face images (created through vertical segmentation) of the train images were captured into the test-image database. The images captured into the test-image database are called the test images and are used for testing the recognition algorithm.

To keep the data uniform, captured images were digitized into gray-scale precision and resized into 200 × 200 dimensions, and the data types were changed into double precision for preprocessing. This made the images (matrices) conformable and enhanced easy computations. Figures 1 and 2 show subjects in the train-image database.
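As a rough illustration, the standardization described above could be carried out as in the sketch below; it assumes the images are read with Pillow and NumPy, and the function name and file path are purely illustrative, since the paper does not provide code.

```python
import numpy as np
from PIL import Image

def load_face(path):
    """Read an image, convert it to grayscale, resize it to 200 x 200,
    and cast it to double precision, as described above."""
    img = Image.open(path).convert("L").resize((200, 200))
    return np.asarray(img, dtype=np.float64)

# Hypothetical usage:
# face = load_face("train/subject01_straight.png")  # shape (200, 200), dtype float64
```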

2.2. Image Reconstruction. The left segmented half-face images were reconstructed using the following steps:

(1) Rotate the left segmented half-face image through 270°, and denote it as $N_{l1}$

    Figure 1: Subjects in the MIT (2003-2005) database.

    Figure 2: Subjects in the JAFFE database.


(2) Rotate the left segmented half-face image through 180°, and denote it as $N_{l2}$

(3) Concatenate $N_{l1}$ and $N_{l2}$ as

$$T_l = [\,N_{l1} \mid N_{l2}\,] \qquad (1)$$

The right segmented half-face images were reconstructed using the following steps (a code sketch of both reconstructions is given after equation (2)):

(1) Rotate the right segmented half-face image through 270°, and denote it as $N_{r1}$

(2) Rotate the right segmented half-face image through 180°, and denote it as $N_{r2}$

(3) Concatenate $N_{r2}$ and $N_{r1}$ as

$$T_r = [\,N_{r2} \mid N_{r1}\,] \qquad (2)$$
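The rotation-and-concatenation steps above amount to building a full face from one half and its mirror image. The sketch below is a simplified stand-in that uses an explicit horizontal mirror (NumPy's `fliplr`) rather than the rotation operations quoted above, under the bilateral-symmetry assumption; it is an interpretation, not the authors' exact implementation.

```python
import numpy as np

def reconstruct_from_left(face):
    """Keep the left half of a 200 x 200 face and mirror it to form T_l."""
    left = face[:, :face.shape[1] // 2]           # vertical segmentation: left half
    return np.hstack([left, np.fliplr(left)])     # [left | mirrored left], cf. equation (1)

def reconstruct_from_right(face):
    """Keep the right half of a 200 x 200 face and mirror it to form T_r."""
    right = face[:, face.shape[1] // 2:]          # vertical segmentation: right half
    return np.hstack([np.fliplr(right), right])   # [mirrored right | right], cf. equation (2)
```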

Figure 3 contains the original full images, the left and right half-images, and their reconstructed images used in the test-image database.

2.3. Research Design. Figure 4 shows a design of the recognition module.

2.4. Preprocessing. Preprocessing is an important phase of image processing in which the quality of the images is enhanced. The image acquisition process introduces various forms of noise, and preprocessing techniques are used to denoise the images, making them better conditioned for recognition.

2.4.1. Fast Fourier Transform. The Fast Fourier Transform (FFT) was adopted as a noise reduction mechanism. According to Glynn [9], the FFT algorithm reduces the computational burden to $O(N \log N)$ arithmetic operations.

Figure 3: Reconstructed images from left and right segmented half images (original image, left half image and its reconstruction, right half image and its reconstruction).

Figure 4: Research design. The module links the train-image database through preprocessing (fast Fourier transform and mean centering), feature extraction (PCA/SVD), and knowledge creation to a classifier that matches a test image to a recognized image.


The DFT of a column vector $a_{jk}$ is represented mathematically as
$$a^{*}_{jk} = \mathrm{DFT}\{a_{jk}\} = \sum_{r=0}^{p-1} a_{jk}\, e^{-i(2\pi s r/p)}, \qquad (3)$$
where $s = 0, 1, \dots, p-1$ and $j = 1, 2, \dots, n$; $a_{jk}$ is the $k$th column of the image matrix $A_j$ [8].

Due to the Gaussian nature of illumination variations, the Gaussian filter is adopted for filtering the face images after the discrete Fourier transformation. After filtering, the inverse discrete Fourier transformation (IDFT) is performed to reconstruct the images in their original form. The inverse discrete Fourier transformation (IDFT) is given by

$$a_{jk} = \mathrm{IDFT}\{a^{*}_{jk}\} = \frac{1}{p}\sum_{r=0}^{p-1} a^{*}_{jk}\, e^{\,i(2\pi s r/p)}, \qquad s = 0, 1, \dots, p-1,\; j = 1, 2, \dots, n. \qquad (4)$$

The real components of the inverse transformations are retained for the feature extraction stage, whereas the imaginary components are discarded as noise. Figure 5 shows the stages in FFT preprocessing of an image (Avuglah et al. [8]).
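A minimal sketch of this preprocessing cycle is given below: it applies the DFT column-wise as in equation (3), attenuates high frequencies with a Gaussian weight, and keeps only the real part of the inverse transform as in equation (4). The filter width `sigma` is an assumed parameter, since the paper does not specify the Gaussian filter settings.

```python
import numpy as np

def fft_denoise(image, sigma=30.0):
    """Column-wise DFT -> Gaussian low-pass filtering -> inverse DFT (real part only)."""
    p = image.shape[0]
    freqs = np.fft.fftfreq(p) * p                       # integer frequencies 0, 1, ..., -1
    gauss = np.exp(-(freqs ** 2) / (2.0 * sigma ** 2))  # Gaussian low-pass weights (assumed width)
    spectrum = np.fft.fft(image, axis=0)                # DFT of each column a_jk, cf. equation (3)
    filtered = spectrum * gauss[:, None]                # attenuate high-frequency (noise) components
    return np.real(np.fft.ifft(filtered, axis=0))       # IDFT, cf. equation (4); imaginary part discarded
```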

2.5. Feature Extraction. The face image space is very large, and its storage requires reducing the dimensions of the original images using feature extraction methods. This study adopted Principal Component Analysis (PCA), proposed by Kirby and Sirovich [10], for feature extraction. PCA extracts the most significant components, that is, those components which are more informative and less redundant, from the original data. According to Shlens [11], PCA computes the most meaningful basis to re-express a noisy, garbled dataset. The rationale behind this new basis is to filter out the noise and reveal hidden dynamics in the dataset.

2.6. Implementation of FFT-PCA/SVD Algorithm. As noted earlier, the images used were extracted from the Massachusetts Institute of Technology (MIT) (2003-2005) database and the Japanese Female Facial Expressions (JAFFE) database. The study considered subjects captured under 0° pose from the MIT database and neutral facial expressions from JAFFE. Twenty face images, ten each from the MIT database and JAFFE, were trained, and their reconstructed images from left and right segmented half images were used for testing the recognition algorithm. To keep the data uniform, captured images were digitized into gray-scale precision and resized into 200 × 200 dimensions, and the data types were changed into double precision for preprocessing.

The FFT-PCA/SVD algorithm was used to train the image database. Unique face features of the training set are extracted and stored in memory during the training phase.

Now, consider the sample $X = (X_1, X_2, \dots, X_n)$, whose elements are the vectorised forms of the individual images in the study. The mean centering preprocessing mechanism is performed by subtracting the mean image from the individual images under study. The mean is given by
$$\bar{a}_j = E(X_j) = \frac{1}{N}\sum_{i=1}^{N} X_{ji} = \frac{1}{N}\sum_{i=1}^{p}\sum_{k=1}^{p} a_{jik}, \qquad j = 1, 2, \dots, n, \qquad (5)$$

Figure 5: FFT preprocessing cycle (original image → discrete Fourier transform (DFT) → transformed image → Gaussian filtering → filtered image → inverse discrete Fourier transform (IDFT) → reconstructed image).


where $N = p \times p$ is the length (rows of image × columns of image) of the image data $X_j$.

According to Avuglah et al. [8], the variance-covariance matrix $C$ of the image space is given as
$$C = \frac{1}{n} W W^{T}, \qquad (6)$$
where the mean-centered matrix is $W = (w_1, w_2, \dots, w_n)$. The eigenvalues and their corresponding eigenvectors are extracted from the singular value decomposition (SVD) of the matrix, $C = U \Sigma V^{T}$. This decomposes the covariance matrix $C$ into two orthogonal matrices $U$ and $V$ and a diagonal matrix $\Sigma$:
$$z_j = \sum_{j=1}^{n} w_j u_j, \qquad (7)$$

where $u_j$ is the $j$th column vector of $U$. From the training set, the principal components are extracted as
$$\gamma_j = z_j^{T}\,(x_j - \bar{a}), \qquad (8)$$
and $\Gamma^{T} = [\gamma_1, \gamma_2, \dots, \gamma_n]$.

When a new face (test image) is passed through the recognition module, its unique features are extracted as
$$\gamma^{*}_j = z_j^{T}\,(x_r - \bar{a}), \qquad (9)$$
and $\Gamma^{*T}_r = [\gamma^{*}_1, \gamma^{*}_2, \dots, \gamma^{*}_n]$. The recognition distances (Euclidean distances) are computed as
$$\psi = \lVert \Gamma - \Gamma^{*}_r \rVert. \qquad (10)$$
The minimum Euclidean distance $d_{ji} = \min[\psi]$, $j = 1, 2, \dots, n$, $i = 1, 2$, is chosen as the recognition distance.
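The training and matching steps described by equations (5)-(10) could be sketched as follows. The sketch uses the standard small (n × n) covariance trick for the eigen-decomposition, so it should be read as one plausible realisation of the description above rather than the authors' exact implementation; all function and variable names are illustrative.

```python
import numpy as np

def train_fft_pca_svd(train_images):
    """train_images: list of preprocessed p x p arrays. Returns the mean image,
    the projection basis Z, and the training signatures Gamma (one column per image)."""
    X = np.stack([img.ravel() for img in train_images])  # n x N matrix of vectorised images
    mean = X.mean(axis=0)                                 # mean image, cf. equation (5)
    W = X - mean                                          # mean-centred images w_j
    C = (W @ W.T) / len(train_images)                     # n x n surrogate of equation (6)
    U, S, Vt = np.linalg.svd(C)                           # SVD, C = U Sigma V^T
    Z = W.T @ U                                           # basis vectors z_j, cf. equation (7)
    Gamma = Z.T @ W.T                                     # training signatures, cf. equation (8)
    return mean, Z, Gamma

def recognise(test_image, mean, Z, Gamma):
    """Project a test image and return the closest training index and its distance."""
    gamma_star = Z.T @ (test_image.ravel() - mean)        # test signature, cf. equation (9)
    dists = np.linalg.norm(Gamma - gamma_star[:, None], axis=0)  # distances, cf. equation (10)
    return int(np.argmin(dists)), float(dists.min())
```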

    3. Results and Discussion

Figures 6 and 7 present the recognition matches and distances of the left and right reconstructed face images. It can be seen in Figure 6 that there were two mismatches when the right reconstructed images are used as test images for recognition. Also, there was one mismatch when the left reconstructed images are used as test images for recognition. All recognitions on the MIT (2003-2005) database gave correct matches, as is evident from Figure 7.

Figure 6: Recognition of the reconstructed images (JAFFE). For each subject, the original image, the left and right reconstructed images, the recognition distances, and the recognition status (correct or wrong match) are shown.


3.1. Numerical Evaluations. The average recognition rate was the numerical assessment criterion used in this study. The average recognition rate, $R_{avg}$, of an algorithm is defined as
$$R_{avg} = \frac{\sum_{i=1}^{t_{run}} n_{icr}}{t_{run} \times n_{tot}} \times 100, \qquad (11)$$
where $t_{run}$ is the number of experimental runs, $n_{icr}$ is the number of correct recognitions in the $i$th run, and $n_{tot}$ is the total number of faces being tested in each run [7]. The average error rate, $E_{avg}$, is defined as $100 - R_{avg}$.

The total number of correct recognitions, $\sum_{i=1}^{10} n_{icr}$, for the study algorithm is 19 when the left reconstructed face images are used as test images. The total number of experimental runs is $t_{run} = 10$, and the total number of images in a single experimental run is $n_{tot} = 2$. Thus, using the left reconstructed face images as test images, the average recognition rate of the study algorithm (FFT-PCA/SVD) is
$$R_{avg} = \frac{19}{(10)(2)} \times 100 = 95\%. \qquad (12)$$
The average error rate is
$$E_{avg} = 100 - R_{avg} = 100 - 95 = 5\%. \qquad (13)$$
Using the right reconstructed half-face images as test images, the average recognition rate of the study algorithm (FFT-PCA/SVD) is
$$R_{avg} = \frac{18}{(10)(2)} \times 100 = 90\%, \qquad (14)$$
and the average error rate is
$$E_{avg} = 100 - R_{avg} = 100 - 90 = 10\%. \qquad (15)$$
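The arithmetic in equations (12)-(15) can be checked with a few lines; the counts below are taken directly from the results reported above.

```python
t_run, n_tot = 10, 2                       # 10 experimental runs, 2 faces tested per run
for label, n_correct in [("left", 19), ("right", 18)]:
    r_avg = n_correct / (t_run * n_tot) * 100   # equation (11)
    print(label, r_avg, 100 - r_avg)            # left 95.0 5.0 / right 90.0 10.0
```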

Figure 7: Recognition of the reconstructed images (MIT 2003-2005 database). For each subject, the original image, the left and right reconstructed images, the recognition distances, and the recognition status are shown; all recognitions are correct matches.

Figure 8: Normal Q-Q plot of the observed differences (expected normal quantiles against observed values).

3.2. Statistical Evaluation. The paired sample t-test is usually used to evaluate measurements of the same individuals/units recorded under different environmental conditions. The paired responses may then be analysed by computing their differences, thereby eliminating much of the influence of the extraneous unit-to-unit variation (Johnson and Wichern [12]). Let $d_{j1}$ denote the recognition distance recorded using the left reconstructed images as test images and $d_{j2}$ the recognition distance using the right reconstructed half-face images as test images for the $j$th individual; then the paired differences
$$D_j = d_{j1} - d_{j2}, \qquad j = 1, 2, \dots, n, \qquad (16)$$

should reflect the differential effects of the treatments. Given that the differences $D_j$, $j = 1, 2, \dots, n$, are independent observations from an $N(\delta, \sigma_d^2)$ distribution, the statistic
$$t = \frac{\bar{D} - \delta}{s_d/\sqrt{n}}, \qquad (17)$$
where
$$\bar{D} = \frac{1}{n}\sum_{j=1}^{n} D_j, \qquad s_d^2 = \frac{1}{n-1}\sum_{j=1}^{n} \left(D_j - \bar{D}\right)^2, \qquad (18)$$
has a $t$-distribution with $n-1$ degrees of freedom. Consequently, an $\alpha$-level test of the hypothesis $H_0\colon \delta = 0$ (the mean difference of recognition distances from left and right reconstructed images is zero) against $H_1\colon \delta \neq 0$ is conducted by comparing $|t|$ with $t_{n-1}(\alpha/2)$, the upper $100(\alpha/2)$ percentile of the $t$-distribution with $n-1$ degrees of freedom. A $100(1-\alpha)\%$ confidence interval for the mean difference in recognition distance, $\delta = E(d_{j1} - d_{j2})$, is constructed as
$$\bar{d} \pm t_{n-1}\!\left(\frac{\alpha}{2}\right)\frac{s_d}{\sqrt{n}}. \qquad (19)$$

This test is sensitive to the assumption that the paired differences come from a univariate normal population. The Shapiro-Wilk test on the observed differences gave a statistic of 0.980 with a p value of 0.933, so there is no evidence against the assumption that the observed differences are normally distributed. The Q-Q plot shown in Figure 8 supports this conclusion.

The paired sample correlation between the recognition distances for the left and right reconstructed face images is 0.858 with a p value of 0.000, indicating a strong positive linear relationship between the Euclidean distances for the left and right reconstructed face images; the p value shows that this relationship is significant. The results of the paired sample t-test are shown in Table 1.

From Table 1, the average difference between the recognition distances for the right and left reconstructed face images (RRI and LRI, respectively) is 42.5865. The test statistic value from the paired sample test is 0.928, corresponding to a p value of 0.365. It can be inferred from the p value that there is no significant difference between the average recognition distances for the left and right reconstructed face images. This means that the reconstructed images have the same average recognition distances at the 5% level of significance when used as test images.
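For completeness, a minimal sketch of how this statistical evaluation could be reproduced with SciPy is shown below. It assumes the paired recognition distances are available as two arrays; the function and argument names are illustrative, and no such code is given in the paper.

```python
import numpy as np
from scipy import stats

def evaluate_paired_distances(left_dist, right_dist, alpha=0.05):
    """Normality check, paired t-test, and confidence interval for RRI - LRI distances."""
    diff = np.asarray(right_dist, float) - np.asarray(left_dist, float)  # differences (RRI - LRI, as in Table 1)
    shapiro_stat, shapiro_p = stats.shapiro(diff)                        # Shapiro-Wilk normality test
    t_stat, t_p = stats.ttest_rel(right_dist, left_dist)                 # paired sample t-test, cf. equation (17)
    half_width = stats.t.ppf(1 - alpha / 2, diff.size - 1) * diff.std(ddof=1) / np.sqrt(diff.size)
    ci = (diff.mean() - half_width, diff.mean() + half_width)            # 100(1 - alpha)% CI, cf. equation (19)
    return {"shapiro": (shapiro_stat, shapiro_p), "t": (t_stat, t_p),
            "mean_diff": diff.mean(), "ci": ci}
```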

    4. Conclusion and Recommendation

The average recognition rates for the FFT-PCA/SVD algorithm were 95% and 90% when the left and right reconstructed face images are used as test images, respectively. It could be inferred from these results that the recognition algorithm has relatively higher performance when left reconstructed images are used as test images. The statistical assessment, however, revealed that there is no significant difference between the average recognition distances for the left and right reconstructed images.

It can therefore be concluded that the average recognition distance for left reconstructed images is not significantly different from the average recognition distance for right reconstructed images when FFT-PCA/SVD is used for recognition. These results are consistent with the findings of Singh and Nandi [6]. It is worthy of note that, apart from the numerical evaluations mostly adopted in the literature, this study also used a more data-driven, statistical approach to assess the performance of the recognition algorithm on the left and right reconstructed face image databases. The performance of FFT-PCA/SVD on both databases is viable. FFT-PCA/SVD is therefore recommended for recognition of left and right reconstructed face images.

    Data Availability

The image database supporting this research article is from previously reported studies and datasets, which have been cited. The processed data are available from the corresponding author upon request.

    Conflicts of Interest

    The authors declare that there is no conflict of interest.

    References

[1] M. Turk and A. Pentland, "Face recognition using eigenfaces," in Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586-587, 1991.

Table 1: Paired sample t-test.

RRI − LRI: Mean = 42.58565, Std. Dev. = 205.2544, SE = 45.89628, 95% CI = (−53.47637, 138.64767), t = 0.928, df = 19, Sig. = 0.365


[2] X. Wei, Unconstrained Face Recognition with Occlusions, Ph.D. thesis, University of Warwick, 2014, http://go.warwick.ac.uk/wrap/66778.

[3] H. Jia and A. M. Martinez, "Support vector machines in face recognition with occlusions," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 136–141, Miami, FL, USA, 2009.

[4] D. O'Mara and R. Owens, "Measuring bilateral symmetry in digital images," in Proceedings of Digital Processing Applications (TENCON'96), pp. 151–156, Perth, WA, Australia, 1996.

[5] J. Harguess and J. K. Aggarwal, "Is there a connection between face symmetry and face recognition?," CVPR 2011 Workshops, pp. 66–73, 2011.

[6] A. K. Singh and G. C. Nandi, "Face recognition using facial symmetry," in Proceedings of the Second International Conference on Computational Science, Engineering and Information Technology, pp. 550–554, 2012.

[7] L. Asiedu, A. Adebanji, F. Oduro, and F. O. Mettle, "Statistical evaluation of face recognition techniques under variable environmental constraints," International Journal of Statistics and Probability, vol. 4, pp. 93–111, 2015.

[8] R. K. Avuglah, L. Asiedu, E. N. Nortey, and F. N. Yirenkyi, "Recognition of face images under varying head-poses using FFT-PCA/SVD algorithm," Far East Journal of Mathematical Sciences (FJMS), vol. 103, no. 11, pp. 1769–1788, 2018.

[9] E. Glynn, Fourier Analysis and Image Processing, Scientific Programmer, Bioinformatics, Stowers Institute for Medical Research, 2007.

[10] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103–108, 1990.

[11] J. Shlens, "A tutorial on principal component analysis: derivation, discussion and singular value decomposition," March 25, 2003, 16 pages.

[12] R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis, vol. 5, Prentice Hall, Upper Saddle River, NJ, 2002.
