2DR1-PCA and 2DL1-PCA: two variant 2DPCA algorithms based on non-L2 norms

Xing Liu a,b, Xiao-Jun Wu a,b,*, Zi-Qi Li a,b

a Jiangnan University, School of Internet of Things Engineering, Lihu Avenue, Wuxi, China
b Jiangsu Provincial Laboratory of Pattern Recognition and Computational Intelligence, Lihu Avenue, Wuxi, China

Abstract. In this paper, two novel methods, 2DR1-PCA and 2DL1-PCA, are proposed for face recognition. In contrast to the traditional 2DPCA algorithm, 2DR1-PCA and 2DL1-PCA are based on the R1 norm and the L1 norm, respectively. The advantage of the proposed methods is that they are less sensitive to outliers. The proposed methods are tested on the ORL, Yale and XM2VTS databases, and the performance of the related methods is compared experimentally.

Keywords: Face recognition; 2DR1-PCA; 2DL1-PCA; R1 norm; L1 norm.

*Xiao-Jun Wu, wu [email protected]

1 Introduction

Feature extraction by dimensionality reduction is a critical step in pattern recognition. Principal component analysis (PCA) is a classic method for dimensionality reduction in the field of face recognition, proposed by Turk and Pentland in Ref. 1. Yang et al.2 presented two-dimensional PCA (2DPCA) to improve the efficiency of feature extraction by using image matrices directly. Two-dimensional weighted PCA (2DWPCA) was developed in Ref. 3 to improve the performance of 2DPCA. The complete 2DPCA method was presented in Ref. 4 to reduce the number of feature coefficients needed for face recognition compared to 2DPCA. In kernel PCA (KPCA),5 samples are mapped into a high-dimensional, linearly separable kernel space, and PCA is then employed for feature extraction. Chen et al.6 presented a pattern classification method based on PCA and KPCA, in which within-class auxiliary training samples are used to improve performance. Liu et al.7 proposed the 2DECA method, in which features are selected in the 2DPCA subspace based on the Rényi entropy contribution instead of the cumulative variance contribution. Moreover, several approaches based on linear discriminant analysis (LDA) have been explored.8–10

In contrast to the above L2-norm-based methods, Kwak11 developed L1-PCA using the L1 norm, and Ding et al.12 proposed a rotational invariant L1-norm PCA (R1-PCA). These non-L2-norm-based algorithms are less sensitive to the presence of outliers.

In this paper we propose the 2DR1-PCA and 2DL1-PCA algorithms for face recognition, which combine the advantages of the non-L2-norm methods with those of 2DPCA. Instead of the image vectors used in R1-PCA and L1-PCA, 2DR1-PCA and 2DL1-PCA use image matrices directly for feature extraction. Compared to the 1-D methods, the corresponding 2-D methods have two main advantages: higher efficiency and higher recognition accuracy. In short, we extend R1-PCA and L1-PCA to their two-dimensional counterparts, 2DR1-PCA and 2DL1-PCA.

This paper is organized as follows. Section 2 gives a brief introduction to the R1-PCA and L1-PCA algorithms. Section 3 presents the proposed 2DR1-PCA and 2DL1-PCA algorithms. Section 4 compares the mentioned methods experimentally. Finally, conclusions are drawn in Section 5.

2 Fundamentals of subspace methods based on non-L2 norms

In this paper, we use $X = [x_1, x_2, \ldots, x_n]$ to denote the training set of the 1-D methods, where each $x_i$ is a $d$-dimensional vector.

2.1 R1-PCA

The R1-PCA algorithm tries to find a subspace by minimizing the following error function

$$E_{R_1} = \| X - WV \|_{R_1} \tag{1}$$

where $W$ is the projection matrix, $V$ is defined as $V = W^T X$, and $\|\cdot\|_{R_1}$ denotes the $R_1$ norm, which is defined as

$$\| X \|_{R_1} = \sum_{i=1}^{n} \left( \sum_{j=1}^{d} x_{ji}^2 \right)^{\frac{1}{2}} \tag{2}$$
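To make the definition concrete, here is a minimal NumPy sketch of this norm, i.e., the sum of the Euclidean lengths of the sample columns. The function name `r1_norm` is ours, and the samples are assumed to be stored as the columns of a $d \times n$ array:

```python
import numpy as np

def r1_norm(X):
    """R1 norm of a d x n matrix, Eq. (2): the sum over samples
    (columns) of the Euclidean norm of each column."""
    return np.sum(np.sqrt(np.sum(X ** 2, axis=0)))
```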

In the R1-PCA algorithm, the training set $X$ should be centered, i.e., $x_i = x_i - \bar{x}$, where $\bar{x}$ is the mean vector of $X$, given by $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$.

The principal eigenvectors of the R1-covariance matrix are the solution to the R1-PCA problem. The weighted version of the R1-covariance matrix is defined as

$$C_r = \sum_i \omega_i x_i x_i^T, \qquad \omega_i^{(L_1)} = \frac{1}{\| x_i - W W^T x_i \|} \tag{3}$$

The weight can be defined in several ways. For the Cauchy robust function, the weight is

$$\omega_i^{(C)} = \left( 1 + \| x_i - W W^T x_i \|^2 / c^2 \right)^{-1} \tag{4}$$

The basic idea of R1-PCA is to start with an initial guess $W^{(0)}$ and then iterate $W$ with the following equations until convergence:

$$W^{(t+\frac{1}{2})} = C_r\!\left(W^{(t)}\right) W^{(t)}, \qquad W^{(t+1)} = \operatorname{orthonormalize}\!\left(W^{(t+\frac{1}{2})}\right) \tag{5}$$

The concrete algorithm is given in Algorithm 1.

Algorithm 1 The R1-PCA algorithm
Input: the training set X = [x_1, x_2, ..., x_n] and the subspace dimension k. The training set X is centered first.
1: Initialization: compute standard PCA to obtain W_0; set W = W_0.
2: Calculate the Cauchy weights:
   compute the residues s_i = (x_i^T x_i − x_i^T W W^T x_i)^{1/2},
   compute c = median(s_i),
   compute ω_i = (1 + ‖x_i − W W^T x_i‖^2 / c^2)^{−1}.
3: Calculate the covariance matrix: C_r = Σ_i ω_i x_i x_i^T.
4: Update W:
   W^{(t+1/2)} = C_r(W^{(t)}) W^{(t)},
   W^{(t+1)} = orthonormalize(W^{(t+1/2)}).
5: Convergence check: if W^{(t+1)} ≠ W^{(t)}, go to Step 4; else go to Step 6.
6: Recover the uncentered data: ∀i, x_i = x_i + x̄.
7: Projection: V = W^T X.
Output: W and V.
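As an illustration, the following is a minimal NumPy sketch of Algorithm 1 under a few assumptions not fixed by the paper: the samples are the columns of a pre-centered d × n array, orthonormalization is realized by a QR decomposition (one common choice), and convergence is tested on the subspace up to column signs. All names and defaults are ours.

```python
import numpy as np

def r1_pca(X, k, n_iter=120):
    """R1-PCA iteration (Algorithm 1 sketch).
    X: d x n matrix of centered samples (columns); k: subspace dimension."""
    # Step 1: initialize W with the top-k standard PCA directions.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    W = U[:, :k]
    for _ in range(n_iter):
        # Step 2: Cauchy weights from the reconstruction residues, Eq. (4).
        R = X - W @ (W.T @ X)           # residuals x_i - W W^T x_i
        s = np.linalg.norm(R, axis=0)   # residues s_i
        c = np.median(s)
        omega = 1.0 / (1.0 + (s / c) ** 2)
        # Step 3: weighted covariance C_r = sum_i omega_i x_i x_i^T, Eq. (3).
        Cr = (X * omega) @ X.T
        # Step 4: multiply and re-orthonormalize (QR is our choice here).
        W_new, _ = np.linalg.qr(Cr @ W)
        # Step 5: stop once the spanned subspace no longer changes
        # (compared up to column signs).
        if np.allclose(np.abs(W_new.T @ W), np.eye(k), atol=1e-8):
            W = W_new
            break
        W = W_new
    return W
```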

2.2 L1-PCA

The L1 norm is used in L1-PCA for minimizing the following error function

$$E_{L_1} = \| X - WV \|_{L_1} \tag{6}$$

where $W$ is the projection matrix, $V$ is defined as $V = W^T X$, and $\|\cdot\|_{L_1}$ denotes the $L_1$ norm, which is defined as

$$\| X \|_{L_1} = \sum_{i=1}^{d} \sum_{j=1}^{n} |X_{ij}| \tag{7}$$

In order to obtain a subspace that is robust to outliers and invariant to rotations, the L1 norm is adopted to maximize the following objective

$$W^{*} = \arg\max_{W} \left\| W^T X \right\|_{L_1}, \quad \text{subject to } W^T W = I \tag{8}$$

The multidimensional version is difficult to solve directly. Instead of the projection matrix $W$, a column vector $w$ is used in Eq. (8), which yields

$$w^{*} = \arg\max_{w} \left\| w^T X \right\|_{L_1}, \quad \text{subject to } \| w \|_2 = 1 \tag{9}$$

Then a greedy search method is used for solving (9), which is summarized in Algorithm 2.

Algorithm 2 The L1-PCA algorithm
Input: the training set X = [x_1, x_2, ..., x_n].
1: Initialization: initialize w(0) with random numbers; set w(0) = w(0)/‖w(0)‖_2 and t = 0.
2: Polarity check: ∀i ∈ {1, 2, ..., n}, if w^T(t) x_i < 0, set p_i(t) = −1; otherwise p_i(t) = 1.
3: Flipping and maximization: set t = t + 1, w(t) = Σ_{i=1}^{n} p_i(t−1) x_i, and w(t) = w(t)/‖w(t)‖_2.
4: Convergence check:
   If w(t) ≠ w(t−1), go to Step 2.
   Else if there exists an i such that w^T(t) x_i = 0, set w(t) = (w(t) + Δw)/‖w(t) + Δw‖_2, where Δw is a small nonzero random vector, and go to Step 2.
   Otherwise, set w* = w(t) and stop.
Output: the projection vector w*.

The above algorithm extracts a single best feature. In order to obtain a k-dimensional projection matrix instead of a single vector, the following greedy deflation procedure is used:

w_0 = 0, {x_i^0 = x_i}_{i=1}^{n}
For j = 1 to k
    ∀i ∈ {1, 2, ..., n}: x_i^j = x_i^{j−1} − w_{j−1} (w_{j−1}^T x_i^{j−1})
    Apply the L1-PCA procedure to X^j = [x_1^j, ..., x_n^j] to find w_j
End
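A compact NumPy sketch of Algorithm 2 together with this greedy deflation loop might look as follows; the handling of the degenerate case w^T(t) x_i = 0 is simplified to a small random perturbation, as in the pseudocode, and the function names are ours:

```python
import numpy as np

def l1_pca_component(X, max_iter=200, rng=None):
    """One L1-PCA direction via the greedy search of Algorithm 2.
    X: d x n matrix whose columns are the samples."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    p_prev = None
    for _ in range(max_iter):
        # Step 2, polarity check: sign of each projection w^T x_i.
        p = np.where(w @ X < 0, -1.0, 1.0)
        # Step 3, flipping and maximization: w proportional to sum_i p_i x_i.
        w = X @ p
        w /= np.linalg.norm(w)
        if p_prev is not None and np.array_equal(p, p_prev):
            # Step 4: converged; if some projection is exactly zero,
            # perturb slightly and keep searching, as in the pseudocode.
            if np.any(np.isclose(w @ X, 0.0)):
                w += 1e-6 * rng.standard_normal(w.shape)
                w /= np.linalg.norm(w)
                p_prev = None
                continue
            break
        p_prev = p
    return w

def l1_pca(X, k):
    """k L1-PCA directions via the greedy deflation loop."""
    X = np.asarray(X, dtype=float).copy()
    cols = []
    for _ in range(k):
        w = l1_pca_component(X)
        cols.append(w)
        # Deflation: remove from every sample its component along w.
        X -= np.outer(w, w @ X)
    return np.column_stack(cols)
```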

3 2DR1-PCA and 2DL1-PCA algorithms

In the 2-D methods, $F = [F_1, F_2, \ldots, F_n]$ is used to denote the training set, where each $F_i$ is an $r \times n'$ matrix.

3.1 2DR1-PCA

We now propose the 2DR1-PCA algorithm, in which the projection matrix $W$ is iterated from an initial matrix $W^{(0)}$ until convergence.

First, the training set $F$ is centered, i.e., $F_i = F_i - \bar{F}$, where $\bar{F}$ is the mean matrix of $F$, defined as $\bar{F} = \frac{1}{n} \sum_{i=1}^{n} F_i$.

The R1 covariance matrix is defined as

$$C_r = \sum_{i=1}^{n} \omega_i F_i F_i^T \tag{10}$$

The Cauchy weight is defined as

$$\omega_i^{(C)} = \left( 1 + \| F_i - W W^T F_i \|_F^2 / c^2 \right)^{-1}, \qquad c = \operatorname{median}(s_i) \tag{11}$$

where the residue $s_i$ is defined as

$$s_i = \| F_i - W W^T F_i \|_F \tag{12}$$

After obtaining the eigenvectors of $C_r$, the iterative formula is analogous to that used in the R1-PCA algorithm:

$$W^{(t+\frac{1}{2})} = C_r\!\left(W^{(t)}\right) W^{(t)}, \qquad W^{(t+1)} = \operatorname{orthonormalize}\!\left(W^{(t+\frac{1}{2})}\right) \tag{13}$$

The 2DR1-PCA algorithm is outlined in Algorithm 3.

Algorithm 3 The 2DR1-PCA algorithm
Input: the training set F = [F_1, F_2, ..., F_n] and the subspace dimension k. The training set F is centered first.
1: Initialization: compute standard 2DPCA to obtain W_0; set W = W_0.
2: Calculate the Cauchy weights:
   compute the residues s_i = ‖F_i − W W^T F_i‖_F,
   compute c = median(s_i),
   compute ω_i^{(C)} = (1 + ‖F_i − W W^T F_i‖_F^2 / c^2)^{−1}.
3: Calculate the covariance matrix: C_r = Σ_{i=1}^{n} ω_i F_i F_i^T.
4: Update W:
   W^{(t+1/2)} = C_r(W^{(t)}) W^{(t)},
   W^{(t+1)} = orthonormalize(W^{(t+1/2)}).
5: Convergence check: if W^{(t+1)} ≠ W^{(t)}, go to Step 4; else go to Step 6.
6: Recover the uncentered data: ∀i, F_i = F_i + F̄.
7: Projection: V = W^T F.
Output: W and V.
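A sketch of the 2-D iteration, analogous to the R1-PCA sketch above but with matrix samples and Frobenius residues, is given below; the 2DPCA initialization is realized here through the eigenvectors of Σ_i F_i F_i^T, and QR is again our assumed choice for the orthonormalization step:

```python
import numpy as np

def two_d_r1_pca(F, k, n_iter=120):
    """2DR1-PCA sketch (Algorithm 3). F: array of shape (n, r, c)
    holding the centered image matrices F_i; k: subspace dimension."""
    # Step 1: 2DPCA initialization, i.e. top-k eigenvectors of sum_i F_i F_i^T.
    C = np.einsum('nij,nkj->ik', F, F)
    _, vecs = np.linalg.eigh(C)
    W = vecs[:, -k:]
    for _ in range(n_iter):
        # Step 2: Cauchy weights from Frobenius residues, Eqs. (11)-(12).
        R = F - (W @ W.T) @ F                # residual matrices F_i - W W^T F_i
        s = np.linalg.norm(R, axis=(1, 2))   # residues s_i
        c = np.median(s)
        omega = 1.0 / (1.0 + (s / c) ** 2)
        # Step 3: weighted covariance C_r = sum_i omega_i F_i F_i^T, Eq. (10).
        Cr = np.einsum('n,nij,nkj->ik', omega, F, F)
        # Steps 4-5: update, re-orthonormalize (QR), and test convergence
        # on the subspace up to column signs.
        W_new, _ = np.linalg.qr(Cr @ W)
        if np.allclose(np.abs(W_new.T @ W), np.eye(k), atol=1e-8):
            W = W_new
            break
        W = W_new
    return W
```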

3.2 2DL1-PCA

Compared to L1-PCA, in the two-dimensional case we want to find a column vector $w$ that solves the following problem

$$w^{*} = \arg\max_{w} \left\| w^T F \right\|_{L_1} = \arg\max_{w} \sum_{i=1}^{n} \left\| w^T F_i \right\|_{L_1}, \quad \text{subject to } \| w \|_2 = 1 \tag{14}$$

In fact, each $w^T F_i$ is a row vector, and its entry of maximum absolute value contributes most to its L1 norm. Let $q_i$ denote the column index of the maximum absolute value in $w^T F_i$; then $w^*$ can be updated from the $q_i$-th columns of the matrices $F_i$. The 2DL1-PCA algorithm is given in Algorithm 4.

Algorithm 4 The 2DL1-PCA algorithm
Input: the training set F = [F_1, F_2, ..., F_n].
1: Initialization: pick any w(0); set w(0) = w(0)/‖w(0)‖_2 and t = 0.
2: Polarity check: for all i ∈ {1, ..., n}, compute v = w^T(t) F_i and [m_v, m_i] = max(abs(v)); set q_i(t) = m_i; if v(m_i) < 0, set p_i(t) = −1, else p_i(t) = 1.
3: Flipping and maximization: set t = t + 1 and w(t) = Σ_{i=1}^{n} p_i(t−1) F_i(:, q_i(t−1)); set w(t) = w(t)/‖w(t)‖_2.
4: Convergence check:
   If w(t) ≠ w(t−1), go to Step 2.
   Else if there exists an i such that w^T(t) F_i(:, q_i(t)) = 0, set w(t) = (w(t) + Δw)/‖w(t) + Δw‖_2, where Δw is a small nonzero random vector, and go to Step 2.
   Otherwise, set w* = w(t) and stop.
Output: the projection vector w*.

Then we can obtain a k-dimensional projection matrix from the following deflation procedure:

w_0 = 0, {F_i^0 = F_i}_{i=1}^{n}
For j = 1 to k
    ∀i ∈ {1, 2, ..., n}: F_i^j = F_i^{j−1} − w_{j−1} (w_{j−1}^T F_i^{j−1})
    Apply the 2DL1-PCA procedure (Algorithm 4) to F^j = [F_1^j, ..., F_n^j] to find w_j
End
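The distinctive step of Algorithm 4, selecting for each F_i the column of w^T F_i with the largest magnitude, can be sketched in NumPy as follows; the tie-breaking perturbation of Step 4 is omitted for brevity, and all names are ours:

```python
import numpy as np

def two_d_l1_pca_component(F, max_iter=200, rng=None):
    """One 2DL1-PCA direction (Algorithm 4 sketch).
    F: array of shape (n, r, c) of image matrices."""
    rng = np.random.default_rng() if rng is None else rng
    n = F.shape[0]
    w = rng.standard_normal(F.shape[1])
    w /= np.linalg.norm(w)
    state_prev = None
    for _ in range(max_iter):
        # Polarity check: for each F_i, find the entry of w^T F_i with
        # the largest magnitude; record its column q_i and sign p_i.
        V = w @ F                            # v_i = w^T F_i, shape (n, c)
        q = np.argmax(np.abs(V), axis=1)     # column indices q_i
        p = np.where(V[np.arange(n), q] < 0, -1.0, 1.0)
        # Flipping and maximization: w = sum_i p_i F_i[:, q_i].
        cols = F[np.arange(n), :, q]         # shape (n, r)
        w = (p[:, None] * cols).sum(axis=0)
        w /= np.linalg.norm(w)
        # Convergence: stop when neither signs nor columns change.
        # (The tie-breaking perturbation of Step 4 is omitted here.)
        state = (p.tobytes(), q.tobytes())
        if state == state_prev:
            break
        state_prev = state
    return w

def two_d_l1_pca(F, k):
    """k directions via the greedy deflation procedure above."""
    F = np.asarray(F, dtype=float).copy()
    cols = []
    for _ in range(k):
        w = two_d_l1_pca_component(F)
        cols.append(w)
        # Deflation: F_i <- F_i - w (w^T F_i).
        F -= np.einsum('j,nc->njc', w, w @ F)
    return np.column_stack(cols)
```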

4 Experimental results and analysis

Three databases, ORL, Yale and XM2VTS, are used to test the methods mentioned above. The recognition accuracy and the running time of feature extraction are recorded.

The ORL database contains face images of 40 different people, with 10 images per person at a resolution of 92×112. The ORL images contain variations in expression (smiling or not) and facial details (wearing glasses or not). In the following experiments, 5 images per person are selected as training samples and the rest are used as test samples.

The Yale database is provided by Yale University. It contains face images of 15 different people, with 11 images per person at a resolution of 160×121. In the following experiments, 6 images per person are selected as training samples and the rest are used as test samples.

The XM2VTS13 database offers synchronized video and speech data as well as image sequences allowing multiple views of the face. It contains frontal face images of 295 subjects taken at one-month intervals over a period of several months. The resolution of the XM2VTS images is 55×51. In the following experiments, 4 images per subject are selected as training samples and the rest are used as test samples.
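The paper does not state which classifier produces the recognition accuracies reported below. A plausible protocol for the 2-D methods, sketched here under our assumption of a nearest-neighbor classifier on the projected feature matrices, is:

```python
import numpy as np

def evaluate(train_imgs, train_labels, test_imgs, test_labels, extract):
    """Hypothetical evaluation loop for the 2-D methods: learn W on the
    training images, project everything, and classify each test sample
    by its nearest training neighbor (our assumption; the paper does
    not specify the classifier)."""
    W = extract(train_imgs)          # e.g. lambda F: two_d_r1_pca(F, k=8)
    def project(imgs):
        return np.stack([W.T @ f for f in imgs])
    train_feat = project(train_imgs)
    test_feat = project(test_imgs)
    correct = 0
    for feat, label in zip(test_feat, test_labels):
        # Frobenius distance between projected feature matrices.
        d = np.linalg.norm(train_feat - feat, axis=(1, 2))
        correct += int(train_labels[int(np.argmin(d))] == label)
    return correct / len(test_labels)
```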

4.1 R1-PCA and 2DR1-PCA

The experimental results of R1-PCA and 2DR1-PCA are shown in Table 1; the number of iterations of R1-PCA and 2DR1-PCA is 120.

Table 1 Experimental results of R1-PCA and 2DR1-PCA

                          ORL      Yale     XM2VTS
Recognition accuracy
    PCA                   0.90     0.77     0.71
    R1-PCA                0.88     0.77     0.71
    2DR1-PCA              0.90     0.80     0.78
Running time
    PCA                   1.37     0.36     17.27
    R1-PCA                914.21   411.06   1409.30
    2DR1-PCA              403.90   372.76   619.78

The initial projection matrix $W^{(0)}$ is obtained by PCA (2DPCA) at the beginning of R1-PCA (2DR1-PCA). The final projection matrix $W$ is then obtained by iterating from $W^{(0)}$. Because of this iteration, the computational cost is high. Meanwhile, the methods achieve nearly the same recognition accuracy.

For the R1-PCA algorithm tested on the ORL database, the convergence process is shown in Fig. 1(a), in which the y-axis denotes the norm of the projection matrix and the x-axis denotes the number of iterations. The norm of the projection matrix is used to observe the convergence process. The projection matrix W converges only after at least 100 iterations. In comparison, 2DR1-PCA needs fewer than 30 iterations to obtain a convergent projection matrix, as shown in Fig. 1(b). Using image matrices directly in 2DR1-PCA leads to faster convergence.

Fig 1 The convergence illustration of iterating 120 times on the ORL database. (a) R1-PCA. (b) 2DR1-PCA.

The convergence behavior on the Yale database is shown in Fig. 2, where the convergence speed of R1-PCA is similar to that of 2DR1-PCA. On the XM2VTS database, the convergence speed of 2DR1-PCA is much faster than that of R1-PCA, as shown in Fig. 3. In other words, the efficiency of 2DR1-PCA is higher than that of R1-PCA.

Fig 2 The convergence illustration of iterating 120 times on the Yale database. (a) R1-PCA. (b) 2DR1-PCA.

Fig 3 The convergence illustration of iterating 120 times on the XM2VTS database. (a) R1-PCA. (b) 2DR1-PCA.

4.2 L1-PCA and 2DL1-PCA

The experimental results of L1-PCA and 2DL1-PCA are shown in Table 2.

Table 2 Experimental results of L1-PCA and 2DL1-PCA

                          ORL      Yale     XM2VTS
Recognition accuracy
    PCA                   0.90     0.77     0.71
    L1-PCA                0.90     0.76     0.71
    2DL1-PCA              0.91     0.80     0.76
Running time
    PCA                   1.37     0.36     17.27
    L1-PCA                15.96    5.15     83.52
    2DL1-PCA              3.52     3.62     40.43

From Table 2 we can see that the performance of 2DL1-PCA is better than that of L1-PCA and PCA. In 2DL1-PCA, image matrices are used directly for feature extraction, and fewer features are extracted by 2DL1-PCA than by L1-PCA.

We conduct another experiment on the ORL database, in which different numbers of features are extracted by PCA, L1-PCA and 2DL1-PCA, respectively, and then used for face recognition. The results are shown in Fig. 4, from which we can see that 2DL1-PCA achieves a higher recognition accuracy with fewer features.

Fig 4 Recognition accuracy versus the number of features on the ORL database.

5 Conclusions

In this paper we proposed 2DR1-PCA and 2DL1-PCA for face recognition. We extended R1-PCA and L1-PCA to their 2-D case so that image matrices can be used directly for feature extraction. Compared to L2-norm-based methods, these non-L2-norm-based methods are less sensitive to outliers. We analyzed the performance of 2DR1-PCA and 2DL1-PCA against the R1-PCA and L1-PCA algorithms experimentally. The results show that the performance of 2DR1-PCA and 2DL1-PCA is better than that of R1-PCA and L1-PCA, respectively.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61672265 and U1836218) and the 111 Project of the Ministry of Education of China (Grant No. B12018).

References

1 M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience 3(1), 71–86 (1991).

2 J. Yang, D. Zhang, A. F. Frangi, et al., "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence 26(1), 131–137 (2004).

3 V. D. M. Nhat and S. Lee, "Two-dimensional weighted PCA algorithm for face recognition," in 2005 International Symposium on Computational Intelligence in Robotics and Automation, 219–223, IEEE (2005).

4 A. Xu, X. Jin, Y. Jiang, et al., "Complete two-dimensional PCA for face recognition," in 18th International Conference on Pattern Recognition (ICPR'06), 3, 481–484, IEEE (2006).

5 M.-H. Yang, N. Ahuja, and D. Kriegman, "Face recognition using kernel eigenfaces," in Proceedings 2000 International Conference on Image Processing (Cat. No. 00CH37101), 1, 37–40, IEEE (2000).

6 S. Chen, X. Wu, and H. Yin, "KPCA method based on within-class auxiliary training samples and its application to pattern classification," Pattern Analysis and Applications 20(3), 749–767 (2017).

7 X. Liu and X.-J. Wu, "ECA and 2DECA: entropy contribution based methods for face recognition inspired by KECA," in 2011 International Conference of Soft Computing and Pattern Recognition (SoCPaR), 544–549, IEEE (2011).

8 W. Xiao-Jun, J. Kittler, Y. Jing-Yu, et al., "A new direct LDA (D-LDA) algorithm for feature extraction in face recognition," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 4, 545–548, IEEE (2004).

9 Y.-J. Zheng, J.-Y. Yang, J. Yang, et al., "Nearest neighbour line nonparametric discriminant analysis for feature extraction," Electronics Letters 42(12), 679–680 (2006).

10 Y.-J. Zheng, J. Yang, J.-Y. Yang, et al., "A reformative kernel Fisher discriminant algorithm and its application to face recognition," Neurocomputing 69(13-15), 1806–1810 (2006).

11 N. Kwak, "Principal component analysis based on L1-norm maximization," IEEE Transactions on Pattern Analysis and Machine Intelligence 30(9), 1672–1680 (2008).

12 C. Ding, D. Zhou, X. He, et al., "R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization," in Proceedings of the 23rd International Conference on Machine Learning, 281–288, ACM (2006).

13 K. Messer, J. Matas, J. Kittler, et al., "XM2VTSDB: the extended M2VTS database," in Second International Conference on Audio and Video-based Biometric Person Authentication, 964, 965–966 (1999).

List of Figures

1 The convergence illustration of iterating 120 times on the ORL database. (a) R1-PCA. (b) 2DR1-PCA.
2 The convergence illustration of iterating 120 times on the Yale database. (a) R1-PCA. (b) 2DR1-PCA.
3 The convergence illustration of iterating 120 times on the XM2VTS database. (a) R1-PCA. (b) 2DR1-PCA.
4 Recognition accuracy versus the number of features on the ORL database.

List of Tables

1 Experimental results of R1-PCA and 2DR1-PCA
2 Experimental results of L1-PCA and 2DL1-PCA