
Computers and Electrical Engineering xxx (2013) xxx–xxx

Contents lists available at SciVerse ScienceDirect

Computers and Electrical Engineering

journal homepage: www.elsevier.com/locate/compeleceng

A histogram-based dominant wavelet domain feature selection algorithm for palm-print recognition

0045-7906/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.compeleceng.2013.01.006

Reviews processed and approved for publication by Editor-in-Chief Dr. Manu Malek.
⇑ Corresponding author. Tel.: +880 1677103153.

E-mail addresses: [email protected] (H. Imtiaz), [email protected] (S. Anowarul Fattah).


Hafiz Imtiaz ⇑, Shaikh Anowarul Fattah
Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh

Article info

Article history:
Received 21 July 2012
Received in revised form 15 January 2013
Accepted 15 January 2013
Available online xxxx

Abstract

A feature extraction algorithm for palm-print recognition based on two dimensional discrete wavelet transform is proposed in this paper, which efficiently exploits the local spatial variations in a palm-print image. The palm-image is segmented into several spatial modules and a palm-print recognition scheme is developed, which extracts histogram-based dominant wavelet features from each of these local modules. The effect of modularization in terms of the entropy content of the palm-print images has been analyzed. The selection of dominant features for the purpose of recognition not only drastically reduces the feature dimension but also captures precisely the detail variations within the palm-print image. It is shown that the modularization of the palm-print image enhances the discriminating capabilities of the proposed features and thereby results in high within-class compactness and between-class separability of the extracted features. Different types of Daubechies wavelets (in terms of use of number of vanishing moments, i.e., db1–db10) have been utilized for the purpose of feature extraction and the effect upon the recognition performance has also been investigated. In order to further reduce the feature dimension, a principal component analysis is performed. It is found from our extensive experimentations on different palm-print databases that the performance of the proposed method in terms of recognition accuracy and computational complexity is superior to that of some of the recent methods.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Automatic palm-print recognition has widespread applications in security, authentication, surveillance, and criminal identification. Although very popular, conventional ID card and password based identification methods are no longer reliable because of the use of several advanced techniques of forgery and password-hacking. As an alternative, biometrics are being used for identity access management [1]. A biometric is defined as an intrinsic physical or behavioral trait of human beings, and the main advantage of biometric features is that they are not prone to theft and loss, and do not rely on the memory of their users. Several biometrics, such as palm-print, finger-print, face and iris, remain almost unchanged over the life-time of a person. Moreover, it is difficult for one to alter his/her own physiological biometric or imitate that of others. Among different biometrics, palm-prints are recently getting more attention among researchers for security applications with a scope of collecting digital identity [2,3].

The inner surface of the palm normally contains three flexion creases, known as principal lines, and secondary creases and ridges. These complex line patterns are very useful in personal identification. Nevertheless, palm-print recognition is a complicated visual task even for humans. The primary difficulty arises from the fact that different palm-print images of a particular person may vary largely, while those of different people may not necessarily vary significantly. Moreover, some aspects of palm-print images, such as presence of noise and variations in illumination, position, and scale, make the recognition task more complicated [4].

A typical palm-print recognition system consists of some major steps, namely, input palm-print image collection, preprocessing, feature extraction, classification and template storage or database. A balanced treatment on all the steps (or phases) of a biometric recognition system is necessary to obtain an acceptable performance rate [5]. However, the major focus of this research is to develop an efficient algorithm to extract accurate features, which is very difficult in real-world scenarios. Moreover, the quality of acquired images is a factor in extracting accurate features in case of many real-life applications. Generally, either line-based or texture-based feature extraction algorithms are employed [6] in palm-print recognition systems. In the line-based schemes, different edge detection methods are used to extract different palm lines (principal lines, wrinkles, ridges, etc.) [7,8]. The extracted edges, either directly or being represented in other formats or domains, are used for template matching. For example, the Canny edge detector is used for detecting palm lines in [7], whereas feature vectors are formed based on low-resolution edge maps in [8].

It is to be mentioned that if more than one person were to possess similar principal lines, line based algorithms may result in erroneous identification. The texture-based feature extraction schemes can be used to overcome this limitation, where the variations existing in either the different blocks of images or the features extracted from those blocks are computed [9–16]. In this regard, principal component analysis (PCA) or linear discriminant analysis (LDA) are usually employed directly on palm-print image data, or some popular transforms, such as the Fourier transform, the wavelet transform and the discrete cosine transform (DCT), are used for extracting features from the image data. For example, in [11], statistical features of different sub-images are extracted from the DCT of the spatial data. The method in [15] proposed to perform wavelet decomposition and extract the most discriminating information from the low frequency data using 2D-PCA. The concept of texture energy is introduced in [14] to define both global and local features of a palm-print, whereas [13] proposed an online multi-spectral palm-print system which analyzes the extracted features from different bands and employs a score level fusion scheme to integrate the multi-spectral information. Because of the property of shift-invariance, it is well known that the wavelet based approach is one of the most robust feature extraction schemes, even under variable illumination [17–19]. Various classifiers, such as decision-based neural networks and Euclidean distance based classifiers, are employed for palm-print recognition [7,8] using the extracted features. Despite many relatively successful attempts to implement a robust palm-print recognition system, a single approach, which combines accuracy and low computational burden, is yet to be developed.

In this paper, we propose to extract spatial variations from each local zone of the entire palm-print image instead of concentrating on a single global variation pattern in order to achieve distinguishable features from different people. In the proposed palm-print recognition scheme, the palm-print image of a person is segmented into several small modules and a wavelet domain feature extraction algorithm using two dimensional discrete wavelet transform (2D-DWT) is developed to extract histogram-based dominant wavelet coefficients corresponding to those spatial modules. The effect of modularization in terms of the entropy content of the palm-print images has been investigated. It is also shown that the discriminating capabilities of the proposed features are enhanced because of modularization of the palm-print image. In comparison to the Fourier transform, the DWT is preferred as it possesses a better space-frequency localization. The variation of recognition performance with the module size, different types of Daubechies wavelets (in terms of use of number of vanishing moments, i.e., db1–db10) and presence of different types of noise has also been investigated. Moreover, the improvement of the quality of the extracted features as a result of illumination adjustment has also been analyzed. Apart from considering only the dominant features, further reduction of the feature dimension is obtained by employing the PCA, and the recognition task is then carried out using a simple distance based classifier.

2. Proposed method

For any type of biometric recognition, the most important task is to extract distinguishing features from the template data, which directly dictates the recognition accuracy. In comparison to person recognition based on face or voice biometrics, palm-print recognition is very challenging even for a human being. For the case of palm-print recognition, obtaining a significant feature space with respect to the spatial variation in a palm-print image is very crucial. Moreover, a direct subjective correspondence between palm-print features in the spatial domain and those in the wavelet domain is not very apparent. In what follows, we are going to demonstrate the proposed feature extraction algorithm for palm-print recognition, where spatial domain local variation is extracted from the wavelet domain transform.

2.1. Proposed feature

For biometric recognition, feature extraction can be carried out using mainly two approaches, namely, the spatial domain approach and the frequency domain approach [20]. The spatial domain approach utilizes the spatial data directly from the palm-print image or employs some statistical measure of the spatial data. On the other hand, frequency domain approaches employ some kind of transform over the palm-print image for feature extraction. In case of frequency domain feature extraction, pixel-by-pixel comparison between palm-print images in the spatial domain is not necessary. Phenomena, such as rotation, scale and illumination, are more severe in the spatial domain than in the frequency domain. Recently, multi-resolution analysis, such as wavelet analysis, is also getting popularity among researchers. In what follows, we intend to develop a feature extraction algorithm based on multi-resolution transformation.

It is well-known that Fourier transform based palm-print recognition algorithms involve complex computations and the choices of spatial and frequency resolution are limited. In contrast, the DWT offers a much better space-frequency localization. This property of the DWT is helpful for analyzing images, where the information is localized in space. The wavelet transform is analogous to the Fourier transform with the exception that it uses scaled and shifted versions of wavelets and the decomposition of a signal involves a sum of these wavelets. The DWT kernels exhibit properties of horizontal, vertical and diagonal directionality.

It should be mentioned that the continuous wavelet transform (CWT) and the wavelet series (WS) are defined for continuous-time signals, while the discrete wavelet transform (DWT) is defined for discrete-time signals. Because of the pyramidal algorithm and dyadic sampling [21], the DWT can be computed fast and efficiently.

For a discrete-time sequence x[n], n ∈ Z, the DWT is defined by the discrete-time multi-resolution decomposition [21], and can be computed by the pyramidal algorithm,

$$\nu^0_n = x[n], \qquad (1)$$

$$\nu^j_n = \sum_k \nu^{j-1}_k \, \tilde{g}[k - 2n], \qquad j = 1, 2, \ldots, \qquad (2)$$

$$\omega^j_n = \sum_k \nu^{j-1}_k \, \tilde{h}[k - 2n], \qquad j = 1, 2, \ldots, \qquad (3)$$

where $\tilde{g}$ and $\tilde{h}$ are the analysis scaling and wavelet filters, respectively [21]. The approximate wavelet coefficients are the high-scale low-frequency components of the signal, whereas the detail wavelet coefficients are the low-scale high-frequency components. The 2D-DWT of a two-dimensional data is obtained by computing the one-dimensional DWT, first along the rows and then along the columns of the data. Thus, for a 2D data, the detail wavelet coefficients can be classified as vertical, horizontal and diagonal detail.
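As an illustration of the decomposition described above, a minimal Python sketch using the PyWavelets package is given below; the image file name and the use of the db4 mother wavelet (the wavelet adopted later in the experiments) are placeholders chosen for illustration, not details taken from the original implementation.

```python
# Minimal sketch: single-level 2D-DWT decomposition of a palm-print image.
# Assumes PyWavelets (pywt), NumPy, Pillow and an arbitrary grayscale image file.
import numpy as np
import pywt
from PIL import Image

# Load a palm-print image as a grayscale array (file name is illustrative only).
palm = np.asarray(Image.open("palm_sample.png").convert("L"), dtype=float)

# One-level 2D-DWT: rows first, then columns (handled internally by pywt).
# cA -> approximate (low-frequency) coefficients
# cH -> horizontal detail, cV -> vertical detail, cD -> diagonal detail
cA, (cH, cV, cD) = pywt.dwt2(palm, "db4")

print("approximate:", cA.shape, "horizontal:", cH.shape,
      "vertical:", cV.shape, "diagonal:", cD.shape)
```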

In the proposed palm-print recognition scheme, 2D-DWT is used for feature extraction. Several palm-print images of different people have been investigated and it is observed that there exist some correspondences between palm-print features on the spatial domain image and those on the corresponding wavelet domain transform. For example, in Fig. 1a–c, three separate palm-print images of three different people are shown along with their corresponding approximate coefficients in Fig. 1d–f. It is observed from these three figures that only strong low-frequency parts are visualized in the approximate coefficients' plots. In Fig. 1g–i, the corresponding diagonal coefficients are shown. The diagonal parts of the creases of the palm-print images are highlighted in these plots. Similarly, in Fig. 1j–l and m–o, the horizontal and vertical coefficients of the palm-prints are shown. Only horizontal parts of the creases of the palm-print images are highlighted in the horizontal coefficients' plots, whereas the vertical parts of the creases of the palm-print images are highlighted in the vertical coefficients' plots.

In order to demonstrate the effect of rotation on the extracted features in the frequency domain, Fig. 2a and b shows two synthetic palm-print images. The two palm-prints are basically the same, except that the second one is derived by rotation of the first one by 30°. In order to analyze the feature quality, one common approach is to measure the normalized Euclidean distance (NED). In this case, the NED between the pixel values of the original and rotated images is computed. Next, for both the original and rotated images, the 2D-DWT is performed and the vertical and horizontal coefficients are considered. The NED between the wavelet coefficients (both vertical and horizontal) of the original and rotated images is computed. The NEDs obtained in the spatial and DWT domains are shown in Fig. 2c. It is intuitive that the difference between the original and the rotated palm-print images in the spatial domain would be greater than the difference in the corresponding 2D-DWT coefficients, as in the latter case the vertical and horizontal DWT coefficients precisely signify the changes in the vertical and horizontal directions in the wavelet domain. It is observed from the figure that the latter provides orders of magnitude lower Euclidean distance as opposed to the former, which is an indication to expect better feature quality in the wavelet domain in comparison to that obtained in the spatial domain. However, it is to be noted that in order to obtain a shift and rotation invariant wavelet transform, instead of the conventional DWT, one may employ undecimated DWT approaches [22,23].
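The comparison described above can be sketched roughly as follows; the rotation routine, the particular normalization of the Euclidean distance (division by the norm of the reference data), and the image file are assumptions made for illustration and do not reproduce the exact experiment of Fig. 2.

```python
# Sketch: compare normalized Euclidean distance (NED) between an image and its
# rotated version in the spatial domain and in the 2D-DWT domain.
import numpy as np
import pywt
from scipy.ndimage import rotate
from PIL import Image

def ned(a, b):
    # One possible normalization: Euclidean distance divided by the reference norm.
    a, b = a.ravel(), b.ravel()
    return np.linalg.norm(a - b) / np.linalg.norm(a)

palm = np.asarray(Image.open("palm_sample.png").convert("L"), dtype=float)
rotated = rotate(palm, angle=30, reshape=False, mode="nearest")

# Spatial-domain distance between pixel values.
d_spatial = ned(palm, rotated)

# Wavelet-domain distance using the horizontal and vertical detail coefficients.
_, (cH1, cV1, _) = pywt.dwt2(palm, "db4")
_, (cH2, cV2, _) = pywt.dwt2(rotated, "db4")
d_wavelet = ned(np.hstack([cH1.ravel(), cV1.ravel()]),
                np.hstack([cH2.ravel(), cV2.ravel()]))

print("NED (spatial):", d_spatial, "NED (DWT details):", d_wavelet)
```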

2.2. Illumination adjustment

It is intuitive that palm-images of a particular person captured under different lighting conditions may vary significantly, which can affect the palm-print recognition accuracy. In order to overcome the effect of lighting variation in the proposed method, illumination adjustment is performed prior to feature extraction. Given two palm-print images of a single person having different intensity distributions due to variation in illumination conditions, our objective is to provide similar feature vectors for these two images irrespective of the difference in illumination conditions. Since in the proposed method feature extraction is performed in the DWT domain, it is of our interest to analyze the effect of variation in illumination on the DWT-based feature extraction.

In Fig. 3a, two palm-print images of the same person are shown, where the second image has a slightly lower average illumination level. The 2D-DWT operation is performed upon each image, first without any illumination adjustment and then after performing illumination adjustment.


Fig. 1. (a)–(c) Sample palm-print images of three different people, (d)–(f) corresponding approximate coefficients of wavelet transform, (g)–(i) corresponding diagonal coefficients of wavelet transform, (j)–(l) corresponding horizontal coefficients of wavelet transform and (m)–(o) corresponding vertical coefficients of wavelet transform.


Considering all the 2D-DWT approximate coefficients to form the feature vectors for these two images, a measure of similarity can be obtained by using correlation. In Fig. 3b and c, the cross-correlation values of the 2D-DWT approximate coefficients obtained by using the two images without and with illumination adjustment are shown, respectively. It is evident from these two figures that the latter case exhibits more similarity between the DWT approximate coefficients, indicating that the features belong to the same person. The similarity measures in terms of Euclidean distances between the 2D-DWT approximate coefficients of the two images for the aforementioned two cases are also calculated and shown in Fig. 3d. It is observed that there exists a huge separation in terms of Euclidean distance when no illumination adjustment is performed, whereas the distance is very small when illumination adjustment is performed, as expected, which clearly indicates a better similarity between the extracted feature vectors.


Fig. 2. (a) An artificial palm-print image, (b) its rotated version and (c) comparison between the normalized Euclidean distances of two palm-prints and their 2D-DWT vertical and horizontal coefficients.


In the proposed method, the most conventional way of illumination adjustment, that is, subtracting a certain percentage of the average intensity level from the entire palm image, is employed. Generally, this global approach of illumination adjustment may provide better feature quality depending on the level of adjustment.

In this case, for each image pixel intensity f(i, j), some percentage of the average intensity value of the whole image is subtracted to obtain an illumination adjusted pixel intensity f_adj(i, j), which, for an M × N dimensional image, can simply be defined as

$$f_{\mathrm{adj}}(i, j) = f(i, j) - \frac{\beta}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} f(i, j), \qquad (4)$$

where β controls the level of subtraction.
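Eq. (4) translates directly into a few lines of code; the sketch below assumes a NumPy array for the image, and the example value of β is illustrative only.

```python
import numpy as np

def adjust_illumination(image, beta):
    """Subtract beta times the mean intensity from every pixel, as in Eq. (4)."""
    image = np.asarray(image, dtype=float)
    return image - beta * image.mean()

# Example usage with an arbitrary level of subtraction (beta is illustrative).
# adjusted = adjust_illumination(palm, beta=0.5)
```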

2.3. Modularization and its effect upon information content

Palm-prints of a person possess some major and minor line structures along with some ridges and wrinkles. A person can be distinguished from another person based on the differences of these major and minor line structures. Fig. 4a and b shows sample palm-print images of two different people. The three major lines of the two people are quite similar. They differ only in minor line structure. In this case, if we consider the line structures of the two images locally, we may distinguish the two images.

Fig. 3. (a) Two sample palm-print images of the same person under different illumination; correlation of the 2D-DWT approximate coefficients of the sample palm-print images shown in Fig. 3a: (b) no illumination adjustment, (c) illumination adjusted; and (d) Euclidean distance between 2D-DWT approximate coefficients of sample palm-print images.

For example, if we looked closely within the bordered regions of the palm-print images in Fig. 4a and b, they would seem different. Moreover, it is evident that a small block of the palm-print image may or may not contain the major lines but will definitely contain several minor lines. These minor lines may not be properly enhanced or captured when operating on an image as a whole and may not contribute to the feature vector. Hence, in that case, the feature vectors of the palm-prints shown in Fig. 4a and b may be similar enough to be wrongfully identified as if they belong to the same person. Therefore, we propose to extract features from local zones of the palm-print images.

It is to be noted that within a particular palm-print image, the change in information over the entire image may not be properly captured if the DWT features are selected considering the entire image as a whole. Even if it is performed, it may offer features with very low between-class separation. In order to obtain high within-class compactness as well as high between-class separability, we modularize the palm-print images into some smaller segments, which are capable of extracting variation in image geometry locally.

For the purpose of modularization, different non-uniform blocks have also been incorporated in the feature extraction process and it has been found that no significant change in recognition performance is achieved if non-uniform blocks were employed instead of "uniform blocks". Moreover, the precision in centering of the test and train images is a noteworthy aspect. A significant misalignment of the positions of the test and train images may affect the recognition accuracy. In particular, if features are extracted from the palm-print image as a whole instead of considering small spatial modules, the recognition accuracy would be severely affected. On the contrary, when a small block of an image is considered, the effect of the overall misalignment of the image would be much smaller, which can even be ignored depending on the degree of misalignment.

One way to capture the spatial variation pattern existing in a palm-print image is to measure the intensity variation among different pixels of the image. It is of our interest to analyze the variation of such information content in cases where, instead of the whole image, different portions of the image are considered. For estimating the amount of information in a given image or segment of an image, an entropy based measure of intensity variation is defined as [24]


Fig. 4. (a)–(b) Sample palm-print images of two people; the square block contains a portion of the images without any minor line and with a minor line; (c)–(e) sample palm-print images of three different people, (f)–(h) corresponding histograms, (i)–(k) entropy distribution of different segments of palm-print images corresponding to Fig. 4c–e and (l) variation of entropy with segment size.


$$H = -\sum_{k=1}^{m} p_k \log_2 p_k, \qquad (5)$$

where the probabilities p_k, k = 1, ..., m, are obtained based on the intensity distribution of the pixels of that particular image or segment of the image.

Three different palm-print images are shown in Fig. 4c–e along with their corresponding histograms calculated from the intensity distribution of those images in Fig. 4f–h. Based on these histograms, a general trend of the intensity distribution of palm-print images can be acquired. It is observed that the distribution follows an almost similar pattern for the three different palm-print images. One can compute the information content in terms of entropy of the entire palm-images using (5). For the purpose of comparison, from the entropy of the entire image, the average entropy per segment $\tilde{H}$ is computed by taking into account the total number of segments to be used for modularization. In Fig. 4i, the average entropy per segment $\tilde{H}$ computed for the image shown in Fig. 4c considering 100 segments, each having a size of 15 × 15, is shown.

Next, we consider different small segments of the palm-print images and compute the entropy of each segment H_i based on the histogram of the corresponding segment. In Fig. 4i, the entropy values computed from different segments of the palm-print image shown in Fig. 4c are also plotted along with the average entropy per segment calculated using the entire palm-print image. Fig. 4j and k shows similar entropy measures for the other two palm images shown in Fig. 4d and e. It can be clearly observed that the entropy measures vary significantly among different palm-print image segments.

Moreover, the average value of the segmental entropies (shown in Fig. 4i–k as dot-dash lines) is much higher than the average entropy per segment $\tilde{H}$ computed from the entire palm-print image. This clearly gives an indication that, for feature extraction, instead of considering the entire image as a whole, modularization would be a better choice. However, the size of the module is also an important factor. In Fig. 4l, the variation of average entropy of a sample palm-print image with segment size is shown. It is clear that decreasing the size of the modules offers greater entropy values, i.e., variation in information, which is obviously desirable. However, if the modules were extremely small in size, it is quite natural that the small segments will not be capable of exhibiting significant differences in different images.
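A minimal sketch of this segment-wise entropy computation, following (5), is given below; the number of histogram bins and the non-overlapping 15 × 15 blocking are assumptions made for illustration.

```python
import numpy as np

def segment_entropy(segment, bins=256):
    """Histogram-based entropy of one image segment, following Eq. (5)."""
    hist, _ = np.histogram(segment, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))

def modular_entropies(image, block=15):
    """Entropy of every non-overlapping block x block segment of the image."""
    h, w = image.shape
    return [segment_entropy(image[r:r + block, c:c + block])
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]

# Example: average segmental entropy versus entropy of the whole image.
# entropies = modular_entropies(palm, block=15)
# print(np.mean(entropies), segment_entropy(palm))
```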

2.4. Proposed wavelet domain dominant feature extraction

Instead of considering the DWT coefficients of the entire image, the coefficients obtained from each module of the palm-print image are considered to form the feature vector of that image. However, if all of these coefficients were used, it would definitely result in a feature vector with a very large dimension. In view of reducing the feature dimension, we propose to utilize the wavelet coefficients which play the dominant role in the representation of the image. In order to select the dominant wavelet coefficients, we propose to consider the frequency of occurrence of the wavelet coefficients as the determining characteristic. It is expected that coefficients with higher frequency of occurrence would definitely dominate over all the coefficients for image reconstruction and it would be sufficient to consider only those coefficients as desired features. One way to visualize the frequency of occurrence of wavelet coefficients is to compute the histogram of the coefficients of a segment of a palm image. In order to select the dominant features from a given histogram, the coefficients having frequency of occurrence greater than a certain threshold value are considered.

It is intuitive that within a palm-print image, the image intensity distribution may drastically change at different localities. In order to select the dominant wavelet coefficients, if the thresholding operation were to be performed over the wavelet coefficients of the entire image, it would be difficult to obtain a global threshold value that is suitable for every local zone. Use of a global threshold in a palm-print image may offer features with very low between-class separation. In order to obtain high within-class compactness as well as high between-class separability, we have considered wavelet coefficients corresponding to the smaller spatial modules residing within a palm-print image, which are capable of extracting variation in image geometry locally. In this case, for each module, a different threshold value may have to be chosen depending on the wavelet coefficient values of that segment. The coefficients (approximate and horizontal detail) with frequency of occurrence greater than h% of the maximum frequency of occurrence for a particular module of the palm-print image are considered as dominant wavelet coefficients and are selected as features for that segment of the image. This operation is repeated for all the modules of a palm-print image.
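A rough sketch of this module-wise dominant coefficient selection is given below; the number of histogram bins, the way coefficients are mapped back to their histogram bins, and the packing of the retained coefficients into a single vector are implementation assumptions, not details given in the paper.

```python
import numpy as np
import pywt

def dominant_coefficients(coeffs, h=20, bins=64):
    """Keep coefficients whose histogram bins occur more often than h% of the peak."""
    coeffs = coeffs.ravel()
    hist, edges = np.histogram(coeffs, bins=bins)
    keep_bins = hist >= (h / 100.0) * hist.max()
    bin_idx = np.digitize(coeffs, edges[1:-1])   # bin index of every coefficient
    return coeffs[keep_bins[bin_idx]]

def module_features(image, block=32, wavelet="db4", h=20):
    """Dominant approximate and horizontal-detail coefficients from every module."""
    feats = []
    rows, cols = image.shape
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            cA, (cH, _, _) = pywt.dwt2(image[r:r + block, c:c + block], wavelet)
            feats.append(dominant_coefficients(cA, h))
            feats.append(dominant_coefficients(cH, h))
    return np.hstack(feats)
```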

Next, in order to demonstrate the advantage of extracting dominant wavelet coefficients corresponding to some smaller modules residing in a palm-print image, we conduct an experiment considering two different cases: (i) when the entire palm-print image is used as a whole and (ii) when all the modules of that image are used separately for feature extraction. For these two cases, centroids of the dominant approximate wavelet coefficients obtained from several poses of two different people (appearing in Fig. 5a) are computed and shown in Fig. 5b and c, respectively. It is observed from Fig. 5b that the feature-centroids of the two people for different sample palm-print images are not well-separated and even for some images they overlap with each other, which clearly indicates poor between-class separability. In Fig. 5c, it is observed that, irrespective of the sample images, the feature-centroids of the two people maintain a significant separation indicating a high between-class separability, which strongly supports the proposed local feature selection algorithm.

We have also considered dominant feature values obtained for various sample images of those two people in order to demonstrate the within-class compactness of the features. The feature values, along with their centroids, obtained for the two different cases, i.e., extracting the features from the palm-print image without and with modularization, are shown in Fig. 5d and e, respectively.


Fig. 5. (a) Sample palm-print images of two people; feature centroids of different images for: (b) un-modularized palm-print image, (c) modularized palm-print image; feature values for: (d) un-modularized palm-print image, (e) modularized palm-print image.


It is observed from Fig. 5d that the feature values of several sample palm-print images of the two different people are significantly scattered around the respective centroids, resulting in a poor within-class compactness. On the other hand, it is evident from Fig. 5e that the centroids of the dominant features of the two different people are well-separated with a low degree of scattering among the features around their corresponding centroids. Thus, the proposed dominant features extracted locally within a palm-print image offer not only a high degree of between-class separability but also a satisfactory within-class compactness.

2.5. Reduction of the feature dimension

Principal component analysis (PCA) is a very well-known and efficient orthogonal linear transformation [25]. It reduces the dimension of the feature space and the correlation among the feature vectors by projecting the original feature space into a smaller subspace through a transformation. The PCA transforms the original p-dimensional feature vector into the L-dimensional linear subspace that is spanned by the leading eigenvectors of the covariance matrix of the feature vector in each cluster (L < p). PCA is theoretically the optimum transform for given data in the least square sense. For a data matrix X^T with zero empirical mean, where each row represents a different repetition of the experiment, and each column gives the results from a particular probe, the PCA transformation is given by:


$$Y^T = X^T W = V \Sigma^T, \qquad (6)$$

where the matrix Σ is an m × n diagonal matrix with nonnegative real numbers on the diagonal and W Σ V^T is the singular value decomposition of X. If q sample palm-print images of each person are considered and a total of M dominant DWT coefficients (approximate and horizontal detail) are selected per image, the feature space per person would have a dimension of q × M.


For the proposed dominant features, implementation of PCA on this feature space efficiently reduces the feature dimension without losing much information. Hence, PCA is employed to reduce the dimension of the proposed feature space.
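A minimal sketch of this PCA-based reduction, using scikit-learn for convenience, is shown below; the random feature matrix is only a placeholder, while the choice of 20 retained components follows the reduced dimension reported later in the experiments.

```python
import numpy as np
from sklearn.decomposition import PCA

# X: feature matrix with one row per sample palm-print image
# (here a random placeholder; in practice the dominant DWT coefficients).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4000))

# Project onto the 20 leading principal components.
pca = PCA(n_components=20)
X_reduced = pca.fit_transform(X)    # the mean is removed internally before projection
print(X_reduced.shape)              # (100, 20)
```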

2.6. Distance based palm-print recognition

In the proposed method, for the purpose of recognition using the extracted dominant features, a distance-based similarity measure is utilized. The recognition task is carried out based on the distances of the feature vectors of the training palm-images from the feature vector of the test palm-image. Let the m-dimensional feature vector for the kth sample image of the jth person be {c_jk(1), c_jk(2), ..., c_jk(m)} and let a test sample image f have a feature vector {v_f(1), v_f(2), ..., v_f(m)}. A similarity measure between the test image f of the unknown person and the sample images of the jth person, namely the average sum-squares distance D, is defined as

$$D^j_f = \frac{1}{q} \sum_{k=1}^{q} \sum_{i=1}^{m} \left| c_{jk}(i) - v_f(i) \right|^2, \qquad (7)$$

where a particular class represents a person with q number of sample palm-print images. Therefore, according to (7), given the test sample image f, the unknown person is classified as the person j among the p number of classes when

$$D^j_f \le D^g_f, \qquad \forall j \ne g \ \text{and} \ \forall g \in \{1, 2, \ldots, p\}. \qquad (8)$$
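A compact sketch of the classifier defined by (7) and (8) is given below; the array layout (persons × samples × feature dimension) and the placeholder data are assumptions made for illustration.

```python
import numpy as np

def classify(train_features, test_feature):
    """Assign the test feature vector to the class with the smallest average
    sum-of-squares distance, following Eqs. (7) and (8).

    train_features: array of shape (p, q, m) -- p persons, q samples each,
                    m-dimensional (PCA-reduced) feature vectors.
    test_feature:   array of shape (m,).
    """
    diff = train_features - test_feature              # broadcast over persons/samples
    d = np.mean(np.sum(diff ** 2, axis=2), axis=1)    # D_f^j for every person j
    return int(np.argmin(d))

# Example usage with random placeholder data (3 persons, 5 samples, 20 features).
# rng = np.random.default_rng(0)
# train = rng.standard_normal((3, 5, 20))
# print(classify(train, train[1, 0]))
```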

3. Experimental results

Extensive simulations are carried out in order to demonstrate the effectiveness of the proposed method of palm-print recognition using the palm-print images of several well-known databases. Different analyses showing the effectiveness of the proposed feature extraction algorithm have been presented. The performance of the proposed method in terms of recognition accuracy is obtained and compared with those of some recent methods [11,13,26–28].

3.1. Palm-print databases used in simulation

In this section, palm-print recognition performance obtained by different methods has been presented using two standard databases, namely, the PolyU palm-print database (version 2) (available at http://www4.comp.polyu.edu.hk/biometrics/) and the IITD palm-print database (available at http://www4.comp.polyu.edu.hk/csajaykr/IITD/Database_Palm.htm). In Fig. 6a and b, sample palm-print images from the PolyU database and the IITD database are shown, respectively. The PolyU database (version 2) contains a total of 7752 palm-print images of 386 people. Each person has 18–20 different sample palm-print images taken in two different instances. The IITD database, on the other hand, consists of a total of 2791 images of 235 people, each person having 5–6 different sample palm-print images for both the left hand and the right hand.

Fig. 6. Sample palm-print images from: (a) the IITD database and (b) the PolyU database; sample palm-print images after cropping: (c) from the IITD database and (d) from the PolyU database.


It can be observed from Fig. 6a and b that not all the portions of the palm-print images are required to be considered for feature extraction [2]. The portions of the images containing fingers and the black regions are discarded from the original images to form the regions of interest (ROI), as shown in Fig. 6c and d.

3.2. Performance comparison

In the proposed method, dominant features (approximate and horizontal detail 2D-DWT coefficients) obtained from all the modules of a palm-print image are used to form the feature vector of that image and feature dimension reduction is performed using PCA. The recognition task is carried out using a simple Euclidean distance based classifier as described in Section 2.6. The experiments were performed following the leave-one-out cross validation rule.

For simulation purposes, the module size for the PolyU database and the IITD database has been chosen as 32 × 32 pixels and 16 × 16 pixels, respectively. The dominant wavelet coefficients corresponding to all the local segments residing in the palm-print images are then obtained using h = 20. For the purpose of comparison, the recognition accuracies obtained using the proposed method along with those reported in [11,13,26–28] are listed in Table 1. It is found from the table that the recognition accuracy obtained by the proposed method is still very competitive in comparison to that obtained by other methods. The performance of the proposed method is also very satisfactory for the IITD database (for both left hand and right hand palm-print images). An overall recognition accuracy of 99.82% is achieved, which is higher than that of the method presented in [26].

The simulations were performed using MATLAB R2009a on an Intel Core 2 Duo 2.00 GHz machine with Windows XP SP3 operating system and 4 GB of RAM. The training phase takes on an average 120 s for the PolyU database and 100 s for the IITD database. The testing phase takes, on an average, 1.07 ms per image for both the databases.

One possible reason of getting errors in the palm-print recognition process using the proposed method could be the introduction of discontinuities among major lines as a result of modularization. For example, in Fig. 7a, a sample palm-print image is shown with two consecutive segments highlighted. In Fig. 7b and c, enlarged versions of these segments are shown. Note that a major line has fallen on the border of the two segments, where it can be clearly seen that the major line is divided in between the two segments.

As mentioned earlier, dominant features are extracted from the small modules of the palm-print images. Next, we intend to demonstrate the effect of variation of module size upon the recognition accuracy obtained by the proposed method. In Fig. 8a, the recognition accuracies obtained for different module sizes are shown. It is observed from the figure that, unless the module size is too small or too large, the recognition accuracies do not reduce beyond an acceptable level. The best recognition accuracy is achieved for a module size of 32 × 32 pixels, which is an indication that variations in the image geometry and intensity, i.e., variations in local information, are captured more successfully in case of moderate sized segments. However, the choice of module size depends upon the size of the acquired palm-print image. Note that, in case of considering the entire image as a whole instead of any modularization, the recognition accuracy drastically falls to a value less than 10% for both the databases, as expected.

In view of reducing computational complexity, dimension reduction of the feature space plays an important role. In the proposed method, the task of feature dimension reduction is performed using PCA. In Fig. 8b, the effect of dimension reduction upon recognition accuracy is shown. It is found from this figure that even for a very low feature dimension, the recognition accuracies remain very high for both the databases. For the experiments performed, the 20 most significant PCA coefficients are retained for both the databases. That is, the feature vector for each palm-print image is of size 1 × 20. It should be noted that, if the entire palm-print image were used as the feature, the feature vector size for each image would be 1 × 40,000 for the PolyU database. The proposed dominant feature extraction algorithm thus utilizes a feature vector that is 0.05% of its spatial-domain counterpart. It is to be mentioned that for the purpose of practical implementation, one can consider the lifting wavelet transform, which provides faster in-place calculation [29].

For the case of choosing dominant spectral coefficients based on the thresholding criterion in the proposed method, the effect of changing the threshold values, i.e., incorporating different amounts of top approximate and horizontal detail coefficients, has been investigated. In Fig. 8c, the variation of recognition accuracies with different threshold values is shown. It can be observed from the figure that as the amount of top coefficients decreases, the recognition accuracy also decreases, although the recognition accuracies are sufficiently high even for a very low amount of coefficients utilized. Among a large number of wavelets, the most commonly used orthogonal wavelet, namely the Daubechies wavelet, is used in this paper [30].

Table 1
Comparison of recognition accuracies.

Method             PolyU database (%)
Proposed method    99.95
Method [11]        97.50
Method [28]        98.00
Method [13]        99.87
Method [26]        99.04
Method [27]        99.95


Fig. 7. (a) Sample palm-print image with two consecutive segments highlighted; (b) and (c) zoomed versions of the segments.

Fig. 8. Variation of recognition accuracy with: (a) module size, (b) reduced feature dimension, (c) threshold values, (d) different types of Daubechies wavelets, (e) the number of training samples used for the PolyU database.


The effect of choosing different types of Daubechies wavelets (in terms of use of number of vanishing moments, i.e., db1–db10) upon the recognition accuracy has been investigated. It is to be noted that the db1 wavelet is also known as the Haar wavelet. In Fig. 8d, the variation of recognition accuracies with different mother wavelets is shown. It can be observed from the figure that db1, db4, db9, and db10 provide almost similar recognition accuracies. In our experiments, we have used the db4 wavelet. In order to obtain a shift and rotation invariant wavelet transform, one may employ undecimated DWT approaches [22,23].

The variation of recognition accuracy with the number of training samples used in the recognition process is investigated for both the databases used in the manuscript. The recognition performance considering different numbers of training samples for the PolyU database is shown in Fig. 8e. From the figure, it can be observed that, with the decrease in the number of training samples, the recognition accuracy does not vary significantly. Similar results are found for the IITD database as well. To maintain acceptable recognition accuracy while reducing the computational burden, we have used nine samples and five samples per person for training for the PolyU database and the IITD database, respectively.

Illumination adjustment is generally carried out by subtracting a certain percentage of the average intensity level from the palm image, which may provide better feature quality depending on the level of adjustment. The variation of recognition accuracy with different levels of illumination adjustment is investigated and it is observed that the recognition accuracy is increased with the level of subtraction up to a certain limit. From our extensive experimentation on several images, it is found that 50% of the average intensity can be chosen as an optimum level of illumination adjustment, which provides a satisfactory performance.

Although, in the proposed method, no additional blocks are used to tackle the effect of noise, the variation of the recognition performance with the presence of different amounts of additive Gaussian noise is also investigated. As expected, it is found that with the increase in noise strength, that is, with decreasing signal-to-noise ratio (SNR), the recognition accuracy obtained by the proposed method decreases.

Fig. 9. Performance evaluation of the proposed method: variation of (a) FPR, (b) FRR, (c) TER and (d) GAR with reduced feature dimension; (e) sensitivity vs. (1-specificity) or false positive rate curve.


In the proposed method, the most conventional way of illumination adjustment, that is, subtracting a certain percentage of the average intensity level from the entire palm image, is employed. Generally, this global approach of illumination adjustment may provide better feature quality depending on the level of adjustment. The variation of recognition accuracy with different levels of illumination adjustment is investigated and it is found that the recognition accuracy is increased with the level of subtraction up to a certain limit. It is experimentally found that 60% of the average intensity can be chosen as an optimum level of illumination adjustment, which provides a satisfactory performance.

In order to evaluate the recognition performance, apart from the recognition accuracy mentioned in Table 1, a few more performance indices, which are commonly used in biometric applications, are taken into consideration, such as False Positive Rate (FPR), False Rejection Rate (FRR), Total Error Rate (TER), Genuine Accept Rate (GAR) and Receiver Operating Characteristic (ROC) (variation of sensitivity with respect to (1-specificity) or FPR) [6,31,32]. The variation of FPR, FRR, TER and GAR with the reduced feature dimension is provided in Fig. 9a–d. From these curves, it is evident that both the FPR and the FRR reduce to very small values as the feature dimension exceeds 14, whereas the GAR increases to greater values. With such reduced feature dimensions, as seen from Fig. 9e, the sensitivity is high with low specificity. Apart from the high recognition accuracy, based on these new performance measures, it is therefore found that the proposed method offers very good recognition performance even for a very low feature dimension.

4. Conclusions

In the proposed DWT-based palm-print recognition scheme, dominant features are extracted separately from each of the modules obtained by image segmentation, instead of operating on the entire palm-print image at a time. An entropy-based measure showing the effect of modularization of the palm-print images upon the information content has been presented. It has been shown that the proposed dominant features, which are extracted from the sub-images, attain better discriminating capabilities because of modularization of the palm-print image. The effect of varying the module size upon recognition performance has been investigated, and it is found that the recognition accuracy does not depend on the module size unless it is extremely large or small. The effect of using different types of Daubechies wavelets (in terms of the number of vanishing moments, i.e., db1–db10) for the purpose of feature extraction has also been investigated. The variation of recognition accuracy with different levels of illumination adjustment, different noise levels and different numbers of training samples has been discussed. The proposed feature extraction scheme is shown to precisely capture the local variations that exist in the major and minor lines of palm-print images, which play an important role in discriminating different people. Moreover, it utilizes a very low dimensional feature space for the recognition task, which ensures a lower computational burden. For the task of classification, a simple Euclidean distance based classifier has been employed, and it is found that, because of the quality of the extracted features, such a simple classifier provides a very good recognition performance and there is no need to employ any complicated classifier. It has been observed from our extensive simulations on different standard palm-print databases that the proposed method, in comparison to some of the recent methods, provides excellent recognition performance, both in terms of recognition accuracy and the ROC-based performance indices.
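As a concrete illustration of the simple classifier mentioned above, the sketch below assigns a test feature vector to the class of the nearest training template under the Euclidean metric; the data layout (one template vector per row with a matching label array) is an assumption made for illustration, not a description of the authors' implementation.

import numpy as np

def classify_euclidean(test_feature, templates, labels):
    # templates: (N, d) array of training feature vectors;
    # labels: length-N array of the corresponding class identities.
    dists = np.linalg.norm(templates - test_feature, axis=1)
    return labels[int(np.argmin(dists))]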

Acknowledgments

The authors would like to express their sincere gratitude towards the authorities of the Department of Electrical and Electronic Engineering and Bangladesh University of Engineering and Technology (BUET) for providing constant support throughout this research work.

References

[1] Jain A, Ross A, Prabhakar S. An introduction to biometric recognition. IEEE Trans Circ Syst Video Technol 2004;14:4–20.
[2] Kong A, Zhang D, Kamel M. A survey of palmprint recognition. Pattern Recognit 2009;42:1408–18.
[3] Kong A, Zhang D, Lu G. A study of identical twins palmprint for personal verification. Pattern Recognit 2006;39:2149–56.
[4] Han C, Cheng H, Lin C, Fan K. Personal authentication using palm-print features. Pattern Recognit 2003;36:371–81.
[5] Fairhurst M, Abreu M. Balancing performance factors in multisource biometric processing platforms. IET Signal Process 2009;3:342–51.
[6] Wu X, Zhang D, Wang K. Palm line extraction and matching for personal authentication. IEEE Trans Syst Man Cybernet Part A: Syst Humans 2006;36:978–87.
[7] Wu X, Wang K, Zhang D. Fuzzy direction element energy feature (FDEEF) based palmprint identification. In: Proc int conf pattern recognition, vol. 1. p. 95–8.
[8] Kung S, Lin S, Fang M. A neural network approach to face/palm recognition. In: Proc IEEE workshop neural networks for signal processing. p. 323–32.
[9] Connie T, Jin A, Ong M, Ling D. An automated palmprint recognition system. Image Vis Comput 2005;23:501–15.
[10] Jing XY, Zhang D. A face and palmprint recognition approach based on discriminant DCT feature extraction. IEEE Trans Syst Man Cybernet 2004;34:167–88.
[11] Dale MP, Joshi MA, Gilda N. Texture based palmprint identification using DCT features. In: Proc int conf advances in pattern recognition, vol. 7. p. 221–4.
[12] Imtiaz H, Fattah S. A face recognition scheme using wavelet-based local features. In: IEEE symp computers informatics (ISCI). p. 313–6.
[13] Zhang D, Guo Z, Lu G, Zhang L, Zuo W. An online system of multispectral palmprint verification. IEEE Trans Instrum Measure 2010;59:480–90.
[14] Li W, You J, Zhang D. Texture-based palmprint retrieval using a layered search scheme for personal identification. IEEE Trans Multimedia 2005;7:891–8.
[15] Lu J, Zhang E, Kang X, Xue Y, Chen Y. Palmprint recognition using wavelet decomposition and 2d principal component analysis. In: Proc int conf communications, circuits and systems, vol. 3. p. 2133–6.
[16] Guo Z. Feature band selection for online multispectral palmprint recognition. IEEE Trans Inform Forensics Sec 2012;7:1094–9.


[17] Ekinci M, Aykut M. Palmprint recognition by applying wavelet-based kernel PCA. J Comput Sci Technol 2008;23:851–61.
[18] Kekre H, Sarode Tanuja K, Tirodkar A. A study of the efficacy of using wavelet transforms for palm print recognition. In: Proc int conf computing, communication and applications. p. 1–6.
[19] Zhang Y, Zhao D, Sun G, Guo Q, Fu B. Palm print recognition based on sub-block energy feature extracted by real 2d-gabor transform. In: Proc int conf artificial intelligence and computational intelligence, vol. 1. p. 124–8.
[20] Li W, Zhang D, Zhang L, Lu G, Yan J. 3-D palmprint recognition with joint line and orientation features. IEEE Trans Syst Man Cybernet Part C 2011;41:274–9.
[21] Zhang X-P, Tian L-S, Peng Y-N. From the wavelet series to the discrete wavelet transform – the initialization. IEEE Trans Signal Process 1996;44:129–33.
[22] Percival DB, Walden AT, Gill R. Wavelet methods for time series analysis. Cambridge Ser Statist Probab Math 2000.
[23] Harang R, Bonnet G, Petzold LR. WAVOS: a MATLAB toolkit for wavelet analysis and visualization of oscillatory systems. BMC Res Notes 2012;5.
[24] Loutas E, Pitas I, Nikou C. Probabilistic multiple face detection and tracking using entropy measures. IEEE Trans Circ Syst Video Technol 2004;14:128–35.
[25] Jolliffe I. Principal component analysis. Berlin: Springer-Verlag; 1986.
[26] Kumar A, Wong D, Shen H, Jain A. Personal verification using palmprint and hand geometry biometric. Lecture notes in computer science. Springer; 2003.
[27] Yue F, Zuo W, Zhang D, Wang K. Competitive code-based fast palmprint identification using a set of cover trees. Opt Eng 2009;48:1–7.
[28] Lu J, Zhao Y, Hu J. Enhanced gabor-based region covariance matrices for palmprint recognition. Electron Lett 2009;45:880–1.
[29] Sweldens W. The lifting scheme: a construction of second generation wavelets. SIAM J Math Anal 1998;29:511–46.
[30] Daubechies I. Ten lectures on wavelets. CBMS-NSF regional conference series in applied mathematics. SIAM; 1992.
[31] Chang C-I. Multiparameter receiver operating characteristic analysis for signal detection and classification. IEEE Sens J 2010;10:423–42.
[32] Toh K-A, Kim J, Lee S. Biometric scores fusion based on total error rate minimization. Pattern Recognit 2008;41:1066–82.

Hafiz Imtiaz was born in Rajshahi, Bangladesh on January 16, 1986. He received his M.Sc. and B.Sc. degrees in Electrical and Electronic Engineering (EEE) from Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh in July 2011 and March 2009, respectively. He is presently working as an Assistant Professor in the Department of EEE, BUET.

Dr. Shaikh Anowarul Fattah received B.Sc. and M.Sc. degrees from BUET, Bangladesh and a Ph.D. degree from Concordia University, Canada. He was a Postdoctoral Fellow at Princeton University, USA. He received the Dr. Rashid Gold Medal, the 2009 Distinguished Doctoral Dissertation Prize, the 2007 URSI Canadian Young Scientist Award and first prize in SYTACOM 2008. Currently he is an Associate Professor in the EEE Department, BUET.
