
Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, Xian, 15-17 July, 2012

FUNDUS PHASE CONGRUENCY BASED BIOMETRICS SYSTEM

M ISLAMUDDIN AHMED1, BRUCE POON1,2, MD. JEWEL1, M ASHRAFUL AMIN1,3, HONG YAN2,4

1 Computer Vision and Cybernetics Group, SECS, Independent University, Bangladesh 2 School of Electrical & Information Engineering, University of Sydney, NSW 2006, Australia

3 School of Engineering and Computer Science, Independent University, Bangladesh 4 Department of Electronic Engineering, City University of Hong Kong

EMAIL: [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract:

This paper presents a novel biometric authentication method using retinal fundus images. Phase congruency is computed on both the RGB and YCbCr channels for vessel segmentation, and the Fourier components are used to detect edges. By applying paired threshold values to the phase congruent image, the retinal blood vessel tree is acquired. Three different features are used, and all combinations of the features are tested to find which combination produces the best authentication accuracy. Two separate experiments are performed: EXP-1 using 18 images from 6 individuals, and EXP-2 using 18 (authorized) plus 547 (intruder) images, each from a separate individual. For similarity matching, the 2-D correlation coefficient measure is used. In EXP-1 and EXP-2, the maximum accuracy achieved was 94.44% and 93.4% respectively, both with YCbCr images. The YCbCr color space outperformed the RGB color space by a small margin. For EXP-1 and EXP-2, the average time taken per image was 12.81 and 12.92 seconds respectively.

Keywords:

Biometric authentication; Bifurcation point; Feature point matching; Similarity measure; Optical disc

1. Introduction

Controlling access to critical locations and to information is a vital research topic in today's world. Previously, access restrictions were, and still can be, achieved through the possession of items (e.g. identity cards) or the possession of knowledge (e.g. passwords). However, these security measures have gradually become obsolete as forgeries of them have become easier and more frequent. In this situation, the concept of biometric security was proposed, and it has proved to be worthwhile and effective. Although retinal blood vessel based authentication is relatively new, it was C. Simon and I. Goldstein [1] who in 1935 first showed that retinal blood vessels are unique for each individual, and distinct even for twins. Furthermore, the location of the retina itself gives the authentication process a very high degree of security and makes it extremely difficult to forge.

978-1-4673-1487-9/12/$31.00 ©2012 IEEE

Surprisingly, few noteworthy works exist on retinal image based authentication systems. Before feature points can be identified and located in retinal images, it is essential to isolate the retinal blood vessels from the image. Ortega et al. [2]-[4] used Level Set Extrinsic Curvature (LSEC) to retrieve the retinal blood vessel tree from a retinal crease model. However, since it gives rise to some discontinuities, a specialized form of LSEC called multi-local level set extrinsic curvature (MLSEC), which has invariance properties, was used instead. In [2], the feature points are the ridge endings and ridge bifurcations of vessels obtained from a crease model of the retinal vessels. In their subsequent work [4], the whole vessel tree is not used; instead, the bifurcation and crossover points are used as feature points.

In Marino et al. [5]-[6], the retinal blood vessels are represented by means of crests and valleys, using the same technique as in [2]-[4]. In the later work [6], the output of MLSEC is further improved by pre-filtering the image gradient vector field using structure tensor analysis and by discarding the creaseness of isotropic areas through the computation of a confidence measure. The whole blood vessel tree is used as the feature in their authentication system. Alonso-Montes et al. [7] used a pixel-parallel vessel tree extraction algorithm based on an active contour evolution technique also known as the pixel-level snake. The images are filtered, the initial contour and external potential image are computed, and the images are then skeletonised. A pixel-processor array is used to extract the retinal blood vessel tree.

Latha et al. [8] used the minutiae-centered region encoding employed in fingerprint recognition, with the location and orientation attributes of minutiae as the main feature criteria. In the work presented by Raiyan et al. [9], the centroids of each segment are used as feature points, and matching is done using the data of all segments in

1668


the same angular section. In the work by Farzin et al. [3], the optical disk (OD) is

localized using a template matching technique on the retinal images. The original image is divided by the resulting correlation image to obtain a new image which does not contain the OD. The vessel/background contrast is enhanced using a local processing operation based on statistical properties of the resulting image, and finally a binary image containing the blood vessels is obtained by histogram thresholding of the contrast-enhanced image. Oinonen et al. [10] used the multi-scale enhancement approach of Frangi et al. [11]. In this technique, vessels are enhanced based on a vesselness measure obtained from the eigenvalues of the Hessian. Vessels are extracted using p-tile thresholding, where exactly p% of the pixels are set as vessel pixels. Bifurcation and crossover points are used as features.

2. The Authentication System

Retinal fundus images are usually color images, often in RGB format. Such a color image is composed of 3 separate color channels: Red, Green and Blue. For our purpose, the green channel image is the best, since the red channel image is too bright, the blue channel image is too dark, and the green channel provides better contrast with the background compared to the other two channels. We have also considered the YCbCr color space, which likewise comprises 3 channels; its first channel (Y, luminance) provides good image quality for this kind of experimentation. Figure 1 shows the selected channels of RGB and YCbCr.

Figure 1. (1st) RGB retinal fundus image, (2nd) Image in the green channel, (3rd) YCbCr retinal fundus image, (4th) Image in the Y channel.

2.1 Optical disc detection

The first step in the procedure is the detection of the optical disc. The optical disc is a bright, almost circular region in a retinal fundus image. It is the region where the retinal blood vessels originate and hence can be considered a very significant landmark in a retinal fundus image. Accordingly, it is used as the reference point for our feature extraction. The optical disc center is detected using the procedure proposed in [12].

2.2 Vessel segmentation

After the optical disc center is detected, the next step in the procedure is segmentation of the retinal blood vessels from the background. We have used the concept of phase congruency for vessel segmentation, proposed in [14], because to our knowledge it is the fastest vessel segmentation technique.

Phase congruency is a dimensionless quantity that is invariant to changes in image brightness or contrast. It provides an absolute measure of the significance of feature points. The phase congruency of an image is acquired using Log-Gabor wavelets [14].

In Figure 2 (left), an example of the Fourier decomposition of a square wave is provided. All the Fourier components are sine waves, and they are exactly in phase at the step, at an angle of 0° or 180° depending on whether the step is upward or downward. At this point the square wave signal has the highest phase congruency, while at all other points phase congruency is comparatively low. This is the main idea behind using phase congruency as an edge detector. The degree of phase congruency is independent of the overall magnitude of the signal, which provides invariance to variations in image illumination and/or contrast. Values of phase congruency vary from a maximum of 1 (indicating a very significant feature) down to 0 (indicating no significance). In other words, the local energy model of feature detection requires that features are perceived at points of maximum phase congruency in an image [15]. To apply the phase congruency measure to a 2D signal such as an image, different orientations of the filters are considered to acquire edges, lines and other features pointing in different directions. The outputs of all orientations are summed and normalized by dividing by the sum of amplitudes over all orientations and scales at each location, as specified in the following equation:

PC(x, y) = Σo Σs Eos(x, y) / (Σo Σs Aos(x, y) + ε)   (1)

where o is the orientation and s is the scale of the wavelet applied to analyze the 2D signal (image), Eos and Aos are the local energy and filter response amplitude at orientation o and scale s, ε is a small constant that avoids division by zero, and PC(x, y) is the measure of phase congruency at location (x, y) [14]. The optimal number of filter orientations is 8, and the

corresponding angles of Log-Gabor filters are

{0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8} for the proposed

method. For each scale, filters with the 8 angles are constructed, and it takes about 10 s to acquire phase congruency using 3 scales. If the filters are pre-computed and kept in memory, the time reduces to 5 s. Figure 2 (right) shows 2 phase congruent images.
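The computation described above follows Kovesi's log-Gabor framework. The following is a simplified sketch of that idea, not the paper's implementation: it omits the noise compensation and phase-deviation weighting of the full method, and the parameter values are illustrative defaults.

```python
import numpy as np

def phase_congruency(img, n_scales=3, n_orients=8,
                     min_wavelength=3.0, mult=2.1, sigma_f=0.55, eps=1e-4):
    """Simplified 2-D phase congruency via log-Gabor filters."""
    rows, cols = img.shape
    IM = np.fft.fft2(img)

    # Frequency grid matching the (unshifted) FFT layout.
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.hypot(U, V)
    radius[0, 0] = 1.0          # avoid log(0) at the DC term
    theta = np.arctan2(-V, U)   # frequency orientation

    energy = np.zeros((rows, cols))
    amplitude = np.zeros((rows, cols))

    for o in range(n_orients):
        angle = o * np.pi / n_orients
        # Angular Gaussian selecting one half of the frequency plane,
        # so the inverse FFT yields a quadrature (even + i*odd) response.
        d_theta = np.abs(np.arctan2(np.sin(theta - angle),
                                    np.cos(theta - angle)))
        spread = np.exp(-d_theta**2 / (2 * (1.5 * np.pi / n_orients)**2))

        sum_even = np.zeros((rows, cols))
        sum_odd = np.zeros((rows, cols))
        for s in range(n_scales):
            f0 = 1.0 / (min_wavelength * mult**s)   # centre frequency
            log_gabor = np.exp(-np.log(radius / f0)**2
                               / (2 * np.log(sigma_f)**2))
            log_gabor[0, 0] = 0.0
            resp = np.fft.ifft2(IM * log_gabor * spread)
            sum_even += resp.real
            sum_odd += resp.imag
            amplitude += np.abs(resp)
        # Local energy for this orientation.
        energy += np.hypot(sum_even, sum_odd)

    # Equation (1): energy normalized by the total amplitude.
    return energy / (amplitude + eps)
```

By the triangle inequality the per-orientation energy never exceeds the summed amplitudes, so the output stays in [0, 1] and peaks where the filter responses are in phase, i.e. at edges.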


Figure 2. (left) A square wave (solid line) and its Fourier decomposition (dashed and dotted sine waves); (right) two phase congruent images obtained using 8 different orientations.

2.3 Down sampling

After acquiring the phase congruent image, we apply down-sampling in order to obtain the binary image containing the blood vessels. In the phase congruent image, each pixel has an intensity value in the range 0 to 1. The down-sampling procedure works as a high-pass filter and forms a binary image of the same dimensions as the phase congruent image, where pixels with intensity values above a threshold are set to 1. Two threshold values are used as a pair. For example, let thre1 and thre2 be two threshold values where thre1 > thre2. In the entire phase congruent image, every pixel with intensity value ≥ thre1 is located and assigned the value 1 in the binary image. In addition, using 8-connectivity, any pixel in the neighborhood of a thre1 pixel whose value is greater than or equal to thre2 is also assigned the value 1. Figure 3 shows some binary images where the blood vessels are segmented using different threshold values.
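The pair-thresholding step above can be sketched directly; this reading applies a single 8-connected dilation of the strong pixels, as the text describes (full iterative hysteresis would be a straightforward extension):

```python
import numpy as np

def pair_threshold(pc, thre1, thre2):
    """Binarize a phase-congruency map with a pair of thresholds.

    Pixels >= thre1 are vessel seeds; pixels >= thre2 are kept only when
    they lie in the 8-neighborhood of a seed."""
    strong = pc >= thre1
    weak = pc >= thre2
    grown = strong.copy()
    # One-pixel dilation of the strong pixels in all 8 directions.
    # np.roll wraps around the border; for retinal images the border is
    # background, so the wrap-around is harmless in practice.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            grown |= np.roll(np.roll(strong, dy, axis=0), dx, axis=1)
    return (strong | (grown & weak)).astype(np.uint8)
```

For a pair like (0.47, 0.48) this keeps the confident vessel skeleton plus immediately adjacent medium-confidence pixels, which is what connects thin vessel fragments.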

For both experiments, the down-sampling rates were varied from 0.40 to 0.60, incremented by 0.01. Hence the first down-sampling pair was 0.40-0.41, the next 0.41-0.42, and so on. Accuracies were computed for all 21 down-sampling value pairs to verify which sampling rate produces the optimum accuracies.

Figure 3. Binary images obtained after down-sampling the phase congruent images. Each image shows the segmented blood vessels in white. Threshold pair values: 1st column (0.40, 0.41), 2nd column (0.45, 0.46), 3rd column (0.50, 0.51), 4th column (0.57, 0.58), 5th column (0.59, 0.60).

2.4 Feature Extraction

In our proposed method, we have used a combination of multiple features: the bifurcation (branch) point distance ratio, the mean intensity spread of the optical disc (novel), and the circular segment around the optical disc. These features are explained in the following sections.

2.4.1 Bifurcation (branch) point distance ratio (B)

A bifurcation point, or branch point, is a point in the retinal blood vessel tree where a single vessel divides into two or more sub-vessels; it is the junction point where new vessels are formed from parent vessels. Before bifurcation points are detected, the vessels need to be skeletonized. If skeletonization is applied directly to the down-sampled image, the output image is heavily distorted, so it is essential that the down-sampled image be smoothed prior to skeletonization. For this we have used a tailor-made form of dilation. A square window of size Wsize is correlated with the binary image, and if the correlation value is greater than the window size Wsize, the intensity value of the center pixel is set to 1. This is done for all pixels to form a new image. When the process is complete, we get a smooth image of the retinal blood vessels. Experimenting with different window sizes, with Wsize varying over odd values from 3 to 11, Wsize = 5 produced the best result: it does not erode the vessels so much that they become disconnected, while still reducing the number of noisy pixels.
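A minimal sketch of this window-based smoothing, under two assumptions that the text suggests but does not confirm: the "correlation value" is the count of vessel pixels inside the window, and the threshold on it is Wsize itself.

```python
import numpy as np

def smooth_dilate(binary, wsize=5, min_count=None):
    """Set a pixel to 1 when its wsize x wsize window holds enough
    vessel pixels. min_count defaults to wsize, our reading of the
    paper's 'greater than the window size' criterion (an assumption)."""
    if min_count is None:
        min_count = wsize
    r = wsize // 2
    padded = np.pad(binary, r)          # zero-pad so borders are handled
    out = np.zeros_like(binary)
    rows, cols = binary.shape
    for i in range(rows):
        for j in range(cols):
            # window centered on (i, j) in the original image
            if padded[i:i + wsize, j:j + wsize].sum() >= min_count:
                out[i, j] = 1
    return out
```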

Once the skeletonized image is obtained, the next step is to apply a mask to locate the bifurcation points. We have seven (5x5) binary masks, but it was uncertain which one would be most appropriate for bifurcation point location, and the threshold value for the masks was also unknown. Hence we devised an experiment to find the most appropriate mask and threshold value. The DRIVE [16] database has 40 color retinal images along with ground truth blood vessel images manually marked by experts. Bifurcation points were manually marked and their coordinates stored. The 7 masks were then applied separately to each of the skeletonized ground truth images of the DRIVE database. The masks are shown in Figure 4.

Figure 4. Seven (5x5) binary masks used to identify and locate bifurcation points: top-to-bottom branch mask (T-B), bottom-to-top branch mask (B-T), left-aligned horizontal branch mask (L-R), right-aligned horizontal branch mask (R-L), vertical half mask (V-Half), horizontal half mask (H-Half), and the full mask.

In addition, the results for each external threshold value of the quadrature masks (T-B, B-T, L-R and R-L) and the half masks (V-Half and H-Half) are combined and tested. The masks are correlated with each pixel of the images, and different threshold values are applied. If the correlated value is equal to or above the external threshold value, then that coordinate is marked as

1670

Page 4: [IEEE 2012 International Conference on Machine Learning and Cybernetics (ICMLC) - Xian, Shaanxi, China (2012.07.15-2012.07.17)] 2012 International Conference on Machine Learning and

Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, Xian, 15-17 July, 2012

a bifurcation point. This procedure detects the probable bifurcation points, which we then compare with the manually detected points to verify their authenticity.
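The mask correlation step can be sketched as follows; the all-ones mask and the threshold used in the example are illustrative stand-ins, not the paper's actual masks:

```python
import numpy as np

def detect_branch_points(skel, mask, ext_thresh):
    """Correlate a binary mask with a skeletonized vessel image and
    mark skeleton pixels whose correlation meets the external threshold."""
    m = mask.shape[0]
    r = m // 2
    padded = np.pad(skel, r)
    points = []
    rows, cols = skel.shape
    for i in range(rows):
        for j in range(cols):
            # correlation = sum of elementwise products in the window
            if skel[i, j] and (padded[i:i + m, j:j + m] * mask).sum() >= ext_thresh:
                points.append((i, j))
    return points
```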

For each manually detected bifurcation point, a window of varying size is established centered on that point. The Euclidean distance between the manually detected point and all the probable bifurcation candidate points is computed. A second threshold, known as the internal threshold, is then applied to choose the best bifurcation point: the point whose distance is equal to or less than the internal threshold value is chosen as the detected bifurcation point and regarded as a hit, while all other points inside the box are rejected. This is repeated for all the manually detected points. The internal threshold varies with the external threshold value, as expressed in (2).

Internal_threshold = ⌊External_threshold / 2⌋ − 1   (2)

To determine which mask performs best and under which external threshold value, the following measurements are made. TP (true positive): actual bifurcation points. FP (false positive): points claimed to be bifurcation points that are not; this is computed by subtracting the total number of points that fell inside the windows from the total number of points marked as possible bifurcation points. FN (false negative): points that are actually bifurcation points but are not detected; if no candidate point was found close to a manually detected point, that manually detected point is regarded as a false negative.

Bifurcation Detection Accuracy (BDA) = hit / (TP + FN + FP)   (3)

Based on the BDA value obtained using (3), we select the most suitable mask and external threshold value. After plotting the accuracies, it is seen that for external threshold value 6 the combination of the L-R and R-L masks produced the highest BDA percentage. After locating the branch points, the Euclidean distance between the detected optical disc center and each branch point is computed. The largest distance is selected, the ratio of each branch point's distance to this largest distance is calculated, and all the ratio values are sorted in ascending order. This is a very important aspect of the proposed system: since we take ratios of distances, the system is rotationally invariant, so even if the next acquired image of the same person is rotated to some degree, the ratios do not vary. A further advantage is that this removes the need for image registration and hence saves time.
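The distance-ratio feature reduces to a few lines; a sketch, with the function name our own:

```python
import numpy as np

def distance_ratio_feature(od_center, branch_points):
    """Sorted ratios of optic-disc-to-branch-point distances to the
    largest such distance; rotation invariant by construction."""
    pts = np.asarray(branch_points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(od_center, dtype=float), axis=1)
    return np.sort(d / d.max())
```

Because every distance is divided by the maximum and then sorted, rotating all branch points about the optic-disc center leaves the feature vector unchanged.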

2.4.2 Mean intensity spread of the optical disc (M)

This is a novel feature concept introduced in our previous work [17]. During the optical disc detection process [12], a candidate point for the optical disc center is selected, and the mean intensity values of circular regions of various radii, centered on that candidate point, are computed. These mean intensities are used as features. However, the mean intensity used to choose the candidate point is not itself used as the feature; rather, all the mean intensities for radii ranging from Rmin to Rmax around the selected candidate point are used, and this feature is named the 'mean intensity spread'.

Two separate experiments were performed: 'EXP-1' to test the recognition accuracy on the known 18 images of 6 authorized persons, and 'EXP-2' to test the accuracy on the total of 565 images, where the aim was to accept the first 18 images and reject the last 547. For this feature alone, RGB outperformed YCbCr in both experiments. In EXP-1, the maximum accuracy was found to be 83% for the radius range 13 to 14; hence the radius range was set to 5 to 30. In EXP-2, the optimum accuracy was found to be 88% for radii 47 and 51; hence it was decided that for the complete recognition test the mean intensity values of radii 5 to 51 would be used.
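A sketch of the feature computation, assuming each 'circular region' is the full disc of the given radius (the text does not state whether discs or rings are meant):

```python
import numpy as np

def mean_intensity_spread(img, center, r_min=5, r_max=51):
    """Mean intensity inside discs of radius r_min..r_max around the
    optic-disc candidate point; the vector of means is the feature."""
    cy, cx = center
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    return np.array([img[dist <= r].mean()
                     for r in range(r_min, r_max + 1)])
```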

2.4.3 Semi-circular segment around the optical disc (S)

A previous publication [3] used vessel thickness as a feature in the ROI (region of interest), the area close to the optical disc. The feature used here instead takes the optical disc center as the center point and extracts circular segments from the blood vessel tree. Since this circular segment contains the blood vessel parts very close to the OD, it acts as a very good feature.

Section 2.1 described how to detect the center of the optical disc. A binary circular mask m1 of radius ROUT is formed, whose inner circular region is white (1) and whose exterior region is black (0); such a mask is shown in Figure 5 (left). We then generate another circular mask m2 of radius RIN, where RIN is less than or equal to ROUT, that is exactly the opposite of m1; Figure 5 (middle) shows m2. A global mask is then made from these two: the centers of both masks are aligned, and every pixel of m2 is multiplied with the pixel of m1 it overlaps, the resulting values being assigned to the corresponding pixels of m1. The global mask is shown in Figure 5 (right). Since we might face extremely right- or left-aligned images, only half of the global mask is used to deal with such cases: if the optical disc is aligned to the right then the left side of the mask is used, and vice versa. Figure 6 shows how the mask is applied and the corresponding extracted segment.
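The resulting global mask is simply an annulus, so the half-annulus can be built directly; function and argument names below are ours:

```python
import numpy as np

def annular_half_mask(shape, center, r_in, r_out, left_half=True):
    """Binary half-ring mask: 1 between r_in and r_out around center,
    restricted to the left or right half of the image."""
    cy, cx = center
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    ring = (dist2 <= r_out ** 2) & (dist2 > r_in ** 2)
    half = xx <= cx if left_half else xx >= cx
    return (ring & half).astype(np.uint8)
```

Multiplying this mask with the binary vessel image extracts the semi-circular vessel segment around the OD.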


Figure 5. Global mask creation.

Figure 6. (left) Extracted semi-circular segments in the original image, (right) actual segmented section.

The extracted semi-circular blood vessel segment is a two-dimensional array. To be used as a feature, each column is concatenated to the end of its predecessor to form a single column, which is then transposed to make a string. In order to use this circular or semi-circular segment as a feature, it is important to decide the values of ROUT and RIN. An experiment was performed where ROUT was kept fixed at 50 pixels and RIN was varied from 10 to 40 pixels in increments of 10 pixels, the intention being to determine the thickness of the segment to be used as the feature. When RIN is 10, the thickness of the segment is 40 pixels (50 − 10 = 40). For YCbCr images, a thickness of 20 pixels produced 83.89% maximum accuracy in EXP-1 and a thickness of 30 pixels produced 89.15% in EXP-2. For RGB images, a thickness of 30 pixels produced 84.17% in EXP-1 and a thickness of 40 pixels produced 87.25% in EXP-2. Hence these configurations are used in the final combined authentication process.

2.5 Similarity Measure

A similarity measure uses various tools to find the degree of similarity between the reference templates and the candidate templates. In our proposed system, we have used the 2-D correlation coefficient to find the degree of similarity between templates. It computes the correlation coefficient r between A and B, where A and B are matrices or vectors of the same size:

r = Σm Σn (Amn − Ā)(Bmn − B̄) / √(Σm Σn (Amn − Ā)² · Σm Σn (Bmn − B̄)²)   (4)

where Ā is the mean of A and B̄ is the mean of B.
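This is the quantity MATLAB's corr2 computes; an equivalent sketch:

```python
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient between two same-size arrays,
    as computed by MATLAB's corr2."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```

The value lies in [−1, 1]; identical templates score 1, and authentication accepts a candidate when its best score against the enrolled templates is sufficiently high.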

3. Experimental results and discussion

Our proposed method was implemented in MATLAB 7.8.0 on 32-bit Windows XP, running on a system with an Intel Core 2 Duo 2.4 GHz processor and 2.0 GB of RAM. As for data, we had locally collected 19 images belonging to 6 individuals. Due to the shortage of appropriate data, we selected 18 local images belonging to 6 individuals (3 from each subject) and 547 images, each belonging to a single individual, from the MESSIDOR [13] dataset, which contains 1200 color retinal fundus images with low to mild pathologies. We experimented with the green channel from RGB and the luminance channel from YCbCr. At first, we tested our methods on the 18 images to see if they could be separately and accurately identified. We then combined the 18 images with the 547 MESSIDOR images and tested whether our system could differentiate between the two sets.

For the accuracies of EXP-1, all 18 images were compared with each other (324 comparisons). Since 3 images belong to each individual, we calculated accuracy as follows: if an image belonging to a person had its highest degree of matching with either of the other 2 images of the same person, it was counted as a successful recognition. Figure 7 illustrates the performance of the individual and combined features in EXP-1. For both (top) YCbCr and (bottom) RGB, the semi-circular segment (S) feature template was individually the most successful of the three features. The combination B+M+S produced 94.44% for YCbCr at down-sampling rates 0.47-0.48 and 0.48-0.49, while S, S+M and B+M+S produced a maximum accuracy of 88.89% for RGB at down-sampling rates from 0.40-0.41 to 0.45-0.46.

Figure 7. Graphs showing comparison of accuracies in EXP-1, for (top) YCbCr and (bottom) RGB color images using multiple features.

For the authentication accuracies of EXP-2, each image was compared with all other images (316,969 comparisons). Figure 8 shows the recognition accuracies of EXP-2 for (top) YCbCr and (bottom) RGB images. From Figure 8 (top), for the YCbCr color space, the performance of B degraded as the down-sampling rates changed. This might be because, as the down-sampling rates increase, the number of white pixels decreases and the vessels gradually erode, with branches becoming disconnected; this leads to misclassification of points, increases the false negative rate, and thus lowers the accuracy percentages. Among all combinations, B+S+M produced the maximum accuracy of 93.44% for a down-sampling rate of 0.43-0.44. In Figure 8 (bottom), for the RGB color space, B+S+M produced the maximum accuracy of 90.51% for a down-sampling rate of 0.40-0.41. On average, the entire procedure takes 12.81 seconds per image in EXP-1 and 12.92 seconds per image in EXP-2.

Figure 8. Graphs showing comparison of accuracies in EXP-2, of (top) YCbCr and (bottom) RGB color images, using multiple features.

4. Conclusions

In this paper we have presented a novel method for a retinal blood vessel based biometric authentication system. The suitability of the green channel of RGB and the luminance channel of the YCbCr color space is investigated. Two different authentication experiments are performed, namely EXP-1 and EXP-2. In EXP-1, YCbCr and RGB produced 94.44% and 88.89% accuracies respectively. In EXP-2, for YCbCr and RGB, 93.4% and 90.5% accuracies are achieved respectively. We can safely conclude that YCbCr performs better than RGB, but by a slim margin. Our system does not require any manual intervention during the whole procedure. Its limitation is that it takes somewhat more time than previous work; we would like to overcome this limitation in future work and make the system faster.

References

[1] C. Simon and I. Goldstein, "A New Scientific Method of Identification", New York State Journal of Medicine, Vol. 35, pp. 901 - 906, 1935.

[2] M. Ortega, C. Marino, M.G. Penedo, M. Blanco, F. Gonzalez, "Biometric authentication using digital retinal images", WSEAS International Conference on Applied Computer Science, (2006), pp. 422-427.

[3] H. Farzin, H. Abrishami-Moghaddam, M. Moin, "A Novel Retinal Identification System", EURASIP Journal on Advances in Signal Processing Volume 2008.

[4] M. Ortega, M.G. Penedo, J. Rouco, N. Barreira, M.J. Carreira, "Personal verification based on extraction and characterization of retinal feature points", Journal of Visual Languages and Computing (2009), pp. 80-90.

[5] C. Marino, M.G. Penedo, M.J. Carreira, and F. Gonzalez, "Retinal Angiography based authentication", Lecture Notes in Computer Science, vol. 2905, Springer, Berlin, 2003, pp. 306-313.

[6] C. Marino, M.G. Penedo, M. Penas, M.J. Carreira, F. Gonzalez, "Personal authentication using digital retinal images", Pattern Analysis and Applications 9 (2006), pp. 21-33.

[7] C. Alonso-Montes, M. Ortega, M.G. Penedo, D.L. Vilarino, "Pixel Parallel Vessel Tree Extraction for a Personal Authentication System", Circuits and Systems, 2008. ISCAS 2008. IEEE International Symposium, pp 1596 - 1599.

[8] L. Latha, M. Pabitha and S. Thangasamy, "A Novel Method for Person Authentication using Retinal Images", International Conference on Innovative Computing Technologies (ICICT), 2010, pp. 1-6.

[9] S.M. Kabir Raiyan, R. Rahman, M. Habib, M.R. Khan, "Person Identification by Retina Pattern Matching", ICECE, 2004, pp 522-525.

[10] H. Oinonen, H. Forsvik, P. Ruusuvuori, O. Yli-Harja, V. Voipio, H. Huttunen, "Identity verification based on vessel matching from fundus images", 17th IEEE International Conference on Image Processing (ICIP), September 2010, pp. 4089-4092.

[11] A.F. Frangi, W.J. Niessen, K.L. Vincken, and M.A. Viergever, "Multiscale vessel enhancement filtering", in Medical Image Computing and Computer-Assisted Intervention (MICCAI'98), Springer-Verlag, 1998, pp. 130-137.

[12] M. Islamuddin Ahmed, M. Ashraful Amin, "High speed detection of fundus optical disc", International Conference on Computer and Information Technology (ICCIT), 2011.


[13] MESSIDOR: Digital Retinal Images, http://messidor.crihan.fr/download.php

[14] M. Ashraful Amin, Hong Yan, "High speed detection of retinal blood vessels in retinal fundus image", Springer-Verlag, 2010.

[15] Peter Kovesi, "Image Features from Phase Congruency", Videre: Journal of Computer Vision Research, MIT Press, Vol. 1, No. 3, 1999.

[16] DRIVE: Digital Retinal Images for Vessel Extraction, http://www.isi.uu.nl/Research/Databases/DRIVE/

[17] M. Islamuddin Ahmed, M. Ashraful Amin, Bruce Poon, Hong Yan, "Biometric Authentication using Mean Intensity Spread of Optical disc in Retinal Images", ICITA, 2011, pp. 31-35.
