
Assignment 2: Application of Pattern Recognition in Biomedical Engineering

    Iris Recognition

    Lecturer

Dr. Liew Yih Miin

    Group members:

Foo Yoke Yin KEU110010

    Lai Eng Seong KEU110012

    Lee Kar Mun KEU110013

    Lim Su Yi KEU110015

    Vivian Koh Ci Ai KEU110047

    Yong Ching Wai KEU110049


    Table of Contents

1.0 Introduction
2.0 Iris Recognition Implementation
2.1 Combination of Support Vector Machine and Hamming Distance Approach
2.1.1 Technical Implementation
2.1.2 Results
2.1.3 Discussion
2.1.4 Conclusion
2.2 Neighborhood-Binary Pattern (NBP) Approach
2.2.1 Technical Implementation
2.2.2 Results
2.2.3 Discussion
2.2.4 Conclusion
3.0 Overall Discussion
4.0 Conclusion
5.0 Future Outlook
6.0 References


1.0 Introduction

With the advent of technology, biometric authentication has made it possible to determine human identity automatically. Biometric authentication combines computer systems with human biometrics, the physical or behavioral traits such as fingerprints, face, palm prints, hand geometry, iris and voice, for identification. Among these traits, the iris has attracted particular interest as a basis for recognition systems because its rich texture offers a strong biometric cue for recognizing individuals (Ross, 2010).

Figure 1: The anatomy of the eye

The iris is the colored portion of the eye, located behind the cornea and in front of the lens. Its pupillary dilator and sphincter muscles govern the pupil size and thereby control the amount of light that enters the eye. The details and pattern of the iris differ from person to person because of the way the iris develops. During prenatal growth, the iris develops through a process of tight forming and folding of the tissue membrane. Prior to birth, degeneration occurs, causing the pupil to open and forming the random and unique patterns of the iris. Although the general structure of the iris is genetically linked, each individual's irises are unique and structurally distinct (Westmoreland, Lemp & Snell, 1998).


Pattern recognition is therefore an important stage in iris identification. By recognizing the patterns on the iris, one individual's iris can be distinguished from another's. It is impractical for a human to examine iris patterns manually and identify their owners. A pattern recognition system can match observed iris patterns to stored patterns, recognize iris patterns through their features, and perform structural analysis of those features to identify their correlations. Hence, a pattern recognition system plays such an important role that, without it, the development of iris identification would halt.

The concept of using the iris for recognition was proposed by ophthalmologist Frank Burch in 1936. In 1985, ophthalmologists Drs. Leonard Flom and Aran Safir proposed the concept that no two irides are identical, and they were awarded a patent for the iris identification concept in 1987. Dr. Flom approached Dr. John Daugman to develop an algorithm to automate identification of the human iris. In 1993, a prototype unit was successfully completed by the Defense Nuclear Agency with the help of Drs. Flom, Safir and Daugman. Dr. Daugman was then awarded a patent for his automated iris recognition algorithms in 1994, and the first commercial products became available in 1995. In 2005, the broad patent covering the basic concept of iris recognition expired, opening marketing opportunities for other parties to develop their own algorithms for iris recognition.

Iris recognition methods have been applied intensively for identification purposes. For example, India uses it for its Unique ID program, the United Arab Emirates uses it for border control, and airports in London, Amsterdam and elsewhere use it to speed up identification. Nevertheless, complex challenges remain under special conditions, such as the wearing of contact lenses, artificial eyes, accidental damage to the iris and all sorts of biological conditions related to the iris. More development has to be done to overcome these flaws (Burge & Bowyer, 2013).


    2.0 Iris Recognition Implementation

2.1 Combination of Support Vector Machine and Hamming Distance Approach

Traditionally, only one method was used for iris recognition. For example, the first algorithm for iris localization was proposed by Daugman in 1993. Researchers later found that the Hamming distance could be used for matching, and an alternative segmentation method was introduced (Wildes, 1997) in which an edge detection operator and the Hough transform were used, with the upper and lower eyelids explicitly modelled as parabolic arcs. Methods for recognizing the iris continue to be improved and researched. More recently, a combination of two classification methods was proposed (Rai & Yadav, 2014). The zigzag collarette area of the iris is chosen for feature extraction due to its complex pattern (Roy & Bhattacharya, 2006). A parabola detection technique is used to detect the eyelids, and eyelashes are removed using a median filter. The Haar wavelet and a 1D Log-Gabor filter are used for feature extraction. A support vector machine is then used as the main classifier, followed by the Hamming distance. By combining the support vector machine and Hamming distance approaches, the accuracy on the CASIA and Check iris databases can be increased compared with either single method.


    2.1.1 Technical Implementation

The proposed method proceeds as follows: iris image capture, segmentation of the zigzag collarette area, eyelid detection, normalization, feature extraction, and classification against the enrolled database to produce the result.


    Step 1: Localization of iris boundary

Hough transform is a commonly used technique for identifying the parameters of simple geometric objects, such as lines and circles, in a given image. The circular Hough transform calculates the centre and radius coordinates of the iris region (Masek, 2003). The parametric equations for a circle with radius r and centre (xc, yc) are as follows:

x = xc + r cos θ   (1)

y = yc + r sin θ   (2)

The points (x, y) trace the perimeter of the circle as the angle θ sweeps through the full 360°. The radius can be set as a constant so that the parametric representation of the circle is simplified. An edge detection technique can be applied before adopting the circular Hough transform to find the circles in the given image.

The Hough transform over all edge points (xi, yi), where i = 1, 2, ..., n, can be written in its standard accumulator form as:

H(xc, yc, r) = Σ h(xi, yi, xc, yc, r), summed over i = 1, ..., n   (3)

where h(xi, yi, xc, yc, r) = 1 if (xi - xc)² + (yi - yc)² = r², and 0 otherwise.   (4)

Note: the coordinates (xc, yc, r) with the highest accumulator value are selected as the centre and radius of the circle.
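
As a rough illustration of this step, the sketch below uses OpenCV's circular Hough transform to vote for the iris centre and radius. The file name and all parameter values are assumptions chosen for illustration, not values taken from the paper.

    import cv2
    import numpy as np

    # Load a grayscale eye image; "eye.png" is an assumed file name.
    eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

    # HOUGH_GRADIENT applies Canny edge detection internally (param1 is the
    # upper Canny threshold) and then votes for circle centres and radii.
    circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, dp=1, minDist=eye.shape[0],
                               param1=150, param2=30, minRadius=80, maxRadius=150)

    if circles is not None:
        # The strongest peak in the accumulator gives the iris centre and radius.
        xc, yc, r = np.round(circles[0, 0]).astype(int)
        print("iris centre:", (xc, yc), "radius:", r)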

    Step 2: Selection of zigzag collarette area

The selected zigzag collarette area provides important iris features, as most of the complex patterns of the iris are captured in this area. The zigzag collarette area is unlikely to be affected by eyelids and eyelashes because it lies close to the pupil.

    Step 3: Eyelid detection

Detection of the upper and lower eyelids requires two search regions confined within the zigzag collarette area of the iris and the pupil. The width of each search region is determined by the following equation:


Width of search region = radius of iris - radius of pupil   (5)

The edge image of the eye can be obtained using a horizontal edge map, since the eyelids lie in the upper and lower horizontal regions. Eyelid detection is done by applying a parabolic Hough transform at each edge point within the search regions.

    Step 4: Normalization

Right after successful segmentation, the iris region is transformed so that it has fixed dimensions. Normalization is done using the rubber sheet model devised by Daugman (1993). After that, an eyelash removal method is used to remove eyelashes and restore the underlying iris pattern as much as possible by recreating the zigzag collarette pixels occluded by eyelashes, using information from their non-occluded neighbors. For each pixel in the normalized image, we must decide whether it is occluded by an eyelash. The equation below is used to determine occlusion by an eyelash:

If I(x, y) < T, the pixel is occluded by an eyelash.   (6)

where I is the intensity of a pixel and T is a threshold.

If a pixel satisfies equation (6), a 5 x 5 median filter is applied to that pixel. Within the 5 x 5 neighborhood, only pixels with intensity greater than T are chosen, since they are not occluded by eyelashes. These pixels are sorted in ascending order and the center pixel value is replaced with the median value of the 5 x 5 window neighborhood.
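
A minimal sketch of this eyelash-handling rule, assuming an 8-bit normalized iris image held in a NumPy array and an illustrative threshold value:

    import numpy as np

    def remove_eyelashes(norm_iris, T=60, win=5):
        """Replace pixels darker than T (assumed eyelash-occluded, equation (6))
        with the median of the non-occluded pixels in their win x win window."""
        out = norm_iris.astype(float).copy()
        pad = win // 2
        padded = np.pad(norm_iris.astype(float), pad, mode="edge")
        rows, cols = norm_iris.shape
        for y in range(rows):
            for x in range(cols):
                if norm_iris[y, x] < T:                  # equation (6): occluded pixel
                    window = padded[y:y + win, x:x + win]
                    good = window[window > T]            # keep non-occluded neighbors only
                    if good.size:
                        out[y, x] = np.median(good)      # median of the sorted neighbors
        return out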

    Step 5: Feature Extraction and Matching

The relevant texture information needs to be extracted after normalization is done. Two feature extraction methods are proposed, i.e. Haar wavelet decomposition and the 1D Log-Gabor wavelet. Two classifiers are also used: the support vector machine (SVM) as the main classifier and the Hamming distance as the second classifier. For multi-resolution analysis of the iris region, wavelets have an advantage over the traditional Fourier transform because the frequency data are localized in a wavelet, allowing features that occur at the same position and resolution to be matched up. The Haar wavelet is applied to the normalized image of 64 x 512 pixels at three successive levels for feature extraction. The


Haar wavelet transform is performed repeatedly in order to reduce the information size.
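
The three-level Haar decomposition can be sketched with PyWavelets. Treating the coarsest diagonal-detail sub-band (8 x 64 = 512 values for a 64 x 512 input) as the feature vector is an assumption made here to match the 512-feature count mentioned later in this section, not necessarily the exact sub-band used by the authors.

    import numpy as np
    import pywt

    norm_iris = np.zeros((64, 512))              # placeholder for the normalized iris image

    # Three successive levels of 2D Haar decomposition.
    coeffs = pywt.wavedec2(norm_iris, "haar", level=3)
    cA3, (cH3, cV3, cD3) = coeffs[0], coeffs[1]  # level-3 approximation and detail sub-bands

    features = cD3.flatten()                     # 8 x 64 = 512 coefficients
    print(features.shape)                        # (512,)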

The feature encoding is implemented by convolving the normalized iris region with a 1D Log-Gabor wavelet. The 2D normalized iris region is first broken into a number of 1D signals, and these signals are convolved with the 1D Log-Gabor wavelets. To prevent noise from influencing the output, the intensity values at known noise areas in the normalized pattern are set to the average intensity of the surrounding pixels. The encoding process produces a noise mask along with a bitwise template, which are then used for classification. The combined Support Vector Machine and Hamming distance classification approach is applied to each iris image for matching. The 512 most important features of a given iris are extracted using the Haar wavelet, and the training and testing of the SVM are done using these extracted features. The Hamming distance is then applied for correct classification. For classification by SVM, n SVM models are developed for the n classes present in the training phase. When the training phase is finished, the given iris image is tested against all n SVM models for iris recognition. The performance of the SVM is evaluated using the false acceptance rate (FAR) and the false rejection rate (FRR). FAR is defined as the probability of identifying an outsider as an enrolled user. On the other hand, FRR is the probability of rejecting an enrolled user (wrongly recognizing an enrolled user as an outsider).
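
A sketch of the one-model-per-class SVM stage using scikit-learn's one-vs-rest wrapper. The random data and class counts are placeholders, and a real FAR estimate would additionally need probe images from people who are not enrolled.

    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    # Placeholder Haar feature vectors: 6 enrolled classes,
    # 10 training and 2 test images per class.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((60, 512)), np.repeat(np.arange(6), 10)
    X_test,  y_test  = rng.random((12, 512)), np.repeat(np.arange(6), 2)

    # One SVM model per enrolled class (one-vs-rest), as described above.
    clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X_train, y_train)
    pred = clf.predict(X_test)

    # FRR here is the fraction of enrolled probes not assigned to their true class.
    frr = float(np.mean(pred != y_test))
    print("false rejection rate:", frr)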

The Hamming distance is applied if the given iris is falsely rejected. The 1D Log-Gabor filter is adopted to extract a 2048-bit feature before the Hamming distance is applied for classification. The Hamming distance is calculated as the mean distance between the given iris and the training templates of each class. For instance, if there are m = 5 training templates in a particular class, then the Hamming distance S is calculated by the following equation:

Si = (HD1 + HD2 + ... + HDm) / m,   for class i = 1, 2, ..., n   (7)

where HDj is the Hamming distance between the given iris template and the j-th training template of class i. User authenticity is determined by comparing the mean Hamming distance with a threshold value: the result is True (the user is accepted) if Si falls below the threshold, and False otherwise.
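
A sketch of the masked Hamming distance and the mean-distance decision rule of equation (7). The binary codes, noise masks and threshold value below are assumptions.

    import numpy as np

    def hamming_distance(code_a, code_b, mask_a, mask_b):
        """Fraction of disagreeing bits, counted only where both noise masks are valid."""
        valid = mask_a & mask_b
        disagree = (code_a ^ code_b) & valid
        return disagree.sum() / max(int(valid.sum()), 1)

    def mean_distance_to_class(query, query_mask, templates, masks):
        """S_i: mean Hamming distance between the query and the templates of class i."""
        return float(np.mean([hamming_distance(query, t, query_mask, m)
                              for t, m in zip(templates, masks)]))

    THRESHOLD = 0.35                               # assumed decision threshold
    # accept = mean_distance_to_class(code, mask, class_templates, class_masks) < THRESHOLD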


    2.1.2 Results

    Figure 2: Selection of zigzag collarette area

    Figure 3: Eyelids detection

    Figure 4: Normalization process

    Figure 5: Eyelashes removal


    Figure 6: Matching process

    Table 1

    Table 2


    2.1.3 Discussion

There are several reasons for applying two different techniques for feature extraction and classification:

1. Information about the noise mask cannot be obtained from the Haar-decomposed feature vector. This information plays a significant role in Hamming distance classification. Thus, the Haar feature vector is suitable for SVM classification but not for the Hamming distance classifier.

2. The feature vector obtained using the 1D Log-Gabor wavelet is suitable for the Hamming distance but not for the SVM.

3. According to extensive testing of different networks, the combination of the SVM and Hamming distance approaches gives better recognition accuracy than either method alone. As the SVM alone has a relatively high false rejection rate, the Hamming distance is added to overcome this problem, resulting in a higher recognition rate.

Based on the table comparing the accuracy of the proposed method with previously reported approaches, the combined SVM and Hamming distance method achieved 99.91% accuracy, the highest among all the approaches.

    2.1.4 Conclusion

An efficient approach to iris feature extraction and recognition can be achieved using the method discussed above. The zigzag collarette area of the iris is chosen for feature extraction because it captures the most significant portion of the complex iris pattern, and hence a higher recognition rate can be achieved. The Haar wavelet and 1D Log-Gabor filter are used to extract features, which are then used for iris identification with the combined support vector machine and Hamming distance approach. Parabola detection and a trimmed median filter are also used to detect eyelids and eyelashes. Comparisons between previously reported approaches and the one currently proposed show that the recognition accuracy is higher when the combination of SVM and Hamming distance is used than with either SVM or the Hamming distance alone. Accuracy in terms of FAR and FRR for this method is


exceedingly high for the CASIA database. In short, this proposed approach is efficient not only for identification but also for verification.

    2.2 Neighborhood-Binary Pattern (NBP) Approach

A novel feature extraction method known as the Neighborhood-Binary Pattern (NBP) has been proposed (Hamouchene & Aouat, 2014). This method is inspired by the Local Binary Pattern (LBP) method, as the NBP method is able to capture local information while describing the iris texture better.

    2.2.1 Technical Implementation

There are a few steps in an iris recognition system: image acquisition, iris preprocessing, feature extraction and the matching step, as outlined below (Hamouchene & Aouat, 2014).

Image acquisition: obtain an image of the eye from a person using a sensor.

Preprocessing: remove useless information from the iris image and extract the region of interest (the iris). This includes segmentation (isolating the iris ring) and normalization (providing an invariant iris area by unwrapping the region of interest into a rectangular region).*

Feature extraction: two approaches are considered, the Local Binary Pattern (LBP) and the Neighborhood Binary Pattern (NBP).

Matching step: the distance between the generated iris code and the stored iris code is calculated.**


Note: *Because the two boundaries of the iris are not concentric, Daugman's integro-differential operator is used to detect the inner and outer boundaries (Daugman, 2004).

**Daugman uses the Hamming distance with a threshold of around 0.34.

    Figure 7: Typical process of iris recognition

    Figure 8: Conversion of iris image to iris code

The LBP method, proposed by Ojala and Pietikainen, uses an analysis window of size 3x3. The method compares each neighborhood pixel with the value of the central pixel: if the neighborhood pixel value is greater than or equal to the central pixel value, it is encoded as 1, otherwise it is encoded as 0. After being thresholded by the central pixel value in this way, a binary code is obtained, and this binary code is converted to a decimal number.

Feature Extraction using Local Binary Pattern (LBP)

The aim of LBP is to extract iris features from the normalized iris images. The output of LBP is a feature vector of dimension n x n that serves as input to an LVQ classifier.


    Summary of LBP computation:

    Figure 9: Computation of LBP

Another feature that can be computed using the LBP method is known as the C contrast. The C contrast is computed as the difference between the average of the pixels whose thresholded value equals 1 and the average of the pixels whose thresholded value equals 0.

Figure 10: C-contrast computation method

The computation illustrated in Figure 9 proceeds as follows:

1. A sub-image with a 3x3 matrix is formed.

2. Every neighboring pixel is compared with the central pixel. If the neighbor pixel is greater than the central pixel, it is recorded as 1; if it is less, it is recorded as 0.

3. The thresholded values are matched with the weight pixels: where the threshold is 1, the corresponding weight pixel value is taken; where the threshold is 0, a 0 is used.

4. The weighted pixel values are summed and the central pixel is replaced with this weighted sum. The new set of data formed is the LBP data.
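
A small sketch of the 3x3 LBP computation in the list above; the neighbor weighting order and the example sub-image are assumed for illustration.

    import numpy as np

    WEIGHTS = np.array([[1, 2, 4],
                        [128, 0, 8],
                        [64, 32, 16]])             # assumed weighting order of the 8 neighbors

    def lbp_code(window):
        """window: 3x3 sub-image; returns the decimal LBP code of its centre pixel."""
        centre = window[1, 1]
        thresholded = (window >= centre).astype(int)   # 1 if neighbor >= centre, else 0
        thresholded[1, 1] = 0                          # the centre itself carries no weight
        return int((thresholded * WEIGHTS).sum())      # weighted sum = decimal LBP value

    example = np.array([[6, 5, 2],
                        [7, 6, 1],
                        [9, 8, 7]])
    print(lbp_code(example))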


Feature Extraction using Neighborhood Binary Pattern (NBP)

In the NBP method, the neighborhood pixel values are thresholded as in LBP, but the difference is that the NBP method compares each neighborhood pixel value with its next neighbor instead of with the central pixel value. The conditions are much the same: a value of 1 is assigned if a pixel's gray value is greater than that of its next neighbor, and a value of 0 otherwise.

    Figure 11: Extraction of NBP pattern

Starting from one of the pixel values in the 3x3 analysis window (excluding the central pixel), each value is compared with its adjacent value to determine 0 or 1. The binary code is then converted to a decimal value. If a small rotation is applied to the analysis window, a different NBP (binary) code would be obtained.

To overcome this rotation problem, an encoding process is proposed. This encoding process picks the largest neighborhood pixel value and starts the comparison with the adjacent neighborhood pixel values from there. This encoding process produces the same binary code even if the analysis window is rotated.

    Figure 12: Rotation invariant for NBP method.
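
The window-level NBP encoding, including the rotation-invariant start at the largest neighbor, can be sketched as below; the clockwise traversal order and the example values are assumptions.

    import numpy as np

    def nbp_code(window, rotation_invariant=True):
        # neighbors read clockwise around the centre pixel (assumed ordering)
        ring = np.array([window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                         window[2, 2], window[2, 1], window[2, 0], window[1, 0]])
        if rotation_invariant:
            start = int(np.argmax(ring))           # begin at the largest neighbor
            ring = np.roll(ring, -start)
        nxt = np.roll(ring, -1)                    # each pixel's next neighbor
        bits = (ring > nxt).astype(int)            # 1 if greater than its next neighbor
        return int("".join(map(str, bits)), 2)     # binary code -> decimal value

    example = np.array([[6, 5, 2],
                        [7, 6, 1],
                        [9, 8, 7]])
    print(nbp_code(example))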

One way to describe the NBP image is by using a decomposition architecture. This method first divides the image into several blocks; the mean value of each block is calculated and the variation between block means is encoded. As with the condition used in NBP, if the value of one block is greater than that of its neighbor block, a value of 1 will


be assigned, and 0 otherwise. We therefore obtain a binary matrix of the mean variations, which can be used as the template of the iris texture.

    Figure 13: Process of encoding mean variation

The intersection method is used for matching, comparing the iris images by calculating the similarity distance between the two extracted matrices with the following equation:

Dis(M1, M2) = (1 / Nb) * Σ Si,  summed over i = 1, ..., Nb, where Si = 1 if M1(i) = M2(i) and Si = 0 otherwise.

In this equation, M1 and M2 are the variation binary codes of the two iris images. S for the i-th block is equal to 1 if the value of M1 for the i-th block equals the value of M2 for the i-th block. Nb represents the total number of blocks, which depends on the degree of decomposition of the iris image. If the value of Dis is above a certain threshold, the two iris images are attributed to the same person.
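
A sketch of the block decomposition, mean-variation encoding and intersection similarity described above; the block size, the neighbor-block ordering (flattened row-major order here) and the threshold are assumptions.

    import numpy as np

    def mean_variation_code(nbp_image, block=8):
        """Split the NBP image into block x block tiles, take each tile's mean,
        and encode whether each mean is greater than the next one (1) or not (0)."""
        h, w = nbp_image.shape
        tiles = nbp_image[:h - h % block, :w - w % block] \
            .reshape(h // block, block, w // block, block)
        means = tiles.mean(axis=(1, 3)).ravel()        # row-major order of block means
        return (means[:-1] > means[1:]).astype(int)

    def intersection_similarity(m1, m2):
        """Dis(M1, M2): fraction of blocks whose variation codes agree (Si = 1)."""
        return float(np.mean(m1 == m2))

    SAME_PERSON_THRESHOLD = 0.7                        # assumed threshold
    # same_person = intersection_similarity(code1, code2) > SAME_PERSON_THRESHOLD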

The public CASIA iris database is used to evaluate the performance of the system. As suggested by Hamouchene and Aouat (2014), three images from each person are taken as references, and 80 images are used as test images, each referred to as a query. For each image, the LBP histogram and the mean variation of the NBP image are extracted. The Hamming distance is calculated between the query's features and the stored features. The Hamming distances are sorted from most similar to least similar, the top three are considered, and the query iris is classified by the majority (highest similarity).
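
The matching protocol above (rank by distance, vote over the top three) can be sketched as follows; the distance measure and data layout are assumptions.

    import numpy as np
    from collections import Counter

    def classify_query(query_code, references):
        """references: list of (person_id, stored_code) pairs; returns the majority
        person id among the three most similar references."""
        dists = sorted((float(np.mean(query_code != code)), pid) for pid, code in references)
        top3 = [pid for _, pid in dists[:3]]           # three most similar references
        return Counter(top3).most_common(1)[0][0]      # majority vote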


    Figure 14: Recognition process flow

2.2.2 Results

    Figure 15: Recognition rate for LBP and NBP method for each person

2.2.3 Discussion

From the graph in Figure 15, we can see that the NBP method performs considerably better than the LBP method: the global recognition rate of LBP is only 58.75%, whereas that of NBP is 76.25%. This is because the NBP method compares the neighborhood pixel values with their adjacent pixel values rather than thresholding them against the central pixel value. In other words, the NBP method captures the relationships among the neighborhood pixel values. This result shows the robustness and efficiency of the NBP method compared with the LBP method.

    2.2.4 Conclusion

We can conclude that the NBP method performs well compared with LBP. This is because there is a relative connection between the neighborhood pixels, as each of them is thresholded against its adjacent neighbor and encoded. Not


only that, but the NBP image is also decomposed into a number of blocks whose variations in mean value are extracted and encoded. The resulting binary matrix is used as the feature descriptor for the iris image.

3.0 Overall Discussion

There are many approaches to iris recognition. However, a classical iris recognition system involves a series of common steps: image acquisition, iris preprocessing, feature extraction and the matching step. The preprocessing stage involves segmentation, filtering and normalization.

The two methods for iris recognition described above go through these common steps, but each uses a different approach for feature extraction and matching. The table below summarizes the comparison between the methods.

Table 3: Comparison between Method 1 and Method 2 for iris recognition

Method 1: Combination of Support Vector Machine and Hamming Distance Approach
Method 2: Neighborhood-Binary Pattern (NBP) Approach

Segmentation
  Method 1: zig-zag collarette (the irregular jagged line between the pupillary zone and the ciliary zone on the surface of the iris)
  Method 2: iris region

Feature extraction
  Method 1: (a) Haar wavelet decomposition; (b) 1D Log-Gabor wavelet
  Method 2: (a) Neighborhood Binary Pattern (NBP)


Matching
  Method 1: (a) Support Vector Machine; (b) Hamming distance
  Method 2: (a) Hamming distance

Advantages
  Method 1: the segmentation is specific to the zig-zag collarette region, which captures the most important areas of the iris; two classifiers increase the robustness of the iris recognition; low false acceptance ratio
  Method 2: rotational invariance

Disadvantages
  Method 1: high false rejection ratio (for the SVM alone); the data are affected by rotation of the iris
  Method 2: with the Hamming distance classifier alone, the accuracy is not robust

The two methods above apply pattern recognition approaches in different ways to identify the iris pattern. Feature extraction is considered the most important stage of pattern recognition because it extracts the most significant information from the given data, in this case the iris pattern. Method 1 extracts the feature information using wavelets. The wavelet is a useful tool with rich mathematical content and many applications, one of which is image analysis for pattern recognition. It is a function that is localized with respect to both time and frequency, whereas the Fourier transform is localized only with respect to frequency. In method 1, one of the wavelet transforms used is the efficiently computable Haar wavelet transform. The iris image is mapped from the space of pixels to that of Haar wavelet features, which contain a rich description of the pattern. On the other hand, method 2 applies the Neighborhood-Binary Pattern (NBP) for feature extraction.


Generally, NBP is based on the Local Binary Pattern (LBP), an effective texture operator that labels the pixels of an image by thresholding the neighborhood of each pixel and treating the result as a binary number. Thus, the two different approaches to feature extraction may produce different recognition rates, each approach with its own strengths and weaknesses, as shown in Table 3.

After feature extraction, a classifier is applied to classify or identify the iris pattern for identification. Method 1 uses an extra classifier, the Support Vector Machine, a supervised learning model that works with an associated learning algorithm for pattern recognition. With the additional classifier, the error is reduced compared with method 2.

The results from both methods show that method 1 is more stable and robust, with an average recognition rate of around 99.91%, whereas LBP achieved 58.75% and NBP 76.25%. However, this does not necessarily mean that one method is better than the other, because the methods were evaluated in differently designed experiments.

    4.0 Conclusion

In conclusion, the approach toward iris recognition differs between the two methods, and each approach has its own advantages and disadvantages. However, research can be carried out on a novel iris recognition scheme that combines both methods: a wavelet transform is used to decompose a given image into different sub-images, which are then processed with the local differences between each image pixel and its neighborhood to build the LBP or NBP. The combination may increase the robustness of the iris recognition.

    5.0 Future Outlook

Future work involves resolving the challenges faced in developing an ultimately accurate and reliable iris recognition system. Besides solving the issues caused by the special iris conditions mentioned in the introduction, noise caused by environmental factors such as unfavourable lighting, large stand-off distances and moving subjects must also be addressed (Ross, 2010). These conditions can cause non-ideal iris images to


be captured and increase the difficulty of processing. Robust image restoration schemes are needed to enhance the quality of such iris images. In the intervening time, researchers might develop a robust ocular multi-biometric system by combining the iris with facial biometrics, so that more accurate and improved matching and identification can be achieved than with a low-resolution iris image alone.

Apart from that, the hardware architecture of the system has to be improved to complement the increasingly complex algorithms used to enhance the reliability and functionality of currently used solutions (Grabowski & Napieralski, 2011).

The use of iris identification systems on a large scale has raised concerns about iris template security and the retention of the owners' privacy. A centralized database that stores millions of iris templates must be secured with extra protection against theft and backed up to avoid data loss. This security system needs to be updated and monitored frequently, as digital thieves will also improve their techniques over time.

6.0 References

Ali, H., & Salami, M. J. (2008). Iris recognition system by using support vector machines. In International Conference on Computer and Communication Engineering (pp. 516-521).

Burge, M. J., & Bowyer, K. W. (2013). Handbook of iris recognition. Springer.

CASIA iris image database (v1.0), The National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CAS), 2006.

Chen, C., & Chu, C. (2005). High performance iris recognition based on 1-D circular feature extraction and PSO-PNN classifier. Expert Systems with Applications, 36(7), 10351-10356.

Daugman, J. G. (1993). High confidence visual recognition of persons by a test of statistical independence. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 15(11), 1148-1161.


Daugman, J. (2004). How iris recognition works. Circuits and Systems for Video Technology, IEEE Transactions on, 14(1), 21-30.

Grabowski, K., & Napieralski, A. (2011). Hardware architecture optimized for iris recognition. Circuits and Systems for Video Technology, IEEE Transactions on, 21(9), 1293-1303.

Hamouchene, I., & Aouat, S. (2014). A new texture analysis approach for iris recognition. AASRI Procedia, 9, 2-7.

Masek, L. (2003). Recognition of human iris patterns for biometric identification (Master's thesis, University of Western Australia).

Patil, C. M., & Patilkulkarani, S. (2009, October). An approach of iris feature extraction for personal identification. In Advances in Recent Technologies in Communication and Computing, 2009 (ARTCom '09), International Conference on (pp. 796-799). IEEE.

Rai, H., & Yadav, A. (2014). Iris recognition using combined support vector machine and Hamming distance approach. Expert Systems with Applications, 41(2), 588-593.

Rashad, M. Z., Shams, M. Y., Nomir, O., & El-Awady, R. M. (2011). Iris recognition based on LBP and combined LVQ classifier. International Journal of Computer Science & Information Technology (IJCSIT), 3.

Ross, A. (2010). Iris recognition: The path forward. Computer, 43(2), 30-35.

Westmoreland, B. F., Lemp, M. A., & Snell, R. S. (1998). Clinical Anatomy of the Eye. Oxford: Blackwell Science Inc.