1.IMAGE PROCESSING 2.NEURAL NETWORK


    CHAPTER 1

    INTRODUCTION


Fingerprints have been used for over a century and are the most widely used form of biometric identification. Fingerprint identification is commonly employed in forensic science to support criminal investigations, and in biometric systems such as civilian and commercial identification devices. Despite this widespread use of fingerprints, there has been little statistical work done on the uniqueness of fingerprint minutiae. In particular, the issue of how many minutiae points should be used for matching a fingerprint is unresolved. One of the most widely cited fingerprint enhancement techniques is the method employed by Hong, which is based on the convolution of the image with Gabor filters tuned to the local ridge orientation and ridge frequency. The main stages of this algorithm include normalization, ridge orientation estimation, ridge frequency estimation and filtering. Fingerprints have traditionally been classified into categories based on information in the global patterns of ridges. A fingerprint classification system should be invariant to rotation, translation, and elastic distortion of the frictional skin. A number of approaches to fingerprint classification have been developed. Some of the earliest approaches did not make use of the rich information in the ridge structures and depended exclusively on the orientation field information.

For matching of the image we are going to use the self-organized mapping algorithm. In this algorithm the weight vectors are calculated. According to the input image features and the stored database class, the weight vectors are adjusted. Thus, by adjusting the weight vectors according to the input image & stored image, we find the winning output node. In this way the system is trained for proper feature classification of the image. The SOM has proved to be truly useful for the difficult fingerprint classification problem. Also, the classifier has the strength of being extended by introducing more complicated SOM architectures.


CHAPTER 2

    PROJECT DESIGN

    1.IMAGE PROCESSING

    2.NEURAL NETWORK


    IMAGE PROCESSING


    A. IMAGE ENHANCEMENT PROCEDURE

    Block Diagram

    Working of block diagram:

1) A fingerprint image is taken as input, sensed by a fingerprint sensor.

2) On this fingerprint image, pre-processing and minutiae extraction are carried out.

3) The pre-processing consists of segmentation, normalization, orientation, and binarization.

4) Feature extraction consists of minutiae extraction in the form of bifurcations and ridge orientation.

5) These features are stored in a fingerprint database. The extracted features are also classified by a self-organized map in MATLAB. The classified image & the database image are compared & then authentication is done. A minimal sketch of this pipeline is given below.
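The following MATLAB sketch only illustrates the order of the steps above; the file name fingerprint.bmp, the crude global normalization and thresholding, and the omission of segmentation, orientation estimation and SOM classification are simplifying assumptions, not the report's actual implementation.

    % Minimal sketch of the enhancement / extraction flow (illustrative only)
    I  = im2double(imread('fingerprint.bmp'));   % 1) input fingerprint image (assumed file name)
    N  = (I - mean(I(:))) / std(I(:));           % 3) crude normalization to zero mean, unit variance
    BW = N < 0;                                  % 3) crude binarization: ridges are darker than the mean
    T  = bwmorph(BW, 'thin', Inf);               % thinning down to one-pixel-wide ridges
    % 4)-5) minutiae extraction and SOM classification would follow (see later sections)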


    A.1 Pre-processing and Minutiae Extraction

    A.1.1 Pre-processing

Pre-processing procedures necessary for minutiae extraction are shown in Fig. A.1.1.

Fig A.1.1 Pre-processing (blocks: input fingerprint → local ridge orientation → image enhancement & ridge segmentation → thinning)

The first pre-processing procedure is the calculation of the local ridge orientation. The least mean square orientation estimation algorithm is used, and the local ridge orientation is specified by blocks rather than for every pixel. The calculated orientation is in the range between 0 and π.

A.1.2 Segmentation

The first step of the fingerprint enhancement algorithm is image segmentation. Segmentation is the process of separating the foreground regions in the image from the background regions. The foreground regions correspond to the clear fingerprint area containing the ridges and valleys, which is the area of interest. The background corresponds to the regions outside the borders of the fingerprint area, which do not contain any valid fingerprint information. When minutiae extraction algorithms are applied to the background regions of an image, the result is the extraction of noisy and false minutiae. Thus, segmentation is employed to discard these background regions, which facilitates the reliable extraction of minutiae. The figure illustrates the results of segmenting a fingerprint image based on variance thresholding. The variance image in Figure (b) shows that the central fingerprint area exhibits a very high variance value, whereas the regions outside this area have a very low variance. Hence, a variance threshold is used to separate the fingerprint foreground area from the background regions.


The final segmented image is formed by assigning the regions with a variance value below the threshold a grey-level value of zero, as shown in Figure (c). These results show that the foreground regions segmented by this method comprise only areas containing the fingerprint ridge structures, and that regions are not incorrectly segmented.
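A minimal MATLAB sketch of this variance-based segmentation is shown below; the 16x16 block size and the threshold value 0.01 are illustrative assumptions, not values taken from the report.

    I = im2double(imread('fingerprint.bmp'));              % assumed input image
    fun = @(b) var(b.data(:)) * ones(size(b.data));        % per-block grey-level variance
    V = blockproc(I, [16 16], fun);                        % variance image, constant within each block
    mask = V > 0.01;                                       % keep high-variance (foreground) blocks
    segmented = I .* mask;                                 % background regions set to a grey level of zero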

A.1.3 Normalization

The next step in the fingerprint enhancement process is image normalization. Normalization is used to standardize the intensity values in an image by adjusting the range of grey-level values so that it lies within a desired range of values. Due to imperfections in the fingerprint image capture process, such as non-uniform ink intensity or non-uniform contact with the fingerprint capture device, a fingerprint image may exhibit distorted levels of variation in grey-level values along the ridges and valleys. Thus normalization is used to reduce the effect of these variations.


A.1.4 Orientation estimation

Fig A.1.4 The orientation of a ridge pixel in a fingerprint

The orientation field of a fingerprint image defines the local orientation of the ridges contained in the fingerprint (Fig A.1.4). The orientation estimation is a fundamental step in the enhancement process, as the subsequent Gabor filtering stage relies on the local orientation in order to effectively enhance the fingerprint image.
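A compact sketch of the least-mean-square (gradient-based) block orientation estimate is given below; the 16x16 block size is an assumption, and the final pi/2 offset converts the dominant gradient direction into the ridge direction.

    [Gx, Gy] = gradient(im2double(I));                 % pixel-wise gradients
    f  = @(b) sum(b.data(:)) * ones(size(b.data));     % block-wise sum
    num = blockproc(2 .* Gx .* Gy,  [16 16], f);
    den = blockproc(Gx.^2 - Gy.^2,  [16 16], f);
    theta = 0.5 * atan2(num, den) + pi/2;              % local ridge orientation per block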


A.1.5 Binarisation

Most minutiae extraction algorithms operate on binary images where there are only two levels of interest: the black pixels that represent ridges, and the white pixels that represent valleys. Binarisation is the process that converts a grey-level image into a binary image. This improves the contrast between the ridges and valleys in a fingerprint image, and consequently facilitates the extraction of minutiae.
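As a minimal sketch (a simple global Otsu threshold, standing in for whatever local binarisation the full enhancement pipeline uses):

    G  = mat2gray(I);                       % scale grey levels to the range [0, 1]
    BW = im2bw(G, graythresh(G));           % global Otsu threshold
    BW = ~BW;                               % ridges are dark, so invert to make them the foreground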

A.1.6 Thinning

The final image enhancement step typically performed prior to minutiae extraction is thinning. Thinning is a morphological operation that successively erodes away the foreground pixels until they are one pixel wide. A standard thinning algorithm is employed, which performs the thinning operation using two sub-iterations. This algorithm is accessible in MATLAB via the 'thin' operation of the bwmorph function.
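In MATLAB this is a one-line call (the variable BW below is assumed to hold the binarised fingerprint):

    T = bwmorph(BW, 'thin', Inf);           % repeat thinning until the ridges are one pixel wide
    imshow(T);                              % display the thinned ridge map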

When binarisation and thinning are applied to a fingerprint image without any pre-processing stages such as image enhancement, the resulting binary image is not well connected and contains significant amounts of noise and corrupted elements. Consequently, when thinning is applied to this binary image, the results show that the accurate extraction of minutiae from this image would not be possible due to the large number of spurious features produced. Thus, it can be shown that employing a series of image enhancement stages prior to thinning is effective in facilitating the reliable extraction of minutiae.

A.2 Minutiae Extraction

Minutiae points are the ridge endings or ridge bifurcation branches of the finger image.


Ridge ending --- termination of a ridge

Ridge bifurcation --- junction of two or more ridges

After a thinned fingerprint image is obtained, minutiae are directly extracted from the thinned image. To detect minutiae, a count of the pixel-value transitions at a point of interest in a mask is used. If the count equals 2, then the point is an endpoint. If the count equals 6, then the point is a bifurcation. For each extracted minutia, the x & y coordinates and the orientation are recorded. The minutiae orientation is defined as the local ridge orientation of the associated ridge. The minutiae orientation is in the range between 0 and 2π.
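A sketch of this transition-count test is given below; it assumes T is the thinned binary image and scans every interior ridge pixel (border handling and duplicate suppression are omitted).

    [R, C] = size(T);
    endings = [];  bifurcations = [];
    for r = 2:R-1
        for c = 2:C-1
            if T(r,c)
                nbr = double([T(r-1,c-1) T(r-1,c) T(r-1,c+1) T(r,c+1) ...
                              T(r+1,c+1) T(r+1,c) T(r+1,c-1) T(r,c-1)]);
                t = sum(abs(diff([nbr nbr(1)])));      % pixel-value transitions around the mask
                if t == 2
                    endings(end+1,:) = [r c];          % ridge ending
                elseif t == 6
                    bifurcations(end+1,:) = [r c];     % ridge bifurcation
                end
            end
        end
    end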


Flowchart of minutiae extraction algorithm

Fig A.2 Flowchart of minutiae extraction algorithm


    NEURAL NETWORK

    (SELF ORGANISED MAP)


Neural network

In our project we classified the enhanced image by using SOM, i.e. a self-organized map, which uses a neural network.

A neural network is an information processing system that is non-alphanumeric, non-digital & intensely parallel.

Neural networks require training patterns, i.e. they have to be told the desired response for a given input, & feedback is obtained on their performance until they learn the pattern for the network.

Difficulty occurs if there is no training pattern for the network. Such networks undergo both self-organizing & superimposed learning.

They modify the weights associated with the neural connections based on the characteristics of the input pattern.


    SELF ORGANISED MAP

DEFINITION OF SOM

The SOM is characterized by the formation of a topographic map of the input patterns, in which the spatial locations (i.e. the coordinates of the neurons in the lattice) are indicative of intrinsic statistical features contained in the input patterns.

The principal goal of the SOM is to transform an incoming signal pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion.

Structure of SOM

Figure 1

For matching of the image we are going to use the self-organized mapping algorithm. In this algorithm the weight vectors are calculated. According to the input image features and the stored database class, the weight vectors are adjusted.


In the above example an input vector X = (x1, x2, x3) is given; the weight vectors w1, w2, w3 of one output node match it most closely, so the input image is assigned to that node's class. This is done by constructing a SOM of size m x m and assigning random values to all the weights corresponding to the m x m map neurons. Then the feature vectors are trained sequentially:

A total of N feature vectors, each 56-dimensional, are fed into the SOM. For simplicity, all N vectors are trained one by one, so N trainings going through each vector one time are regarded as one run of training. Such a run is repeated K times.

In this case the input image has normalized components x1, x2 & x3. The input vector is compared with each of the weight vectors to find the nearest distance.

The input vector and the weight vector are the same when d = 0, i.e. when the net weighted input = 1.

Thus the distance d between the input & each output neuron is calculated, & the one with minimum distance is declared the winning neuron; the input image is classified into that winning neuron's class.
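For example, with a 3-component input and two output nodes, the squared Euclidean distance picks the winner; the numbers below are made up purely for illustration.

    x  = [0.8 0.1 0.5];                     % normalized input components x1, x2, x3 (illustrative)
    w1 = [0.7 0.2 0.4];                     % weight vector of output node 1 (illustrative)
    w2 = [0.1 0.9 0.9];                     % weight vector of output node 2 (illustrative)
    d1 = sum((x - w1).^2);                  % d1 = 0.03
    d2 = sum((x - w2).^2);                  % d2 = 1.29
    [~, winner] = min([d1 d2]);             % winner = 1, so the input is classified to node 1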


SOM ALGORITHM

Weights are initialized by setting them to small random values.

A new input is presented.

The input vector xi(t) is compared with each of the weight vectors to determine which is nearest; the distance dj between the input and each output neuron j is

    dj = Σi [xi(t) − wij(t)]²

where xi(t) is the input at neuron i at time t and wij(t) is the weight from input neuron i to output neuron j at time t.

The output neuron, j*, with minimum distance is selected.

The output neuron j* and its neighbours are updated for j ∈ Nj*(t):

    wij(t + 1) = wij(t) + η(t) [xi(t) − wij(t)]

where η(t) is a gain term that decreases with time and Nj*(t) is the neighbourhood of the winning neuron at time t.

The steps are repeated by presenting a new pattern.

Once the network has been properly initialized, the essential process involved in the formation of the SOM is competition.
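A compact MATLAB sketch of this training loop is given below. The map size, feature dimension, number of runs, the gain schedule η(t) and the crude index-based neighbourhood are all illustrative assumptions; a full SOM would use the 2-D lattice distance for the neighbourhood.

    m = 4;  dim = 56;  runs = 10;                        % assumed map size, feature length, runs
    X = rand(20, dim);                                   % placeholder feature vectors, one per row
    W = 0.01 * rand(m*m, dim);                           % small random initial weights, one row per node
    for t = 1:runs
        eta    = 0.5 * (1 - t/runs);                     % gain term that decreases with time
        radius = max(1, round((m*m/2) * (1 - t/runs)));  % shrinking neighbourhood (index-based)
        for n = 1:size(X, 1)
            d = sum((W - X(n,:)).^2, 2);                 % distance d_j to every output node
            [~, jstar] = min(d);                         % winning node j*
            nb = abs((1:m*m)' - jstar) <= radius;        % neighbours of j* (crude approximation)
            W(nb,:) = W(nb,:) + eta * (X(n,:) - W(nb,:)); % update winner and its neighbours
        end
    end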


COMPETITION:

For each input pattern, the neurons in the network compute their respective values of a discriminant function.

This discriminant function provides the basis for competition among the neurons. The particular neuron with the largest value of the discriminant function is declared the winner of the competition.

For this, the Euclidean distance between the vectors X & Wij is calculated. The one with minimum Euclidean distance is declared the winning node, and the image is classified into the winning node's class.

ADVANTAGES OF SOM

This system is very fast.

It is simple to compare Euclidean distances for matching.

Accuracy is high.

ADVANTAGE OF SOM OVER OTHER METHODS

In SOM, weight vectors are calculated; the distance between two minutiae images is compared against a threshold level and reported as high or low, so the probability of finding a match increases. But in other methods there is a fixed distance that must hold between two minutiae images, so if that distance is not met the output is directly given as no match.


CHAPTER 3

    DATA FLOW DIAGRAM


1. Image Enhancement

Data flow: input fingerprint image (scanned image) → image normalisation → image binarisation → thinning of the image → enhanced image


2. Minutiae extraction

3. Classification using SOM

Data flow: position of minutiae (input) → SOM network → winning node determines the class of the image (output)


CHAPTER 4

REQUIREMENT ANALYSIS


PROJECT FEASIBILITY

Technological feasibility

The project is built on the MATLAB platform, so it is hardware independent; however, it works only in a Windows environment.

Due to its inherent covert nature, the whole process is safe and secure at any facility where intrusion is possible.

ENVIRONMENT REQUIRED

Hardware requirement

Minimum 128 MB RAM (256 MB recommended)

Minimum 2 GB of free hard disk space

Software requirement

Microsoft Windows 2000/XP

MATLAB 7.0 by MathWorks

SOFTWARE QUALITIES

Correctness:

The extent to which a program satisfies its specification and fulfills the user's main objectives. As this project is about classification, it provides correctness.

Reliability:


The extent to which a program can be expected to perform its intended function with the required precision. The proposed system provides reliability.

Efficiency:

The efficiency with which the program uses computing resources to perform its function. The system is efficient.

Integrity:

The extent to which access to software or data by unauthorized persons can be controlled. This system maintains the integrity of resources.

Usability:

The extent of effort required to learn, operate, prepare input for, and interpret output of a program. It is software with a simple interface; the user can use this tool with simplicity as the GUI is user friendly.

Maintainability:

The effort required to locate and fix an error in a program; accordingly, the system is quite maintainable.

Testability:

The effort required to test a program to ensure that it performs its required function. The system does not require much effort for testing.

Portability:

The effort required to transfer the program from one hardware and/or software system to another. The system is hardware independent. It can be saved/copied/transferred via the internet; hence it is portable.

Reusability:

The extent to which a program can be reused in another application. The proposed system may be a part of a larger system, or can be embedded in a larger security product.


CHAPTER 5

WHY MATLAB


WHAT IS MATLAB

MATLAB is a high-performance language for technical computing. It integrates computation, visualization and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. It provides an environment with hundreds of built-in functions for technical computation. Typical uses include:

Math and computation

Algorithm development

Modeling, simulation and prototyping

Scientific and engineering graphics

Application development, including graphical user interface building

MATLAB is an interactive system whose basic data element is an array that doesn't require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C.

The name MATLAB stands for MATrix LABoratory. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.


MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation and many others. A schematic diagram of MATLAB's main features is shown.


WHY MATLAB


In our project we desired to concentrate on our logic rather than programming. Since MATLAB has many inbuilt functions for reading an image file and computing classification using a SOM, we chose MATLAB. Comparison between the different algorithms was also easily possible, which is another part of our project.


CHAPTER 6

EXPERIMENTAL RESULTS

In our project we have used & trained four fingerprint images.


Sample fingerprint images (AMEY, SUSHIL), their binarised versions, and the fingerprint image after thinning


Selecting the region of interest

Estimating the position of the ridge ending in the selected region of interest


Now the position of the extracted ridge ending is X = 54, Y = 9, orientation = 0.

This vector (X, Y, orientation) is given as input to the trained SOM network.

The result of the SOM network is the winning node into which the vector (X, Y, orientation) is classified.

Position of the ridge endings of the other fingerprint images:

IMAGE NAME      POSITION (X, Y)     ORIENTATION     CLASS
amey.bmp        54, 9               0               4
sushil.bmp      169, 15             .0944
teju.bmp        1, 11               -1.50
sidd.bmp        16, 6               -1.04           1

Instructions to run the software


1. Run MATLAB.

2. Run the PROJECT.m file.

3. Click on the POP UP MENU.

4. Select the image for the experiment.

5. Click on the PROCEED button.

6. Select the region of interest on the image that appears.

On running our project in MATLAB, our system shows the following results.


CHAPTER 7

CONCLUSION


• The experimental results have shown that, combined with an accurate estimation of the orientation and ridge frequency, the enhancement algorithm (NORMALISATION, BINARISATION & THINNING) is able to effectively enhance the clarity of the ridge structures while reducing noise, which helps the extraction of minutiae points.

• The SOM has proved to be truly useful for the difficult fingerprint classification problem.

• Also, the classifier has the strength of being extended by introducing more complicated SOM architectures.


CHAPTER 8

REFERENCES


Raymond Thai, "Fingerprint Image Enhancement and Minutiae Extraction", School of Computer Science and Software Engineering, The University of Western Australia, 2003.

Sharath Pankanti, Salil Prabhakar, and Anil K. Jain, "On the individuality of fingerprints", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 8, pp. 1010-1025, 2002.

A. Jain, L. Hong, S. Pankanti, R. Bolle, "Identity authentication using fingerprints", Proceedings of the IEEE, vol. 85, no. 9, pp. 1365-1388, 1997.

Neural Networks - A Comprehensive Foundation - Simon Haykin

Fuzzy Logic for Embedded Systems - Ahmed Ibrahim


CHAPTER 9

APPENDIX

imread: It reads the image.

imshow: It displays the image.

hold off: The hold function determines whether new graphics objects are added to the graph or replace objects in the graph.

hold on: Retains the current plot and certain axes properties so that subsequent graphing commands add to the existing graph.

ROI: The ROIPosition property specifies the region-of-interest acquisition window. The ROI window defines the actual size of the frame logged by the toolbox, measured with respect to the top left corner of an image frame. ROIPosition is specified as a 1-by-4 element vector.

alpha: alpha sets one of three transparency properties, depending on what arguments you specify with the call to this function.

plot: plot(Y) plots the columns of Y versus their index if Y is a real number. If Y is complex, plot(Y) is equivalent to plot(real(Y),imag(Y)). In all other uses of plot, the imaginary component is ignored.

gcf: gcf returns the handle of the current figure. The current figure is the figure window in which graphics commands such as plot, title, and surf draw their results. If no figure exists, MATLAB creates one and returns its handle. You can use the statement get(0,'CurrentFigure') to query the current figure without creating one.


length: The statement length(X) is equivalent to max(size(X)) for nonempty arrays and 0 for empty arrays. n = length(X) returns the size of the longest dimension of X. If X is a vector, this is the same as its length.

Table: The Table component converts a rectangular cell array into a table and inserts the table into the report.

round: Round to nearest integer. Y = round(X) rounds the elements of X to the nearest integers. For complex X, the imaginary and real parts are rounded independently.

bwlabel: Label connected components in a binary image. L = bwlabel(BW,n) returns a matrix L, of the same size as BW, containing labels for the connected objects in BW. n can have a value of either 4 or 8, where 4 specifies 4-connected objects and 8 specifies 8-connected objects; if the argument is omitted, it defaults to 8. The elements of L are integer values greater than or equal to 0. The pixels labeled 0 are the background. The pixels labeled 1 make up one object, the pixels labeled 2 make up a second object, and so on.

nlfilter: Perform general sliding-neighborhood operations.
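A short, hypothetical snippet tying several of these functions together (the file name and the plotted point are illustrative assumptions, not taken from the project code):

    I  = imread('fingerprint.bmp');          % read the image (assumed file name)
    imshow(I);  hold on;                     % display it and keep the axes for overlays
    BW = im2bw(I, graythresh(I));            % global threshold to get a binary image
    L  = bwlabel(~BW, 8);                    % label 8-connected ridge components
    n  = length(find(L));                    % number of labelled (ridge) pixels
    plot(10, 10, 'r+');                      % mark an illustrative point of interest
    hold off;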
