
A Review of Dynamic Handwritten Signature Verification



Gopal Gupta and Alan McCabe
Department of Computer Science
James Cook University
Townsville, Qld 4811, Australia

September 1997

Abstract

There is considerable interest in authentication based on handwritten signature verification (HSV) because HSV is superior to many other biometric authentication techniques, e.g. fingerprints or retinal patterns, which are reliable but much more intrusive and expensive.

This paper presents a review of dynamic HSV techniques that have been reported in the literature. The paper also discusses possible applications of HSV, lists some commercial products that are available and suggests some areas for future research.

1 Introduction

Our society is increasingly dependent on electronic storage and transmission of information and this has created a need for electronically verifying a person's identity. Handwritten signatures have been the normal and customary way for identity verification.

Although there have been occasional disputes about the authorship of handwritten signatures (Osborn, 1929; Harrison, 1958; Hilton, 1956), verification of handwritten signatures has not been a major problem, since it appears that humans are generally very good at verifying genuine signatures.

Miller (1994) and Sherman (1992) discuss the importance of handwritten signature verification (HSV), also called signature dynamics, and note the availability of commercial products that use HSV (both note a product called Sign-On). Miller claims that more than 100 patents have been granted in the field of computer verification of handwritten signatures; many of these, however, are for hardware to capture the signature. This interest in HSV is at least partly due to the fact that HSV is superior to many other biometric authentication techniques, e.g. fingerprints or retinal patterns, which are reliable but much more intrusive and expensive, and therefore not as acceptable except perhaps in highly security-sensitive situations where reliability is of utmost importance. Miller and Sherman both note that although HSV is likely to become very important in the future, the technique will be widely accepted only if it is more reliable than the products that are currently on the market; in particular, the technique needs to have lower false rejection rates (FRR or Type I error).

It should be noted that the aims of authentication are going to be different for different types of applications.
For example, the primary concern of verification in a credit card environment (where the card holder presents a card to make a purchase and signs on an electronic device that automatically verifies the signature) must be to have a zero or near-zero false rejection rate so the genuine customer is not annoyed by unnecessary rejections. In this environment fast verification is essential and, in addition, the information required for HSV should not require too much storage since it may need to be stored on a credit card strip or a smart card memory. A high level of security against forgeries may not be required, and a false acceptance rate (FAR or Type II error) of 10% or even 20% might be acceptable, since even that is likely to assist in reducing credit card fraud and would be much better than the minimal checking that is done currently. On the other hand, in a security-sensitive environment that was, for example, using HSV for granting an authenticated user access to sensitive information or other valuable resources, it would be necessary to have a high level of security against intruders and a zero or near-zero FAR. An FRR of 10% or higher would be a nuisance but might be acceptable. Of course an ideal HSV system should have both the FRR and the FAR close to zero but no technique of HSV presently appears capable of performing consistently at this high level. It should be noted that FRR and FAR are closely related and an attempt to reduce one invariably increases the other.

It should be noted that a technique which promises a small FRR when tested on an entire database does not guarantee a small FRR for each individual. The performance figures reported in the literature are normally aggregate figures and it is not uncommon to find some individuals that have much larger error rates than the rest of the population in the test database. Of course, it is desirable that a technique not only have good aggregate performance but also good individual performance.

Most early work in automatic HSV, in the early 1970's or before, focused on static (or off-line) HSV. Static HSV systems are those that require only an image of the signature. The advantage of these systems is that they do not need specialised hardware to capture signature information at the point of signing. Static HSV also has important areas of application, for example, in automatic cheque clearing; however, there are disadvantages to the static approach. For example, static signature information is unlikely to be useful for storage on a credit card or smart card since, generally, significant storage is required to store a signature image. Also, static techniques don't take advantage of the signature dynamics, so the results generally aren't as good as for dynamic techniques. In this paper the focus is on dynamic (or on-line) HSV although some static HSV techniques are also discussed.
For a more detailed discussion of static HSV see Plamondon and Lorette (1989) and Leclerc and Plamondon (1994).

Early dynamic HSV was based on using specially instrumented pens since no suitable equipment for capturing signatures was available. This has changed in the last several years as graphics tablets have come into widespread use: tablets that can capture a signature as samples of coordinate pairs, 100 to 200 times a second, with some also capturing pen pressure and pen tilt. With data available from such equipment, it is straightforward to compute velocities and accelerations. A typical modern on-line HSV system therefore is likely to be based on a dynamic technique which uses a signature input device like a graphics tablet.

The present paper reviews dynamic HSV techniques that have so far been proposed in the literature. An attempt is made to describe important techniques and assess their performance based on published literature. The paper is organised as follows. Section 2 first discusses how HSV may be useful and gives examples of some commercial products already available. Section 3 discusses how human signature experts verify handwritten signatures. This is then followed in Section 4 by a discussion of the basic methodology of computer HSV and a discussion of signature writing dynamics. Section 5 reviews some of the existing literature in the field and is split into three subsections: point-to-point comparison, feature values comparison, and capturing shape dynamically. Section 6 explains why it is so difficult to directly compare HSV systems and concludes the paper.

2 Applications of Signature Verification

In this section it is briefly discussed how HSV might be used in applications that require user authentication.

Credit cards deal with very large amounts of funds each day. Perhaps as much as $5-$10 billion worth of purchases are charged to credit cards every day. It has been reported that credit card issuers lost $800 million in 1989, about $1 billion in 1990 and about $1.6 billion in 1991. Present credit card fraud in the United States alone has been reported to be well over $2 billion per year. Although these sums are substantial, they are quite small when expressed as percentages of the total credit card purchases, certainly well below 1%. Credit card issuers however get only a small commission of the purchase price and therefore the total losses due to credit card fraud are significant for the credit card issuers.

It has been reported that half of all credit card fraud involves lost or stolen cards while the rest involves counterfeit cards, the non-receipt of cards, or fraudulent credit card applications. A number of techniques have been used in an attempt to curb this fraud, some based on identifying sudden surges of spending, but it appears none of these techniques have been particularly successful.
Most banks therefore do little to control credit card fraud.

A reliable HSV technique could have applications in reducing credit card fraud, although there appear to be some hurdles that must be overcome if the technology is to be useful. A stolen card has the owner's signature on the back and this makes it particularly easy to forge the signature given minimal checking of signatures at the place of purchase. This problem of forging the signature does not even arise if a credit card has been intercepted in the mail, since any signature could be used and put on the back of the card. One possible approach to reducing credit card fraud would be to require the owner of a new credit card to visit the bank and supply sample signatures so that information from the signatures could be put electronically on the card, making it unnecessary to have the owner's signature on the card at all, a signature that may be viewed and forged by a person who has stolen the card.

The above scheme of course has a problem. If one credit card issuer requires that a person that is being issued a new card must go to the bank and produce identification and give sample signatures while other credit card issuers continue with the present procedure of requiring no such visit to the bank, the credit card issuer requiring signatures would not have many customers! The suggested approach is therefore not particularly convenient for the customer and is unlikely to be adopted in a very competitive market place where the customers are being bombarded with offers of new credit cards almost every day.

Another approach might be possible. When the owner of a new card makes the first purchase using the card, the check-out staff would be asked (by the credit card terminal) to check the customer's identification (e.g. driver's license) and provide the identification number, as is done when cashing a cheque. This scheme does have some merit in that it only creates a minor nuisance when the customer is asked the first time for an identification, but using this approach the HSV technique may not work very well since the system has only one signature to base its reference signature on. Further signatures will of course become available as the customer makes more purchases, but then there is no guarantee that those signatures are those of the genuine customer and not a forger.

Yet another approach might be possible. In this approach, the customer uses the credit card as he or she would normally but his or her signatures are captured electronically and compared with a signature profile that has been built over the last few weeks or months.
When the result of comparison shows a significant mismatch, a suitable action is taken which may include either rejecting the purchase being charged to the card or, preferably, bringing the mismatch information to the attention of a human operator who can take appropriate action, e.g. contact the card owner.

HSV might also have uses in computer user authentication if an HSV technique could be designed that provided a high level of security against intruders through a zero or near-zero FAR. It might be suitable for user authentication not only at login time but also for accessing sensitive applications, e.g. sensitive databases or exclusive software. A typical dynamic HSV system will of course require that a signature input device like a graphics tablet be connected to each workstation to capture signature details. This technique has the potential to replace the password mechanisms for accessing computer systems in some situations. The major disadvantage of this approach of course is the requirement that a graphics tablet be attached to each workstation.

Reliable HSV could well have other applications. For example, it might assist in reducing the forging of passports. An application for a passport normally requires that the applicant go to some authorized office to file an application form and signatures be certified in the presence of an authorized officer. It is thus not unreasonable that the office where a passport application is filed may require the applicant to provide a set of sample signatures which are captured electronically and used for building a reference signature. That reference signature could then be placed on a magnetic strip on the passport; moreover, passports in the future are likely to have magnetic strips for faster processing anyway. At the port of entry, at the immigration counter, the person entering the country is then required to sign his or her name on a graphics tablet and the signature is compared with the reference signature on the passport strip. Forging of passports will then be almost impossible.

A number of commercial products in HSV are already being advertised and some are listed here. The list is not comprehensive since growth in this field is quite fast and there is no simple mechanism to find a list of all products in the field. A product called PenOp is being marketed by Peripheral Vision of New York and it is claimed that the software may be used in configuring systems so that users must login using handwritten signatures. Another product called Sign-On, it is claimed, allows HSV to be built in to a variety of widely used software, enabling the system to use a handwritten signature instead of a password. It uses, besides the signature image, acceleration, stroke angles, start and stop pressures (if available) and other factors.
The signature information can be updated each time a successful verification occurs. The product uses six signatures plus a final verifying signature to build a reference signature. The test signature is then compared with the reference signature, resulting in one of three judgements: true, forgery or ambiguous. The product is claimed to have a 2.5% FRR and a 2.5% FAR but details of performance evaluation are not available.

Signer Confidence is the name of a static HSV system that is marketed for HSV on cheques. It brings two images, one from the cheque and the other stored in the database, on the screen for comparison and verification. Yet another product is Cadix ID-007, which is claimed to be suitable for user authentication and which requires a pressure-sensitive pen and tablet for HSV. The Microsoft Windows based software examines the test signature according to three different criteria: the shape of the signature, the speed at which it was written and the pressure of the pen stroke. Verification of a signature with the ID-007 system typically takes less than one second. No details of how it performs are available.

Countermatch is the name of an HSV product from AEA Technology in the UK. The product uses three sample signatures to build a reference signature. It is claimed that the product is suitable for signatures written in any language but no details of techniques used are provided. Another UK company, British Technology Group, markets a product called Kappa. Kappa uses signature shape as well as the timing and rhythm of the signature and claims to use a new high-accuracy pattern matching algorithm developed at the University of Kent, but no details were available. It uses a user-specific feature set designed for low FRR, but it is not clear how this set is selected. It also provides a `shape-only' option that allows paper records of signatures to be computerized. The Kappa system has been tested in a public trial at a sub-Post Office where some 8500 signatures were collected. An FRR of 1.8% with one test signature and 0.85% with three test signatures has been reported for individuals that were able to provide a satisfactory enrolment model, that is, people who have a signature which does not require "special measures" for verification. The system identifies at enrolment time those people that are believed to require special measures for verification. It is not known how many individuals were rejected at enrolment time.

The company Silanis Technology Inc. has also released its system called ApproveIT, which runs on WordPerfect 6.0 (Computing Canada, 1995). Signatures are added to documents directly from a pen-based input or from a previously captured signature on a file which is safe from tampering due to a password protection feature.
If a document is signed using ApproveIT, it is verified to ensure the contents have remained unchanged since the prior approval. If a document is modified after approval, the signature will not print or display itself on the altered document.

3 Handwritten Signature Verification

Handwritten signatures come in many different forms and there is a great deal of variability even in signatures of people that use the same language. Some people simply write their name while others may have signatures that are only vaguely related to their name and, as Brault and Plamondon (1993a) note, some signatures may be quite complex while others are simple and appear as if they may be forged easily. It is also interesting to note that the signature style of individuals relates to the environment in which the individual developed their signature. For example, people in the United States tend to use their names as their signature whereas Europeans tend away from directly using their names. Systems which rely directly on the American style of signing, such as Nagel and Rosenfeld (1977), may not perform as well when using signatures of Europeans, or signatures written in different languages.

It is well known that no two genuine signatures of a person are precisely the same, and some signature experts note that if two signatures of the same person written on paper were identical they could be considered forgery by tracing. Successive signatures by the same person will differ, both globally and locally, and may also differ in scale and orientation. In spite of these variations, it has been suggested that human experts are very good at identifying forgeries but perhaps not so good at verifying genuine signatures. For example, Herbst and Liu (1977) cite references indicating that as many as 25% of genuine signatures were either rejected or classified as no-opinion by trained document examiners while no forgeries were accepted. Untrained personnel were found to accept up to 50% of forgeries.

Osborn (1929) notes that handwriting shows great variation in speed and muscular dexterity. He claims that forgeries vary in perfection all the way from the clumsy effort which anyone can see is spurious, up to the finished work of the adept which no one can detect. Experience shows that the work of the forger is not usually well done and in many cases is very clumsy indeed. Osborn notes that the process of forging a signature or simulating another person's writing, if it is to be successful, involves a double process requiring the forger not only to copy the features of the writing imitated but also to hide the forger's own personal writing characteristics.
If the writing is free and rapid it will almost certainly show, when carefully analyzed, many of the characteristics of the natural writing of the writer no matter what disguise may have been employed.

Osborn notes that unusual conditions under which signatures are written may affect the signature. For example, hastily written, careless signatures, like those written in a delivery person's books, cannot always be used unless one has sample signatures that have been written under similar conditions. Furthermore, Osborn notes that signatures written with a strange pen and in an unaccustomed place are likely to be different than the normal signatures of an individual. When a signature is being written to be used for comparison this can also produce a self-conscious, unnatural signature.

Osborn further notes that the variations in handwriting are themselves habitual and this is clearly shown in any collection of genuine signatures produced at different times and under a great variety of conditions, which when carefully examined show running through them a marked, unmistakable individuality even in the manner in which the signatures vary as compared with one another.

To investigate signatures, Osborn recommends that several genuine signatures should always be obtained, if possible; five signatures always provide a more satisfactory basis for an opinion than one, and ten are better than five. To detect forgeries, Osborn gives a list of about 50 points that one needs to consider, including the following:

- Is the signature in a natural position?
- Does the signature touch other writing, and was the signature written last?
- Is the signature shown in embossed form on the back of the sheet?
- Was the signature written before the paper was folded?
- Is the apparent age of the writing ink used consistent with the date of the document?
- Does the document contain abrasion, chemical, or pencil erasures, alterations, or substitutions of any kind?

These points only show that handwritten HSV is far from trivial, but clearly most of these points cannot be applied to on-line HSV where a person's signature is collected on a graphics tablet rather than on paper.

Hilton (1992) discusses what a signature is and how it is produced. Hilton notes that the signature has at least three attributes: form, movement and variation, and since signatures are produced by moving a pen on a paper, movement perhaps is the most important part of a signature. The movement is produced by muscles of the fingers, hand, wrist, and for some writers the arm, and these muscles are controlled by nerve impulses. Once a person is used to signing his or her signature, these nerve impulses are controlled by the brain without any particular attention to detail.

Hilton notes that a person's signature does evolve over time and with the vast majority of users, once the signature style has been established the modifications are usually slight.
For users whose signatures have changed significantly over time, and such cases do occur although infrequently, the earlier version is almost always completely abandoned and the current version is the only one that is used. Only in some exceptional cases has it been found that a user may recall an old form of his or her signature, perhaps for signing special documents. Liu, Herbst and Anthony (1979) found that in their experimentation with 248 users, three users continually varied between two signatures. This suggests that if an HSV system was to verify such exceptional cases of more than one signature by an individual, the system would need to maintain a list of reference signatures over time.

When a user's signature varies over time, should this variation be taken into account in HSV, assuming that the user might be using elements of a former signature in the current signature? Hilton comments that it is hard to answer this question since in the vast majority of cases the current signature is sufficient for verification purposes.

4 Computer Signature Verification

In this section, a discussion of the basic methodology of computer HSV is presented followed by a discussion of signature writing dynamics.

4.1 The Basic Methodology

Most techniques for HSV involve the following five phases: data acquisition, preprocessing, feature extraction, comparison process, and performance evaluation.

Most methods, but not all, during the first three phases (data acquisition, preprocessing and feature extraction) would generate a reference signature (or a set of reference signatures) for each individual. This normally requires a number of signatures of the user to be captured at enrollment or registration time (these signatures are called sample signatures) and processed. In the discussion that follows it is assumed that only one reference signature is available. When a user claims to be a particular individual and presents a signature (we call this signature the test signature), the test signature is compared with the reference signature for that individual. The difference between the two is then computed using one of the many existing (or specially developed) distance measures.
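This basic flow can be sketched in a few lines. The fragment below is purely illustrative, not taken from any system reviewed here: the three features and the Euclidean distance are this sketch's own choices. A signature captured as (x, y, t) pen samples is reduced to a feature vector, a reference is built by averaging several sample signatures, and a test signature is scored by its distance from that reference.

```python
import math

def features(points):
    """Reduce a signature, given as (x, y, t) pen samples, to a few
    statistical features: total time, path length and average speed.
    (Illustrative choices only; real systems use many more features.)"""
    total_time = points[-1][2] - points[0][2]
    path = sum(math.dist(p[:2], q[:2]) for p, q in zip(points, points[1:]))
    return [total_time, path, path / total_time]

def distance(test, reference):
    """Euclidean distance between a test feature vector and the mean
    feature vector of the sample signatures (the reference)."""
    return math.sqrt(sum((t - r) ** 2 for t, r in zip(test, reference)))

# Three hypothetical sample signatures captured at enrollment time.
samples = [
    [(0, 0, 0.00), (3, 4, 0.50), (6, 8, 1.00)],
    [(0, 0, 0.00), (3, 4, 0.55), (6, 8, 1.10)],
    [(0, 0, 0.00), (3, 4, 0.45), (6, 8, 0.90)],
]
vectors = [features(s) for s in samples]
reference = [sum(col) / len(col) for col in zip(*vectors)]

# Distance of a test signature from the reference.
test = features([(0, 0, 0.0), (3, 4, 0.6), (6, 8, 1.2)])
print(distance(test, reference))
```

The distance measure and the features here merely stand in for the many alternatives proposed in the literature.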
If the distance is above a predefined threshold value the user is rejected; otherwise the user is authenticated. A performance evaluation of the proposed technique is important of course and normally researchers use a set of genuine signatures and forgery attempts, either collected by them or by someone else, and determine the false rejection rate (FRR) and the false acceptance rate (FAR) for the technique given the signature database. Obtaining good estimates of FAR is very difficult since actual forgeries are impossible to obtain. Performance evaluations therefore rely on two types of forged signatures. A forgery is called skilled if it is produced by a person other than the individual whose signature is being forged when the forger has had access to one or more genuine signatures for viewing and/or practice. A forgery is called zero-effort or random when either another person's genuine signature is used as a forgery or the forger has no access to the genuine signature and is either only given the name of the person whose signature is to be forged or just asked to sign any signature without even knowing the name. Tests on random forgeries generally lead to much smaller FAR than on skilled forgeries.

Although performance evaluation as described above is essential, the evaluation is not always a true indicator of the performance of the technique since the test signatures often do not adequately represent the population at large. Plamondon and Lorette (1989) note that there is a great deal of variability in signatures according to country, age, time, habits, psychological or mental state, and physical and practical situations. Building a test database of signatures that is representative of real-world applications is quite a difficult task, since it is difficult enough to find people that will willingly sign 10 or 20 times. People are not always happy to have their signatures stored in a computer or given to others to practice forging them. Therefore most test databases are built using signatures from volunteers from the research laboratory where the HSV research has been carried out and as a result most test databases have very few signatures from people that are old, disabled, suffering from a common disease (for example arthritis), or poor. Percentages of such people in the population are significant and these are the people whose signatures are likely to pose the greatest challenge for an HSV technique.
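Given a set of genuine test signatures and forgery attempts scored against a reference, FRR and FAR at a given threshold are simple proportions, and sweeping the threshold exhibits the trade-off between the two error rates noted earlier. The scores below are invented for illustration only:

```python
def error_rates(genuine_scores, forgery_scores, threshold):
    """FRR: fraction of genuine attempts rejected; FAR: fraction of
    forgeries accepted. A signature is accepted when its distance
    score is below the threshold."""
    frr = sum(s >= threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s < threshold for s in forgery_scores) / len(forgery_scores)
    return frr, far

# Illustrative distance scores (lower = closer to the reference signature).
genuine = [0.2, 0.3, 0.5, 0.4, 0.9]
forgery = [0.8, 1.1, 0.6, 1.4]

# Raising the threshold lowers FRR but raises FAR, and vice versa.
for t in (0.45, 0.7, 1.0):
    frr, far = error_rates(genuine, forgery, t)
    print(f"threshold={t}: FRR={frr:.2f}, FAR={far:.2f}")
```

In a real evaluation the forgery scores would be computed separately for skilled and random forgeries, since the two give very different FAR figures.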
It has been reported that FAR and FRR are generally higher when the systems are used by a more representative group. The higher FRR for a more representative group is not surprising since the signing environment generally is not quite as consistent as it often is when signatures for a test database are collected. Furthermore, more signatures in this wider group are likely to be obtained from people whose signatures are more variable, either due to background or age, than for research staff, students and professors whose signatures are likely to be in a test database. The reasons for a higher FAR in this group on the other hand are not clear, since a more representative group could even lead to lower FAR than that obtained by using a test database, because most test databases have skilled forgeries obtained after the forgers were allowed to practice the forgeries, which may not always be possible in the real world.

It should be noted that a number of other difficulties face a person who attempts to compare the results of many studies reported in the literature.

There is no public signature test database that could be used by all researchers for performance evaluation and comparison. Most researchers have their own test signature databases with a varying number of genuine signatures; some have skilled forgeries while others do not; some have screened the signature database to remove signatures that for some reason were not acceptable while others have done no screening; the number of signatures used in building a reference signature often varies; different tests and thresholds have been used; and even different definitions of FAR and FRR have been used. Some studies use a different threshold for each individual while others use the same threshold for all individuals. This is a rather sad state of the art in HSV.

The primary issue in HSV is of course "what aspects or features of a signature are important?". There is no simple answer to this, but two different approaches are common. In the first approach, all of the collected position values (and/or velocity values and/or acceleration values) of a signature are assumed important and the test and reference signatures are compared point-to-point using one or more sets of these values. In this approach the major issue that arises is how the comparison is to be carried out. Perhaps the signatures could be compared by computing the correlation coefficient between the test signature values and the corresponding reference signature values, but point-to-point comparison does not work well since some portions of any two genuine signatures of the same person can vary significantly and the correlation may be seriously affected by translation, rotation or scaling of the signature. Another approach might be to segment the signatures being compared and then compare the corresponding segments, using some alignment of segments if necessary. This approach works somewhat better and we discuss it in more detail later.

In the second approach, all the available values are not used.
Instead, acollection of values are computed and compared. These are sometime calledstatistical features (or statistical parameters) and some examples that havebeen used in the studies that are discussed below are:� Total time taken in writing the signature.� Signature path length: displacement in the x and y directions and thetotal displacement.� Path tangent angles: pro�le of their variation and average or rootmean square (RMS) values.� Signature velocity: pro�les of variations in horizontal, vertical andtotal velocities as well as their average or RMS values.12

� Signature accelerations: variations in horizontal and vertical accelera-tions, centripetal accelerations, tangential accelerations, total acceler-ations, as well as their average or RMS values.� Pen-up time: total pen-up time or the ratio of pen-up time to totaltime.The above list is far from comprehensive. For example, Crane and Os-trem (1983) propose 44 features that include some of the features listedabove as well as several others. In a United States Patent, Parks, Carr andFox (1985) propose more than 90 features for consideration.Once a set of features has been selected, there may be no need to store thereference signature and only the features' values of the reference signatureneed be stored. Also, when a test signature is presented, only the features'values are needed, not the signature. This often saves on storage (storagemay be at premium if, for example, the reference signature needs to bestored on a card) and that is why representing a signature by a set of valuesof its features is sometimes called compression of the signature.In the methods that use feature-based comparison, selection of featuresand extracting the best subset once a set of features has been identi�ed aremajor research tasks. Some of the problems that a HSV algorithm needs todeal with are:� How many features are su�cient?� Do the features require some transformation to be applied to the sig-nature data e.g. resizing, deslanting, time-warping?� How many sample signatures will be used in computing the referencesignature (assuming that it has been decided to have only one referencesignature)?� Would the reference signature be updated regularly? 
If yes, how wouldthis be done?� How is the distance between a test signature and the correspondingreference signature going to be computed?Some of these issues are now brie y discussed starting with the last one.Assume that the reference signature is based on a set of sample signaturesand for each element of the set of selected features the mean and standarddeviation of the feature values have been computed. Therefore the reference13

signature is two vectors: a vector (R) of the means of the features' values of the sample signatures and a vector (S) of the standard deviations. Clearly, to obtain good estimates of the mean and standard deviation of the features' values of the genuine signature population, it is necessary to have several, perhaps a minimum of five, sample signatures. A larger set of sample signatures is likely to lead to a better reference signature and therefore better results, but it is recognised that this is not always possible. The number of sample signatures needed is discussed further in the next section.

The distance between a test signature and the corresponding reference signature may be computed in several different ways. Lee (1992) considers the following five approaches:

1. Linear discriminant function.
2. Euclidean distance classifier.
3. Dynamic programming matching technique.
4. Synthetic discriminant function.
5. Majority classifier.

Linear discriminant function: a linear combination of the components of the feature vector x, with the general form G(x) = w^t x + w_0, where w is a weighting vector and w_0 a constant. w_0, also called the threshold weighting, specifies the boundary between the two classes.

It should be noted here that algorithms such as gradient descent, which in general require a large training set, are precluded by the fact that the reference set is generally small. Two particular approaches to linear classification are proposed by Lee. The first has each feature value t_i of the test signature normalized by the reference mean r_i; the second has feature value t_i normalized by the reference standard deviation s_i. The first linear discriminant function therefore becomes:

G(T) = (1/n) sum_{i=1}^{n} (t_i - r_i) / r_i

where T denotes the test signature with feature set (t_1, t_2, ..., t_n). Lee evaluates the mean-value-normalized linear classifier and obtains an equal error rate of about 17%.
The standard-deviation-normalized linear classifier obtained a similar equal error rate. Lee also

showed that as either the FRR or the FAR approaches zero, the opposite error rate rises rapidly.

Euclidean distance classifier: the Euclidean distance discriminant function is used quite widely; for example, see Crane and Ostrem (1983) and Nelson, Turin and Hastie (1994). The Euclidean distance metric has the following form:

G(T) = (1/n) sum_{i=1}^{n} ((t_i - r_i) / s_i)^2

where, as defined earlier, T is the test signature and r_i and s_i are, respectively, the ith feature's reference mean and reference standard deviation. Lee's performance evaluation of the Euclidean distance classifier using 42 features yielded surprisingly poor results: an equal error rate of approximately 28%. Other researchers, for example Nelson et al. (1994), report much better results.

Synthetic discriminant function: the use of SDFs in HSV was introduced by Wilkinson (1990); it consists of finding a filter impulse response w by solving a series of equations. For further details the reader is referred to Bahri and Kumar (1988). The SDF performs well in comparison to the classifiers presented earlier, and Lee obtained an equal error rate of approximately 7%.

Dynamic programming matching: put very simply, dynamic programming matching (DPM) involves minimizing the residual error between two functions by finding a warping function to rescale the time axis of one of the original functions. DPM is further explained by Lee and is investigated in a number of papers in the literature; for example, see Parizeau and Plamondon (1990). Using the DPM technique Lee obtained an equal error rate of about 13%.

Majority classifier: the main drawback of the linear and Euclidean classifiers is that the FAR tends to 100% as the FRR approaches zero. Lee explains this as follows: any single feature unduly influences the decision result when it deviates far from its mean value, even if the other features have values close to their means for the genuine reference set.
One way of alleviating this problem is to use the so-called majority classifier which, according to Lee, is based on the "majority rules" principle. That is, it declares the signature being tested to be genuine if the number of feature values which pass a pre-determined test is larger than half the total number of tested features. According to Lee, the majority classifier achieves an equal error rate of only 3.8% using 42 features. How the majority classifier performs on zero-effort forgeries is not clear.
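As a concrete illustration, the three simplest of Lee's classifiers might be sketched as below. This is a hedged sketch, not Lee's implementation; in particular, the per-feature "pre-determined test" of the majority classifier is not specified in the paper, so a tolerance of k standard deviations is assumed here.

```python
import numpy as np

def linear_score(t, r):
    """Mean-normalized linear discriminant: G(T) = (1/n) * sum((t_i - r_i) / r_i)."""
    t, r = np.asarray(t, float), np.asarray(r, float)
    return np.mean((t - r) / r)

def euclidean_score(t, r, s):
    """Euclidean distance classifier: G(T) = (1/n) * sum(((t_i - r_i) / s_i)**2)."""
    t, r, s = (np.asarray(a, float) for a in (t, r, s))
    return np.mean(((t - r) / s) ** 2)

def majority_accepts(t, r, s, k=2.0):
    """Majority classifier: accept if more than half of the features pass a
    per-feature test -- assumed here to be lying within k standard deviations
    of the reference mean (the paper does not specify the test)."""
    t, r, s = (np.asarray(a, float) for a in (t, r, s))
    passed = np.abs(t - r) <= k * s
    return passed.sum() > len(t) / 2
```

With reference means r = [10, 5], standard deviations s = [1, 0.5] and test features t = [10.5, 5.2], the linear score is 0.045, the Euclidean score is 0.205, and the majority classifier accepts (both features lie within two standard deviations).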

The theoretical basis of HSV using the feature-based approach is now briefly considered. This discussion follows the standard multivariate normal distribution approach covered in most books on multivariate analysis (for example, refer to Jobson (1992), Section 7.2). It is assumed that an infinitely large population of genuine signatures is available for an individual and that the signatures are represented by the values of their features. Let there be m features, which are not always independent. Let f be a random variable which is a vector of feature values representing the genuine signature population. f is assumed to be normally distributed with mean vector μ and covariance matrix V.

Normally, of course, the mean and covariance matrix of the genuine signature population are not known, so the mean vector μ is replaced by the mean reference signature vector R, and the covariance matrix V is replaced by a diagonal matrix containing the squares of the reference standard deviations (that is, of S); all correlations between the features are ignored. The resulting simplified procedure therefore works as follows. When a test signature is presented, the vector T of feature values is computed. The distance vector D = |R - T| is then computed. The vector D is normalized by dividing each value in it by the corresponding standard deviation in the vector S to obtain a vector Z, whose norm is then computed. The computed norm is compared to a predefined threshold and the signature is authenticated only if the norm is smaller than the threshold.

Given that the population mean and the covariance matrix of the features' values are unknown and the correlations between the features have been ignored, the norm of the vector Z cannot be expected to closely follow the Chi-squared distribution. If the distribution were Chi-squared, it would be possible to make accurate predictions for the threshold given the desired value of the FRR.
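The simplified procedure described above (compute D = |R - T|, normalize by S, compare the norm of Z to a threshold) can be sketched as follows. This is an illustrative sketch only; the threshold is application-dependent and, as discussed, cannot be read directly off a Chi-squared table in practice.

```python
import numpy as np

def build_reference(samples):
    """Reference signature from a set of sample feature vectors:
    per-feature means R and standard deviations S."""
    samples = np.asarray(samples, float)
    return samples.mean(axis=0), samples.std(axis=0, ddof=1)

def verify(test, R, S, threshold):
    """Simplified feature-based verification: D = |R - T|, Z = D / S,
    accept iff the norm of Z is below the threshold."""
    D = np.abs(R - np.asarray(test, float))
    Z = D / S
    return np.linalg.norm(Z) < threshold
```

A usage example: with three sample vectors [[1, 10], [3, 12], [2, 11]], the reference is R = [2, 11] with S = [1, 1]; a test signature [2.5, 10.5] then yields a Z-norm of about 0.71 and would be accepted at a threshold of 1.5.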
A simple example is now presented to show that larger threshold values need to be used in practice to achieve the same FRR as that predicted by the Chi-squared distribution. Consider a single-feature situation: an FRR of 1% corresponds to a Chi-squared value of 6.63. Now, if six features are used that have correlation coefficients of 1.0 between them, this is essentially using the single feature six times, and the six correlated features therefore correspond to six times the value 6.63, that is 39.78, which is much higher than the value of 16.8 that applies if there is no correlation.

Of course, no prediction can be made about the expected FAR, since forgeries are signed by many different persons; the forgeries would therefore not have a normal distribution and there is no way to obtain even an estimate of the mean and covariance matrix of the forgeries.

The issue of how many features a method needs to use to obtain reliable

HSV is a very difficult one. There is always a temptation to include more and more features in a method in the hope of improving performance and, as was noted earlier, some researchers have proposed more than 90 features. It is believed that using many features is unlikely to lead to high performance and may lead to some difficulties. For example, if a method uses many features, the storage needed for the feature values of the reference signature is going to be relatively large, and a credit card may not have sufficient capacity to store all the values. Also, when a test signature is compared to the reference signature, given that no two genuine signatures are identical, it is unlikely that a genuine signature will have all feature values close to those of the reference signature. To ensure that all (or almost all) genuine test signatures are authenticated, a technique using a large number of features must either have a large threshold for the norm of the distance or use some criterion similar to the majority classifier of Lee (1992). The majority classifier is not particularly satisfactory, since it cannot be easily analysed theoretically; moreover, using it is in effect stating that although a large number of features are considered important, it is acceptable to ignore several of them in comparing the test signature to the reference signature. This appears contradictory.

Regarding the number of sample signatures, the belief was expressed earlier that at least five signatures are needed for satisfactory performance of a feature-based technique. Gupta and Joyce (1997a) have studied this issue and their work is discussed in the next section.

Although this review does not discuss updating of reference signatures in any depth, it is believed that a reference signature should be updated regularly as the user's signature evolves over time.
A simple technique of perhaps giving 90% weight to the stored reference signature and 10% to the latest authenticated signature could be used, but no studies of such an adaptive approach have been reported in the literature. Adding such a weighting function will have no effect on users whose signatures do not change over time and should have a positive effect on users whose signatures do.

Finally, is it necessary to transform a test signature before it is compared with the reference signature? A number of transformational techniques have been proposed; the simplest only suggest smoothing the data, but others suggest, for example, finding the bounding rectangle and scaling the test signature to the same size as the reference signature. While various authors have used transformations, no systematic study of the benefits of such transformations has been reported in the literature. An example of a transformational technique is that of Phelps (1982), who used a space-domain approach in which a close-fitting polygon around the signature image is formed. The

signature area is then normalised and centred on a coordinate plane. Phelps shows that the area of overlap for a pair of valid signatures is consistently higher than for forgeries.

4.2 Dynamics of Signature Writing

The dynamics of a signature are captured by a graphics tablet as data given by

S(t) = [x(t), y(t), p(t)]^T,  t = 0, 1, 2, ..., n

that is, a collection of x, y location values of the pen tip and pen-tip pressure values at given times (generally, at equal time intervals). Many devices sample at the rate of 200 times a second (sometimes less) and the resolution of such devices is often about 1000 pixels/inch, although some have finer resolution. Typical American signatures are a writing of the person's name; for American signatures the x-values therefore typically grow linearly with time, with small oscillations on the linear curve, while the y-values show a more oscillatory variation with time, becoming positive and negative many times during a signature.

The signature data, after appropriate smoothing, may be used to compute the derivatives of x, y and even p (assuming that the pressure data is not binary) if required. The first derivatives of x and y are the velocities in the two directions (which may be combined to compute total velocity, if required) and the second derivatives are of course the two accelerations. One may also wish to compute the third derivatives, although these are rarely used in HSV. The third derivative is the rate of change of acceleration and is sometimes called jerk. Once these derivatives have been computed, the following signature data (assuming no pressure derivatives) is available:

S'(t) = [x(t), y(t), p(t), xv(t), yv(t), xa(t), ya(t), xj(t), yj(t)]^T,  t = 0, 1, 2, ..., n

It is clear that every time a person signs his or her signature the number of samples obtained will be somewhat different, that is, n will have a different value.
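The derivative computation described above might be sketched with simple central finite differences, as below. This is an illustrative sketch only: uniform 200 Hz sampling is assumed, and a real system would smooth the raw tablet data before differentiating, as the text notes.

```python
import numpy as np

def kinematics(x, y, rate_hz=200.0):
    """Velocities, accelerations and jerk from sampled pen positions by
    central finite differences (a stand-in for the smoothing-plus-
    differentiation step described in the text; uniform sampling assumed)."""
    dt = 1.0 / rate_hz
    x, y = np.asarray(x, float), np.asarray(y, float)
    xv, yv = np.gradient(x, dt), np.gradient(y, dt)   # first derivatives
    xa, ya = np.gradient(xv, dt), np.gradient(yv, dt) # second derivatives
    xj, yj = np.gradient(xa, dt), np.gradient(ya, dt) # jerk
    return xv, yv, xa, ya, xj, yj
```

For example, a pen moving with x(t) = 3t and y(t) = t^2 yields a constant x-velocity of 3 and (away from the boundary samples) a constant y-acceleration of 2.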
This variation in the genuine signatures of the same individual makes it very difficult to compare one set of values from one genuine signature with another set from another.

A signature may be considered a sequence of strokes. Dimauro, Impedovo and Pirlo (1994) define strokes as a sequence of fundamental components, delimited by abrupt interruptions. Let us consider one such stroke in the y-direction (assume that the stroke is from a small y value to a large y

value) and for the moment ignore any displacement in the x-direction, since that can be treated in a similar way. If the y-velocity during the stroke is examined, it is found that the stroke is characterized by a number of positive velocity values that grow to reach a peak, perhaps somewhere mid-way through the stroke, followed by a number of positive values that decline. The velocity profile of a stroke is therefore a waveform, perhaps resembling a bell-shaped curve, starting with zero velocity, reaching a peak and ending with zero velocity. Furthermore, if the variation of the y-acceleration corresponding to a single-peak velocity profile of a stroke is studied, it is found that the acceleration grows from zero to reach a peak value perhaps mid-way through the velocity's climb to its peak, and then declines steadily to zero at the peak velocity. On the way down, the acceleration grows in magnitude in the negative direction (as the velocity reduces), reaches a negative peak around half-way down the curve and then declines steadily in magnitude (although still negative) to reach zero at the end of the curve. A stroke will therefore look like a single positive-peak curve when the velocity profile is examined, and like a two-peak curve, one peak in the positive direction and another in the negative direction, when the acceleration profile is examined.

Further study of velocity and acceleration shows that the height of the velocity peak will be much smaller than the length of the stroke, and the heights of the acceleration peaks will be smaller still.

A rough estimate of the height of the velocity profile may be obtained by approximating the single-peak curve with a triangle, since the area under the curve is equal to the length of the stroke. If the length of the stroke is d and t is the time taken to write the stroke, the peak velocity v is simply given by v = 2d/t.
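The triangle estimate v = 2d/t, together with a simple peak counter of the kind such profile analysis relies on, can be sketched as follows (both illustrative only; real systems would count peaks on a smoothed profile):

```python
def peak_velocity_estimate(stroke_length, duration):
    """Triangle approximation of a stroke's velocity profile: the area under
    the profile equals the stroke length d, so the peak is v = 2d / t."""
    return 2.0 * stroke_length / duration

def count_peaks_above(profile, h):
    """Count local maxima of a sampled profile that exceed height h."""
    return sum(
        1 for i in range(1, len(profile) - 1)
        if profile[i] > h and profile[i] >= profile[i - 1] and profile[i] > profile[i + 1]
    )
```

For example, a 10-unit stroke written in 2 seconds has an estimated peak velocity of 10 units/second, and the profile [0, 1, 3, 1, 0, 2, 5, 2, 0] has two peaks above a height of 2.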
By counting the number of peaks larger than some given height h in the velocity or acceleration profile, the number of strokes longer than some given length can be found. A skilled forger may manage to forge the long strokes of the signature, but it is unlikely that his/her velocity and acceleration profiles for a long stroke will be the same as those of the genuine signer.

The shape of the velocity profile of a stroke has been studied by Plamondon (1993), who states that the velocity profile obtained when a rapid-aimed stroke, like those in a signature, is written is approximately bell-shaped but asymmetric, and it is claimed that the shape is almost preserved for movements that vary in duration, distance or peak velocity. It may be that this consistent shape is related to the way the central nervous system plans and controls movement. Plamondon presents a model

for rapid movement involving an impulse command of amplitude D at time t. The model leads to asymmetric bell-shaped velocity profiles; that is, the velocity rises quickly to reach the peak velocity during the stroke writing and then reduces to zero much more gradually. It is suggested that these profiles can be explained by log-normal curves. This is related to Fitts' Law, a greater discussion of which can be found in Fitts (1954). Plamondon, Alimi, Yergeau and Leclerc (1993) compare 23 different models that may be used to describe the asymmetric bell-shaped velocity profiles of rapid movements. The models were compared by using them to reproduce a set of 1052 straight lines drawn rapidly by nine individuals. Log-normal models were found to be the best.

Given the dynamics of signature writing, how does a forger forge a signature and how difficult is it for a forger to produce a good forgery? As noted earlier, it is believed that a forger cannot write another person's signature in a ballistic motion without a lot of practice, and therefore producing good forgeries is never going to be easy. However, it is also true that some signatures lend themselves more easily to forgery than others. It is this complexity of forging a signature that is studied by Brault and Plamondon (1989, 1993a) in an attempt to estimate the intrinsic risk of a signature being forged. They note that humans can only remember about seven variations of a pattern without error, so a forger cannot memorize all the details of a signature that is being forged. A process of minimization or recoding of the information that needs to be remembered takes place. For example, a forger might recode information about the signature of John Smith by first remembering the name and then remembering that John Smith's signature has a taller J than the forger would normally write himself, a rounder S, and so on.
Brault and Plamondon assert that a signature is a sequence of triplets: a curvilinear stroke, an angular stroke and another curvilinear stroke, half superimposed.

Based on these observations, they derive an expression for an imitation difficulty coefficient of a signature. Roughly, the expression gives the difficulty of a given signature as a function of the variation rates in length and direction of the strokes. It is claimed that people with a small value of this coefficient and high variability between their signatures are the problematic signers. They also recommend personalized thresholds based on the difficulty of a person's signature, but it is not clear how that would help HSV, since the large variation is already taken into consideration when the distance is normalized by the standard deviation.

5 Review of Earlier Work

Given the importance of HSV, the volume of published literature in the field is not large. This, it is believed, is primarily due to the high perceived commercial value of innovations in the field and perhaps the reluctance of industry to make public the results of its research and development. Most companies carrying out research in this field are keen to protect their inventions by keeping the research confidential within the company or by patenting their innovations; in either case much of the work does not get published in the learned journals. This review will include some of the patents in the field of HSV.

Some of the early work in HSV is not easily available but has been cited in the papers of Herbst and Liu (1977) and Lorette (1984). The earliest cited work on HSV appears to be that of Mauceri (1965), cited by Herbst and Liu (1977) and Lorette (1984). Herbst and Liu report that Mauceri took 50 signatures from each of 40 subjects in evaluating his method, which used power spectral density and zero-crossing features extracted from pen acceleration waveform measurements. An FRR of 37% was reported. Another study that has been cited is that of Farag and Chen (1972), who are reported to have used chain-encoded tablet data and tested the method on signatures from ten subjects. An FRR of 27% and a FAR of 27% were reported.

The work of Sternberg (1975), who studied HSV using handwriting pressure, has been cited by Herbst and Liu (1977) and Zimmermann and Varady (1985). Sternberg treated handwriting pressure as analog z-axis information and based HSV on it, obtaining an FRR of 0.7% and a FAR of about 2% using random forgeries. Later tests, cited by Herbst and Liu (1977), found the rates to be 6.8% FRR and 3.2% FAR for random forgeries, and 17% FAR for skilled forgeries.

Two different approaches to HSV have been used throughout the literature (as was noted earlier).
The first is based on comparing position, velocity or acceleration values point-to-point, often by computing a set of correlation coefficients between the test signature values and the corresponding reference signature values. In the second approach, a number of features are used to capture the signature information and these features are then compared. There has lately been further research into a third category outside the main two, which involves capturing the shape of the signature. The HSV work under these three categories is reviewed in this section.

5.1 Point-to-Point Comparison

Point-to-point comparison is based loosely on the idea of comparing a test signature with a reference signature by comparing the different parts of the signatures separately and combining these comparisons to achieve an overall similarity measure. As noted earlier, the difficulty arises because even the genuine signatures of one person have a certain level of variation, making a direct point-to-point comparison almost impossible. In order to make more effective comparisons, a system must perform some type of alignment of the test and reference signatures in an attempt to "line up" the corresponding parts of the signatures. This alignment may in turn create problems of its own, since forgeries undergo alignment as well as genuine signatures.

Herbst and Liu (1977) describe a technique based on acceleration measurements and using regional correlations. As noted previously, they comment that signature writing is predetermined by the brain and, as such, the total time taken by an individual to write a signature is remarkably consistent. The authors assert that the component durations should also remain quite consistent, and hence so should the zero-crossings of the pen accelerations.

Herbst and Liu use two orthogonal accelerometers mounted on an experimental pen to sample a signature at the rate of two hundred times per second. Five sample signatures were given by each individual and from these one or two reference signatures were selected such that the distance between the selected signatures and the remaining signatures is at least equal to a prespecified value. These are supposed to be the 'best' reference signatures. The logic of this selection procedure has not been explained. Note that mean values of the accelerations in the sample signatures were not used as a reference signature, since variations in the signatures can lead to mean accelerations close to zero.

Herbst and Liu found that most signatures were 2-10 seconds in duration, with an average time of about 5 seconds.
Each signature was heuristically partitioned into segments and corresponding segments were cross-correlated after the segments were modified (aligned) based on the duration of the interval and discrepancies between the test signature and the reference signature. It was found that segments in the range of 1-2 seconds performed the best. Longer segments were more difficult to align due to minor variations within the segments. Often segmentation was based on pen-down and pen-up occurrences within the signature. Another approach to segmentation is to use equal components of the signature (equal according to duration), and examples of this type of segmentation will be discussed later.

In the Herbst and Liu system the segments were shifted in the correlation analysis by up to 20 percent of the reference signature's segment duration, but never more than 300 ms, to find the best match. A number of adjustments needed to be made to compute the correlations, e.g. extra pen lifts were eliminated, and correlation results were weighted to penalize excessive shifting and weighted by the reference signature length of the segment, since larger segments are believed to be harder to forge. The technique was evaluated by first collecting 350 signatures (5 x 70) from 70 users for selecting reference signatures, followed by a collection of another 695 genuine test signatures and 287 forged signatures; a total of 1332 signatures. Before applying the correlation test, the verification algorithm rejects a signature if its duration differs by more than 20% from that of the reference signature. If this test is passed, the correlation test is applied. An FRR of more than 20% was obtained with a FAR of around 1%, although, not surprisingly, a much lower FRR was obtained if a user was able to undergo three trials to achieve verification, which is likely to be unacceptable in most applications. Although not noted in the paper by Herbst and Liu, Zimmermann and Varady (1985) observe that 59% of forged signatures never had to undergo a correlation test, since they were rejected on the basis of the signature time being more than 20% different from the time for the reference signature.

To overcome the limitations of the regional correlation technique of Herbst and Liu, Yasuhara and Oka (1977) suggest that the reference signature and the test signature be compared using nonlinear time alignment, a technique that has been used in speech recognition. They assert that nonlinear alignment aligns better than linear time alignment of signatures, which may lead to local misalignment due to variations in the signatures. The technique involves building what they call time registration paths by plotting the template signature pen force values against the test signature values.
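The nonlinear time alignment idea can be illustrated with a generic dynamic time warping distance, as below. This is a textbook DTW sketch under the usual step pattern, not Yasuhara and Oka's exact formulation.

```python
import numpy as np

def dtw_distance(a, b):
    """Nonlinear time alignment: find the warping path that minimizes the
    accumulated squared point-wise difference between two sampled signals
    (generic dynamic programming formulation)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])
```

Note that the warping absorbs local timing differences: a signal compared against a version of itself with samples repeated (i.e. written more slowly in places) still yields a distance of zero, which is exactly what linear alignment fails to achieve. The distance would then be compared against a threshold to accept or reject the test signature.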
An algorithm is designed which then searches for a path for which the Euclidean distance between the template signature and the test signature is minimized. This distance is then tested against a threshold value; if it is below the threshold the signature is verified, otherwise it is rejected. They conducted experiments using students who were asked to write the word "key" instead of their signatures. The test word was written by each of the ten subjects ten times over a period of ten days. No signatures were used. They show that using nonlinear time alignment leads to lower errors than linear time alignment. Error rates of less than 5% appear to have been obtained, although much further evaluation is obviously required to assess the performance of the proposed technique for HSV.

In another attempt to improve the performance of the original technique

of Herbst and Liu, Liu, Herbst and Anthony (1979) propose using writing pressure in addition to the two acceleration measurements used in the earlier study. The authors note that although the pressure waveforms have gross similarity from signature to signature, the absolute values depend on human factors like the feel of the pen or how the pen is inking. Also, it appears the pressure waveform is not very hard to imitate. It was found that correlation between pressure waveforms shows little discrimination, since the gross form of the pressure waveform dominates the correlation values, but they found it more effective to remove the low-frequency paper contact components of the pressure waveform. Using the acceleration and pressure correlations separately as well as together, they carried out some experiments using signatures from 24 subjects and obtained results close to 16% FRR and well below 1% FAR.

The technique described above was further modified by increasing the number of signatures used in selecting reference signatures to six and always selecting two references for future comparisons. A procedure that appears somewhat unsatisfactory for selecting the reference signatures is then followed: the reference signatures are labeled provisional, and if they do not lead to successful verification the next time the user presents his or her signature, they are modified by the addition of the three latest signatures. Rejects from provisional users were not counted in the performance results. The experiments were conducted at a large internal data processing center at White Plains, NY, where a total of about 6000 signatures were collected. To collect skilled forgery data, 40 targets were selected and their genuine signatures given to forgers on index cards. Forty imitators from the research staff each attempted, after practice, about ten different targets in two different sessions. The signatures were analysed off-line and each user was allowed three signatures for verification.
The results of this experiment were 1.7% FRR and 0.4% FAR. A zero-effort forgery FAR of 0.02% was obtained. The initial results of 16% FRR and low FAR were, not surprisingly, improved significantly by using three signatures for verification (the FRR would be expected to go down from 16% to 0.4% if the three signatures are considered independent events). As noted previously, this is not acceptable in most applications. The study found that 3 out of 248 users continually varied between two different signature styles.

Further improvement to the correlation technique is proposed by Lew (1983), who presents an algorithm that allows further transformations to be made in an attempt to find the best correlation between any two given segments. These transformations include translating a segment by a fraction of the sampling interval, and using a transformation that takes into account

writing at a slightly different uniform speed within each segment. The performance of the proposed algorithm was not evaluated.

Another approach to computing correlations is proposed by Brault and Plamondon (1984), who describe an HSV system based on an instrumented accelerometric pen which samples the acceleration patterns of a signature. The four acceleration transducers give the x and y accelerations for the top and bottom of the pen. The total accelerations at each end and their orientations may then be computed. Some suitable features with characteristics (e.g. minimum or maximum value, average, variance) dealing with accelerations in direction θ may then be computed, and each of these is represented as a histogram for θ values of 1 to 360.

Verification involves building a set of n histograms for each signature, one for each feature. To compare two signatures a four-step procedure is required. The first step, synchronization, involves alignment of two histograms before comparison, choosing the alignment that gives the largest correlation. The same alignment is then applied to the remaining (n - 1) histograms of the signature to be compared. A preselection test is used to reject signatures that are unlikely to pass the correlation tests, using a number of global features (e.g. average acceleration values, signature duration), although the logic of this preselection is not entirely clear. Once the preselection is passed, a comparison of the histograms for each value of θ is carried out by computing a 'likeness coefficient' for each value of θ and then computing the average value. A decision is then made on the basis of a predetermined threshold. A limited evaluation of the technique was carried out using a database of 243 signatures from 50 people, most signing five times, using histograms of average accelerations, the number of samples, and a feature obtained by multiplying the two.
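The synchronization step might be sketched as follows. This is illustrative only: a plain dot product stands in for the correlation measure, which the paper does not specify, and a small bin count replaces the 360 directional bins.

```python
import numpy as np

def best_alignment(h_ref, h_test):
    """Synchronization: try every circular shift of a direction histogram and
    return the shift giving the largest correlation (here, dot product) with
    the reference histogram; the same shift would then be applied to the
    remaining (n - 1) histograms of the signature being compared."""
    h_ref = np.asarray(h_ref, float)
    h_test = np.asarray(h_test, float)
    return max(range(len(h_test)),
               key=lambda k: np.dot(h_ref, np.roll(h_test, k)))
```

For example, a test histogram that is a rotated copy of the reference is aligned by recovering the rotation: best_alignment([0, 1, 0, 0], [0, 0, 1, 0]) returns a shift of 3.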
A reference histogram for each person was obtained by averaging the histograms from the five (or however many were supplied) signatures for that person. There were no forgeries in the signature database and signatures of other people were used as zero-effort forgeries. A set of threshold values that gave the best results was used, and an FRR of 1.2% and a FAR of 1.0% were obtained, but these results are meaningless, as the test signatures were used in building the reference signatures. It is not clear how the threshold values were chosen, and it appears that different threshold values may have been chosen for each test of each person.

The performance of correlation techniques and techniques based on statistical features is discussed by Lamarche and Plamondon (1984). They note that when a signature is described by features like mean and peak speeds and total duration, a good compression of the signature is obtained,

but better results are likely to be obtained if all of the data of the signature is used (i.e. position, velocity and acceleration) and the correlation between the reference signature and the test signature is computed. As discussed earlier, the success of techniques that compute correlations requires that the signature be divided into segments, corresponding segments be aligned, and correlations between corresponding segments be computed. Lamarche and Plamondon study the problem of segmentation based on a model of handwriting which suggests that signature writing may be represented by a simple formula. It claims that each signature consists of transient and steady-state response portions, where transient portions exhibit an exponential behavior and steady-state portions are straight lines. A sample signature database is used to analyse signatures and identify transient portions from a cumulative scalar displacement curve. A signature may now be represented by a sequence of steady and transient portions, each pair corresponding to a pulse. Each segment may now be associated with a different input pulse, and rather than storing signature position data it may be possible to store the starting time, duration and amplitude of each pulse, although information on segment orientation and curvature may also be needed. The technique, it appears, does not work well with continuously curved portions of signatures, since it is then difficult to identify a segment.

The performance of a method using only the most basic information about the signature is evaluated by Zimmermann and Varady (1985), who study only the pen-up and pen-down timing information contained in a signature and use this information for verifying signatures. They assert that since the total time taken to sign is very precise and repeatable, the timing information obtained from the pen-up and pen-down events must be very discriminating.
They capture the pen-up/pen-down information from a tablet and, rather than carry out a point-by-point comparison, they apply the Walsh transformation to obtain a frequency spectrum for the signature. The first 40 low-frequency harmonics are retained. To evaluate the technique, a database of nine groups of ten genuine signatures, collected as two subsets of five gathered several weeks apart, is used and it appears that no skilled forgeries were collected. A variety of tests are conducted using variable numbers of signatures in the reference set, some using all genuine signatures of an individual to build a reference and then testing the same signatures for computation of FRR. When the sample signatures were not included in the set of test signatures (a more realistic approach to performance evaluation), the FAR varies from a low of 4.6% to a high of 12.0% while the FRR varies from a low of 33.6% to a high of 51.9%. The FAR seems to correspond to zero-effort forgeries since no skilled forgeries had been collected. These results are interesting since they are obtained from very limited information from the signature, and they show that a low (zero-effort) FAR may be obtained by using limited information.

Given that some point-to-point HSV techniques proposed in the literature have used position data while others have used velocity or acceleration data, Plamondon and Parizeau (1988) compare the performance of the three types of data (position, velocity and acceleration) using three techniques, viz. regional correlation, dynamic time warping and skeletal tree matching. The regional correlation method is similar to that proposed by Herbst and Liu (1977) presented earlier, although some of the details, for example how the weights are computed, are not presented. Each 0.7-second piece of the signature was used as a segment, based on a claim that handwriting signals tend to fall out of phase beyond 0.7 seconds. Time warping is a nonlinear correlation technique borrowed from speech recognition and is used to map segments from a test signature to segments of the corresponding reference signature by removal of unwanted timing differences. The distance between two samples can then be computed. Tree matching is a technique that involves building a tree of the peaks and valleys of a signature. Once the trees corresponding to the test and reference signatures are available, the distance between them may be computed in terms of the minimum number of operations needed to transform one tree into the other. Few details have been provided.

Each of the three techniques uses all the values of position, velocity and acceleration along the x and y directions and no statistical features are used. The study obtained velocity and acceleration values (in addition to position values) by computing them using an eleven-coefficient FIR filter.

The test database consisted of 50 signatures from each of 39 student and professor volunteers, equal numbers of males and females with 18% left-handed persons.
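The dynamic time warping idea mentioned above can be sketched in a few lines. The following is a minimal sketch, assuming simple 1-D pen signals (e.g. a y-velocity trace); the function name and the symmetric step pattern are illustrative choices, not details from the paper:

```python
# Minimal dynamic time warping (DTW) sketch between two 1-D pen
# signals; an elastic alignment absorbs timing differences that a
# rigid point-by-point comparison would penalise.

def dtw_distance(a, b):
    """Return the minimum cumulative alignment cost between a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because of the elastic matching, a signal compared against a time-stretched copy of itself yields a zero distance, which is the property that makes the technique attractive for removing "unwanted timing differences".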
Data was collected in five sessions, usually one session each day for a week. Signatures were signed along a line parallel to the horizontal axis. On average 48 signatures remained for each signer after visual inspection of the data, which resulted in some signatures being rejected (the exact reason for rejections is not reported). No skilled forgeries were collected and random forgeries were used to determine FAR. The procedure involved selecting one reference signature from the database of signatures and comparing it to T other signatures of the same person selected randomly and T signatures from T other people, one from each, selected randomly. Assuming that it is easiest to forge global signature size, the pen tip position values of forgeries were scaled to fit in the same minimum bounding rectangle as the corresponding signature. This was done before computing the velocity and acceleration values and the impact of scaling on these values is not clear. Total error rates (FRR + FAR) averaged over the three types of algorithms using the six sets of data values are presented. It appears that the data in the y direction were more discriminating (which could well be, as noted earlier, because most American signatures tend to have much more variation in the y direction than in the x direction) and FAR with random forgeries varied from an average of 1.9% to 8.1%. The use of velocity data appears to give better results than displacement, which was better than acceleration. The experiments used a different threshold for each test by selecting the threshold value which minimized the total error rate for that test.

Further experimentation shows the effect of a number of parameters used in the three methods. For example, the effect of permitted time lags in regional correlation is not significant while the length of segments is more important. Better results were obtained for 1.5-second segments if y(t) values were being compared, but this could well be different for different test databases. The effect of window length in time warping was also not significant.

The results of the same experiments that were used to compare the six sets of data values, it appears, are now presented in Parizeau and Plamondon (1990) to compare the three types of HSV techniques (viz. regional correlation, dynamic time warping and skeletal tree matching). Although the earlier results included tests using only signatures, results of using 50 signatures, 50 handwritten passwords and 50 initials from each of 39 student and professor volunteers are now presented. Data was collected as described above and the comparisons were based, as discussed above, on pen tip positions, velocities and accelerations. Again the procedure described above was followed using one reference signature and T genuine signatures and T random forgeries.
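The per-test threshold selection criticised above, choosing the threshold that minimises the total error rate, can be made concrete with a short sketch. The function name and the exhaustive scan over observed scores are illustrative assumptions:

```python
# Given distance scores for genuine signatures and forgeries, scan
# candidate thresholds and keep the one minimising FRR + FAR.
# A signature is accepted when its distance is <= the threshold.

def best_threshold(genuine, forgery):
    """Return (threshold, total_error_rate) minimising FRR + FAR."""
    best = (None, float("inf"))
    for t in sorted(set(genuine) | set(forgery)):
        frr = sum(g > t for g in genuine) / len(genuine)   # false rejections
        far = sum(f <= t for f in forgery) / len(forgery)  # false acceptances
        if frr + far < best[1]:
            best = (t, frr + far)
    return best
```

Tuning the threshold on the very scores being evaluated is exactly why such results cannot be compared with studies that fix a single threshold in advance.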
Total error rates of between 3% and 17% were observed for the three techniques and no technique was globally superior to the other two, although regional correlation was often better and much faster than the other two. The results are encouraging given that only one reference signature was used, but the experiments used a different threshold for each test by selecting the threshold value which minimized the total error rate for that test. This makes it almost impossible to compare these results with the results of other studies where the same threshold is used.

In a US Patent, Bechet (1990) proposes a method in which a reference writing (or a reference signature) is compared with the test writing using the x and y velocities. The method involves dividing the velocity signals into discrete time segments and then comparing the corresponding velocity segments. It is noted that the signature velocities are indisputably characteristic of the individual in spite of the variations in the signatures of a person. Another technique, described by Crane and Ostrem (1983), involves shifting and scaling the signature so that maximum correlation is obtained between the test signature and the reference signature. Bechet notes that this correlation criterion for HSV is inadequate since two people making the same signature produce similar velocity profiles. This, however, has not been found to be true in other studies.

Bechet notes that it was observed that each signature consists of a number of segments where the number is predetermined for each person's signature. Random signals were found to occur between the segments and some parts of the segments were found to be missing. The variations in segments included variations in the positions of the segments and their duration.

The proposed technique uses 600 ms segments that are standardized and then compared. It appears that a value of offset is obtained for which the sum of the absolute differences between the velocities of the reference signature segment and the corresponding test signature segment is minimum. This is followed by a procedure that is somewhat difficult to understand but seems to involve multiplying each velocity value of the segment by a time factor in the range of 0.75 to 1.25 and selecting the factor for which the difference between the reference segment and the test segment is minimum. Once the offset and the time factor for each segment have been computed, the segment is then "standardised" by shifting and scaling it using the two factors.
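One reading of this standardisation step can be sketched as a grid search over an offset and a time factor. Everything here is an interpretive sketch: the discrete offset/factor grids, the nearest-neighbour resampling and the overlap-only comparison are assumptions made for illustration, not details from the patent:

```python
# Sketch of segment "standardisation": search an offset and a time
# factor in [0.75, 1.25] minimising the summed absolute difference
# between a reference velocity segment and a test segment.

def resample(seg, factor):
    """Stretch/compress seg in time by factor, resampling back onto
    the original number of points (nearest-neighbour)."""
    n = len(seg)
    return [seg[min(n - 1, int(i / factor))] for i in range(n)]

def standardise(ref, test, offsets=range(-2, 3),
                factors=(0.75, 0.9, 1.0, 1.1, 1.25)):
    """Return (offset, factor, min_diff) for the best alignment.
    Only overlapping samples are compared, a simplification."""
    best = (None, None, float("inf"))
    for f in factors:
        scaled = resample(test, f)
        for o in offsets:
            diff = sum(abs(ref[i] - scaled[i - o])
                       for i in range(len(ref))
                       if 0 <= i - o < len(scaled))
            if diff < best[2]:
                best = (o, f, diff)
    return best
```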
A total distance between the test signature and the reference signature is now obtained using the reference signature and these standardised segments, and a decision is made regarding acceptance of the test signature based on the number of segments that pass a threshold test.

5.2 Feature Values Comparison

A large number of features are studied by Crane and Ostrem (1983), who use a strain-gauge instrumented pen to sample three forces at the writing tip (viz. the downward force and the x and y forces) and then compute values of a number of features. As noted earlier, initially the experiment considered 44 features that included the following: scaled means, standard deviations, minimum and maximum values, average absolute, average positive, average negative, numbers of positive and negative samples, number of zero-crossings, maximum minus scaled mean, and maximum minus minimum. It appears, although it is not quite clear, that the signature database that was used for performance evaluation of the technique was first used for selecting the best subset of features, and some 23 features were then selected as the `best' subset. To evaluate the proposed technique, a database of 5220 genuine signatures, half signed while sitting down and the other half while standing at a counter, from 58 subjects consisting of equal numbers of men and women and a representative range of left-handers as well as heights, weights and ages, was collected over a four-month period, together with 648 forgeries from 12 forgers who were allowed to practice the signatures to be forged. Prizes were awarded for the best forgeries.

The testing consisted of an enrollment phase in which 10 or 12 sample signatures were collected and a reference vector of features was formed by computing the mean and standard deviation of each feature for the given set of sample signatures. The test signature was then compared to the reference signature and the Euclidean norm of the distance vector was computed which, if small enough, authenticated the signature and otherwise rejected it. The authors allow up to three trials for HSV and a false rejection occurs only if all three signatures fail the verification test. This is unlikely to be acceptable in most applications, as noted earlier. Several sets of results are presented but it is very hard to interpret these since the definitions of FRR and FAR are not what are normally used in the literature. Also, in some tests the experimenters removed some of the subjects that had very high variance in their signatures (3 out of 58 subjects failed enrollment!).
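The enrollment-and-verification scheme just described can be sketched as follows. This is a minimal sketch: the function names, the use of a standardised (per-feature) distance and the acceptance threshold of 3.0 are illustrative assumptions, not the authors' actual parameters:

```python
# Build a reference as per-feature means and standard deviations from
# sample feature vectors, then accept a test signature when the norm
# of the standardised difference vector is small enough.

import math

def enroll(samples):
    """samples: list of equal-length feature vectors.
    Returns (means, stds), one entry per feature."""
    n = len(samples)
    means = [sum(col) / n for col in zip(*samples)]
    stds = [max(1e-9, math.sqrt(sum((x - m) ** 2 for x in col) / n))
            for col, m in zip(zip(*samples), means)]
    return means, stds

def verify(ref, test, threshold=3.0):
    """Euclidean norm of the per-feature standardised differences."""
    means, stds = ref
    dist = math.sqrt(sum(((t - m) / s) ** 2
                         for t, m, s in zip(test, means, stds)))
    return dist <= threshold
```

The continual update mentioned below (folding an accepted signature into the reference with a small weight) would simply replace each stored mean by a weighted average of the old mean and the new feature value.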
FAR and FRR varying from 0.5% to about 3% are reported, but all experiments appear to have allowed three tries for verification.

The reference signature was continually modified: when a signature was successfully verified, the values of the new signature were added to the mean reference vector with a weight of 1/8.

Crane and Ostrem also discuss the possibility of using personalized feature sets for each person rather than using the same features for all persons. Some evidence is presented that personalized feature sets can improve the performance of a signature verifier.

Lorette (1984) uses the following seven geometric and dynamic features that are claimed to be invariant under rotations and magnifications:

1. number of connected components (i.e. one plus the number of pen-ups)
2. number of loops
3. quantified cumulated phase for the signature as a whole
4. initial direction of the pen track coded in four quadrants
5. total duration (it is not clear if this is invariant under magnification as claimed)
6. duration of connected components (total time - pen-up time)
7. mean and maximum velocities in connected components

All the variables were normalized to have a mean of zero and a standard deviation of one and the distance was then computed using the Euclidean norm of the differences. A database of 203 signatures from 14 volunteers (15 signatures from almost every volunteer) was used for evaluating the technique. Lorette claims that 92% of the total variance of the test signatures was accounted for by three principal component factors. The data was classified using hierarchical classification using only five signatures of each person. The classification led to 14 classes, although it is not quite clear if the classification method was guided to 14 classes or if the system itself discovered the number of classes. The remaining ten signatures were then assigned to the nearest classes and this resulted in 91.7% correct classification. An iterative process was then used to improve the classes and this resulted in 93.6% correct classification. The details of the improved classification are not clear and no forgeries were tested. The author does not discuss the benefits of classification and whether similar or better results could have been obtained by using the five sample signatures for each person to build the person's reference signature and then comparing the test signature with it.

Parks, Carr and Fox (1985) in a US Patent discuss equipment designed to capture the signature as well as the use of features for HSV. Initially the following six features are suggested:

1. Total elapsed time
2. Time in contact
3. Number of segments
4. Sum of increments in the x direction
5. Maximum increment in the x direction (that is, maximum x-velocity)
6. Sum of increments in the y direction

It is proposed that a reference signature consisting of means and standard deviations of the features be used and that at least six sample signatures be collected.
It is noted that if six sample signatures are gathered under identical conditions, the standard deviations might be too small to be an accurate estimate of the standard deviations of the person's signatures. It is therefore proposed that the standard deviations be modified to be the means of the standard deviations of the individual and the standard deviations of the population at large. The resulting values should be used if they are found to "conform to what experience shows to be realistic limits". If the values are not "realistic", further sample signatures may be obtained and some of the previously obtained sample signatures may be discarded if they were widely inconsistent with the other sample signatures. In some cases, the whole attempt to obtain a reference signature may be aborted and a new set of sample signatures obtained on another occasion! Once a satisfactory reference signature has been obtained, the test signature is obtained and compared and the distance between the two is computed. The details of the distance computation are not clear, but one of the methods proposed recommends that each feature value be compared with the mean reference value and the difference in terms of standard deviations be obtained. It is then suggested that this be multiplied by a weighting factor of 1, 2, 4 or 8. This is then compared to two thresholds: a high score is allocated (and the test signature rejected) if the scaled difference is above the larger threshold, and a zero score is given if it is below the lower threshold; otherwise a score that is the difference between the scaled difference and the lower threshold is given. The sum of all scores is then compared to another threshold that is used in making a decision on acceptance.

A number of other suggestions have been made, including different values of thresholds for different individuals and the possibility of basing the threshold on the value of the merchandise being bought and the credit rating of the person. Suggestions are also made about updating the reference signature by applying a weighting of 90% to the stored parameters and 10% to the new ones obtained from the test signature.
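The two-threshold scoring rule just described can be sketched as follows. The numeric thresholds, the sentinel "high score" and the function names are illustrative assumptions; the patent leaves these details open:

```python
# Each feature's deviation from the reference mean (in standard
# deviations) is weighted, then mapped to: zero (below the low
# threshold), the excess over the low threshold (between the two),
# or a rejecting high score (above the high threshold).

REJECT = 1e6  # sentinel "high score" forcing rejection

def feature_score(value, mean, std, weight, low=1.0, high=3.0):
    scaled = weight * abs(value - mean) / std
    if scaled > high:
        return REJECT
    if scaled < low:
        return 0.0
    return scaled - low

def accept(values, means, stds, weights, final_threshold=2.0):
    """Sum the per-feature scores and compare to a final threshold."""
    total = sum(feature_score(v, m, s, w)
                for v, m, s, w in zip(values, means, stds, weights))
    return total <= final_threshold
```

The banded scoring means small per-feature deviations are forgiven entirely, moderate deviations accumulate, and any single gross deviation rejects outright.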
The patent also includes a list of 93 features that may be used in addition to the six already proposed. It is suggested that the optimum number of parameters is between 10 and 20, but a higher number may be desirable in some situations and somewhat different parameters may be used in different instances of the same HSV system.

De Bruyne and Forre (1985) propose a set of 18 holistic (i.e. global) features including six dynamic features and other static features. The dynamic signals include the following:

• total time,
• number of pen lifts,
• writing time,
• pen-up time,

(pen-up time + writing time = total time, therefore all three features should not be used) as well as the maximum writing velocity and the time at which this velocity occurs. The static features include the following:

• area,
• proportion,
• standard deviations of x and y values,
• ratio of total displacements in the x and y directions.

Reference signatures were computed using 10 sample signatures. The test signature values are compared with the reference signature features' values as well as with those of the forgers, and a maximum likelihood test is applied. Another approach for comparing signatures is proposed that involves giving a grade for each feature. Grades are between 1 and 6 depending on the distance of the feature value from the reference mean value scaled by the standard deviation.

In another US Patent, Asbo and Tichenor (1987) describe a pen or stylus for capturing the signature which is claimed to be such that it permits the person to grip the stylus in any position with respect to his or her hand. The patent discusses capturing the signature samples and their normalization by the `normalization angle', which is supposed to represent the average direction of the signature. Essentially this is rotating the signature. Few details are provided of how HSV will be carried out once these normalized data values have been obtained, but the patent description notes that the standard procedure of building a reference signature using a number of features could be used.

Lam and Kamins (1989) study HSV based on the Fourier transform of the signature (after considerable preprocessing) and then use the 15 harmonics with the highest frequencies for each signature for verification. The study unfortunately does not evaluate the proposed technique well since only one genuine signature writer was used and 19 forgers tried to forge his signature. This limited evaluation leads to an FRR of 0% and a FAR of 2.5%.
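A Fourier-magnitude comparison of this general kind can be sketched as follows. This is not the authors' method: the plain direct DFT, the choice and number of harmonics, and the Euclidean comparison of magnitudes are all illustrative assumptions:

```python
# Compare two pen signals by the magnitudes of their first k DFT
# coefficients; magnitudes discard phase, so a pure time shift of the
# signal changes the comparison little.

import cmath

def low_harmonics(signal, k=10):
    """Magnitudes of the first k DFT coefficients (direct O(n*k) DFT)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n)))
            for f in range(k)]

def spectral_distance(a, b, k=10):
    """Euclidean distance between the two low-harmonic magnitude vectors."""
    ha, hb = low_harmonics(a, k), low_harmonics(b, k)
    return sum((x - y) ** 2 for x, y in zip(ha, hb)) ** 0.5
```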
Also, the proposal to use the 15 harmonics with the highest frequencies requires that information about the harmonics be stored in the reference signature, and it may be best to use only the ten or twenty lowest harmonics. Also, the authors make no use of the velocity and acceleration information, which should be just as useful as the positional information, if not more so. The approach has potential but much further work is needed.

Hastie, Kishon, Clark and Fan (1991) describe a model in which a test signature is assumed to consist of a reference signature which is transformed from occasion to occasion. The authors describe a five-step method for HSV. The steps are:

1. Smoothing - a cubic spline approximation is used to average out the measurement errors.
2. Speed - speed is computed after smoothing.
3. Time Warping - a time warp function is computed so that correspondence between the reference signature and the signature being verified is found.
4. Segmentation - the signature is segmented using low-speed regions (low speed is 15% of mean speed) into a sequence of segments called letters.
5. Averaging - estimating the reference signature.

The test signature is processed using steps 1 to 4 above. The distance between the test signature and the reference signature is found at the end of Step 3 and, if a decision is not made there, at the end of Step 4. The method is demonstrated using data recorded from a graphics tablet which captured the (x, y) coordinates as well as the downward pressure on the pen.

Nelson and Kishon (1991) present results of using the analysis described in the paper by Hastie et al. (1991). Ten samples of genuine signatures from each of 20 subjects and a number of forgeries for four of the 20 subjects were used for testing the proposed dynamic HSV technique.

Nelson and Kishon describe how they compute pen velocities and accelerations in the x and y directions using spline smoothing, and further compute path velocities and path accelerations as well as path tangent angles, tangential accelerations, centripetal accelerations, jerks and curvatures. The authors note that both the pen pressure and speed have highly repeatable patterns in valid signatures of a person.
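The low-speed segmentation step of the five-step method above can be sketched as follows. The 15%-of-mean-speed rule comes from the description; the function names, unit sampling interval and the omission of spline smoothing are simplifying assumptions:

```python
# Split a pen trajectory into "letters" at low-speed regions:
# samples whose speed falls below a fraction of the mean speed
# act as separators between segments.

def speeds(xs, ys):
    """Per-sample speed from successive (x, y) pen positions,
    assuming a unit sampling interval."""
    return [((xs[i+1] - xs[i]) ** 2 + (ys[i+1] - ys[i]) ** 2) ** 0.5
            for i in range(len(xs) - 1)]

def segment(xs, ys, frac=0.15):
    """Return lists of sample indices, one list per segment."""
    v = speeds(xs, ys)
    cutoff = frac * (sum(v) / len(v))
    segments, current = [], []
    for i, s in enumerate(v):
        if s < cutoff:          # low-speed region: close the segment
            if current:
                segments.append(current)
                current = []
        else:
            current.append(i)
    if current:
        segments.append(current)
    return segments
```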
The authors present results of plotting path length versus signature time in the hope of finding clusters for valid signatures that are well separated from forgeries. It was found that such separation did not always exist. An interesting point is made that the shape and dynamics of a signature might play complementary roles in HSV, since if a forger is trying to get the shape right, he is unlikely to get the dynamics right, and vice versa.

The features used in this study were:

• Signature time
• Signature path length
• Root mean square (RMS) speed
• RMS centripetal acceleration
• RMS tangential acceleration
• RMS total acceleration
• RMS jerk
• Average horizontal speed
• Integral of centripetal acceleration magnitude

The authors describe a two-stage verification scheme in which a first screening stage is used to reject those signatures that are obviously not close to the reference signature. The four best measures from the above list were selected for each user and these were used for the screening test, which appears to have worked well for the given data. The second stage is not discussed in this paper.

Mighell, Wilkinson and Goodman (1989) discuss an off-line system for detecting casual (i.e. non-professional) forgeries in which the forger has had no opportunity to practice the signature being forged. The system is evaluated using 80 genuine signatures and 66 forgeries, but most results are presented for only one person. The signatures were collected on cards which were then scanned into the computer, followed by thresholding to produce binary images that are centred and normalized to fit into a 128 x 64 matrix. A backpropagation learning algorithm is employed using a training set of 10 genuine signatures and 10 forgeries for one person, and the testing is done on 70 true signatures and 56 forgeries. An FRR of 1% with a FAR of 4% is reported, and lowering the threshold resulted in 0% FRR and 7% FAR. Although the results for one person are impressive, they cannot be expected to apply to a larger population. Also, the performance of the technique degraded remarkably if no forgeries were available for training.

The above work forms part of a PhD thesis by Wilkinson (1990) in which he studies off-line HSV with a focus on detecting casual forgeries. He uses a test database of 1190 images including 590 genuine signatures from nine subjects, each contributing 50 to 70 samples collected over a period of 18 months, and 396 forgeries from 44 volunteer forgers who were shown the name of the person whose signature was being forged but not the actual signature. The images were scanned into the computer and considerable preprocessing followed. One of the techniques proposed involves building a slope histogram of a signature using 20 valid signatures and 10 forgeries for each of the nine subjects. By building this slope histogram, Wilkinson attempts to exploit the regularity of length and curve of the signature. The overall signature shape at various angles is evaluated to form a histogram. Histograms are then passed to a classifier which compares them to those constructed from a small number of valid signatures. Wilkinson reports an equal error rate averaging about 7%.

The second approach presented in Wilkinson's work is the synthetic discriminant function (SDF) approach. This technique selects a linear filter which produces a specified output for each image of a training set. If forgeries are included in the training set, the error rate is reduced to 4%. These two methods are then combined to form an integrated system, which results in an average error rate of approximately 1% on casual forgeries and 5-6% on skilled forgeries.

In another PhD thesis, Pender (1991) explores the use of neural networks for detecting casual forgeries. The training of neural networks requires genuine signatures as well as forgeries, although signatures of other people may be used as forgeries. A database of signatures was created, in which static signature features were collected from five individuals over two years.
It contained 380 genuine signatures, and the same five individuals signed 265 forgeries, in which the individuals knew the name of the person whose signature was being forged but had not viewed a genuine signature. As in the earlier work, the signatures were scanned and processed, reducing them to a 128 x 64 size. An FRR of 3% and a zero-effort-forgery FAR of 3% have been reported.

Lee (1992) in his thesis aims to design a simple on-line system yielding good performance for point-of-sale applications. Lee developed a comprehensive database of 5603 genuine signatures from 105 human subjects and 4762 forgeries for the 105 subjects. 240 of the genuine signatures were `fast signatures', in which nine subjects were asked to sign as fast as they could, resulting in writing times 10% to 50% lower than their normal writing times. 210 of the genuine signatures were obtained from seven subjects standing up, each of them providing 20 signatures when signing at a height of 90 cm and another 10 at a height of 60 cm. The forgeries consisted of random, skilled and timing forgeries. Random forgeries were collected when the forger knew the name and spelling of the genuine signer but had no knowledge of the genuine signature; skilled forgers were provided two samples of genuine signatures on a piece of paper, told that the verifier uses dynamic features of the signature, and allowed to practice on paper and tablet. The practice time varied from 3 to 20 minutes. The timing forgers were given information about the shape of the signature as well as the average genuine signature duration, and were allowed to practice on the tablet with timing feedback provided during practice. Each genuine signature was forged by two forgers, each forger contributed to all three types of forgeries and six samples of each type of forgery were collected from each forger. This yielded 3744 forgeries (however, 105 x 2 x 18 = 3780; it is not clear why 3780 forgeries were not obtained). Further forgeries were collected for 22 randomly selected individuals of the 105. Eight different individuals provided six skilled forgeries for each of the 22. These provided 792 forgeries (although there should have been 22 x 8 x 6 = 1056 forgeries). It is assumed that some of the forgeries were rejected, but it is not clear on what basis. The database included one subject who provided 1000 genuine signatures, and for this person 325 skilled forgeries were collected from 13 individuals providing 25 forgeries each. A subset of the database was designated for all experimentation. This consisted of 22 subjects each providing 11 genuine signatures (six for each were used for the reference signature and five for testing) and 704 forgeries from eight forgers for each individual, each providing 4 forgeries for each person. It is not quite clear how this subset was selected and why such a small subset was chosen from the much larger set that was available.
An equal error rate of 3.8% was reported.

Lee describes a set of 42 features which include 13 static features. A number of features relate to the velocity of signature writing, e.g. average and maximum velocity, durations of negative and positive velocities in the x and y directions, and the averages as well as the differences between the maxima and averages for each of the four.

Lee investigates a number of problems in his thesis. These include finding a small subset from the 42 features that performs better than the whole set. He studies techniques for selecting this small subset both when no forgery data is available and when it is. He also studies how much performance improvement is obtained if forgery data is used in determining the subset of features. The above subsets are different for different subjects, but Lee also investigates the possibility of using the same common subset for all users.

Lee also discusses a number of techniques for feature selection. In one of the techniques, which uses only genuine signatures, the mean of each feature for subject i is compared with the means of the same feature for all other subjects, and the minimum of these distances is computed for each subject for each feature. The importance of each feature for an individual is then given by this minimum distance, features with larger minimum separations being preferred. This algorithm is used to find several subsets for each user; 34 features performed better than 42 features.

A similar approach is used when forgery data is available for each subject. The distance computed now is between the ith feature of the subject and the corresponding feature of the forgeries for that individual. It was shown that 23 or 24 features gave the best performance.

These two experiments seem to indicate that Lee did not appreciate the different aims of his two experiments. The first experiment is trying to find a subset that minimises the zero-effort FAR while the second is minimising the skilled FAR.

Lee also attempts to find a common subset of features that is identical for all subjects. The approach used involves finding the best subsets for individuals as described above and then ordering features based on the frequencies of their occurrence in the individual best subsets. The basis for this approach is not provided and it is suspected that the approach may not lead to the best overall subset. The performance of the best 10 common features was significantly worse than that of 10 individual features for each subject. The individualised subsets of features were also tested on a larger database that had 1485 genuine signatures and 608 forgeries (how these were selected is not made clear) in addition to the 165 and 152, respectively, that were used in the subset selection and for reference. 1% FRR and 14% FAR were obtained. The system was also evaluated on one individual who had signed 1000 times; 0.3% FRR and 17% FAR were obtained.
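The genuine-signatures-only selection idea above can be sketched as a ranking by minimum inter-subject separation. The function name, the dictionary layout and the toy per-subject means are illustrative assumptions:

```python
# Rank each feature, per subject, by how far the subject's mean is
# from the nearest other subject's mean; features with larger minimum
# separations are more personal and so more useful for verification.

def rank_features(subject_means):
    """subject_means: {subject: [mean of feature 0, feature 1, ...]}.
    Returns, per subject, feature indices sorted by decreasing
    minimum distance to every other subject's mean."""
    ranking = {}
    for s, means in subject_means.items():
        seps = []
        for f, m in enumerate(means):
            nearest = min(abs(m - other[f])
                          for t, other in subject_means.items() if t != s)
            seps.append((nearest, f))
        ranking[s] = [f for _, f in sorted(seps, reverse=True)]
    return ranking
```

As the text notes, a feature selected this way separates a subject from *other people's* genuine signatures, i.e. it targets zero-effort forgeries, not skilled ones.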
Furthermore, the system was evaluated on signatures that were written quickly; the performance was much worse, although normalising the feature vector improved the performance.

Chang, Wang and Suen (1993) present a technique based on a Bayesian neural network for dynamic HSV of Chinese signatures. A set of 16 features is used, which includes the following:

• Total time
• Average velocity
• Number of segments
• Average length in the eight directions of the signature
• Width/height ratio
• Left-part/right-part density ratio
• Upper-part/lower-part density ratio

The database consisted of 800 genuine signatures from 80 people and 200 simple and 200 skilled forgeries by 10 forgers. Results show about 2% FRR, 2.5% skilled-forgery FAR and 0.1% zero-effort FAR. Unfortunately the paper does not present any details of the experiments regarding how the reference signature was computed, how data was collected, etc.

Mohankrishnan, Paulik and Khalil (1993) propose an HSV method based on an autoregressive (AR) model that treats the signature as an ordering of curve types. A database of 58 sample signatures from each of 16 subjects taken over six sessions was used for testing. No skilled forgeries were available but random forgeries were used. Each signature was divided into eight segments and each segment modeled by an AR model. Three elements were used for each segment, giving 24 features for each signature. Total error rates using threshold values that gave equal FAR and FRR for each user vary from a low of 7.92% to a high of 21.83%.

Darwish and Auda (1994) present a comparative study of 210 features they claim have been previously studied in the literature for static HSV, in an attempt to find the best feature vector for HSV of Arabic signatures. The methodology followed was to calculate the mean, standard deviation and spread percentage of every feature for every person in their database and select those features that have a spread of less than 25% for all persons. Some of the selected features were found to have very small interclass variations and were excluded. The rationale for using this approach is not presented. The set of 210 features was reduced to 12. A signature database of 144 signatures from 9 signers was used; 8 samples for each signer were used for building the reference feature set and the remaining 8 were used for testing. Signatures were collected on paper at different times of the day and then scanned.
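The spread-based selection just described can be sketched as a simple filter. The dictionary layout, the toy data and the guard against near-zero means are illustrative assumptions; the paper gives only the 25% criterion:

```python
# Keep a feature only if, for every signer, its spread (population
# standard deviation as a percentage of the mean) stays below 25%.

import statistics

def stable_features(per_signer_samples, max_spread=25.0):
    """per_signer_samples: {signer: list of feature vectors}.
    Returns indices of features whose spread is below max_spread
    percent for all signers."""
    n_features = len(next(iter(per_signer_samples.values()))[0])
    keep = []
    for f in range(n_features):
        ok = True
        for samples in per_signer_samples.values():
            col = [v[f] for v in samples]
            mean = statistics.mean(col)
            # guard against division by a near-zero mean
            spread = 100.0 * statistics.pstdev(col) / max(abs(mean), 1e-9)
            if spread >= max_spread:
                ok = False
                break
        if ok:
            keep.append(f)
    return keep
```

Note the limitation raised in the text: a feature can be very stable for every signer yet identical *across* signers, which is why the authors had to additionally exclude features with small interclass variation.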
A fast backpropagation neural network method was used as the classifier. A FRR of 1.4% is reported. No skilled forgeries were available and no zero-effort forgeries were tested.

Fairhurst and Brittan (1994) discuss issues related to the selection of the best features by generating combinations of likely features and a metric by which to assess the relative merits of the features and thus select the best set. The authors discuss a simple approach of selecting the best features based on the individual performance of each feature, but that assumes the features are independent, which is often not true. Another technique, called Sequential Forward Selection (SFS), is recommended. In this technique, starting from n features (n could be zero), one feature at a time is added so that the feature added improves the performance the most. The process can be terminated when a prespecified performance is achieved or a prespecified number of features is obtained. The authors then describe a parallel approach to finding the best set of features, since the SFS approach and other similar approaches are computationally intensive. The paper appears to promote the use of an individual set of features and individual thresholds for each person.

Nelson, Turin and Hastie (1994) propose a set of 25 features which include two time-related features, six features related to velocities and accelerations, four shape-related features, eight features giving the distribution density of the path tangent angles, four giving angle-sector densities of the angular changes, and a feature relating to the correlation between the two components of the pen velocity. They discuss the statistical basis of HSV and then use three different methods for computing the distance between the reference signature and the test signature, viz. the Euclidean distance method, the Mahalanobis distance method and the quadratic discriminant method. A simple method of feature selection is described which essentially consists of computing the ratio of the standard deviation to the mean for each feature and rank-ordering the features according to this ratio. Although this appears to be quite a reasonable approach, there is no guarantee that a feature with a small normalized standard deviation would provide good discrimination between genuine signatures and forgeries. A variety of schemes are evaluated, e.g. using the individual best 8, 10, 12 and 14 of the 25 features with the Euclidean distance model. The performance of all these sets is similar, although the individual best 8 and 10 seem to perform the best near a FRR of zero.
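The ratio-based ranking and Euclidean-distance scoring just described might be sketched as follows; the per-feature normalization by the standard deviation is an illustrative assumption, not necessarily the authors' exact formulation, and all feature values are hypothetical.

```python
# Sketch of std/mean feature ranking and Euclidean scoring
# (after Nelson, Turin and Hastie, 1994). Data are hypothetical.
from statistics import mean, stdev
from math import sqrt

def rank_features(reference_vectors):
    """Rank feature indices by the std/mean ratio, most stable first."""
    n = len(reference_vectors[0])
    ratios = []
    for f in range(n):
        vals = [v[f] for v in reference_vectors]
        m = mean(vals)
        ratios.append((abs(stdev(vals) / m) if m else float("inf"), f))
    return [f for _, f in sorted(ratios)]

def euclidean_score(reference_vectors, test_vector, features):
    """Distance of the test vector from the reference mean, with each
    selected feature normalized by its standard deviation."""
    total = 0.0
    for f in features:
        vals = [v[f] for v in reference_vectors]
        m, s = mean(vals), stdev(vals)
        total += ((test_vector[f] - m) / s) ** 2 if s else 0.0
    return sqrt(total)
```

A test signature is then accepted if its score over the selected best-k features falls below a threshold, which is essentially the "individual best 8, 10, 12, 14" scheme evaluated above.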
The Mahalanobis distance measure is used with the individual best ten features, but the performance is no better than with the Euclidean distance. The quadratic distance model is also evaluated. This model requires genuine reference signatures as well as forgery data during training, but the results are better than those of the Euclidean or Mahalanobis models. Nelson et al also test the majority voting model that Lee (1992) found to be the best, but found it to give results worse than the Euclidean distance model. Using a Euclidean distance method, they find the best 10 of the 25 features and obtain 0.5% FRR and 14% FAR, which they consider satisfactory for credit card applications.

Plamondon (1994) presents a multilevel HSV system that uses global features as well as point-to-point comparison with personalized thresholds. Global features used include total pen-down time, the percentage of pen-up time, the percentage of time when the angular velocity is positive, etc., and these features are used for the first stage of the verification. The signature is normalized using rotation and scaling, and local correlations are computed between portions of the test signature's velocity values and the corresponding values of the reference signature, using segment alignment by elastic matching. This second stage is followed by a third stage involving calculation of the variations between the normalized coordinate values of the test signature and the reference signature using local elastic pattern matching. For evaluation, three signatures from each of eight individuals are used, and eight other people provided three forgeries for each of the eight genuine signers in 64 sessions, after having had access to genuine signatures and to information on the dynamics of these signatures provided as a sound sequence. Another set of genuine signatures was obtained from six other subjects, each providing nine signatures, three of which were used as reference signatures. Tests were performed with the two databases to adjust the discriminating function to minimize errors. It therefore appears that test signatures as well as reference signatures were used in deciding the individual thresholds that minimised errors. The results obtained, a FAR of 0.5% and a FRR of 0.0%, therefore cannot be considered reliable. The testing is also very limited and the number of signatures tested was small.

More recently, Gupta and Joyce (1997a) have proposed HSV algorithms with the aim of using a set of features that are simple and easy to compute; invariant under most two-dimensional transformations, e.g. rotation, slant and size; more global in nature than local, since local variations in a signature are common; and finally few in number, certainly less than 10 in all. They note that it is the dynamics of handwritten signatures, rather than the shape, that is more important. They use the following six features in the initial experiments:

1. Total time
2. Number of velocity sign changes in the x-direction
3. Number of velocity sign changes in the y-direction
4. Number of acceleration sign changes in the x-direction
5. Number of acceleration sign changes in the y-direction
6. The total pen-up time

The experiment used the following procedure:

Step 1: First, for each user, a reference signature is built by computing the mean and standard deviation of the feature values over a set of sample signatures. The vectors of these means and standard deviations, R and S respectively, are used as the reference signature.

Step 2: The sample signatures used to build the reference are then set aside, and the remaining signatures in the database are tested as follows.

Step 3: For each test signature, a value for each of the features is computed. The vector of these values is T.

Step 4: The difference between the vector T and the mean reference vector R is computed. The difference vector is normalized by dividing each element by the corresponding element in S. Let this normalized vector be Z.

Step 5: A norm of Z is now computed. Initially the largest absolute value of the elements of Z was used as the norm.

Step 6: If the norm of the difference vector is larger than a predefined threshold, the test signature is rejected; otherwise it is accepted.

The FRR and FAR were then computed using all the genuine signatures that did not participate in building the reference signature, and the forged signatures in the database. In the initial experiments no random forgeries were used. Their aim of achieving a FRR of 0%, or very close to it, was achieved by using the above features and ten sample signatures for computing the reference signature, although it may not be possible to obtain ten sample signatures in some applications. The FRR was higher with fewer sample signatures.

It is shown that total time by itself is the best single discriminator, able on its own to provide a FRR of only 2.6% with a FAR of 16.6% for the given database. The pen-up time is also a good discriminator; using this attribute by itself they obtained a FRR of 6.1% with a FAR of 20.9%.
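The six-step procedure above can be sketched as follows, taking the largest absolute value of Z as the norm; the threshold value and feature vectors are hypothetical.

```python
# Minimal sketch of the verification procedure (after Gupta and Joyce, 1997a).
# ASSUMPTIONS: max-absolute-value norm; threshold of 3.0 is hypothetical.
from statistics import mean, stdev

def build_reference(sample_vectors):
    """Step 1: per-feature mean vector R and standard deviation vector S."""
    n = len(sample_vectors[0])
    R = [mean(v[f] for v in sample_vectors) for f in range(n)]
    S = [stdev(v[f] for v in sample_vectors) for f in range(n)]
    return R, S

def verify(test_vector, R, S, threshold=3.0):
    """Steps 3-6: normalize the difference vector and threshold its norm."""
    Z = [(t - r) / s if s else 0.0 for t, r, s in zip(test_vector, R, S)]
    return max(abs(z) for z in Z) <= threshold   # True means accept
```

A test signature close to the reference mean on every feature is accepted; one that deviates by many standard deviations on any single feature is rejected, which is consistent with the observation below that many forgeries lie tens of standard deviations from the reference mean.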
The number of sign changes in the x-acceleration and the y-acceleration gave a FRR of about 2% with a FAR of about 28%.

Further experimentation was carried out to evaluate how many sample signatures need to be used and how this number affects the performance of the method. The results seem to indicate that three sample signatures for building the reference signature are not sufficient; at least five should be used, while seven or ten would be better.

Including path length in the set of attributes improves the performance of the technique, and good results were obtained when path length is included and the reference signature is built using 10 sample signatures. A FRR of about half a percent is obtained with a FAR of a little more than 10%.

Details of the values of the various features for the genuine signatures and the forged signatures are presented, and it is shown that feature values for most forged signatures are quite random; some will therefore, by chance, happen to be close to the reference mean. Fortunately, if a forged signature has a feature value close to the reference mean for one feature, it is often not close to the mean for another feature.

There are two important points arising from the Gupta and Joyce study that should be stressed:

1. A surprising number of forged signatures have feature values that are more than twenty standard deviations away from the reference signature mean. Many are even more than 50 standard deviations away. This would not have been surprising for random forgeries, but all these forgeries are skilled forgeries produced by volunteers who had practiced forging the signatures.

2. Feature values of many forged signatures were far away from the reference signature means for the following features: total time, the acceleration sign changes, the pen-up time, path length, and the x-acceleration zero count. For the other attributes, i.e. the two velocity sign changes, the y-acceleration zero count and the segment count, many more forged signatures had feature values closer to the corresponding reference signature mean.

5.3 Capturing Shape Dynamically

Most static HSV, of course, is based on capturing the shape, or some aspects of the shape, of the signature.

Nagel and Rosenfeld (1977) study static shape capturing with a cheque-clearing application in mind. Signatures were input to the computer by scanning, digitizing and processing the images of real bank cheques.
A number of features were then extracted: the projection of the signature image onto the coordinate axes, obtained by summing the rows and columns of the image; high and low letters, identified since spellings were assumed to be known (this is not always possible!); and feature values measured for the signature as a whole and for the identified letters.

The authors discuss a model of handwriting and conclude that there are three types of strokes, viz. long strokes going either upwards or downwards, e.g. in the letters b, d, f, h, k, l, t and f, g, j, p, q, y, z; very long strokes like those in f and j; and finally short strokes in most letters other than f, l and t. The following features were used in their experiments: the ratio of signature width to short-stroke height, the ratio of signature width to long-stroke height, the ratios of the height of a long stroke to the height of the immediately preceding short stroke and, finally, the slope features for appropriate long letters. The study clearly assumes that a signature is a name written down in English rather than just any signature.

Using only a small number of signatures, it was found that the FAR was zero and the FRR was 8% to 12%. Further investigation of this technique is needed, though. A major limitation of the proposed approach is that the analysis assumes that a signature is a name written down, which is often true for people who have grown up in the USA. The technique is unlikely to be effective for European signatures and signatures written in languages other than English.

Ammar, Yoshida and Fukumara (1986) discuss HSV and note that when a forgery looks very similar to the genuine signature, most static verification techniques are ineffective. In such situations one must capture the signature as a gray image rather than a binary image and consider features like the following:

- the vertical position that corresponds to the peak frequency of the vertical projection of the binary image;
- the vertical position that corresponds to the peak frequency of the vertical projection of the high-pressure image (an image that only contains pressure regions above some threshold value);
- the ratio of the high-pressure regions to the signature area;
- the threshold value separating the high-pressure regions from the rest of the image;
- the maximum gray level in the signature;
- the difference between the maximum and minimum gray levels in the signature;
- the signature area in number of pixels.

Using 200 genuine signatures from 20 individuals and 200 forgeries from 10 forgers, and digitizing each signature into a 256 x 1024 array of 256 gray levels, the procedure first removed the noise by averaging and thresholding.
The verification procedure used the "leave-one-out" method, since the number of genuine signatures was small, using the remaining signatures to build the reference signatures. The authors claim that the proposed technique resulted in 9% FRR and 7.5% FAR.
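Three of the simpler gray-level features listed above can be sketched as follows; the fixed pressure threshold is a hypothetical stand-in for the adaptively chosen threshold the authors describe, and the image layout is illustrative.

```python
# Sketch of three gray-level features (after Ammar, Yoshida and Fukumara, 1986).
# ASSUMPTION: a fixed pressure_threshold; the original derives it from the image.
def gray_level_features(image, background=0, pressure_threshold=200):
    """image: 2D list of gray levels (0-255); non-background pixels are ink."""
    ink = [g for row in image for g in row if g != background]
    high_pressure = [g for g in ink if g >= pressure_threshold]
    return {
        "signature_area": len(ink),            # number of ink pixels
        "max_gray": max(ink),                  # maximum gray level
        "high_pressure_ratio": len(high_pressure) / len(ink),
    }
```

In the authors' scheme such per-image values are compared against the corresponding statistics of the reference signatures; the sketch only shows the feature extraction step.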

Brault and Plamondon (1993b) describe an algorithm for segmenting handwritten signatures, with the aim of rebuilding the signature from a minimal number of points and producing segments as close as possible to the psychomotor reality of their execution. The work is based on their earlier work, in which it is claimed that handwriting is made of curvilinear and angular strokes that are partially superimposed. For each point of the curve, the algorithm iteratively tries to construct a vertex centred at that point using several neighbouring points. A number of angles between the lines joining the neighbouring points are calculated, and those that have only a small angle are considered part of the same vertex. The algorithm was applied to a set of signatures and the authors claim that the results were generally in agreement with human perception.

Sabourin, Plamondon and Beumier (1994) present a technique involving both a local matching method and a global interpretation of the scene. The local comparison method involves segmenting the signature image into a collection of arbitrarily shaped primitives and then template matching. The system was evaluated using a database of 800 handwritten signature images from 20 writers (40 signatures each) written on white sheet paper. Of the 800 signatures, 248 were used as a training database for the system. The details of this prototyping are not very clear, since 28 signatures from eight signers were used (it is not clear how these were selected) and four signatures from seven writers were also used (again, the selection criterion is unknown). This adds up to 252 (224 + 28), but the authors claim they had 248 signatures. The training signatures, it appears, are then used in deciding individual thresholds, and the performance evaluation is then carried out using 20 signatures as reference signatures for each writer. The remaining 20 signatures are used as test signatures, although some would have been used in the training database.
Very impressive results, a FRR of 1.5% and a FAR of 1.37%, are obtained. Unfortunately such results do not really provide much information about the real performance of the technique.

Qi and Hunt (1994) discuss static HSV based on extracting global and local features of a signature image. The following global features were used:

- height and width of the signature image,
- width of the signature image with blank spaces between horizontal elements removed,
- slant angle of the signature,
- vertical centre of gravity of black pixels,
- maximum horizontal projection,
- area of black pixels, and
- baseline shift of the signature image.

The following local (or grid) features were included. These capture the structural information of image elements, for example:

- angle of a corner,
- curvature of an arc,
- intersections between line strokes, and
- number of pixels within each grid.

To test the technique the authors collected 450 signatures from 25 people: 15 were selected randomly to provide 20 genuine signatures each over the course of one month; 5 were selected randomly to provide one simple forgery for each of the 15 subjects, given only the printed name of the person; and 5 provided skilled forgeries, one for each of the 15 subjects, given samples of genuine signatures and allowed to practice. The signatures were scanned into a computer. This was followed by considerable preprocessing. False rejection rates varied from 3% to 11.3%, while the FAR varied from 0% to 15%.

Dimauro, Impedovo and Pirlo (1994) present the idea that a signature consists of a sequence of fundamental components delimited by abrupt interruptions, which the authors claim occur in positions that are constant in the signature of each individual, although the number of components in one signature may differ from that in another signature of the same subject. These components are called fundamental strokes, and the technique presented carries out spectral analysis of the strokes. The authors describe a technique in which two tables are built from the reference signatures, giving the components found in each reference signature and their sequence. Since the number of components in different signatures of the same subject can differ, a clustering technique is used to find which components are present in a signature, and a sequence of these components is built. Verification involves finding the components of the test signature by clustering and then checking that the components appear in the sequence derived from the reference signatures.
If the sequence does not fit, the test signature is rejected; otherwise the components are compared with those of the reference signatures. The technique is tested on signatures of twenty persons who provided 50 reference signatures and 40 test signatures each (90 signatures by each person), and on forgeries from ten people who watched all the genuine signatures being signed and provided four forgeries for each of the twenty genuine signers (thus each forger provided 80 forgeries). An overall FRR of 1.7% and FAR of 1.2% was obtained. The paper notes that Fourier analysis of the components was also used, but details are not provided. The major problem with the evaluation is the use of 50 reference signatures, which is quite unrealistic.

Gupta and Joyce (1997b) describe a technique which uses dynamics to capture signature shape. In its simplest form the technique may be explained using the following symbols:

A for a peak of the x profile
B for a valley of the x profile
C for a peak of the y profile
D for a valley of the y profile

A signature may now be processed to identify all the peaks and valleys in the x and y profiles (that is, the x and y variation with time during the signing of the signature), and each of the profiles may be represented by a string of symbols representing the peaks and valleys that are encountered as the profiles are scanned. The x profile is therefore represented by a string like ABABABAB... and the y profile by a string like CDCDCDCD..., with each peak and valley in the two profiles having a time associated with it (the time is not shown in this very simple representation). The peak and valley times may then be used to interleave the x and y profile representations, so that the signature shape is represented by a string like ACBADBCAD, which simply indicates that the signature's x and y profiles together had only 9 peaks and valleys (typically there might be 50 or more): an x peak followed by a y peak, then an x valley, and so on.

Another way to look at this representation is to view it as a description of the pen motion, i.e. from the initial position (x peak) the pen first moved north-west to reach a y peak, then south-west to reach an x valley, and then turned around and moved south-east to reach an x peak and a y valley, and so on. ACBADB therefore represents a pen motion somewhat similar to the letter S written from the top to the bottom. The representation would be reversed if the letter S were written from the bottom to the top. The representation does not capture the curvature or the size of the curves that make up the letter S, and therefore ACBADB represents many curves that look somewhat similar; the representation thus provides considerable flexibility and tolerates considerable variation in the way the pen moves.
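The construction of the interleaved peak-and-valley string can be sketched as follows; the profiles are hypothetical (time, value) samples and no smoothing is applied, unlike in a practical system.

```python
# Sketch of the peak/valley string representation (after Gupta and Joyce, 1997b).
# Profiles are hypothetical lists of (time, value) samples.
def extrema(profile, peak_sym, valley_sym):
    """Return (time, symbol) for each local peak and valley of a profile."""
    out = []
    for i in range(1, len(profile) - 1):
        t, v = profile[i]
        if v > profile[i - 1][1] and v > profile[i + 1][1]:
            out.append((t, peak_sym))
        elif v < profile[i - 1][1] and v < profile[i + 1][1]:
            out.append((t, valley_sym))
    return out

def shape_string(x_profile, y_profile):
    """Interleave x extrema (A/B) and y extrema (C/D) in time order."""
    events = extrema(x_profile, "A", "B") + extrema(y_profile, "C", "D")
    return "".join(sym for _, sym in sorted(events))
```

Two signatures can then be compared by computing a string distance between their shape strings, which is the comparison step described next.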

The representation thus captures the shape as well as the direction of pen movement during signature writing. Given the flexible representation, similar representations should always be obtained for the signatures of a person, in spite of minor variations in the genuine signatures.

To compare two signature shapes, the symbolic representation of each signature is found, the two strings are compared, and a distance between the strings is computed. A number of sample signatures are used as reference; the test signature is compared to each of them, and either the mean distance or the smallest distance is found. The basis for using the smallest distance is that the sample signatures provide a collection showing the habitual variations in a person's signature, and the test signature should be compared with the reference signature closest to it. The distance so computed is then compared with the threshold and, if smaller, the test signature is accepted.

Performance evaluation of the technique shows that it captures the shape of the signature well, resulting in a small FRR but a high skilled-forgery FAR. This technique may be combined with that of Gupta and Joyce (1997a) to build a two-stage scheme, which results in a total error rate of about 15%; this is still a little high. Further work is continuing to improve the technique.

6 Conclusion

As was touched on earlier, the state of the art in HSV makes it impossible to draw definitive conclusions about which techniques are the best, since:

1. Performance of a HSV system that uses different features for different individuals is better than that of a system that uses the same features for all.

2. Performance of a HSV system that uses different threshold values for different individuals is better than that of a system that uses the same threshold value for all.

3. Performance of a HSV system that uses more signatures for building the reference signature is better than that of a system that uses a smaller number of signatures.

4. FRR of a HSV system that uses more than one test signature to make a judgement about whether the subject is genuine or not is better than that of a system that uses only one signature.

5. Performance of a HSV system that uses all the genuine signatures, including those that are used in performance evaluation, in building the reference signature is better than that of a system that does not use any test signatures in building the reference signature.

6. Performance of a HSV system that uses the genuine signatures, as well as some or all of the forgeries that are used in performance evaluation, in building the reference signature or in deciding which features to choose and/or what threshold to select, is better than that of a system that does not use any test signatures in building the reference signature.

7. Performance of a HSV system that is tested on only a small database of test signatures from a small number of subjects is likely to be better than that of a system that uses a larger database with signatures from a larger number of subjects.

8. Performance of a HSV system that is tested on a database of test signatures that was screened to eliminate subjects with problem signatures is likely to be better than that of a system that has not carried out any such screening.

The survey seems to indicate that any technique using statistical features is unlikely to provide a total error rate (FAR + FRR) of less than 10% if a reasonably large signature database is used. Most research work claiming much better results has been found to have weaknesses in its performance evaluation.

Dynamic HSV is an active research area that is leading to new commercial products. The best techniques are likely to be based on using a combination of statistical features as well as the shape of the signatures.

References

[1] M. Ammar, Y. Yoshida and T. Fukumara (1986). A New Effective Approach for Off-line Verification of Signatures by Using Pressure Features. IEEE Trans on Systems, Man and Cybernetics, Vol SMC-16, No 3, pp 39-47.

[2] E. L. Asbo and H. Tichenor (1987). Method and Apparatus for Dynamic Signature Verification. US Patent Number 4,646,351, Feb 24.

[3] Z. Bahri and B. V. K. V. Kumar (1988). Generalized Synthetic Discriminant Functions. Journal Opt. Soc. Am. A, Vol 5, No 4, pp 562-571.

[4] L. Bechet (1990). Method of Comparing a Handwriting with a Reference Writing. US Patent 4,901,358.

[5] J. Brault and R. Plamondon (1984). Histogram Classifier for Characterization of Handwritten Signature Dynamics. Proc of 7th International Conf on Pattern Recognition, Montreal, pp 619-622.

[6] J. Brault and R. Plamondon (1989). How to Detect Problematic Signers for Automatic Signature Verification. Int Carnahan Conf on Security Technology, pp 127-132.

[7] J. Brault and R. Plamondon (1993a). A Complexity Measure of Handwritten Curves: Modeling of Dynamic Signature Forgery. IEEE Trans on Systems, Man, and Cybernetics, Vol 23, No 2, pp 400-413.

[8] J. Brault and R. Plamondon (1993b). Segmenting Handwritten Signatures at their Perceptually Important Points. IEEE Trans on Pattern Analysis and Machine Intelligence, Vol 15, No 9, pp 953-957.

[9] P. de Bruyne and R. Forre (1985). Signature Verification using Holistic Measures. Computers & Security, Vol 4, pp 309-315.

[10] H. Chang, J. Wang and H. Suen (1993). Dynamic Handwritten Chinese Signature Verification. Proc Second Int Conf on Document Analysis and Recognition, pp 258-261.

[11] Computing Canada (1995). Vol 21, No 16, p 44.

[12] H. D. Crane and J. S. Ostrem (1983). Automatic Signature Verification using a Three-axis Force-Sensitive Pen. IEEE Trans on Systems, Man and Cybernetics, Vol SMC-13, No 3, pp 329-337.

[13] A. M. Darwish and G. A. Auda (1994). A New Composite Feature Vector for Arabic Handwritten Signature Verification. Proc IEEE Int Conf on Acoustics, Vol 2, pp 613-666.

[14] G. Dimauro, S. Impedovo and G. Pirlo (1994). Component-Oriented Algorithms for Signature Verification. Int Journal of Patt Rec and Art Int, Vol 8, No 3, pp 771-793.

[15] M. C. Fairhurst and P. Brittan (1994). An Evaluation of Parallel Strategies for Feature Vector Construction in Automatic Signature Verification Systems. Int Journal of Patt Rec and Art Int, Vol 8, No 3, pp 661-678.

[16] R. F. Farag and Y. T. Chien (1972). On-line Signature Verification. Proc Int Conf on Online Interactive Computing, Brunel University, London, p 403.

[17] P. M. Fitts (1954). The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement. J Exp Psych, Vol 47, pp 381-391.

[18] G. K. Gupta and R. C. Joyce (1997a). A Study of Some Pen Motion Features in Dynamic Handwritten Signature Verification. Technical Report, Computer Science Dept, James Cook University of North Queensland.

[19] G. K. Gupta and R. C. Joyce (1997b). A Study of Shape in Dynamic Handwritten Signature Verification. Technical Report, Computer Science Dept, James Cook University of North Queensland.

[20] W. Harrison (1958). Suspect Documents. Praeger Publishers, New York.

[21] T. Hastie, E. Kishon, M. Clark and J. Fan (1991). A Model for Signature Verification. Proc IEEE Int Conf on Systems, Man, and Cybernetics, Charlottesville, pp 191-196.

[22] N. M. Herbst and C. N. Liu (1977). Automatic Signature Verification Based on Accelerometry. IBM J Res Dev, pp 245-253.

[23] O. Hilton (1956). Scientific Examination of Documents. Callaghan and Co, Chicago.

[24] O. Hilton (1992). Signatures - Review and a New View. J of Forensic Sciences, JFSCA, Vol 37, No 1, pp 125-129.

[25] J. D. Jobson (1992). Applied Multivariate Data Analysis, Volume II: Categorical and Multivariate Methods. Springer-Verlag.

[26] C. F. Lam and D. Kamins (1989). Signature Verification Through Spectral Analysis. Pattern Recognition, Vol 22, No 1, pp 39-44.

[27] F. Lamarche and R. Plamondon (1984). Segmentation and Feature Extraction of Handwritten Signature Patterns. Proc of 7th International Conf on Pattern Recognition, Vol 2, Montreal, pp 756-759.

[28] F. Leclerc and R. Plamondon (1994). Automatic Signature Verification: The State of the Art - 1989-1993. Int Journal of Patt Rec and Art Int, Vol 8, No 3, pp 643-660.

[29] L. L. Lee (1992). On-line Systems for Human Signature Verification. PhD Thesis, Cornell University.

[30] J. S. Lew (1983). An Improved Regional Correlation Algorithm for Signature Verification Which Permits Small Speed Changes Between Handwriting Segments. IBM J Res and Dev, Vol 27, No 2, pp 181-185.

[31] C. N. Liu, N. M. Herbst and N. J. Anthony (1979). Automatic Signature Verification: System Description and Field Test Results. IEEE Trans on Systems, Man and Cybernetics, Vol SMC-9, No 1, pp 35-38.

[32] G. Lorette (1984). On-line Handwritten Signature Recognition based on Data Analysis and Clustering. Proc of 7th International Conf on Pattern Recognition, Montreal, pp 1284-1287.

[33] A. J. Mauceri (1965). Feasibility Studies of Personal Identification by Signature Verification. Report No SID 65 24 RADC TR 65 33, Space and Information System Division, North American Aviation Co., Anaheim, Calif.

[34] B. Miller (1994). Vital Signs of Identity. IEEE Spectrum, pp 22-30.

[35] D. A. Mighell, T. S. Wilkinson and J. W. Goodman (1989). Backpropagation and its Application to Handwritten Signature Verification. Adv in Neural Inf Proc Systems 1, D. S. Touretzky (ed), Morgan Kaufman Pub, pp 340-347.

[36] N. Mohankrishnan, M. J. Paulik and M. Khalil (1993). On-Line Signature Verification using a Nonstationary Autoregressive Model Representation. 1993 IEEE Int Sym on Circuits and Systems, pp 2303-2306.

[37] R. N. Nagel and A. Rosenfeld (1977). Computer Detection of Freehand Forgeries. IEEE Trans on Computers, Vol C-26, No 9, pp 895-905.

[38] W. Nelson, W. Turin and T. Hastie (1994). Statistical Methods for On-line Signature Verification. Int Journal of Patt Rec and Art Int, Vol 8, No 3, pp 749-770.

[39] W. Nelson and E. Kishon (1991). Use of Dynamic Features for Signature Verification. Proc IEEE Int Conf on Systems, Man, and Cybernetics, Charlottesville, pp 201-205.

[40] A. S. Osborn (1929). Questioned Documents. Boyd Printing Co., Albany, NY, 2nd Edition.

[41] M. Parizeau and R. Plamondon (1990). A Comparative Analysis of Regional Correlation, Dynamic Time Warping, and Skeletal Tree Matching for Signature Verification. IEEE Trans on Pattern Analysis and Machine Intelligence, Vol 12, No 7, pp 710-717.

[42] J. R. Parks, D. R. Carr and P. F. Fox (1985). Apparatus for Signature Verification. US Patent Number 4,495,644.

[43] D. A. Pender (1991). Neural Networks and Handwritten Signature Verification. PhD Thesis, Department of Electrical Engineering, Stanford University.

[44] R. I. Phelps (1982). A Holistic Approach to Signature Verification. Proc of the Sixth International Conf on Pattern Recognition, p 1187, IEEE.

[45] R. Plamondon and G. Lorette (1989). Automatic Signature Verification and Writer Identification - The State of the Art. Pattern Recognition, Vol 22, pp 107-131.

[46] R. Plamondon (1994). The Design of an On-line Signature Verification System: From Theory to Practice. Int Journal of Patt Rec and Art Int, Vol 8, No 3, Singapore.

[47] R. Plamondon and M. Parizeau (1988). Signature Verification from Position, Velocity and Acceleration Signals: A Comparative Study. Proc of 9th International Conf on Pattern Recognition, Vol 1, Rome, Italy, pp 260-265.

[48] R. Plamondon (1993). Looking at Handwriting Generation from a Velocity Control Perspective. Acta Psychologica, Vol 82, pp 89-101.

[49] R. Plamondon, A. M. Alimi, P. Yergeau and F. Leclerc (1993). Modelling Velocity Profiles of Rapid Movements: A Comparative Study. Biological Cybernetics, Vol 69, pp 119-128.

[50] Y. Qi and B. R. Hunt (1994). Signature Verification using Global and Grid Features. Pattern Recognition, Vol 27, No 12, pp 1621-1629.

[51] R. Sabourin, R. Plamondon and L. Beumier (1994). Structural Interpretation of Handwritten Signature Images. Int Journal of Patt Rec and Art Int, Vol 8, No 3, pp 709-748.

[52] R. L. Sherman (1992). Biometric Futures. Computers & Security, Vol 11, pp 128-133.

[53] J. Sternberg (1975). Automated Signature Verification using Handwriting Pressure. 1975 WESCON Technical Papers, No 31/4, Los Angeles.

[54] T. S. Wilkinson (1990). Novel Techniques for Handwritten Signature Verification. PhD Thesis, Department of Electrical Engineering, Stanford University.

[55] M. Yasuhara and M. Oka (1977). Signature Verification Experiment Based on Nonlinear Time Alignment: A Feasibility Study. IEEE Trans on Systems, Man and Cybernetics, Vol SMC-7, No 3, pp 212-216.

[56] K. P. Zimmermann and M. J. Varady (1985). Handwriter Identification from One-bit Quantized Pressure Patterns. Pattern Recognition, Vol 18, No 1, pp 63-72.
