
    Department of Electronics and Communication Engineering

    IIT Roorkee.

    Seminar report on

    Application of image processing in

    Biometric Verification

    Submitted by,

    Pradyumna Paliwal

    Enrolment number: 10213012

    Course: IDD ECW (5th year)

    Supervisor:

    Dr. Debashis Ghosh,

    Associate Professor,

    Department of Electronics & Communication Engineering, IIT Roorkee


    Abstract

    Biometrics is a pattern recognition problem: the automatic recognition or
    verification of an individual's identity based on a feature vector derived from his
    or her physiological and/or behavioural characteristics. Biometric systems should
    provide reliable personal recognition/verification that cannot be fooled by
    fraudulent impostors. The handwritten signature is a common biometric used to
    authenticate financial transactions, including cheques and credit card validation,
    or to sanction the contents of documents such as certificates and contracts.
    Because a biometric system relies on what you are or what you do, rather than on
    what you possess (a Personal Identification Number or a password), to establish the
    identity of the individual, it is much more robust than other traditional
    recognition/verification schemes. This report presents a brief overview of
    biometric methods, with a major focus on signature verification.


    Contents

    1. Introduction

    2. Biometric Systems

    3. Signature Verification System

    4. Features in signature verification

    5. Signature Feature Matching

    6. Conclusion

    7. References


    1. Introduction

    With the proliferation of information and communication technologies, human-
    machine interaction has become routine in modern day-to-day life. We interact with
    machines at the workplace, at leisure, during travel, at home, and elsewhere. Our
    day-to-day transactions are conducted through various interconnected electronic
    devices, and many of them require authenticated access for security reasons. To
    protect these transactions from fraudulent practices, it is important that the
    machine we interact with to complete a transaction can establish the identity of
    the individual.

    Traditional approaches rely on what you possess, such as a Personal Identification
    Number or an ID card, but such approaches are not sufficiently reliable to satisfy
    the security requirements of electronic transactions because they cannot
    differentiate between a genuine individual and an impostor who fraudulently
    acquires the access privilege. Biometrics, which establishes the identity of an
    individual based on his or her physiological or behavioural characteristics, relies
    on what you are or what you do to make personal identification and therefore
    inherently has the capability to differentiate between a genuine individual and a
    fraudulent impostor [1].

    A Biometric system depends on pattern recognition/classification techniques to

    establish the identity of an individual. Pattern Recognition techniques assign a

    physical object or an event to one of the pre-specified categories. The pattern

    recognition problem is difficult because various sources of noise distort the patterns,

    and often within a class there is substantial variability in patterns.

    Every pattern-recognition-based biometric system has two major objectives:

    i. To select appropriate features from the raw biometric data. Feature extraction
    can be stated as the problem of extracting from the raw data the information most
    relevant for classification, in the sense of minimizing the within-class pattern
    variability while enhancing the between-class pattern variability [2].

    ii. To develop a decision-making approach that uses the extracted feature vector to
    classify the data accurately. Since perfect classification performance is often
    impossible, a more general task is to determine the probability of each of the
    possible categories.

    For a biometric system to be practical and usable in commercial applications, it
    should have high recognition accuracy and speed, require minimal resources, be
    harmless to users, be accepted by the intended population, and be sufficiently
    robust to the various fraudulent methods and attacks on the system.


    2. Biometric Systems

    A biometric system which is essentially a pattern recognition system uses the

    physiological or behavioural characteristics of an individual to recognize the person.

    A feature vector is extracted from the biometric data collected from the individual

    being enrolled in the system and is stored in the system database as a template for

    future queries. Depending on the application context, a biometric system may operate

    either in verification mode or identification mode. While identification involves

    comparing the acquired biometric information against templates corresponding to all

    users in the database, verification involves comparison with only those templates

    corresponding to the claimed identity. This implies that identification and verification

    are two problems that should be dealt with separately.

    In the verification problem we consider two categories or classes w1 and w2, where w1

    indicates that the claim is true (a genuine user) and w2 indicates that the claim is false

    (an impostor). Each pattern is described by a feature vector X ∈ ℝᵈ. An input feature vector XQ is extracted from biometric data of the individual being tested and the

    individual specifies the claimed identity I. Now the verification problem is to

    determine whether (I, XQ) belongs to class w1 or w2. Typically, XQ is matched
    against XI, the biometric template corresponding to user I, to determine its
    category. Thus

    (I, XQ) ∈ w1 if S(XQ, XI) ≥ t, and (I, XQ) ∈ w2 otherwise,

    where S is the function that measures the similarity between feature vectors XQ and

    XI, and t is a predefined threshold. The value S (XQ, XI) is termed as a similarity or

    matching score between the biometric measurements of the user and the claimed

    identity. Therefore, every claimed identity is classified into w1 or w2 based on the

    variables XQ, I, XI and t and the function S. Note that biometric measurements (e.g.,

    signatures) of the same individual taken at different times are almost never identical.

    This is the reason for introducing the threshold t.

    Some commonly used similarity metrics in pattern recognition are correlation,

    Euclidean distance, Mahalanobis distance, Hausdorff metric etc.
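
    As an illustration of the verification rule above, the following minimal Python
    sketch uses the negative Euclidean distance as the similarity function S; the
    feature vectors, the threshold value and the function names are placeholders, not
    part of any particular system described in this report.

    import numpy as np

    def similarity(x_q: np.ndarray, x_i: np.ndarray) -> float:
        # S(XQ, XI): negative Euclidean distance, so larger means more similar.
        return -float(np.linalg.norm(x_q - x_i))

    def verify(x_q: np.ndarray, x_template: np.ndarray, t: float) -> str:
        # Decide w1 (genuine claim) if S(XQ, XI) >= t, otherwise w2 (impostor).
        return "w1" if similarity(x_q, x_template) >= t else "w2"

    # Toy usage with made-up 4-dimensional feature vectors and threshold.
    template = np.array([0.9, 1.2, 0.4, 2.0])
    query = np.array([1.0, 1.1, 0.5, 2.1])
    print(verify(query, template, t=-0.5))   # prints 'w1' because the vectors are close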

    In the identification problem, there are N users enrolled in the system. We consider

    N+1 categories Ck, k ∈ {1, 2, …, N, N+1}. C1, C2, …, CN are the identities enrolled
    in the system and CN+1 represents the reject case. Given an input feature vector
    XQ, we determine the class Ck, k ∈ {1, 2, …, N, N+1}. We measure the similarity
    metric between XQ and the feature vector corresponding to each person enrolled in
    the system. If none of the measured similarity metrics is greater than the
    threshold, the questioned individual is rejected; if, on the other hand, several
    similarity metrics are greater than the threshold, the one with the greatest
    similarity metric is chosen as the class of the questioned individual. Hence


    XQ ∈ Ck if max_k {S(XQ, XCk)} ≥ t, k = 1, 2, …, N, and XQ ∈ CN+1 otherwise,

    where XCk is the biometric template corresponding to identity Ck, and t is a
    predefined threshold.
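
    A corresponding sketch of the identification rule, again with the negative
    Euclidean distance standing in for S and a hypothetical template database; the
    names and values are illustrative only.

    import numpy as np

    def identify(x_q: np.ndarray, templates: dict[str, np.ndarray], t: float) -> str:
        # Return the enrolled identity with the highest similarity if that similarity
        # reaches the threshold t, otherwise the reject class CN+1.
        scores = {k: -float(np.linalg.norm(x_q - x_k)) for k, x_k in templates.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= t else "CN+1 (rejected)"

    # Toy database of N = 3 enrolled users with made-up feature vectors.
    db = {"user1": np.array([0.9, 1.2]),
          "user2": np.array([2.0, 0.1]),
          "user3": np.array([1.5, 1.5])}
    print(identify(np.array([0.95, 1.15]), db, t=-0.3))   # prints 'user1'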

    Two samples of the same biometric characteristic from the same person (e.g., two

    signatures of the same person) are not exactly the same, owing to changes in the
    user's physiological or behavioural characteristics, ambient conditions (e.g.,
    temperature and humidity), and the user's interaction with the sensor. Therefore,
    the decision problem has to be posed in probabilistic terms. We assume that there
    is some a priori

    probability P (w1) that the individual belongs to class w1 and some a priori probability

    P (w2) that the individual belongs to class w2. If w1 and w2 are the only classes then

    the sum of P (w1) and P (w2) is one. These prior probabilities reflect our prior

    knowledge of how likely we are to encounter individuals belonging to each class.
    Suppose we have to make a decision about the class of the individual, assuming the
    same cost or consequence of making an error for both classes; then the only
    information we are allowed to use is the value of the prior probabilities. The
    decision rule in this case

    will be: Decide w1 if P (w1) > P (w2) and w2 if P (w2) > P (w1).

    The response of a biometric matching system is a matching score (typically a single

    number) that quantifies the similarity between the input (XQ) and the template (XI)

    representations. Suppose that we know both the prior probabilities P (wj) and the

    conditional probability densities p (s | wj) for j=1, 2. Now the joint probability density

    of finding a pattern that is in category wj and has the matching score s can be
    written as p(wj, s) = P(wj | s) p(s) = p(s | wj) P(wj). Rearranging these terms
    leads to Bayes formula:

    P(wj | s) = p(s | wj) P(wj) / p(s), where p(s) = p(s | w1) P(w1) + p(s | w2) P(w2).

    Bayes formula shows that by observing the value of s we can convert the prior
    probability P(wj) into the a posteriori probability P(wj | s), the probability that
    the individual belongs to class wj given that the measured similarity is s.

    For a given s we can minimize the probability of error by deciding w1 if
    P(w1 | s) > P(w2 | s) and w2 otherwise. This is called the Bayes decision rule [3].
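
    The following sketch gives a small numerical illustration of Bayes formula and the
    decision rule, assuming (purely for the example) Gaussian class-conditional score
    densities for the genuine and impostor classes and equal priors.

    import numpy as np
    from scipy.stats import norm

    p_w1, p_w2 = 0.5, 0.5                      # assumed priors P(w1), P(w2)
    genuine = norm(loc=0.8, scale=0.10)        # assumed p(s | w1), genuine score model
    impostor = norm(loc=0.3, scale=0.15)       # assumed p(s | w2), impostor score model

    def posterior_w1(s: float) -> float:
        # Bayes formula: P(w1 | s) = p(s | w1) P(w1) / p(s).
        num = genuine.pdf(s) * p_w1
        return num / (num + impostor.pdf(s) * p_w2)

    s = 0.65
    decision = "w1" if posterior_w1(s) > 0.5 else "w2"   # Bayes decision rule
    print(f"P(w1 | s={s}) = {posterior_w1(s):.3f} -> decide {decision}")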

    The Biometric system decision is regulated by the threshold t. The threshold is chosen

    such that it satisfies the Bayes Decision Rule. The pairs of biometric samples

    generating scores higher than or equal to t are inferred as mate pairs (i.e., belonging to

    the same person); pairs of biometric samples generating scores lower than t are

    inferred as non-mate pairs (i.e., belonging to different persons). The distribution of

    scores generated from pairs of samples from the same person is called the genuine

    distribution and from different persons is called the impostor distribution.


    The probabilities associated with the four possible outcomes of the system are:

    i. P(s ≥ t | XQ ∈ w1): a hit (correct acceptance), the probability that the
    similarity measure is greater than or equal to the threshold and the individual is
    a genuine user.

    ii. P(s ≥ t | XQ ∈ w2): a false acceptance (false match), the probability that the
    similarity measure is greater than or equal to the threshold although the
    individual is not a genuine user.

    iii. P(s < t | XQ ∈ w1): a false rejection (false non-match), the probability that
    the similarity measure is below the threshold although the individual is a genuine
    user.

    iv. P(s < t | XQ ∈ w2): a correct rejection, the probability that the similarity
    measure is below the threshold and the individual is not a genuine user.


    The decision rule is as follows. If the matching score is less than the system threshold,

    then decide D0 (the claimed identity is rejected); else decide D1 (the claimed
    identity is accepted). This terminology is borrowed from communication

    theory, where the goal is to detect a message in the presence of noise. H0 is the

    hypothesis that the received signal is noise alone; and H1 is the hypothesis that the

    received signal is message plus the noise. Such a hypothesis testing formulation

    inherently contains two types of errors [1].

    Type I: false match (D1 is decided when H0 is true);

    Type II: false non-match (D0 is decided when H1 is true).

    FMR is the probability of type-I error (also called significance level in hypothesis

    testing) and FNMR is the probability of a type-II error:

    FMR = P (D1|H0)

    FNMR = P (D0|H1)

    Figure 1. Biometric system error rates. (a) FMR and FNMR for a given threshold t are displayed over the genuine and impostor score distributions; FMR is the percentage of non-match pairs whose matching scores are greater than or equal to t, and FNMR is the percentage of mate pairs whose matching scores are less than t. (b) Choosing different operating points results in different FMR and FNMR. The curve relating FMR to FNMR at different thresholds is referred to as the receiver operating characteristic (ROC). Typical operating points of different biometric applications are displayed on an ROC curve. Lack of understanding of the error rates is a primary source of confusion in assessing system accuracy in vendor/user communities alike [1].
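
    The error rates defined above can be estimated directly from sets of genuine and
    impostor matching scores. The sketch below uses synthetic score samples; sweeping
    the threshold traces out the ROC curve mentioned in Figure 1(b).

    import numpy as np

    rng = np.random.default_rng(0)
    genuine_scores = rng.normal(0.8, 0.10, 1000)    # synthetic scores for mate pairs
    impostor_scores = rng.normal(0.3, 0.15, 1000)   # synthetic scores for non-mate pairs

    def error_rates(t: float) -> tuple[float, float]:
        # FMR: fraction of non-mate pairs with score >= t;
        # FNMR: fraction of mate pairs with score < t.
        fmr = float(np.mean(impostor_scores >= t))
        fnmr = float(np.mean(genuine_scores < t))
        return fmr, fnmr

    for t in (0.4, 0.5, 0.6):
        fmr, fnmr = error_rates(t)
        print(f"t={t:.1f}  FMR={fmr:.3f}  FNMR={fnmr:.3f}")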


    3. Signature verification system

    Handwritten signature verification has been extensively studied over the past few years.

    Handwritten signatures are very commonly used to authenticate financial transactions

    including cheques, credit card validation etc. or to sanction the contents of a document

    like certificates, contracts etc. Signature verification is normally done by visual

    inspection. A person compares the appearance of the signature presented to him with a

    template signature stored in his database and accepts the signature if he finds them to

    be sufficiently similar. Verifying signatures manually in this way requires a lot
    of time and effort, and as a result, in the majority of situations no verification
    is done at all. If computers can be made intelligent enough to understand human
    handwriting, man-computer interfaces will become more ergonomic and attractive, and
    this verification problem can be automated.

    The Handwritten signature is a behavioural biometric attribute. Using Signature as a

    biometric for authentication has several advantages compared with other biometrics

    such as fingerprint, face, and voice. These other biometrics require relatively
    expensive hardware to capture the image, whereas capturing a signature does not.
    An important advantage of the signature as a biometric is that

    it is already socially accepted and has been used in civilian applications for decades

    while other methods like fingerprints still have stigma of being associated with

    criminal investigation. However, although it is not known whether every individual
    has a unique signature, the signature is still generally accepted. Signature
    verification can be applied only when a person is conscious and knowingly provides
    his signature, while other biometrics such as fingerprints can be acquired
    fraudulently. In spite of all these advantages, signature verification is a very
    difficult pattern recognition problem. The variation among the signatures of the
    same person can be very large; this is called intra-class variation. In addition, a
    person's signature often changes considerably during his or her lifetime. This is
    not the case with other biometrics, which do not exhibit much intra-class

    variation.

    On the basis of the data acquisition method there are two major methods of signature

    verification: the off-line method and the on-line method.

    3.1 On-line signature verification system

    On-line verification refers to the methods and techniques dealing with automatic verification of a signature as it is written using a digitizer or an instrumented stylus

    that captures information about the pen tip, generally its position, velocity, or

    acceleration as a function of time [4].

    On-line signatures are considered more robust and reliable as compared to off-line

    signature verification system because they take into account the dynamic features like

    pressure and velocity of pen tip in addition to spatial (derived from x, y coordinates)

    features. Dynamic features are more complex and difficult for an impostor to imitate.

    A typical on-line signature verification system is made up of the following
    modules: data acquisition, pre-processing, feature extraction, training and
    verification [5], as shown in Figure 2.

    Figure 2. A typical on-line signature verification system [5].

    3.2 Off-line signature verification system

    The off-line method uses an optical scanner to obtain signature written on paper.

    Signature verification in off-line systems is more difficult than in on-line systems

    because a lot of dynamic information is lost. Hence, on-line signature verification is

    generally more successful. Nevertheless, off-line systems have a significant advantage

    in that they do not require access to special processing devices when the signatures are

    produced. In fact, if the accuracy of verification could be promoted greatly, the off-

    line method has much more practical application areas than that of the on-line one.

    The process of off-line signature verification often consists of a learning stage and a

    testing stage. The purpose of the former is to create a reference file, and that of the

    latter is to compute the similarities between the testing and its corresponding reference

    signature to check whether the tested signature is genuine [6].


    4. Features in signature verification

    Selection of appropriate features that minimize the intra-class variation and maximize

    the inter-class variation is one of the most important parts of all pattern recognition

    systems. Different feature extraction methods fulfil this requirement of extracting the

    most pertinent information to a variable degree, depending on the pattern recognition

    problem and available data. A feature extraction method that is useful in a particular

    application might not be useful in other applications.

    The extracted features must be invariant to various distortions and variations expected

    in the signature of the same person. Also according to the curse of dimensionality,

    with a limited training set the dimensionality of features should be kept low in order to

    get good generalization performance. A rule of thumb is to use five to ten times as

    many training samples of each class as the dimensionality of feature vector [7].

    In order to recognize many variations of the signature of the same person, features that

    are invariant to certain transformations on the signature need to be used. Invariant

    features are those which have approximately the same values for samples of the

    signature of the same person that have been translated, scaled, rotated, stretched,

    skewed etc.

    For some features extraction methods, the signature can be reconstructed from the

    extracted feature. This property is called re-constructability [7]. Re-constructability

    ensures that complete information about the signature shape is present in the extracted

    feature.

    4.1 Geometric Features

    Geometric features based on polar coordinates have been proposed in [8]. In this

    method, first a few equidistant points are chosen on the envelope of the signature;
    then three sets of features are extracted from these points: the radius, the angle,
    and the number of black pixels of the signature strokes that the radius crosses
    when sweeping from one contour sample to the next, as shown in Figure 3. This
    method of feature extraction is rotation invariant. Moreover, if we use the ratio
    of adjacent samples on the contour rather than their absolute values, the method is
    also scale invariant.
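
    The sketch below gives a rough, simplified version of this kind of polar sampling:
    it works from the centroid of a binary signature image, samples equally spaced
    angles, and records a radius, the angle, and a stroke-pixel count per angular
    sector. It is only an approximation of the envelope-based procedure of [8], whose
    exact construction differs.

    import numpy as np

    def polar_features(img: np.ndarray, n_samples: int = 64) -> np.ndarray:
        # img: binary array, nonzero = signature stroke pixels.
        ys, xs = np.nonzero(img)
        cy, cx = ys.mean(), xs.mean()                       # centroid of the strokes
        angles = np.arctan2(ys - cy, xs - cx)
        radii_all = np.hypot(ys - cy, xs - cx)
        feats = []
        for theta in np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False):
            # Stroke pixels whose direction from the centroid falls in this sector.
            diff = np.angle(np.exp(1j * (angles - theta)))
            mask = np.abs(diff) < (np.pi / n_samples)
            if mask.any():
                feats.append((radii_all[mask].max(), theta, int(mask.sum())))
            else:
                feats.append((0.0, theta, 0))
        # Using ratios of adjacent radii instead of absolute values would add scale invariance.
        return np.asarray(feats)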

    4.2 Shape descriptors

    A signature can be considered as a symbol or pattern of shapes, therefore shape

    descriptors can be used to extract meaningful features from the signature pattern.

    There are many shape representations and retrieval methods. Shape retrieval involves

    three primary issues: shape representation, shape similarity measure and shape

    indexing [9]. The shape description methods can be classified into two categories:

    region based versus contour based.


    In region based shape description techniques, all the points within the shape are

    considered to obtain the shape description or representation. These techniques

    generally use moment descriptors to describe shape. Moments and functions of

    moments have been utilized in a number of applications to achieve invariant

    recognition of two dimensional image patterns [10]. The most common moments used

    are geometric moments, Zernike moments, Legendre moments etc. Region moment

    representation of the shape interprets a normalized grey level image function as a

    probability density of a 2D random variable. Moments capture the global information

    such as overall orientation, elongation etc. missing from many pure contour based

    representations. Just as with a Fourier series, the first few moment terms capture the

    more general shape properties while the higher terms capture the finer detail.
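
    As a concrete example of region-based moment descriptors, the sketch below computes
    geometric moments and the seven Hu invariant moments of a binary image with OpenCV.
    The image here is synthesized with putText purely so the example is self-contained;
    a real system would use a scanned, binarized signature.

    import cv2
    import numpy as np

    # Stand-in binary "signature" image (uint8, strokes = 255).
    img = np.zeros((200, 400), dtype=np.uint8)
    cv2.putText(img, "signature", (10, 120), cv2.FONT_HERSHEY_SCRIPT_SIMPLEX, 2, 255, 3)

    m = cv2.moments(img, binaryImage=True)    # raw geometric moments of the region
    hu = cv2.HuMoments(m).flatten()           # 7 Hu moments (translation/scale/rotation invariant)
    # Log scaling is commonly applied because the raw values span many orders of magnitude.
    hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    print(hu_log)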

    Contour based shape description techniques use only the boundary data of the shape.

    Contour based methods can be classified as: global descriptors, shape signatures, and

    spectral descriptors [9]. Global descriptors such as area, circularity, eccentricity, axis

    orientation can only be used to discriminate shapes with large dissimilarities;
    therefore they are suitable only as a filtering step. Shape signatures are
    essentially local representations of shape features. Common shape signatures are
    complex coordinates, curvature, and angular representations; they are very
    sensitive to noise and not robust, and they also require heavy computation during
    similarity measurement. Therefore

    these local representations require further processing using spectral transforms such as

    Fourier transform and wavelet transform. Spectral descriptors such as Fourier

    descriptor and wavelet descriptor are usually derived from shape signatures by taking

    their Fourier transform and wavelet transform respectively. In Fourier descriptor the

    first few low frequency terms capture the coarse information of the contour while the

    higher frequency terms capture the finer detail of the contour. The advantage of

    the wavelet descriptor over the Fourier descriptor is that the wavelet descriptor
    achieves localization in both the space and frequency domains simultaneously.
    However, the wavelet transform requires intensive calculation in the matching
    stage.
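
    A minimal Fourier descriptor built from the complex-coordinates shape signature of
    a closed contour is sketched below; the normalization steps shown are one common
    choice, not the only one.

    import numpy as np

    def fourier_descriptor(contour_xy: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
        # contour_xy: (N, 2) ordered boundary points of a closed contour.
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex-coordinates shape signature
        Z = np.fft.fft(z - z.mean())                   # removing the mean removes translation
        mags = np.abs(Z[1:n_coeffs + 1])               # magnitudes drop rotation/start-point effects
        return mags / (np.abs(Z[1]) + 1e-12)           # divide by first harmonic for scale invariance

    # Toy usage: an elliptical "contour" with 128 boundary points.
    t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    contour = np.stack([100 + 40 * np.cos(t), 50 + 20 * np.sin(t)], axis=1)
    print(fourier_descriptor(contour)[:5])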

    Figure 3. The signature and its envelope with the values of area, angle and number of black pixels associated with each sample on the contour [8].


    5. Signature Feature Matching

    After the extraction of meaningful features from the signature sample, a method is
    chosen to compare the features of the test signature and the template signature. A
    number

    of techniques and their variations have been applied to implement a signature

    verification system resilient to forgery by fraudulent impostors. Some of these

    techniques are Template matching [11], Bayesian Learning [12], Hidden Markov

    Model (HMM) [13], Dynamic Time Warping (DTW) [15], Graph Matching [16],

    combination of classifiers [17] etc. In this section all these techniques are discussed.

    5.1 Template matching approach

    Template matching is one of the simplest and earliest approaches to pattern

    recognition. Matching is a generic operation in pattern recognition, which is used to

    determine the similarity between two entities. It has been shown in [11] that a pattern

    matching method is able to achieve a good verification performance for Japanese

    signatures. However, two instances of a signature can vary in stroke width. The

    similarity between two signatures obtained by a pattern matching method is affected

    by their stroke widths. The stroke widths vary with the pen used for signing. To solve

    this problem, it is effective to first normalize the stroke width of the signature before matching.

    Based on these considerations a new pattern matching method was proposed for

    Japanese signature verification in [11]. In this modified pattern matching method, the

    strokes of the signatures are first thinned to one-pixel width, and then the
    thinned signatures are blurred by a fixed point-spread function. Subsequently, the
    similarity between the registered and examined signatures is calculated. The
    average error rate of this method is

    9.1%, while the average rate of the conventional method is 19.2%.
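
    The thin-then-blur idea can be sketched as follows: both binary signature images
    are skeletonized, blurred with a fixed Gaussian point-spread function, and compared
    by normalized cross-correlation. The choice of Gaussian blur and of correlation as
    the similarity measure is an assumption for illustration; the exact point-spread
    function and similarity used in [11] may differ.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.morphology import skeletonize

    def match_score(sig_a: np.ndarray, sig_b: np.ndarray, sigma: float = 2.0) -> float:
        # Thin each binary image to one-pixel strokes, blur, then correlate.
        a = gaussian_filter(skeletonize(sig_a > 0).astype(float), sigma)
        b = gaussian_filter(skeletonize(sig_b > 0).astype(float), sigma)
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))   # about 1.0 for identical images, lower otherwise

    # The registered and questioned images are assumed to be the same size and roughly aligned.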

    5.2 Bayesian Learning Approach

    When multiple signature samples of a person are available it is logical to learn

    collectively from these signature samples specific to the writer. This kind of learning

    focuses on the specific writer and answers whether the anonymous person who

    presents his signature is the same person whose signatures the model has learned. In this

    approach first the genuine samples of a specific writer are compared using a similarity

    measure and distribution over distances between features of samples is obtained. This

    distribution represents the distribution of similarity measure for genuine samples of a

    specific writer. When a signature is presented for test the similarity measure is

    calculated by comparing the test signature with every genuine signature in the

    database of the specific writer and a distribution of distance between features of this

    questioned signature and genuine signatures is obtained. Now once the two

    distributions namely within writer distribution and the questioned vs genuine

    distribution are obtained, the next task is to obtain the probability of similarity

    between these two distributions and thus infer whether the questioned signature is fake

    or genuine. There are various methods to compare the two distributions such as


    Kolmogorov-Smirnov test, Kullback-Leibler divergence, Jensen-Shannon test and

    Bayesian Approach. Among these approaches the Bayesian Approach is best [12].

    Mathematically the task of finding the probability that two distributions are similar

    can be stated as follows.

    Let F be a set of probability distributions. D1 is the probability distribution of

    similarity between features of two genuine signatures and D2 is the probability

    distribution of the similarity between the features of the questioned signature and the

    genuine signatures. S1 is the multiset of n random samples generated from D1 and S2

    is the multiset of m random samples generated from D2. Now we have to find the

    probability that D1=D2 given S1 and S2, i.e.

    PF = P(D1 = D2 | S1, S2)

    As shown in [12], using the Bayesian inference method this probability can be
    expressed in terms of Q(S), the marginalized joint probability of a multiset S
    under the family F.
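
    The Bayesian computation of [12] is not reproduced here; as a simpler stand-in, the
    sketch below compares the two distance distributions with the two-sample
    Kolmogorov-Smirnov test, one of the alternative comparison methods listed above.
    The distance samples are synthetic.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    # Distances between pairs of genuine signatures of one writer (within-writer distribution).
    within_writer = rng.gamma(shape=2.0, scale=0.05, size=200)
    # Distances between the questioned signature and each genuine signature.
    questioned_vs_genuine = rng.gamma(shape=2.0, scale=0.09, size=20)

    stat, p_value = ks_2samp(within_writer, questioned_vs_genuine)
    # A small p-value suggests the questioned distances do not follow the within-writer
    # distribution, i.e. the questioned signature is more likely a forgery.
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")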

    5.3 Hidden Markov Model approach

    Because the signatures of the same person vary in height and width, signal warping
    techniques are commonly used in matching signatures. Due to this warping problem,
    the use of HMMs in signature verification is becoming popular. HMMs are a very
    popular tool for modelling time-varying dynamic patterns such as speech [13].

    HMMs are an extension of the concept of Markov model to include the case where the

    observation is a probabilistic function of the state i.e., the resulting model (which is called a hidden Markov model) is a doubly embedded stochastic process with an

    underlying stochastic process that is not observable (it is hidden), but can only be

    observed through another set of stochastic processes that produce the sequence of

    observations.


    An HMM is characterized as follows:

    i. N, the number of states in the model. The states are hidden, but in many
    practical applications there is some significance attached to them. The states are
    interconnected with one another, and the individual states are denoted as
    S = {S1, S2, …, SN}.

    ii. M, the number of observable outcomes possible per state. The set of observable
    outcomes forms an alphabet; these outcomes correspond to the physical output of the
    system being modelled and are denoted as V = {v1, v2, …, vM}.

    iii. A = {aij}, the state transition probabilities, where

    aij = P(qt+1 = Sj | qt = Si), 1 ≤ i, j ≤ N

    iv. B = {bj(k)}, the observation symbol probability distribution in state j, where

    bj(k) = P(vk at t | qt = Sj), 1 ≤ j ≤ N, 1 ≤ k ≤ M

    v. π = {πj}, the initial state distribution, where

    πj = P(q1 = Sj), 1 ≤ j ≤ N

    For the HMM model to be useful in real-life applications, there are three problems that

    have to be solved [14].

    Problem 1: Given the observation sequence O = O1 O2 … OT and an HMM model
    λ = (A, B, π), efficiently compute P(O | λ), the probability of observing the
    sequence O given the model λ.

    Problem 2: Given the observation sequence O = O1 O2 … OT and an HMM model λ, find
    the corresponding state sequence Q = q1 q2 … qT that is most probable.

    Problem 3: Find the HMM model λ that maximizes P(O | λ).

    Problem 1 is solved using the forward-backward algorithm, which uses recursion to
    calculate P(O | λ) for partial sequences, starting with sequences of length one and
    building up to length T. Problem 2 is solved using the Viterbi algorithm, based on
    dynamic programming. Problem 3 is solved iteratively using the Baum-Welch
    algorithm. All these algorithms are

    described in [14].
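
    As an illustration of Problem 1, the sketch below implements the forward recursion
    for a discrete HMM and evaluates P(O | λ) for a small, made-up model.

    import numpy as np

    def forward_probability(A, B, pi, obs):
        # A: (N, N) transition matrix, B: (N, M) emission matrix, pi: (N,) initial distribution,
        # obs: list of observed symbol indices. Returns P(O | lambda).
        alpha = pi * B[:, obs[0]]                 # alpha_1(i) = pi_i * b_i(O_1)
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]         # alpha_{t+1}(j) = sum_i alpha_t(i) a_ij b_j(O_{t+1})
            # (In practice the alphas are rescaled at each step to avoid numerical underflow.)
        return float(alpha.sum())                 # P(O | lambda) = sum_i alpha_T(i)

    # Hypothetical 2-state, 3-symbol model.
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    pi = np.array([0.6, 0.4])
    print(forward_probability(A, B, pi, obs=[0, 1, 2, 1]))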


    5.4 Dynamic Time Warping approach

    The feature vectors obtained from two signatures cannot be compared directly, as they

    may be of different lengths. A technique called dynamic time warping (DTW) is used

    to deal with this problem. The dynamic time warping algorithm is based on dynamic
    programming and finds an optimal match between two sequences by stretching or
    compressing them.

    In this approach to the signature verification problem, first the 1-D vertical projection

    is extracted from the signature. The 1-D vertical projection serves as the feature vector

    for signature. Then this extracted vertical projection is matched with the vertical

    projection of the template signature by computing a measure of dissimilarity between

    the two projections. A non-linear matching is used due to the reasons mentioned

    earlier.

    The non-linear matching between the two templates is done on a rectangular grid as

    shown in figure 4. The two templates are aligned along the x-axis and the y-axis,

    respectively as shown in figure 4. The intersections on the grid are defined as nodes.

    Each node (i, j) represents a match of the ith component of the vertical projection

    extracted from the probe signature with the jth component of the vertical projection

    extracted from the reference signature. A cost matrix stores the cost

    associated with each node (i, j). Cost is a measure of dissimilarity between the ith and

    jth components of probe and reference signature vertical projections respectively (if

    the two components are highly dissimilar then the cost will be high).

    The cost at the dummy node (0, 0) is defined as zero, and all paths start from this
    dummy node. The goal is to find a path such that the sum of the costs associated
    with all the nodes the path passes through is minimum. Dynamic programming is used
    to solve this problem: we incrementally find the path of minimum cost, starting
    with a single-node path at the dummy node, then finding minimum-cost paths ending
    at the neighbours of the dummy node, and so on, extending the path by one node at a
    time until we reach the final point on the grid.


    Figure 4: The warping grid with the reference template and the probe template aligned along the y-axis and x-axis, respectively. The least cost path has been plotted. The signatures from which feature templates have been extracted are also shown [15].
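
    A minimal dynamic-programming implementation of the warping described above is
    sketched below. The local node cost is taken as the absolute difference between
    projection components, which is an assumption for illustration and not necessarily
    the dissimilarity measure used in [15].

    import numpy as np

    def dtw_cost(probe: np.ndarray, reference: np.ndarray) -> float:
        # D[i, j]: cost of the cheapest path from the dummy node (0, 0) to node (i, j).
        n, m = len(probe), len(reference)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0                              # dummy node has zero cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                local = abs(probe[i - 1] - reference[j - 1])   # node cost (dissimilarity)
                D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[n, m])

    # Toy vertical projections of different lengths.
    probe = np.array([0, 2, 5, 9, 4, 1, 0], dtype=float)
    reference = np.array([0, 1, 4, 8, 8, 3, 1, 0], dtype=float)
    print(dtw_cost(probe, reference))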

    5.5 Graph Matching based approach

    This method depends only on the raw binary pixel intensities and treats the
    signature verification problem as a graph matching problem. It is invariant to
    rotation, translation and scaling. In the pre-processing steps, the binary image of
    the signature is captured and pepper noise is removed. The angle of the least
    second moment of the signature is found and the signature is rotated by this angle.
    The image is then smoothed and thinned, and the thinned image is normalized in a
    way that preserves the aspect ratio of the signature. The thinned, normalized image
    is then ready to be matched.

    Let S1 and S2 be two off-line signature images to be compared, and let X and Y be
    the sets of vertices (pixels) that represent S1 and S2, respectively, after
    thinning and normalization. In this approach we construct a complete bipartite
    graph G = (V, E) = Km,n from X and Y, where V = X ∪ Y, |X| = m, and |Y| = n. Since
    each vertex in X can be connected to any vertex in Y, the graph G is complete, and
    assuming that the signatures are ordered such that |X| ≤ |Y|, a complete matching
    of X into Y exists. There are many possible complete matchings; the goal is to find
    the minimum-cost complete matching of X into Y. This is a form of the well-known
    Assignment Problem (AP) from graph theory. The Hungarian Method is used to solve
    this assignment problem, i.e., to find how well the signatures S1 and S2 match
    [16].

    To solve the assignment problem with the Hungarian method, an m x n cost matrix is
    constructed. The rows correspond to the vertices of X and the columns to the
    vertices of Y. Every vertex in X and Y has corresponding coordinates x and y in the
    signature image, and these coordinates are used to calculate the entry (equal to
    the Euclidean distance) in the cost matrix corresponding to each pair of vertices.
    After all entries of the cost matrix are calculated, the assignment problem is
    solved. The cost of the resultant solution equals the sum of all entries that
    correspond to the minimum-cost solution.
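
    The minimum-cost complete matching can be computed with a Hungarian-style solver;
    the sketch below uses SciPy's linear_sum_assignment on a Euclidean cost matrix
    between two small point sets standing in for the thinned, normalized stroke pixels
    (with |X| ≤ |Y| as assumed above).

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def graph_match_cost(pixels_x: np.ndarray, pixels_y: np.ndarray) -> float:
        # pixels_x: (m, 2), pixels_y: (n, 2) pixel coordinates, m <= n.
        cost = cdist(pixels_x, pixels_y)             # m x n Euclidean cost matrix
        rows, cols = linear_sum_assignment(cost)     # optimal assignment of X into Y
        return float(cost[rows, cols].sum())         # total matching cost (dissimilarity of S1, S2)

    x = np.array([[0, 0], [1, 2], [3, 1]], dtype=float)
    y = np.array([[0, 1], [1, 2], [3, 0], [4, 4]], dtype=float)
    print(graph_match_cost(x, y))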

    5.6 Fusion of Multiple Classifiers

    In this approach a weighted combination of multiple classifiers is used for
    off-line signature verification. Initially, various features are extracted from
    the signature image. These features are passed through multiple classifiers, and
    the results of these classifiers are fused to obtain the final decision. Assume
    that there are R classifiers, each representing the given signature pattern by a
    distinct feature vector xi. In the feature space, each class wk is modelled by the
    probability density function p(xi | wk) and its a priori probability is denoted
    P(wk). According to Bayesian theory, given the feature vectors xi, i = 1, …, R, the
    pattern Z should be assigned to the class wj that has the maximum a posteriori
    probability P(wj | x1, x2, …, xR) [17].

    Ensembles of classifiers (EoCs) have been used to reduce the error rates of many
    challenging pattern recognition problems, including signature verification. The
    main idea behind using EoCs is that different classifiers usually make different
    errors on different samples, so a well-chosen ensemble of classifiers reduces the
    probability of error. Given a pool of classifiers, an important issue is the
    selection of a diversified subset of classifiers to form an EoC such that the
    recognition rates are maximized during operation. This task may be performed either
    statically or dynamically. Given a set of reference samples (generally not used to
    train the classifiers), a static selection approach selects the EoC that provides
    the best classification rates on that set; this EoC is then used during operation
    to classify any input sample. Dynamic selection also needs a reference set to
    select the best EoC; however, this task is performed on-line, taking into account
    the specific characteristics of the given sample to be classified [18].
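
    A very simple fusion rule, a weighted sum of per-classifier posteriors followed by
    an argmax, is sketched below. The weights and posterior values are placeholders;
    [17] analyses several other combination rules (product, max, min, majority vote).

    import numpy as np

    def weighted_fusion(posteriors: np.ndarray, weights: np.ndarray) -> int:
        # posteriors: (R, K) array, row r holds P(w_k | x_r) from classifier r for K classes.
        combined = weights @ posteriors          # weighted sum rule over the R classifiers
        return int(np.argmax(combined))          # index of the winning class

    # Hypothetical outputs of R = 3 classifiers for K = 2 classes (genuine, forgery).
    posteriors = np.array([[0.70, 0.30],
                           [0.55, 0.45],
                           [0.40, 0.60]])
    weights = np.array([0.5, 0.3, 0.2])          # e.g. proportional to validation accuracy
    print(weighted_fusion(posteriors, weights))  # prints 0 -> decide "genuine" here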


    6. Conclusion

    This report presents the basic idea of a Biometric system and discusses in some detail

    the idea of a verification system using handwritten signature as a biometric trait. A

    brief description of some of the approaches employed for the off-line signature verification

    problem is given and their major merits and demerits are also listed. It is obvious that

    the problem of signature verification becomes more difficult when passing from

    random to skilled forgeries, the latter being so difficult a task that even human beings

    make errors in several cases. The task is even more difficult for offline signature

    verification due to the absence of all dynamic information. Concluding the
    discussion, it may be said that although much work has been done in the area of
    on-line signature verification, the area of off-line signature verification is far
    from mature, and much more work is needed before it can be employed for commercial
    purposes such as an automatic cheque verification system in a bank.


    7. References

    [1] Anil K. Jain, Arun Ross and Salil Prabhakar, An Introduction to Biometric Recognition, IEEE Transactions on circuits and systems for video technology, vol. 14, no. 1, pp.4-20, January 2004.

    [2] P.A. Devijer and J. Kittler, Pattern Recognition: A statistical approach, London:

    Prentice-Hall, 1982.

    [3] Peter E. Hart, David G. Stork, and Richard O. Duda, Pattern Classification, 2nd

    Edition, Wiley, New York, 2000.

    [4] Réjean Plamondon and Sargur N. Srihari, On-line and Off-line Handwriting Recognition: A Comprehensive Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63-84, January 2000.

    [5] Zhaoxiang Zhang, Kaiyue Wang, Yunhong Wang, A Survey of On-line Signature Verification, Biometric Recognition, Lecture Notes in Computer Science, Springer-Verlag Berlin Heidelberg, Volume 7098, pp 141-149, December 2011.

    [6] Weiping Hou, Xiufen Ye and Kejun Wang, A Survey of Off-line Signature Verification, Proceedings of the 2004 International Conference on Intelligent Mechatronics and Automation, Chengdu, China , pp. 536-541, August 2004.

    [7] Oivind Due Trier, Anil K. Jain, Torfinn Taxt, Feature extraction methods for character recognition: A survey, Pattern Recognition, Elsevier, vol. 29, no. 4, pp. 641-662, April 1996.

    [8] Miguel A. Ferrer, Jesus B. Alonso, and Carlos M. Travieso, Offline Geometric Parameters for Automatic Signature Verification Using Fixed-Point Arithmetic, IEEE Transactions on pattern analysis and machine intelligence, vol. 27, no. 6, June

    2005.

    [9] D.S. Zhang, G.J. Lu, A comparative study on shape retrieval using Fourier descriptors with different shape signatures, Proceedings of the International

    Conference on Multimedia and Distance Education, Fargo, ND, USA, pp. 19, June 2001.

    [10] Cho-Huak Teh and Roland T. Chin, On image analysis by the methods of moments, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.10, no.4, pp.496-513, July 1988.

    [11] Yoshimara, M. Yoshimura and T. Tsukamoto, Investigation of an Automatic Verification System for Japanese Counter signatures on Travellers Cheques, Proceedings of the 7th IGS Conference, pp. 86-87, August 1995.


    [12] Sargur N. Srihari et al., Signature Verification Using a Bayesian Approach, Computational Forensics, Lecture Notes in Computer Science, Springer-Verlag Berlin

    Heidelberg, vol. 5158, pp. 192-203, 2008.

    [13] G. Rigoll, A. Kosmala, A Systematic Comparison Between On-Line and Off-

    Line Methods for Signature Verification with Hidden Markov Models, IEEE

    Proceedings Fourteenth International Conference On Pattern Recognition, vol.2, pp.

    1755-1757, August 1998.

    [14] Lawrence R. Rabiner, A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, Proceedings of the IEEE, vol.77, no. 2, pp. 257-286, February 1989.

    [15] A. Piyush Shanker and A.N. Rajagopalan, Off-line signature verification using DTW, Pattern Recognition Letters, vol.28, no.12, pp. 1407-1414 ,September 2007.

    [16] Ibrahim S. I. Abuhaiba, Offline Signature Verification Using Graph Matching, Turkish Journal of Electrical Engineering & Computer Sciences, vol.15, no.1, 2007.

    [17] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, On combining classifiers, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 226-239, March 1998.

    [18] L. Batista et al., Dynamic selection of generative-discriminative ensembles for off-line signature verification, Pattern Recognition, Elsevier, vol. 45, no. 4, April 2012.

