A HIERARCHICAL FINGERPRINT MATCHING SYSTEM
A thesis submitted in partial fulfillment of the
requirements for the degree of
Bachelor-Master of Technology (Dual)
by
ABHISHEK RAWAT
to the
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
July 2009
Abstract
Fingerprint recognition is a widely popular yet complex pattern recognition problem. It is difficult to design accurate algorithms capable of extracting salient features and matching them in a robust way. The real challenge is matching fingerprints affected by: i) high displacement and/or rotation, which results in a smaller overlap between template and query fingerprints; ii) non-linear distortion caused by the plasticity of the finger; iii) differences in pressure and skin condition; and iv) feature extraction errors, which may result in spurious or missing features. The information contained in a finger-
print can be categorized into three different levels, namely, Level 1 (pattern), Level
2 (minutia points), and Level 3 (pores and ridge contours). Despite their discrimina-
tive power, the Level 3 features are barely used by the vast majority of contemporary
automated fingerprint authentication systems (AFAS) which rely mostly on minutiae
features. This is mainly because most of these authentication systems are equipped with 500 ppi scanners (the FBI's standard fingerprint resolution for AFAS), and reliably extracting “fine and detailed” Level 3 features requires high-resolution images.
While this may have been the case with many older live-scan devices, the current
devices are capable of detecting a reasonable amount of level 3 details even at the rel-
atively limited 500 ppi resolution. In this thesis the above mentioned problems have
been addressed and a new hierarchical matcher has been proposed. The hierarchical
matcher utilizes Level 3 features (pores and ridge contour) in conjunction with Level
2 features (minutiae) for matching. The aim is to reduce the error rates, namely the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), of existing minutiae-based systems. The hierarchical matcher has been tested on three diverse databases in the public domain. The obtained results are promising and support our claims.
Acknowledgments
I would like to express my deep-felt gratitude to my advisor, Dr. Phalguni Gupta
for giving me an opportunity to work with the Biometrics Group and for his advice, encouragement, and constant support. I wish to thank him for extending me the greatest
freedom in deciding the direction and scope of my research. It has been both a
privilege and a rewarding experience working with him.
I would also like to thank all my friends and colleagues here at IIT Kanpur for
all the wonderful times I have had with them. Their valuable comments and sugges-
tions have been vital to the completion of this work. I want to thank the faculty and staff of the Computer Science Department for providing me the means to complete
my degree and prepare for a career as a computer scientist.
And finally, I am grateful to my parents for their love, sacrifice, understanding,
encouragement and support.
Abhishek Rawat
Contents
List of Figures

List of Tables

1 INTRODUCTION
   1.1 Fingerprint as a Biometric
      1.1.1 History of Fingerprint Recognition
      1.1.2 Fingerprint Representation
   1.2 Motivation and Problem Definition
   1.3 Approach
   1.4 Thesis Outline

2 BACKGROUND AND LITERATURE REVIEW
   2.1 Data Acquisition
   2.2 Image Preprocessing
   2.3 Fingerprint Image Enhancement
   2.4 Feature Extraction
      2.4.1 Minutiae Extraction
      2.4.2 Pores Extraction
      2.4.3 Ridge Contour Extraction
   2.5 Fingerprint Matching
      2.5.1 Correlation-based Techniques
      2.5.2 Minutiae-based Methods
         2.5.2.1 Global Matching
         2.5.2.2 Local Matching
      2.5.3 Ridge Feature-based Matching Techniques
         2.5.3.1 Texture Feature-based Techniques
         2.5.3.2 Level 3 Feature-based Techniques

3 PROPOSED HIERARCHICAL MATCHER
   3.1 Preprocessing and Enhancement
   3.2 Feature Extraction
   3.3 Hierarchical Matching
      3.3.1 Minutia-based Matcher
         3.3.1.1 Local Structure Matching
         3.3.1.2 Consolidation Step
      3.3.2 Level 3 Feature-based Matcher
         3.3.2.1 Pore Matching
         3.3.2.2 Ridge Contour Matching
         3.3.2.3 Fusion of Level 2 and Level 3 Features

4 EXPERIMENTAL RESULTS
   4.1 Database
   4.2 Performance Evaluation
   4.3 Experiment 1
   4.4 Experiment 2
   4.5 Experiment 3
   4.6 Timing Analysis

5 CONCLUSION & FUTURE WORK
   5.1 Conclusion
   5.2 Future Work

Bibliography
List of Figures

1.1 Fingerprint features at Level 1, Level 2 and Level 3 ([7, 54])
1.2 Open and closed pores ([29])
1.3 Characteristics of ridge contours and edges ([9])
2.1 Architecture of a Fingerprint Verification System
2.2 Fingerprints at different resolutions: a) 380 ppi, b) 500 ppi, c) 1000 ppi ([29])
2.3 A typical minutiae extraction process ([30])
2.4 An example of two falsely matched local structures ([36])
3.1 The proposed hierarchical matcher
3.2 Effect of differential finger pressure ([36])
3.3 Ridge contour extraction process in an image scanned with a Cross Match Verifier 300 scanner at 500 dpi
3.4 Pores extracted from an image scanned with a Cross Match Verifier 300 scanner at 500 dpi
3.5 The schematic description of a “star” with k = 6 neighbors. The neighbors are chosen from all four quadrants
3.6 Two impressions of the same fingerprint at 500 dpi
3.7 Schematic of square-to-circular transformation
3.8 The basic LBP operator
4.1 Sample images from the Neurotechnology Database
4.2 Sample images from the FVC 2004, DB3 Database
4.3 Sample images from the FVC 2006, DB2 Database
4.4 ROC curves for the proposed minutia matcher and the CBFS-Kplet based matcher, on the Neurotechnology Database
4.5 ROC curves for the proposed minutia matcher and the CBFS-Kplet based matcher, on the FVC 2004, DB3 Database
4.6 ROC curves for the proposed minutia matcher and the CBFS-Kplet based matcher, on the FVC 2006, DB2 Database
4.7 ROC curves comparing the performance of LBP and ZM (Neurotechnology Database)
4.8 ROC curves comparing the performance of LBP and ZM (FVC 2004, DB3 Database)
4.9 ROC curves comparing the performance of LBP and ZM (FVC 2006, DB2 Database)
4.10 ROC curves for the minutia matcher and hierarchical matcher on the Neurotechnology Database
4.11 ROC curves for the minutia matcher and hierarchical matcher on the FVC 2004, DB3 Database
4.12 ROC curves for the minutia matcher and hierarchical matcher on the FVC 2006, DB2 Database
4.13 Distribution of the number of matched minutiae, for genuine and impostor cases, on the Neurotechnology Database
4.14 Distribution of the number of matched minutiae, for genuine and impostor cases, on the FVC 2004, DB3 Database
4.15 Distribution of the number of matched minutiae, for genuine and impostor cases, on the FVC 2006, DB2 Database
List of Tables
4.1 Equal Error Rate (EER) comparison between the proposed minutia matcher and the CBFS-Kplet based matcher
4.2 Equal Error Rate (EER) comparison between the proposed minutia matcher and the proposed hierarchical matcher
4.3 Comparison of matching time for Level 3 features
Chapter 1
INTRODUCTION
Biometric-based recognition, or biometrics, is the science of identifying, or verifying
the identity of, a person based on physiological and/or behavioral characteristics [14].
Physiological traits are related to the physiology of the body and mainly include
fingerprint, face, DNA, ear, iris, retina, hand and palm geometry. Behavioral traits
are related to the behavior of a person; examples include signature, typing rhythm, gait, voice, etc. Biometric recognition offers many advantages over traditional PIN-, password- and token-based (e.g., ID card) approaches. A biometric trait
cannot be easily transferred, forgotten or lost, the rightful owner of the biometric
template can be easily identified, and it is difficult to duplicate a biometric trait [20].
There are a number of desirable properties for any chosen biometric characteristic
[14]. These include:
1. Universality : Every person should have the characteristic.
2. Uniqueness : No two persons should be the same in terms of the biometric
characteristic.
3. Permanence : The biometric characteristics should not change, or change min-
imally, over time.
4. Collectability : The biometric characteristic should be measurable with some
(practical) sensing device.
5. Acceptability : The user population and the public in general should have no
(strong) objections to the measuring/collection of the biometric trait.
A biometric system is essentially a pattern recognition system that operates by
acquiring biometric data from an individual, extracting a feature set from the acquired
data, and comparing this feature set against the template set in the database [32].
Depending on the application context, a biometric system may operate either in
verification mode or identification mode:
• In the verification mode, an individual provides his/her biometric data and
claims an identity, usually via a PIN (Personal Identification Number), a user
name, a smart card, etc. The system then verifies the individual’s identity
by comparing the acquired biometric data with the individual’s own biometric
template(s) stored in the system database. Such a system basically performs a one-
to-one comparison to determine whether the claimed identity is true or not.
• In the identification mode, the system compares the given biometric data with
the templates of all the users in the database. Therefore, the system conducts
a one-to-many comparison to establish an individual’s identity (or fails if the
subject is not enrolled in the system database) without the subject having to
claim an identity.
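The two modes above can be sketched in a few lines of code. Everything here is illustrative: the toy `match_score`, the names, and the threshold are not part of any real system, and a real fingerprint matcher is far richer than a set comparison.

```python
def match_score(features_a, features_b):
    """Toy similarity: Jaccard overlap of two feature sets (illustrative only)."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / max(len(a | b), 1)

def verify(db, claimed_id, query, threshold=0.6):
    """Verification: one-to-one comparison against the claimed identity's template."""
    template = db.get(claimed_id)
    if template is None:
        return False
    return match_score(query, template) >= threshold

def identify(db, query, threshold=0.6):
    """Identification: one-to-many search over all enrolled templates."""
    best_id, best_score = None, 0.0
    for user_id, template in db.items():
        score = match_score(query, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None

db = {"alice": ["m1", "m2", "m3", "m4"], "bob": ["m5", "m6", "m7"]}
print(verify(db, "alice", ["m1", "m2", "m3"]))   # one-to-one check of a claim
print(identify(db, ["m5", "m6", "m7"]))          # one-to-many search
```

Note how verification touches exactly one template while identification scans the whole database; this is why classification and indexing (discussed in Chapter 2) matter only for identification.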
The effectiveness of a biometric system can be judged by the following characteristics
[35]:
1. Performance : This refers to the achievable recognition accuracy, speed, ro-
bustness, the resource requirements to achieve the desired recognition accuracy
and speed, as well as operational (work environment of individual, e.g., manual
workers may have a large number of cuts and bruises on their fingerprints) or
environmental factors (humidity, illumination, etc.) that affect the recognition accuracy and speed.
2. Scalability : This refers to the ability to encompass a large number of individuals
without a significant decrease in the performance.
3. Non-invasiveness : This refers to the ease with which the information can be
captured from individuals, without damaging an individual’s physical integrity
and ideally without special preparations by/of an individual.
4. Circumvention : This refers to the degree to which the system is resistant to
spoofs or attacks.
A practical biometric system should meet the specified recognition accuracy, speed,
and resource requirements, be harmless to the users, be accepted by the intended pop-
ulation, and be sufficiently robust to various fraudulent methods and attacks on the system.
1.1 Fingerprint as a Biometric
A fingerprint is an impression of the friction ridges on the surface of a fingertip. Fingerprints have been used for personal identification for many decades, more
recently becoming automated due to advancements in computing capabilities. Fin-
gerprint recognition is nowadays one of the most important and popular biometric
technologies mainly because of the inherent ease in acquisition, the numerous sources
(ten fingers) available for collection, and the established use and collections by law
enforcement agencies. Automatic fingerprint identification is one of the most reliable
biometric technologies. This is because of the well known fingerprint distinctiveness,
persistence, ease of acquisition and high matching accuracy rates. Fingerprints are
unique to each individual and they do not change over time. Even identical twins
(who share their DNA) do not carry identical fingerprints. The uniqueness can be
attributed to the fact that the ridge patterns and the details in small areas of friction
ridges are never repeated. These friction ridges develop on the fetus in their definitive
form before birth and are known to be persistent throughout life except for perma-
nent scarring. Scientific research in areas such as biology, embryology, anatomy and
histology has supported these findings [4]. Also, the matching accuracy of fingerprint
based authentication systems has been shown to be very high. Fingerprint-based au-
thentication systems continue to dominate the biometrics market by accounting for
almost 52% of authentication systems based on biometric traits [35].
1.1.1 History of Fingerprint Recognition
Fingerprints have been found on ancient artifacts recovered from excavation sites
of various civilizations [42]. However, fingerprints have been used for identification only from the nineteenth century onwards. A time-line of important events that have established the foundation of the modern fingerprint based biometric technology can be found in [2]. Henry Faulds [21] was the first to scientifically suggest the individuality and uniqueness of fingerprints. Sir Francis Galton published the well-known book
entitled Fingerprints [22], in which a detailed statistical model of fingerprint analysis and identification is discussed. Galton introduced Level 2 features by defining minutia points as either ridge endings or ridge bifurcations on a local ridge. An important advance in fingerprint identification was made by Edward Henry, who established a system known as the “Henry system” for fingerprint classification [65].
In [44], Locard introduced the science of “poroscopy”, the comparison of sweat pores for the purpose of personal identification. Locard stated that, like the ridge characteristics, the pores are also permanent, immutable, and unique, and are useful for establishing identity, especially when a sufficient number of ridges is not available. Chatterjee proposed the use of ridge edges in combination with other friction ridge formations to establish individualization, which is referred to as “edgeoscopy” [9].
Over the last few years, poroscopy and edgeoscopy have received growing attention
and have been widely studied by latent fingerprint examiners [9]. It has been claimed
that shapes and relative positions of sweat pores and shapes of ridge edges are as
permanent and unique as traditional minutia points. When understood, they add considerable weight to the conclusion of identification [9].
1.1.2 Fingerprint Representation
The types of information that can be collected from a fingerprint’s friction ridge
impression can be categorized as Level 1, Level 2, or Level 3 features as shown in
Figure 1.1.
At the global level, the fingerprint pattern exhibits one or more regions where
the ridge lines assume distinctive shapes characterized by high curvature, frequent
Figure 1.1: Fingerprint features at Level 1, Level 2 and Level 3 ([7, 54])
termination, etc. These regions are broadly classified into arch, loop, and whorl. The
arch, loop and whorl can further be classified into various subcategories. Level 1 features comprise these global patterns and morphological information. They alone
do not contain sufficient information to uniquely identify fingerprints but are used for
broad classification of fingerprints.
Level 2 features, or minutiae, refer to the various ways in which the ridges can be
discontinuous. These are essentially Galton characteristics, namely ridge endings and
ridge bifurcations. A ridge ending is defined as the ridge point where a ridge ends
abruptly. A bifurcation is defined as the ridge point where a ridge bifurcates into
two ridges. Minutiae are the most prominent features, generally stable and robust
to fingerprint impression conditions. The distribution of minutiae in a fingerprint is
considered unique and most of the automated matchers use this property to uniquely
identify fingerprints. The uniqueness of fingerprints based on minutia points has been quantified by Galton [22]. Statistical analysis has shown that Level 2 features have
sufficient discriminating power to establish the individuality of fingerprints [56].
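One classical way to detect the ridge endings and bifurcations defined above is the crossing number computed on a thinned (skeletonized) binary ridge map. This is a generic illustration of that textbook technique, not necessarily the extractor used in this thesis:

```python
def crossing_number(p):
    """Crossing number of a 3x3 binary skeleton patch: half the number of
    0/1 transitions met while walking once around the 8 neighbors of the
    center pixel. CN = 1 marks a ridge ending, CN = 3 a bifurcation."""
    ring = [p[0][0], p[0][1], p[0][2], p[1][2],
            p[2][2], p[2][1], p[2][0], p[1][0]]
    return sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2

# Hand-made patches: a ridge that stops, and a ridge that splits in two.
ending = [[0, 0, 0],
          [0, 1, 1],
          [0, 0, 0]]
bifurcation = [[1, 0, 1],
               [0, 1, 0],
               [0, 1, 0]]
print(crossing_number(ending))       # 1 -> ridge ending
print(crossing_number(bifurcation))  # 3 -> bifurcation
```

A patch with CN = 2 is an ordinary ridge pixel, which is why only CN = 1 and CN = 3 points are retained as minutiae.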
Level 3 features are the extremely fine intra ridge details present in fingerprints
[6]. These are essentially the sweat pores and ridge contours. Pores are the openings
of the sweat glands and they are distributed along the ridges. Studies [9] have shown
that density of pores on a ridge varies from 23 to 45 pores per inch and 20 to 40
pores should be sufficient to determine the identity of an individual. A pore can be
either open or closed, based on its perspiration activity. A closed pore is entirely
enclosed by a ridge, while an open pore intersects with the valley lying between
two ridges as shown in Figure 1.2. The pore information (position, number and shape) is considered to be permanent, immutable and highly distinctive, but very
few automatic matching techniques use pores since their reliable extraction requires
high resolution and good quality fingerprint images. Ridge contours contain valuable
Level 3 information including ridge width and edge shape. Various shapes on the
friction ridge edges can be classified into eight categories, namely, straight, convex,
peak, table, pocket, concave, angle, and others as shown in Figure 1.3. The shapes
and relative position of ridge edges are considered as permanent and unique.
Figure 1.2: Open and closed pores ([29])
Figure 1.3: Characteristics of ridge contours and edges ([9])
1.2 Motivation and Problem Definition
Fingerprint recognition is a complex pattern recognition problem. It is difficult
to design accurate algorithms capable of extracting salient features and matching
them in a robust way, especially in poor quality fingerprint images and when low-cost
acquisition devices with small area are adopted. There is a popular misconception
that automatic fingerprint recognition is a fully solved problem since it was one of
the first applications of machine pattern recognition. On the contrary, fingerprint
recognition is still a challenging and important pattern recognition problem. The real
challenge is matching fingerprints affected by: i) high displacement and/or rotation, which results in a smaller overlap between template and query fingerprints (this case can be treated as similar to matching partial fingerprints); ii) non-linear distortion caused by the plasticity of the finger; iii) differences in pressure and skin condition; and iv) feature extraction errors, which may result in spurious or missing features.
The vast majority of contemporary automated fingerprint authentication systems
(AFAS) are minutiae (level 2 features) based [46]. Minutiae-based systems gener-
ally rely on finding correspondences 1 between the minutia points present in “query”
and “reference” fingerprint images. These systems normally perform well with high-
1 A minutia in the “query” fingerprint and a minutia in the “reference” fingerprint are said to be corresponding if they represent the identical minutia scanned from the same finger.
quality fingerprint images and a sufficient fingerprint surface area. These conditions,
however, may not always be attainable. In many cases, only a small portion of the
“query” fingerprint can be compared with the “reference” fingerprint; as a result, the number of minutiae correspondences might significantly decrease and the matching
algorithm would not be able to make a decision with high certainty. This effect is even
more marked on intrinsically poor quality fingers, where only a subset of the minutiae
can be extracted and used with sufficient reliability. Although minutiae may carry
most of the fingerprint’s discriminatory information, they do not always constitute
the best trade-off between accuracy and robustness. This has led the designers of fin-
gerprint recognition techniques to search for other fingerprint distinguishing features,
beyond minutiae, which may be used in conjunction with minutiae (and not as an
alternative) to increase the system accuracy and robustness.
It is a known fact that the presence of Level 3 features in fingerprints provides
minute detail for matching and the potential for increased accuracy. The forensic
experts in law enforcement often make use of Level 3 features, such as sweat pores
and ridge contours, to compare fingerprint samples when insufficient minutia points
are present in the fingerprint image or poor image quality hampers minutiae analysis.
That is, experts take advantage of an extended feature set in order to conduct a more
effective matching. Despite their discriminating property, Level 3 features are barely utilized in commercial automated fingerprint authentication systems (AFAS); as a result, a large amount of fingerprint information is ignored by such systems. This is mainly because most of these authentication systems are equipped with 500 ppi (pixels per inch) scanners, and reliably (or consistently) extracting “fine and detailed” Level 3 features requires high-resolution images. While this may have been the case
with many of the older live-scan devices, the current devices are capable of detect-
ing a reasonable amount of level three detail even at the relatively limited 500 ppi
resolution. Ray et al. [62] have presented a means of modeling and extracting pores
(which are considered as highly distinctive Level 3 features) from 500 ppi fingerprint
images. This study showed that while not every fingerprint image obtained with a
500 ppi scanner has evident pores, a substantial number of them do. Thus, it is a natural step to try to extract Level 3 information and use it in conjunction with
minutiae to achieve robust matching decisions. In addition, the fine details of level 3
features could potentially be exploited in circumstances that require high-confidence
matches.
1.3 Approach
In this thesis an approach has been presented which addresses the various issues
and challenges (discussed in previous section) in fingerprint matching. The aim is
to reduce the error rates, namely the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), in existing fingerprint matching algorithms.
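As a hedged illustration of how these two error rates are measured, the sketch below computes FAR and FRR at a single decision threshold from made-up genuine and impostor score sets; the numbers are invented and are not from any of the databases used later in the thesis.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor comparisons wrongly accepted (score >= threshold).
    FRR: fraction of genuine comparisons wrongly rejected (score < threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Illustrative similarity scores in [0, 1]; higher means "more alike".
genuine  = [0.91, 0.85, 0.78, 0.66, 0.95, 0.58, 0.88, 0.72]
impostor = [0.12, 0.31, 0.45, 0.07, 0.52, 0.23, 0.38, 0.61]

far, frr = far_frr(genuine, impostor, threshold=0.60)
print(f"FAR={far:.3f}  FRR={frr:.3f}")  # FAR=0.125  FRR=0.125
```

Raising the threshold trades FAR for FRR; the Equal Error Rate (EER) reported in later tables is the common value at the threshold where the two curves cross.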
The proposed approach utilizes Level 3 features (pores and ridge contours) along
with Level 2 features for matching fingerprints at 500 ppi, in a hierarchical manner.
The first stage of the hierarchical matcher is the minutia matching stage, in which the Level 2 minutia points from the “query” and “reference” fingerprints are matched. Based on the outcome of the minutiae matcher, the matching process either stops if a match is found or continues to the next stage, where the Level 3 features are used to make the match/non-match decision. The hierarchical matcher utilizes the fine details of Level 3 features to make the match/non-match decision in circumstances where the decision cannot be made solely on the basis of Level 2 features.
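The two-stage flow described above can be sketched as follows. The stage matchers and the accept/reject thresholds are placeholders, and the "ambiguity band" is one plausible reading of when a decision cannot be made from Level 2 alone, not the thesis's exact rule:

```python
def hierarchical_match(query, reference,
                       minutia_matcher, level3_matcher,
                       accept_t=0.80, reject_t=0.20):
    """Stage 1: Level 2 (minutiae). If the score is decisive, stop there.
    Stage 2: otherwise, fall back to Level 3 features (pores, ridge contours).
    All thresholds are illustrative placeholders."""
    s2 = minutia_matcher(query, reference)
    if s2 >= accept_t:
        return "match"          # Level 2 alone is convincing
    if s2 <= reject_t:
        return "non-match"      # Level 2 alone is convincingly different
    # Ambiguous zone: consult the Level 3 matcher.
    s3 = level3_matcher(query, reference)
    return "match" if s3 >= 0.5 else "non-match"

# Dummy matchers standing in for the real Level 2 / Level 3 stages.
decision = hierarchical_match("q", "r",
                              minutia_matcher=lambda q, r: 0.5,   # ambiguous
                              level3_matcher=lambda q, r: 0.7)    # decisive
print(decision)
```

The design intent is efficiency: the cheap Level 2 stage resolves most comparisons, and the costlier Level 3 stage runs only on the ambiguous remainder.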
The proposed approach addresses the various challenges in fingerprint matching
in following way:
• The plastic nature of finger skin results in non-linear distortion in successive
acquisitions of the same finger. To deal with this problem, the matching of all feature classes (Level 2 and Level 3) is done within a local region. The use of localized matching minimizes the effects of non-linear distortion, because distortion does not significantly alter the fingerprint pattern locally [35].
• Due to high displacement and rotation which are introduced during fingerprint
acquisition, different impressions of the same finger differ from each other. Most
of the existing minutia matching algorithms first align fingerprint images and
then find minutia correspondences. But in the case of poor quality fingerprints, global registration (alignment) parameters do not exist, and as a result it is not possible to obtain a correct alignment. The errors introduced during the registration step can propagate to the subsequent steps. The proposed approach does not use alignment at any stage and relies on rotationally invariant structures (in the case of minutia and pore matching) and features (in the case of ridge contours).
• The high displacement/rotation introduced during fingerprint acquisition re-
sults in “small overlap” between query and reference fingerprints. Also, the noise introduced by several factors, such as poor skin conditions or an unclean scanner surface, leaves only a very small portion of the fingerprint that can actually be used for comparison. The use of Level 3 features in conjunction with Level 2 features takes care of such situations. Studies [40, 41] have shown that, given a sufficiently high resolution fingerprint, the use of Level 3 features from fingerprint fragments yields the same quantity of discriminative information as can be extracted when Level 2 features are considered and the entire image is used.
• Finally, due to noise in the fingerprints, the feature extraction techniques often introduce errors such as missing or spurious minutiae/pores. The matching technique should handle such cases. The proposed approach uses an elastic string matching algorithm for pore matching and minutiae matching, which can accommodate perturbations of minutiae/pores from their true locations and can tolerate spurious and missing minutiae/pores. Also, for matching ridge contours we have used statistical features based on Zernike moments and Local Binary Patterns (LBP), which are known for their tolerance to noise and gray-scale invariance, respectively.
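The gray-scale invariance attributed to LBP above comes from thresholding each pixel's neighbors at the center value, so any constant shift of the gray levels leaves the code unchanged. A minimal sketch of the basic 3×3 operator, with invented pixel values for illustration:

```python
def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbors at the center value and
    read the results as an 8-bit code (here, clockwise from top-left)."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:               # neighbor at least as bright as the center
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))           # 241

# Gray-scale invariance: brightening every pixel by the same amount
# does not change the code, since only relative order matters.
brighter = [[v + 50 for v in row] for row in patch]
print(lbp_code(brighter) == lbp_code(patch))  # True
```

In texture-matching use, such codes are accumulated over a region into a histogram, and histograms from two images are compared; the code-level invariance is what makes the descriptor robust to illumination and pressure differences.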
1.4 Thesis Outline
The outline of the thesis is as follows: Chapter 2 discusses the literature on fin-
gerprint based verification systems. Chapter 3 presents the proposed hierarchical
fingerprint matching system. Chapter 4 presents the results and evaluations of the
proposed approach. Finally, in Chapter 5 we conclude and outline the future work.
Chapter 2
BACKGROUND AND
LITERATURE REVIEW
A fingerprint recognition system may operate either in verification mode or iden-
tification mode. In verification mode, the system verifies an individual’s identity by
comparing the input fingerprint with the individual’s own template(s) stored in the
database. In the identification mode, the system identifies an individual by searching the templates of all the users in the database for a match. Fingerprint classification and indexing techniques are used to speed up the search in fingerprint based
identification systems. The fingerprint feature extraction and matching algorithms
are usually quite similar for both fingerprint verification and identification problems.
In this thesis the focus is on fingerprint based verification systems.
The various stages in a fingerprint verification system are shown in Figure 2.1.
The first stage is the data acquisition stage in which a fingerprint image is obtained
from an individual by using a sensor. The next stage is the pre-processing stage
in which the input fingerprint is processed with some standard image processing
algorithms for noise removal and smoothing. The pre-processed fingerprint image
is then enhanced using specifically designed enhancement algorithms which exploit
the periodic and directional nature of the ridges. The enhanced image is then used to
extract salient features in the feature extraction stage. Finally, the extracted features
are used for matching in the matching stage. This chapter discusses the current state
of the art feature extraction techniques and gives a literature survey on the various
fingerprint matching algorithms.
Figure 2.1: Architecture of a Fingerprint Verification System.
2.1 Data Acquisition
Traditionally, in law enforcement applications fingerprints were acquired off-line
by transferring the inked impression on a paper. Nowadays, the automated fingerprint
verification systems use live-scan digital images of fingerprints acquired from a fin-
gerprint sensor. These sensors are based on optical, capacitance, ultrasonic, thermal
and other imaging technologies.
The optical sensors are the most popular and are fairly inexpensive. These sensors
are based on FTIR (Frustrated Total Internal Reflection) technique. When a finger
touches the sensor surface (which actually is a side of a glass prism), one side of the
prism is illuminated through a diffused light. While the fingerprint valleys that do
not touch the sensor surface reflect the light, ridges that touch the surface absorb the
light. The sensor exploits this differential property of light reflection to differentiate
the ridges (which appear dark) from the valleys.
The capacitive sensors utilize the principle associated with capacitance to form
the fingerprint images. These sensors consist of a two-dimensional array of metal
electrodes. Each metal electrode acts as one plate of a parallel-plate capacitor and
the contacting finger acts as the other plate. When a finger is pressed on the sensor
surface, it creates varying capacitance values which depend inversely on the distance
between the sensing plate and the finger surface. The ridges thus have increased
capacitance compared to valleys. This variation is then converted into an image of
the fingerprint.
The ultrasound-based sensors are the most accurate of the fingerprint sensing
technologies. They use ultrasonic waves and measure distance based on the acoustic
impedance of the finger, the plate and air. The thermal sensors are made up
of pyro-electric materials, which generate a temporary electrical potential when they
are heated or cooled. When a finger is swiped across the sensor, there is differential
conduction of heat between the ridges and valleys (as skin is a better conductor than
the air in the valleys) which is measured by the sensor.
One of the most essential characteristics of a digital fingerprint image is its res-
olution, which indicates the number of dots or pixels per inch (ppi). The minimum
resolution that allows the feature extraction algorithms to locate minutiae is 250 to
300 ppi. The FBI’s standard for resolution of fingerprint sensors is 500 ppi. A large
number of automated fingerprint verification systems accept 500 ppi fingerprints. Fig-
ure 2.2 shows fingerprints captured at different resolutions. At 500 ppi, pores are
visible, but in order to extract pores reliably a significantly higher image resolution
(1000 ppi) is needed.
Figure 2.2: Fingerprints at different resolutions: a) 380 ppi, b) 500 ppi, c) 1000 ppi ([29]).
2.2 Image Preprocessing
The preprocessing steps try to compensate for the variations in lighting, contrast
and other inconsistencies which are introduced by the sensor during the acquisition
process. The following preprocessing steps are generally used:
• Gaussian Blur : A Gaussian blur is a convolution operation which is applied
to the original fingerprint image to reduce image noise introduced by the sensor.
The Gaussian kernel used for blurring is given by:
$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \qquad (2.1)$$

where σ is the standard deviation of the Gaussian distribution, x is the distance from the
origin along the horizontal axis, and y is the distance from the origin along the
vertical axis.
• Sliding-window Contrast Adjustment : Sliding-window contrast adjustment is
used to compensate for any lighting inconsistencies within a fingerprint and
to obtain contrast consistency among different fingerprints. An m × m window
is centered on each pixel of the Gaussian blurred image. The corresponding
output pixel value is then calculated by finding the minimum and maximum
intensity values within the window and by using:
$$O(i, j) = \left(I(i, j) - \min_{i,j}\right)\left(\frac{255}{\max_{i,j} - \min_{i,j}}\right) \qquad (2.2)$$

where I(i, j) and O(i, j) are the input and output pixel intensity values respec-
tively, and min_{i,j} and max_{i,j} are the minimum and maximum pixel intensity
values within the m × m window centered at pixel (i, j).
• Histogram-based Intensity Level Adjustment : This final step is used to further
enhance the ridges and valleys. The image’s histogram is examined to determine
two intensity values: a lower threshold L and an upper threshold U. The
intensity value I(i, j) of each pixel is processed using these thresholds to obtain
the output pixel intensity O(i, j), which is given by:

$$O(i, j) = \begin{cases} 255 & \text{if } I(i, j) > U \\ 0 & \text{if } I(i, j) < L \\ \left(I(i, j) - L\right)\left(\dfrac{255}{U - L}\right) & \text{otherwise} \end{cases} \qquad (2.3)$$
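As a concrete sketch, the three preprocessing steps above can be chained as follows. This is a minimal NumPy/SciPy illustration; the smoothing width sigma, the window size m and the thresholds L and U are illustrative values, not parameters taken from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter, maximum_filter

def preprocess(img, sigma=1.0, m=15, L=40, U=215):
    """Gaussian blur, sliding-window contrast adjustment, level adjustment."""
    img = img.astype(np.float64)
    # 1) Gaussian blur to suppress sensor noise (Equation 2.1).
    blurred = gaussian_filter(img, sigma=sigma)
    # 2) Sliding-window contrast adjustment (Equation 2.2): stretch each
    #    pixel by the min/max inside the m x m window centred on it.
    lo = minimum_filter(blurred, size=m)
    hi = maximum_filter(blurred, size=m)
    stretched = (blurred - lo) * (255.0 / np.maximum(hi - lo, 1e-6))
    # 3) Histogram-based intensity level adjustment (Equation 2.3).
    out = (stretched - L) * (255.0 / (U - L))
    out[stretched > U] = 255
    out[stretched < L] = 0
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice L and U would be picked from the image histogram rather than fixed.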
2.3 Fingerprint Image Enhancement
The performance of fingerprint feature extraction and matching algorithms relies
heavily on the quality of the input fingerprint images. Due to various factors such as
skin conditions (e.g., wet, dry, cuts, scars and bruises), non-uniform finger pressure,
noise introduced by the sensor and inherently poor-quality fingers (e.g., manual workers,
elderly people), a significant percentage of fingerprint images is of poor quality. In
fact, a single fingerprint image may contain regions of good, medium, and poor quality.
Thus an enhancement algorithm which can improve the quality of the ridge structure is
necessary.
A survey on different enhancement techniques can be found in [35]. The most
widely used technique for fingerprint image enhancement is based on contextual filters.
The parameters of these filters change according to the local context, i.e., the local
ridge frequency and orientation. Such a filter can capture local information and
use it to efficiently remove undesired noise (i.e., fill small ridge breaks, fill
intra-ridge holes, and separate parallel touching ridges) while preserving the true ridge
and valley structure. The filters themselves may be defined in the spatial or in the Fourier
domain.
This section describes a popular enhancement algorithm by Sharat et al. [16],
which uses contextual filtering in Fourier domain. The algorithm consists of two
stages. The first stage consists of STFT (Short Time Fourier Transform) analysis
and the second stage performs the contextual filtering.
The STFT analysis stage yields the ridge orientation image (O(x, y)), the ridge
frequency image (F (x, y)) and the region mask (R(x, y)). The orientation image
represents the instantaneous ridge orientation at every point in the fingerprint image.
The frequency image indicates the average inter-ridge distance within a local
region. The region mask indicates the foreground regions of the image where ridge
structures are present. During STFT analysis, the image is divided into overlapping
windows. This is done based on the assumption that the image has a consistent
orientation and frequency within a small local region. This assumption however is
not true for regions with singularities such as the core and delta points. The Fourier spectrum
within each window is analyzed and a probabilistic approximation of the dominant
ridge orientation and frequency within that window is obtained. An energy map
E(x, y) is also obtained during STFT analysis where each value indicates the energy
content of the corresponding block. This energy map can be used as a region mask
to distinguish between the foreground and background regions (the background and
noisy regions are characterized by very little energy content in the Fourier spectrum).
The orientation image (O(x, y)) is then used to compute the Coherence Image which
contains coherence values of the various regions within the fingerprint. The coherence
value is low in regions with singular points (core, delta, etc.). The coherence
image is then used to adapt the angular bandwidth of the directional filter.
The resulting contextual information from STFT analysis is then used to filter
each overlapping window (B) in the Fourier domain. The filter used is separable in
radial (Equation 2.5) and angular (Equation 2.6) domains and is given by:

$$H(r, \phi) = H_r(r)\, H_\phi(\phi) \qquad (2.4)$$

$$H_r(r) = \sqrt{\frac{(r\, r_{BW})^{2n}}{(r\, r_{BW})^{2n} + (r^2 - r_c^2)^{2n}}} \qquad (2.5)$$

$$H_\phi(\phi) = \begin{cases} \cos^2\!\left(\dfrac{\pi}{2} \cdot \dfrac{\phi - \phi_c}{\phi_{BW}}\right) & \text{if } |\phi - \phi_c| \le \phi_{BW} \\ 0 & \text{otherwise} \end{cases} \qquad (2.6)$$
where rBW and φBW are the radial bandwidth and angular bandwidth, respectively;
rc and φc are the mean frequency and mean orientation, respectively. The enhanced
block B′ is obtained from the Fourier transform F of the block B by:

$$F' = F \times H(r, \phi) \qquad (2.7)$$

$$B' = \mathrm{FFT}^{-1}(F') \qquad (2.8)$$
Finally, the results of each analysis window are tiled to obtain the enhanced image.
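A minimal sketch of the contextual filtering of a single analysis window (Equations 2.4-2.8), assuming the local ridge frequency r_c and orientation φ_c have already been estimated by the STFT stage; the bandwidths and the filter order n are illustrative values.

```python
import numpy as np

def contextual_filter(block, r_c, phi_c, r_bw=0.15, phi_bw=np.pi / 6, n=2):
    """Filter one analysis window in the Fourier domain."""
    h, w = block.shape
    F = np.fft.fftshift(np.fft.fft2(block))
    # Normalized frequency coordinates of every Fourier-domain sample.
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
    r = np.hypot(fx, fy)
    phi = np.arctan2(fy, fx)
    # Radial band-pass centred on the local ridge frequency (Equation 2.5).
    Hr = np.sqrt((r * r_bw) ** (2 * n) /
                 ((r * r_bw) ** (2 * n) + (r ** 2 - r_c ** 2) ** (2 * n)))
    # Angular filter centred on the local ridge orientation (Equation 2.6).
    dphi = np.angle(np.exp(1j * (phi - phi_c)))      # wrap to [-pi, pi]
    Hphi = np.where(np.abs(dphi) < phi_bw,
                    np.cos(np.pi / 2 * dphi / phi_bw) ** 2, 0.0)
    Fp = F * Hr * Hphi                                # Equation 2.7
    return np.real(np.fft.ifft2(np.fft.ifftshift(Fp)))  # Equation 2.8
```

The full enhancer would run this over overlapping windows and tile the results.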
2.4 Feature Extraction
Chapter 1 introduced the fingerprint features at different levels. The feature
extraction techniques for minutiae points (bifurcations and endings), pores and ridge
contours are described in this section.
2.4.1 Minutiae Extraction
Most of the existing minutia extraction techniques trace the fingerprint skeleton
to find the different types of minutia points. Ridge bifurcations and ridge endings
are the most prominent minutiae types and have been extensively used by matching
algorithms. The flowchart of a typical minutia extraction algorithm is depicted in
Figure 2.3. It consists of the following stages:
• Orientation Estimation : A fingerprint image is an oriented texture pattern, and
the ridge orientation at a pixel (x, y) is the angle that the ridges within a small
neighborhood centered at (x, y) form with the horizontal axis. The fingerprint
image is first divided into a number of non-overlapping blocks. An analysis of
grayscale gradients within a block is done to estimate the representative ridge
orientation within that block. Different approaches such as optimization [61],
averaging [39] or voting [47] could be used to determine the block orientation.
• Segmentation : During this stage the portion of the fingerprint image depicting
the finger (foreground) is segmented. This step is useful in order to avoid
the extraction of spurious features from background and noisy regions within
a fingerprint. The foreground and background can be differentiated by the
presence of an oriented pattern in the foreground and of an isotropic pattern
(i.e., one without a dominant orientation) in the background. The simplest
approaches segment the foreground by global or local thresholding. Since the
fingerprint background is not always uniform and lighter than foreground (due
to the presence of noise such as dust, grease on the sensor surface), a simple
approach based on local or global thresholding is not effective and more robust
segmentation techniques [45, 10, 60] are required.
• Ridge Detection : An important property of the ridges in a fingerprint image is
that the gray level values on ridges attain their local maxima along a direction
normal to the local ridge orientation [30]. The ridge pixels are identified based
on this property. The resulting ridge map often contains false ridges in the
form of holes and speckles (due to the presence of noise, breaks, smudges, etc.,
in the input image). The ridge map is cleaned (false ridges are removed) using
a connected component algorithm [57]. Finally, the ridges are thinned using a
standard thinning algorithm [50].
• Minutiae Detection : The minutia points are then extracted from the thinned
ridge map by examining the 8-neighborhood of each ridge skeleton pixel. Let
(x, y) denote a pixel on a thinned ridge, and N_0, N_1, ..., N_7 denote its eight
neighbors. A pixel (x, y) is a ridge ending if $\sum_{i=0}^{7} N_i = 1$ and a ridge
bifurcation if $\sum_{i=0}^{7} N_i > 2$. The minutiae points thus obtained may
contain many spurious minutiae. This may occur due to the presence of noise,
ridge breaks (even after enhancement) and image processing artifacts.
• Postprocessing : A number of heuristics are used to remove spurious minutiae.
False minutia points are generally obtained at the borders, where ridges end
abruptly. These false minutiae at the borders can be recognized by analyzing
the number of foreground pixels in a region around the minutia point. If the
number of foreground pixels is relatively small, then the minutia point can be removed.
Too many minutiae in a small region, very close ridge endings (with orientations
anti-parallel to each other) and two very closely located bifurcations sharing a
common short ridge may indicate spurious minutiae and could be discarded
[27].
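The minutiae detection rule above can be sketched directly, assuming a thinned binary skeleton held in a NumPy array with ridge pixels equal to 1:

```python
import numpy as np

def detect_minutiae(skel):
    """Classify skeleton pixels by the sum of their 8 neighbours."""
    endings, bifurcations = [], []
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if skel[y, x] != 1:
                continue
            # Sum the 3x3 window and subtract the centre pixel itself.
            n = skel[y - 1:y + 2, x - 1:x + 2].sum() - 1
            if n == 1:
                endings.append((x, y))          # ridge ending
            elif n > 2:
                bifurcations.append((x, y))     # ridge bifurcation
    return endings, bifurcations
```

A production extractor would follow this with the postprocessing heuristics described above.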
2.4.2 Pores Extraction
Pores are extremely fine details which are lost after the enhancement stage. There-
fore, for pore extraction the enhancement stage is omitted and pores are directly ex-
tracted from the preprocessed image. The pore extraction algorithms can be broadly
classified into two classes: the first class extracts pores by tracing fingerprint
skeletons, while the second class extracts pores directly from the gray scale image.
Stosz et al. [40] and Kryszczuk et al. [67] have proposed skeletonization-based
approaches for pore extraction. The skeletonization-based approach is reliable
for extracting pores in good quality (and high resolution) images. As the image res-
olution decreases or the skin condition is not favorable, the method does not give
reliable results. In [29], Jain et al. have proposed a technique that extracts pores
directly from the gray scale image. The majority of existing approaches for pore extraction con-
sider only location information of pores for matching. The pores are distributed over
Figure 2.3: A typical minutiae extraction process ([30])
the ridges, and using their orientation detail can provide additional information for matching.
A recent study [23] by the International Biometric Group has proposed a new ap-
proach for pore extraction which utilizes orientation information of pores along with
the location information. The approach proposed in [23] is presented in this section.
The first step in pore extraction process is the estimation of ridge orientation.
This data is utilized later in the representation of pores. The local ridge orientation
is determined by the least squares estimation method [25]. The fingerprint image is
first divided into a number of non-overlapping blocks of dimension w × w. For each
pixel (i, j) in the preprocessed image, gradients in both the horizontal and vertical
directions (Equation 2.9 and Equation 2.10) are calculated using a Sobel operator.
For each w × w block, these gradient values are accumulated (Equation 2.11 and
Equation 2.12) and finally the arctangent is used to obtain the representative
orientation (Equation 2.13):
$$\delta_x(i, j) = \left(I(i-1, j-1) + 2I(i-1, j) + I(i-1, j+1)\right) - \left(I(i+1, j-1) + 2I(i+1, j) + I(i+1, j+1)\right) \qquad (2.9)$$

$$\delta_y(i, j) = \left(I(i-1, j-1) + 2I(i, j-1) + I(i+1, j-1)\right) - \left(I(i-1, j+1) + 2I(i, j+1) + I(i+1, j+1)\right) \qquad (2.10)$$

$$V_x(i, j) = \sum_{u=i-w/2}^{i+w/2} \sum_{v=j-w/2}^{j+w/2} 2\,\delta_x(u, v)\,\delta_y(u, v) \qquad (2.11)$$

$$V_y(i, j) = \sum_{u=i-w/2}^{i+w/2} \sum_{v=j-w/2}^{j+w/2} \left(\delta_x^2(u, v) - \delta_y^2(u, v)\right) \qquad (2.12)$$

$$\theta(i, j) = \frac{1}{2}\arctan\left(\frac{V_y(i, j)}{V_x(i, j)}\right) \qquad (2.13)$$
Next, the pores are extracted from the preprocessed image. The pores are first
enhanced by convolving the preprocessed fingerprint image with a Mexican hat ker-
nel. The Mexican hat mother wavelet is defined in Equation 2.14 and the daughter
wavelets are defined in Equation 2.15:

$$\Psi(x, y) = \left(1 - x^2 - y^2\right) e^{-\frac{x^2 + y^2}{2}} \qquad (2.14)$$

$$\Psi_{a,b}(\lambda) = \frac{1}{\sqrt{a}}\, \Psi\!\left(\frac{\lambda - b}{a}\right) \qquad (2.15)$$

where a is the factor by which the mother wavelet is scaled (dilated), b is the factor
by which the mother wavelet is translated (shifted) and λ specifies the center of the
daughter wavelet.
After the convolution step, the fingerprint image is thresholded with a single global
threshold T. After this step the pores have an intensity of 255. The pores are then
extracted by a blob detector which locates groups of connected pixels (pores) with
an intensity of 255 and with size within a pre-determined range. Each pore thus
extracted is represented by the coordinates of the central pixel and an orientation θ,
which is the ridge orientation at that particular location.
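The pore extraction steps above can be sketched as follows. The kernel radius, threshold T and blob size range are illustrative values, and `scipy.ndimage.label` stands in for the blob detector:

```python
import numpy as np
from scipy.ndimage import convolve, label

def extract_pores(img, T=200, min_size=2, max_size=40):
    """Mexican hat filtering, thresholding and blob detection."""
    # Sampled Mexican hat kernel (Equation 2.14) on a 7x7 grid.
    r = 3
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(np.float64)
    psi = (1 - x ** 2 - y ** 2) * np.exp(-(x ** 2 + y ** 2) / 2)
    response = convolve(img.astype(np.float64), psi)
    # Threshold: candidate pore pixels are set to 255.
    binary = np.where(response > T, 255, 0)
    # Blob detection: keep connected components within the size range.
    labels, n = label(binary)
    pores = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if min_size <= ys.size <= max_size:
            pores.append((int(xs.mean()), int(ys.mean())))  # centre pixel
    return pores
```

Each extracted centre would then be paired with the ridge orientation θ at that location, as described above.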
2.4.3 Ridge Contour Extraction
Ridge contours can be extracted by using classical edge detection algorithms.
However, these algorithms are sensitive to creases, pores, etc., and as a result the
detected edge contours are often very noisy (especially in low resolution images).
Jain et al. [29] have proposed an algorithm that uses a simple filter to detect ridge
contours. The algorithm can be described as follows:
First, the image is enhanced by using Gabor filters [24]. Next, a wavelet transform
is applied to the original fingerprint image. The enhanced ridge contours are obtained
by a linear subtraction of the wavelet response from the Gabor-enhanced image.
The resulting image is then binarized using a heuristically determined threshold.
Finally, the binarized image is convolved with a filter H (Equation 2.16) and ridge
contours are extracted.
$$r(x, y) = \sum_{n,m} I_b(n, m)\, H(x - n, y - m) \qquad (2.16)$$

where I_b is the binarized image and the filter H = (0, 1, 0; 1, 0, 1; 0, 1, 0) counts the
number of neighboring edge points for each pixel. A pixel (x, y) is classified as a
ridge contour pixel if r(x, y) = 1 or 2.
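The classification rule of Equation 2.16 can be sketched as a single convolution with the filter H:

```python
import numpy as np
from scipy.ndimage import convolve

# Cross-shaped filter H: counts the 4-neighbour edge points of each pixel.
H = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

def ridge_contour_pixels(binary_edges):
    """Keep edge pixels with exactly one or two 4-neighbour edge points."""
    r = convolve(binary_edges.astype(int), H, mode="constant")
    return (binary_edges == 1) & ((r == 1) | (r == 2))
```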
2.5 Fingerprint Matching
A variety of automatic fingerprint matching algorithms have been proposed in
the pattern recognition literature. This section provides a survey of existing ap-
proaches to automatic fingerprint matching. Most of these algorithms have no dif-
ficulty in matching good quality fingerprint images, but matching low quality and
partial fingerprints remains a challenging problem. The main approaches proposed
in the literature for fingerprint matching can be roughly classified into three cate-
gories: correlation-based matching, minutiae-based matching and ridge
feature-based matching. In correlation-based fingerprint matching, the template and
query fingerprint images are spatially correlated to estimate the degree of similar-
ity between them. The minutia-based techniques essentially consist of finding the
alignment between the query and template minutia points. The ridge feature-based
techniques rely on various features of fingerprint ridge pattern such as ridge shape,
texture information, local orientation and frequency. A very good literature survey
on fingerprint recognition can be found in [35].
2.5.1 Correlation-based Techniques
Let T and Q denote the template and query fingerprint images, respectively. The
sum of squared differences (SSD) between the intensities of corresponding pixels in
the two images can be used as a measure of the diversity between them:

$$SSD(T, Q) = \|T - Q\|^2 = (T - Q)^t (T - Q) = \|T\|^2 + \|Q\|^2 - 2T^t Q \qquad (2.17)$$

In the above equation, the superscript "t" denotes the transpose operation on a vector.
The cross-correlation (CC) between T and Q is given by CC(T, Q) = T^t Q. If the
terms ||T||^2 and ||Q||^2 in the above equation are constants, the diversity SSD(T, Q)
becomes inversely related to the cross-correlation, which constitutes the third term
(−2 · CC(T, Q)) in Equation 2.17. The cross-correlation then becomes a measure of
similarity. Two fingerprint impressions from the same finger differ due to many factors,
as discussed in the beginning of this chapter. Thus, additional steps are required
before the similarity between T and Q can be calculated.
Let Q^{(∆x,∆y,θ)} represent a transformation of the query image, where ∆x and ∆y
are translation parameters along the x and y directions, and θ is the rotation parameter.
Then a similarity measure can be given by:

$$S(T, Q) = \max_{\Delta x, \Delta y, \theta} CC\!\left(T, Q^{(\Delta x, \Delta y, \theta)}\right) \qquad (2.18)$$
S(T,Q) only considers the rotation and translation factors and thus is not a very
accurate measure of similarity. This method fails if the images are highly distorted.
The distortion is more pronounced in global fingerprint patterns, thus considering
local regions can minimize distortion to some extent. Some approaches to localized
correlation-based matching are presented in [11, 51]. Also, variable finger pressure and
skin conditions cause image brightness, contrast and ridge thickness to vary across dif-
ferent acquisitions of the same finger. The use of more sophisticated correlation measures
such as the normalized cross-correlation or the zero-mean normalized cross-correlation
may compensate for contrast and brightness variations and applying a proper combi-
nation of enhancement, binarization, and thinning steps (performed on both T and
Q) may limit the ridge thickness problem [35]. The computation of maximum corre-
lation (S(T, Q)) in the spatial domain is very expensive. The computational complexity
can be reduced and translation invariance can be achieved by calculating the correlation
in the Fourier domain [18]:

$$T \otimes Q = F^{-1}\!\left(F^{*}(T) \times F(Q)\right), \qquad (2.19)$$
where ⊗ denotes the correlation in spatial domain, ∗ denotes the complex conjugate,
× denotes the point-by-point multiplication of two vectors, F (.) and F−1(.) denote
the Fourier transform and inverse Fourier transform, respectively. Rotation has to be
dealt with separately in this technique. The Fourier-Mellin transform [68] can be used to
achieve both rotation and translation invariance.
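Equation 2.19 can be sketched directly with NumPy's FFT; the peak of the resulting circular correlation gives the estimated translation between the two images:

```python
import numpy as np

def correlate_fft(T_img, Q_img):
    """Circular cross-correlation via the Fourier domain (Equation 2.19)."""
    F_T = np.fft.fft2(T_img)
    F_Q = np.fft.fft2(Q_img)
    # T (x) Q = F^-1( conj(F(T)) . F(Q) )
    corr = np.real(np.fft.ifft2(np.conj(F_T) * F_Q))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, (dx, dy)   # peak location gives the estimated shift
```

Rotation would still have to be handled separately, e.g. by sampling a set of candidate rotations or via the Fourier-Mellin transform.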
2.5.2 Minutiae-based Methods
Let T and Q be the feature vectors, representing minutiae points, from the tem-
plate and query fingerprint, respectively. Each element of these feature vectors is
a minutia point, which may be described by different attributes such as location,
orientation, type, quality of the neighbourhood region, etc. The most common rep-
resentation of a minutia is the triplet (x, y, θ), where (x, y) is the minutia location
and θ is the minutia angle. Let the number of minutiae in T and Q be m and n,
respectively:

$$T = \{m_1, m_2, \ldots, m_m\}, \quad m_i = (x_i, y_i, \theta_i), \; i = 1 \ldots m$$

$$Q = \{m'_1, m'_2, \ldots, m'_n\}, \quad m'_j = (x'_j, y'_j, \theta'_j), \; j = 1 \ldots n$$
A minutia m_i in T and a minutia m'_j in Q are considered matching if the following
conditions are satisfied:

$$sd(m'_j, m_i) = \sqrt{(x'_j - x_i)^2 + (y'_j - y_i)^2} \le r_0 \qquad (2.20)$$

$$dd(m'_j, m_i) = \min\left(|\theta'_j - \theta_i|,\; 360^\circ - |\theta'_j - \theta_i|\right) \le \theta_0 \qquad (2.21)$$
Here, r0 and θ0 are the parameters of the tolerance window which is required to com-
pensate for errors in feature extraction and distortions caused due to skin plasticity.
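The two tolerance tests (Equations 2.20 and 2.21) can be sketched as follows, representing each minutia as an (x, y, θ) triple with θ in degrees; the values of r0 and θ0 are illustrative:

```python
import math

def sd(m1, m2):
    """Spatial distance between two minutiae (Equation 2.20)."""
    return math.hypot(m1[0] - m2[0], m1[1] - m2[1])

def dd(m1, m2):
    """Direction difference, wrapped to [0, 180] degrees (Equation 2.21)."""
    d = abs(m1[2] - m2[2])
    return min(d, 360 - d)

def minutiae_match(mq, mt, r0=10.0, theta0=20.0):
    """True if both tests fall inside the tolerance window."""
    return sd(mq, mt) <= r0 and dd(mq, mt) <= theta0
```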
The number of “matching” minutia points can be maximized, if a proper align-
ment (registration parameters) between query and template fingerprints can be found.
Correctly aligning two fingerprints requires finding a complex geometrical transfor-
mation function (map()), that maps the two minutia sets (Q and T ). The desirable
characteristics of the map() function are: it should be tolerant to distortion, and it should re-
cover the rotation, translation and scale¹ parameters correctly. Let match() be a function
defined as:
$$match(m''_j, m_i) = \begin{cases} 1 & \text{if } m''_j \text{ and } m_i \text{ satisfy (2.20) and (2.21)} \\ 0 & \text{otherwise} \end{cases} \qquad (2.22)$$

where map(m'_j) = m''_j. Thus, the minutia matching problem can be formulated as
[35]:

$$\max_{P} \sum_{i=1}^{m} match\!\left(map(m'_{P(i)}), m_i\right) \qquad (2.23)$$
where P () is the minutia correspondence function that determines the pairing between
the minutia points in Q and T . A minutiae-based matching algorithm thus attempts
to find the appropriate mapping function (map()) and correspondence function (P ()),
so that the number of matching minutia points, between T and Q, can be maximized.
¹Scale is considered when the fingerprints are captured at different resolutions.
If either P() or map() is known, then solving Equation 2.23 becomes a trivial
task. But, in practice, neither map() nor P() is known a priori. The errors in minu-
tia extraction (spurious minutiae, missing minutiae and measurement errors) further
make the minutia matching problem very hard.
A number of minutia matching algorithms have been proposed in the literature. These
algorithms can be broadly classified as:
• Global Matching : This approach tries to simultaneously align all the minu-
tia points. The alignment could be either implicit or explicit. The implicit
alignment technique tries to find the point correspondences, and in the process the
optimal alignment is obtained. The explicit alignment technique, on the other
hand explicitly aligns the minutia sets first, before finding the point correspon-
dences.
• Local Matching : This approach tries to match local minutia structures; local
structures are characterized by attributes that are invariant with respect to
global transformations.
The choice between local and global matching is a trade-off among simplicity, low
computational complexity and high distortion tolerance (local matching), and high
distinctiveness (global matching). Local matching techniques are more robust to
non-linear distortion and partial overlaps when compared to global approaches.
However, matching local minutiae structures relaxes the global spatial relationships,
which are considered to be highly distinctive, and therefore reduces the amount of
information available for discriminating finger-
prints.
2.5.2.1 Global Matching
The minutia matching problem has been addressed as a point pattern matching
problem in the literature, and a number of approaches to point pattern matching exist.
Some of these approaches are discussed here.
The relaxation approach [58] iteratively adjusts the confidence level of each cor-
responding pair based on its consistency with other pairs until a certain criterion is
satisfied. Let p_{ij} be the probability that point i corresponds to point j, and c(i, j; h, k)
be a compatibility measure between the pairings (i, j) and (h, k). At each iteration r,
p_{ij} is incremented if it increases the compatibility of other points and is decremented
otherwise:

$$p^{(r+1)}_{ij} = \frac{1}{m} \sum_{h=1}^{m} \left[ \max_{k=1..n} c(i, j; h, k)\, p^{(r)}_{hk} \right], \quad i = 1..m, \; j = 1..n \qquad (2.24)$$
At convergence, each point i is associated with the point j such that $p_{ij} = \max_s p_{is}$.
The iterative nature of this approach makes it slow and unsuitable for automatic
matching.
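One iteration of the relaxation update (Equation 2.24) can be sketched as follows, assuming the compatibility measure c(i, j; h, k) has been precomputed as a four-dimensional array:

```python
import numpy as np

def relaxation_step(p, c):
    """One relaxation iteration: p is m x n, c is m x n x m x n."""
    m, n = p.shape
    p_next = np.empty_like(p)
    for i in range(m):
        for j in range(n):
            # Average, over all template points h, of the best compatible
            # support max_k c(i, j; h, k) * p[h, k].
            p_next[i, j] = np.mean([np.max(c[i, j, h] * p[h])
                                    for h in range(m)])
    return p_next
```

The full algorithm would iterate this until the probabilities stabilise, which is precisely what makes the approach slow.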
Hough transform-based approaches [66] are quite popular for minutia matching.
This approach converts point pattern matching into a problem of detecting the highest
peak in the Hough space of transformation parameters. The Hough space of transfor-
mation parameters consists of all the possible values of the parameters under the
assumed distortion model. Ratha et al. [59] have used a transformation space consisting
of quadruples (∆x, ∆y, θ, s), representing translation along the x direction, translation
along the y direction, rotation and scale, respectively. To reduce the computational
complexity, the search space is discretized into a finite set of values:
$$\Delta x \in \{\Delta x_1, \Delta x_2, \ldots, \Delta x_a\}, \quad \Delta y \in \{\Delta y_1, \Delta y_2, \ldots, \Delta y_b\}$$

$$\theta \in \{\theta_1, \theta_2, \ldots, \theta_c\}, \quad s \in \{s_1, s_2, \ldots, s_d\}$$
A four dimensional accumulator A of size (a × b × c × d) is maintained. Each cell
A(i, j, k, l) represents the likelihood of the transformation parameters (∆xi,∆yj, θk, sl).
The following algorithm is used to accumulate evidence:
for each m_i in T
    for each m'_j in Q
        for each θ ∈ {θ_1, θ_2, ..., θ_c}
            if dd(θ'_j + θ, θ_i) ≤ θ_0
                for each s ∈ {s_1, s_2, ..., s_d}

$$\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} x_i \\ y_i \end{pmatrix} - s \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x'_j \\ y'_j \end{pmatrix}$$

                    let (∆x⁺, ∆y⁺) be the quantization of (∆x, ∆y) to the nearest bins
                    A[∆x⁺, ∆y⁺, θ, s] = A[∆x⁺, ∆y⁺, θ, s] + 1
At the end of the accumulation process, the best alignment transformation is
obtained as:

$$(\Delta x^{*}, \Delta y^{*}, \theta^{*}, s^{*}) = \arg\max_{(\Delta x^{+}, \Delta y^{+}, \theta, s)} A\!\left[\Delta x^{+}, \Delta y^{+}, \theta, s\right]$$

A hierarchical Hough transform-based algorithm [38] may be used to reduce the size
of the accumulator array by using a multi-resolution approach.
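The accumulation loop above can be sketched as follows, restricted for brevity to translation and rotation (s = 1) and a coarse set of rotation bins; all bin sizes and parameter ranges are illustrative:

```python
import math
from collections import defaultdict

def hough_align(T_min, Q_min, thetas=(-10, 0, 10), theta0=10, bin_size=5):
    """Vote for (dx, dy, theta) bins; minutiae are (x, y, angle) triples."""
    A = defaultdict(int)
    for (xi, yi, ti) in T_min:
        for (xj, yj, tj) in Q_min:
            for th in thetas:
                d = abs((tj + th) - ti) % 360
                if min(d, 360 - d) > theta0:        # direction test (dd)
                    continue
                r = math.radians(th)
                # Translation implied by this pairing under rotation th.
                dx = xi - (math.cos(r) * xj - math.sin(r) * yj)
                dy = yi - (math.sin(r) * xj + math.cos(r) * yj)
                key = (round(dx / bin_size), round(dy / bin_size), th)
                A[key] += 1
    (bx, by, th), votes = max(A.items(), key=lambda kv: kv[1])
    return bx * bin_size, by * bin_size, th, votes
```

The winning bin gives a coarse global alignment, which a full matcher would refine before the final minutia pairing.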
It can be shown that if a perfect pre-alignment could be achieved, the minutiae
matching problem could be reduced to a simple pairing problem. Most minutia-based
matchers first transform (register) the input and template fingerprint features into
a common frame of reference. The parameters of alignment are typically estimated
either by (i) superimposing singular points in the fingerprints, e.g., core and delta
points; (ii) correlating the orientation images; or (iii) correlating ridge features
(e.g., length and orientation of ridges). Jain et al. [34] have proposed an alignment-
based minutia matching approach that uses ridge features for alignment and an adap-
tive elastic string matching algorithm [19] for matching the pre-aligned minutia sets.
In their approach, each minutia is associated with the ridge on which it resides. The
ridge is represented as a planar curve with its origin coincident with the minutia
location and its x-axis along the minutia direction. The global transformation
parameters (∆x, ∆y and θ) are calculated from a pair of matching ridges. Finding
such a pair involves iteratively matching pairs of ridges until a pair is found whose
matching degree exceeds a certain threshold. The pair found is then used for aligning
the query and template minutia sets. The minutiae corresponding to the matching
ridges are used as reference minutia points during the matching stage. Each minutia
in Q and T is converted to a polar coordinate system with respect to the reference
minutia in its set. Both Q and T are transformed into strings of minutia points,
ordered in increasing order of radial angle. The strings are matched using a dynamic
programming technique to find their edit distance. The use of an adaptive tolerance
window for matching minutia points tolerates local distortion, and matching minutiae
representations by their edit distance tolerates missing and spurious minutiae.
However, an exhaustive reordering and matching is required to deal with the problem
of "order flips", which may be caused by measurement errors introduced during
feature extraction.
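The dynamic-programming edit distance used in this string matching step can be sketched generically; the unit insertion/deletion costs and the pluggable match cost are illustrative simplifications of the adaptive scheme described above:

```python
def edit_distance(s, t, match_cost):
    """Edit distance between two minutia strings s and t.

    Insertions and deletions (spurious/missing minutiae) cost 1;
    match_cost(a, b) scores substituting one minutia for another.
    """
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i          # deletions (missing minutiae)
    for j in range(n + 1):
        D[0][j] = j          # insertions (spurious minutiae)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j] + 1,
                          D[i][j - 1] + 1,
                          D[i - 1][j - 1] + match_cost(s[i - 1], t[j - 1]))
    return D[m][n]
```

In the actual matcher, each string element would be a minutia in polar coordinates and match_cost would apply the adaptive tolerance window.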
The alignment-based approaches are not very accurate, as reliably extracting
singular points from low quality images is difficult. They suffer when the image is of
poor quality or is highly distorted. In such cases the global registration parameters
do not exist, and errors introduced during the alignment step propagate to the
subsequent steps.
2.5.2.2 Local Matching
The local matching approaches rely on evidence accumulated from matching local
structures in the neighbourhood of minutia points. These local structures are generally
characterized by properties that are invariant with respect to global transformations,
and therefore are suitable for matching without any a priori alignment. However,
local neighborhoods do not sufficiently capture the global structural relationships,
so it is possible that local minutia structures in two non-matching fingerprints match,
making false accepts common. Figure 2.4 shows an example of two falsely matched
local structures. They are similar as local structures but conflict with each other in
the global context (they lie at very different locations with respect to the core and
delta points).
To deal with this problem, the local matching algorithms use an additional consol-
idation step to check whether the locally matched minutia points also match at the
global level. A large number of local matching techniques have been proposed in the
literature. An overview of some of these algorithms is given here.
Jiang and Yau [37] use a local structure formed by a central minutia and its two
nearest-neighbour minutiae; the feature vector v_i associated with a minutia m_i
whose nearest neighbors are m_j and m_k is:

$$v_i = [d_{ij}, d_{ik}, \theta_{ij}, \theta_{ik}, \phi_{ij}, \phi_{ik}, n_{ij}, n_{ik}, t_i, t_j, t_k]$$

where d_{ab} is the distance between minutiae m_a and m_b, φ_{ab} is the direction difference
Figure 2.4: An example of two false matched local structures ([36])
between the angle θ_a of m_a and the direction of the edge connecting m_a to m_b, n_{ab} is
the ridge count between m_a and m_b, and t_a is the minutia type of m_a. For all minutia
pairs (m_i, m'_j), with m_i ∈ T and m'_j ∈ Q, a weighted distance between their vectors
v_{m_i} and v_{m'_j} is calculated. An additional consolidation step is used to enforce the result of
local matching. The best matching pair (with the least distance) is used for registering
the two minutia sets. The feature vectors of the remaining aligned pairs are matched,
and a final score is computed by taking into account contributions from both the first
stage and the consolidation stage.
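The nearest-neighbour local structure can be sketched as follows; ridge counts n_ij and minutia types t are omitted for brevity, and angles are in degrees:

```python
import math

def local_feature(minutiae, i):
    """Feature vector of minutia i relative to its two nearest neighbours.

    Each minutia is an (x, y, theta) triple; returns [d, theta, phi] for
    each of the two neighbours, concatenated.
    """
    xi, yi, ti = minutiae[i]
    # All other minutiae, sorted by distance from minutia i.
    others = sorted((j for j in range(len(minutiae)) if j != i),
                    key=lambda j: math.hypot(minutiae[j][0] - xi,
                                             minutiae[j][1] - yi))
    feat = []
    for j in others[:2]:
        xj, yj, tj = minutiae[j]
        d = math.hypot(xj - xi, yj - yi)             # d_ij
        theta = (tj - ti) % 360                      # relative orientation
        phi = (math.degrees(math.atan2(yj - yi, xj - xi)) - ti) % 360
        feat += [d, theta, phi]
    return feat
```

Because every quantity is relative to the central minutia, the vector is invariant to global translation and rotation, which is exactly what makes it usable without prior alignment.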
Ratha et al. [26] have proposed a “star” representation for capturing local minutia
structure in the form of a minutia adjacency graph (MAG). The star associated with
a minutia m_i is the graph G_i = (V_i, E_i) consisting of the set of vertices V_i containing
all minutia points m_j whose distance d_{ij} from m_i is less than a predefined threshold,
and the set of edges E_i containing edges from m_i to all vertices in V_i. Each edge
e_{ij} ∈ E_i is labelled with a 5-tuple (m_i, m_j, d_{ij}, rc_{ij}, φ_{ij}), where rc_{ij} is the ridge count between
m_i and m_j, and φ_{ij} is the angle subtended by the edge with the x-axis. During local
matching, each "star" in Q is matched with each star in T; the matching is performed
by traversing the corresponding graphs clockwise in increasing order of radial angle.
At the end of the local matching stage, a set TOP of best matched star pairs is
returned, and these pairs are further checked for consistency during the consolidation
stage. A pair of stars is consistent if their spatial relationships (distance and ridge
count) with a minimum fraction of the remaining stars in TOP are consistent.
Sharath et al. [15] define a local structure representation called K-plet that is
invariant under translation and rotation. The K-plet consists of a central minutia
mi and K other minutiae m1, m2, ..., mK chosen from its local neighbourhood. Each
neighbouring minutia mj is described by a 3-tuple (φij, θij, rij), where rij represents
the Euclidean distance between mi and mj, θij is the relative orientation of mj with
respect to the central minutia mi, and φij represents the direction of the edge con-
necting mj and mi, with respect to the orientation of the central minutia. A given finger-
print is represented as a directed graph G(V, E). V is the set of vertices containing
all minutia points and E is the set of directed edges containing all possible (mi, mj)
pairs, where mi and mj are neighbouring minutia points. Each vertex v is denoted
by a 4-tuple (xv, yv, θv, tv) representing the co-ordinates, orientation and type of the
minutia. Each directed edge (u, v) is labelled with the corresponding K-plet co-ordinates
(ruv, φuv, θuv). Let G(V, E) and H(V′, E′) be the graphical representations of T and Q,
respectively. The matching algorithm is based on matching a local neighbourhood
(K-plets) and propagating the match to the K-plets of all the minutiae in the neigh-
bourhood successively. The minutia points in a K-plet are arranged in increasing order
of radial distance rij, and a dynamic programming based string alignment algorithm
[19] is used for local matching. The consolidation of the local matches is done by a
Coupled Breadth First Search (CBFS) algorithm that propagates the local matches
simultaneously in both the fingerprints. The CBFS algorithm requires two vertices
vi ∈ T and vj ∈ Q as the source nodes from which to begin the traversal. The CBFS
algorithm is executed for all possible correspondence pairs (vi, vj). The pair with
maximum number of matches is used to compute the final matching score.
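The K-plet representation described above can be sketched as follows (representing minutiae as (x, y, θ) tuples is an assumed layout; graph construction and CBFS matching are omitted):

```python
import math

def build_kplet(center, minutiae, k):
    """Construct the K-plet of Sharath et al. for a central minutia:
    the K neighbours sorted by increasing radial distance, each encoded
    as (r, phi, theta) relative to the centre.  Minutiae are (x, y, theta)
    tuples; this sketches the representation only, not the matcher."""
    cx, cy, cth = center
    entries = []
    for (x, y, th) in minutiae:
        if (x, y, th) == center:
            continue                       # skip the central minutia itself
        r = math.hypot(x - cx, y - cy)     # Euclidean distance r_ij
        phi = (math.atan2(y - cy, x - cx) - cth) % (2 * math.pi)  # edge direction
        theta = (th - cth) % (2 * math.pi)                        # relative orientation
        entries.append((r, phi, theta))
    entries.sort(key=lambda e: e[0])       # increasing radial distance
    return entries[:k]
```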
2.5.3 Ridge Feature-based Matching Techniques
There are various reasons which induce designers of fingerprint recognition tech-
niques to look for features beyond minutiae. Minutiae based matching algorithms do
not perform well on small fingerprints that contain very few minutiae. Reliable
extraction of minutiae from poor quality fingerprints is very difficult. Furthermore,
registering minutiae representations is very challenging. The most commonly used al-
ternative features are i) global and local texture information, and ii) Level 3 features.
2.5.3.1 Texture Feature based Techniques
Global and local texture information are important alternatives to minutiae. Tex-
tures are defined by spatial repetition of basic elements, and are characterized by
properties such as scale, orientation, frequency, symmetry, isotropy, and so on. Fin-
gerprint ridge lines are mainly described by smooth ridge orientation and frequency,
except at singular regions. These singular regions are discontinuities in a basically
regular pattern and include the loop(s) and the delta(s) at a coarse resolution and the
minutiae points at a high resolution. Global texture analysis fuses contributions from
different characteristic regions into a global measurement and, as a result, most of
the available spatial information is lost. Local texture analysis has proved to be more
effective than global feature analysis; although most of the local texture information
is carried by the orientation and frequency images, most of the proposed approaches
extract texture by using a specialized bank of filters.
Jain et al. [31] have proposed a filter bank based local texture analysis technique.
The fingerprint image is tessellated into 80 sectors (5 concentric bands, each with 16
sectors) around the core point, and a feature vector representing texture information
is obtained from the tessellation. The feature vector consists of an ordered enumeration of the features
extracted from the local information contained in each sector. Thus the feature
elements capture the local texture information and the ordered enumeration of the
tessellation captures the global relationship among the local contributions. A bank of
8 Gabor filters (8 orientations at a single scale of 1/10) is used to obtain texture information
from each sector. Thus each fingerprint is represented by a 640 (80 × 8) fixed-size
feature vector called the FingerCode. The generic element Vij of the vector (i = 1..80
is the cell index, j = 1..8 is the filter index) denotes the energy revealed by the filter
j in the cell i, and is computed as the average absolute deviation (AAD) from the
mean of the responses of the filter j over all the pixels of the cell i. Matching two
fingerprints is then performed by computing the Euclidean distance between their
Finger-Codes. The disadvantage of this approach is that it uses a core points as a
reference. When the core points cannot be reliably detected, or it is close to the border
of the fingerprint area, the FingerCode of the input fingerprint may be incomplete or
incompatible with the template. Also, fingerCodes are found to be not as distinctive
as minutiae. However, they carry complementary information which can be combined
with minutiae to yield higher accuracy. In [28] and [64] a variant of the above method
has been proposed where the two fingerprints to be matched are first aligned using
minutiae and the tessellation is performed over a square mesh grid.
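The FingerCode cell feature and the Euclidean comparison described above can be sketched as follows (the Gabor filtering itself is omitted; `responses` stands for the filtered pixel values of one cell):

```python
def aad_feature(responses):
    """Average absolute deviation from the mean: the per-cell FingerCode
    element V_ij, computed over a filtered cell's pixel responses."""
    n = len(responses)
    mean = sum(responses) / n
    return sum(abs(r - mean) for r in responses) / n

def fingercode_distance(code_a, code_b):
    """Euclidean distance between two fixed-length FingerCodes."""
    return sum((a - b) ** 2 for a, b in zip(code_a, code_b)) ** 0.5
```

A full FingerCode would concatenate `aad_feature` values over all 80 cells and 8 filters; the two-element vectors below merely exercise the metric.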
Nanni et al. [52] have proposed a hybrid approach where fingerprints are pre-
aligned using minutiae, and then texture features are extracted as invariant local
binary patterns (LBP) from the fingerprint image convolved with Gabor filters. The
fingerprint image is first divided into several sub-windows; a segmentation step
is then performed in order to discard the background; finally, each sub-window is convolved
with a bank of Gabor filters and the LBP features are calculated. The comparison between the
unknown fingerprint and the stored template is performed using the average Euclidean
distance calculated over corresponding pairs of foreground sub-windows.
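A sketch of the basic 8-neighbour LBP operator underlying this approach (the rotation-invariant variant used in [52] adds a normalization step that is omitted here):

```python
def lbp8(patch, cx, cy):
    """Classic 8-neighbour local binary pattern at (cx, cy) in a 2-D list
    'patch': each neighbour whose value is >= the centre contributes one
    bit, traversed in a fixed circular order."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = patch[cy][cx]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if patch[cy + dy][cx + dx] >= c:
            code |= 1 << bit
    return code
```

A histogram of these codes over a sub-window yields the texture descriptor that is then compared across impressions.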
2.5.3.2 Level 3 Features-based Techniques
The use of Level 3 features in an automated fingerprint identification system has
been studied by only a few researchers. Existing literature is exclusively focused
on the extraction of pores in order to establish the viability of using pores in high
resolution fingerprint images to assist in fingerprint identification.
Stosz and Alyea [67] have presented a technique which utilizes pore information,
extracted from high-resolution fingerprint images, to augment matching processes
that use only Level 2 data. The images were taken by a custom built optical/electronic
sensor in a controlled environment. An enrollment process is required in which singu-
lar points (used as origin), minutia points, and characteristic regions containing pores
are selected manually and stored in template. Matching is initiated by determining
the origin of the input fingerprint which is defined as the position of maximum correla-
tion between the origin stored in the template and the input binary fingerprint. Next,
the remaining binary image segments from the template are compared to the input
fingerprint. Two pores are considered matched if they lie within a certain bounding
box. Finally, experimental results based on a database of 258 fingerprints from 137
individuals showed that by combining minutia and pore information, a lower FRR
of 6.96 percent (compared to 31 percent for minutiae alone) can be achieved at a
FAR of 0.04 percent [67]. Later, Roddy and Stosz [63] conducted a statistical anal-
ysis of pores and predicted the performance of a pore-based automated fingerprint
system. The study demonstrated the efficacy of using pores, in addition to minutiae,
for improving the recognition performance.
Kryszczuk et al. [40, 41] have conducted research to determine whether Level 3
features could compensate for a decreased number of Level 2 features when attempting
to match with fingerprint fragments, given sufficiently high-resolution data. The ex-
periments are done on fingerprints acquired from a high resolution (2000 dpi) custom-
built scanner. The size of the database used was small (12 genuine and 6 impostor
comparisons). Pores and ridge structure are used in conjunction with Level 2 features for
comparison of fragmentary fingerprint images and comparison has been done based
on a geometric distance criterion. The authors have presented two hypotheses: i) as
the size of the fingerprint fragment decreases, or the number of minutiae decreases,
the usefulness of Level 3 data increases, and ii) given a sufficiently high resolution,
the same quantity of discriminative information could be extracted from fingerprint
fragments by using Level 3 data as can be obtained from Level 2 data extracted
from the complete fingerprint image.
Jain et al. [29, 33] have proposed a hierarchical matching system that utilizes
features from all three levels, extracted from 1000 ppi fingerprint images. The Level 3 features
(pores and ridge contours) are locally matched, in windows associated with matched
minutiae points, using the Iterative Closest Point (ICP) [13] algorithm. There is
a relative reduction of 20 percent in the EER when Level 3 features are employed
in combination with Level 1 and 2 features. This significant performance gain was
consistently observed across various quality fingerprint images. The iterative nature
2.5. FINGERPRINT MATCHING 41
of ICP algorithm makes the approach unsuitable for automated matching. Also, the
Level 3 matching (ICP algorithm) relies on an initial alignment based on Level 2
features and in case of distorted fingerprints such an alignment is not reliable.
In [70] Vatsa et al. have presented a score-level fusion technique that combines
Level 2 and Level 3 match scores to provide high accuracy. The match scores obtained
from Level 2 and Level 3 classifiers are first augmented with a quality score that
is quantitatively determined by applying redundant discrete wavelet transform to a
fingerprint image. The quality augmented match scores are then fused using Dezert-
Smarandache theory. The proposed fusion method provided better results than other
existing fusion techniques. The proposed algorithm is able to perform well even in
the presence of imprecise, inconsistent, and incomplete fingerprint information.
Chapter 3
PROPOSED HIERARCHICAL
MATCHER
Fingerprint Matching is the most important stage in fingerprint recognition pro-
cess. A fingerprint matching algorithm compares two sets of features originating from
two fingerprints and determines whether or not they represent the same finger. Fin-
gerprint matching is an extremely difficult problem, mainly due to the large intra-class
variations that exist in different impressions of the same finger. The main factors
responsible for these intra-class variations are [35]: i) Displacement and rotation:
The same finger may be placed at different locations and at different orientation on
the sensor during different acquisitions resulting in a (global) translation and rotation
of the fingerprint area, ii) Partial overlap: Finger displacement and rotation often
cause part of the fingerprint area to fall outside the sensor’s “field of view”, resulting
in a smaller overlap between different impressions of the same finger, iii) Non-linear
distortion: The act of sensing maps the three-dimensional shape of a finger onto the
two-dimensional surface of the sensor. This mapping results in a non-linear distortion
in successive acquisitions of the same finger due to finger skin plasticity, iv) Pressure
and skin condition: The ridge structure of a finger would be accurately captured if
ridges of the part of the finger being imaged were in uniform contact with sensor sur-
face. However, finger pressure, dryness of the skin, skin disease, sweat, dirt, grease,
and humidity in the air all confound the situation, resulting in a non-uniform contact.
As a result, the acquired fingerprint images are very noisy, and v) Feature extraction
errors: The feature extraction algorithms are imperfect and often introduce measure-
ment errors. For example, in low-quality fingerprint images, the minutiae extraction
process may introduce a large number of spurious minutiae and may not be able to
detect all the true minutiae.
Chapter 2 has provided an overview of different fingerprint matching approaches.
The correlation based method requires the complete image to be stored (large tem-
plate sizes). The texture based methods are less accurate than minutiae based match-
ers since most regions in the fingerprint carry low textural content. Both types of
methods require accurate alignment of the fingerprints. The minutia based techniques
on the other hand are more accurate, and they closely resemble the manual ap-
proach used by forensic experts. Studies [31], [70, 67, 29, 33] have shown that
higher accuracy can be achieved by combining additional information, in the form of
texture or Level 3 features, with a minutia based matcher. The minutia based ap-
proaches, like the other approaches, cannot give a high confidence match when the
images are of poor quality or when there is very small overlap, i.e., when very few minutia
points are available for matching. Level 3 features are known to carry discriminative
information and forensic examiners often make use of Level 3 features when insuffi-
cient minutia points are present. A new hierarchical matching system which utilizes
additional information in the form of Level 3 features (pores and ridge contours) has
been proposed in this chapter. Figure (3.1) illustrates the architectural design of the
proposed hierarchical system. A similar architecture (hierarchical) was first proposed
by Jain et al. [29].
Figure 3.1: The proposed hierarchical matcher.
3.1 Preprocessing and Enhancement
To compensate for the variations in lighting, contrast and other inconsistencies
three preprocessing steps are used: Gaussian blur, sliding window contrast adjust-
ment, and histogram based intensity level correction. Gaussian blurring is used to
remove any noise introduced by the sensor. The lighting inconsistencies are ad-
justed by using sliding-window contrast adjustment on the Gaussian blurred image.
To further enhance the ridges and valleys, a final intensity correction is made by us-
ing histogram-based intensity level adjustment. These techniques are described in
Chapter 2.
The preprocessed image is enhanced using a popular fingerprint enhancement
technique by Sharath et al. [16] which uses contextual filtering in the Fourier domain. The
technique is discussed in Chapter 2. The enhanced fingerprint image is not suitable
for extracting pores as the pore information is lost during enhancement. Thus for
pore extraction the preprocessed image is used.
3.2 Feature Extraction
The minutiae points are extracted from the enhanced image by using NIST's¹
minutia extraction software (NFIS). The method first generates image quality
maps by identifying regions with high curvature, low flow and low contrast. A
binary representation of the fingerprint is then constructed. Minutiae are generated
by comparing each pixel neighborhood with a family of minutiae templates. Finally,
spurious minutiae are removed by using a set of heuristic rules. The NFIS also counts
the neighbor ridges and assigns each minutia point a quality (in the range 0 to 100)
determined from image quality maps. The minutia representation generated by NFIS
consists of the location information, orientation, minutia type (bifurcation or ending)
and minutia quality. The proposed minutia matcher does not differentiate between
different minutiae types. This is because the minutia types are difficult to distinguish
when the applied finger pressure during acquisition varies. Figure 3.2 shows one such
example where the same minutiae extracted from two different impressions appears
¹NIST stands for National Institute of Standards and Technology, a federal technology agency that develops and promotes measurement, standards, and technology.
as a bifurcation in one image and as a ridge ending in the other.
Figure 3.2: Effect of differential finger pressure([36]).
The Level 3 features are extracted only when the minutiae based matching fails
to match the query fingerprint image with the template image. The ridge contours
are extracted from the enhanced fingerprint image by using the approach proposed
by Jain et al. [29]. Ridge contours extracted using this approach are shown
in Figure 3.3. For extracting pores, the technique proposed in [23] by the International
Biometric Group is used. Figure 3.4 shows the pores extracted using this approach.
The pore information thus extracted contains the location information and orientation
information of pores. The pore extraction and ridge contour extraction techniques
are described in Chapter 2.
3.3 Hierarchical Matching
Dealing with non-linear distortion is the major problem faced by fingerprint
matching algorithms. The effect of non-linear distortion is more pronounced when
global structures with global parameters are considered for matching. Localized
matching can minimize the effect of non-linear distortion. Throughout our approach
we have used localized matching in order to tolerate the effects of non-linear distor-
tion. Also, we have used rotation invariant structures (in the case of pores and minutiae)
Figure 3.3: Ridge contour extraction process in an image scanned with a Cross Match Verifier 300 scanner at 500 dpi: (a) sample 500 dpi fingerprint, (b) enhanced fingerprint, (c) traced ridge contours.
Figure 3.4: Pores extracted from an image scanned with a Cross Match Verifier 300 scanner at 500 dpi: (a) sample 500 dpi fingerprint, (b) extracted pores.
and features (in the case of ridge contours) for matching; thus no alignment of any kind
is required at any stage.
Let T and Q be the template and query images which are to be matched. At first
the minutia-based matcher matches T and Q and returns the matching minutia pairs,
which is a measure of degree of similarity between the two representations (finger-
prints). Let S2 be the number of matched minutia pairs returned by the minutia-based
matcher. If S2 > τ2 (τ2 is a threshold whose value is chosen as 12), the two finger-
prints are considered “matched” and the matching terminates. The threshold τ2 is
set to be 12 because typically, a match consisting of 12 minutia points (the 12-point
guidelines) is considered as sufficient evidence for making positive identification in
many courts of law [56]. However, if S2 ≤ τ2 then the matching continues: the Level 3
based matcher further matches the two fingerprints and returns the final match/non-
match decision. The matched minutiae at Level 2 are further examined and the Level
3 features in their neighborhood are matched.
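The two-stage decision flow above can be sketched as follows (the function names and the callable standing in for the full Level 3 stage are illustrative):

```python
TAU2 = 12  # threshold tau_2 from the 12-point guideline

def hierarchical_decision(n_level2_matches, level3_agrees):
    """Decision flow of the proposed matcher: accept when the number of
    matched minutia pairs exceeds tau_2, otherwise defer the final
    match/non-match decision to the Level 3 matcher.  'level3_agrees'
    is a callable standing in for the full Level 3 stage."""
    if n_level2_matches > TAU2:
        return "match"                     # Level 2 evidence is sufficient
    return "match" if level3_agrees() else "non-match"
```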
3.3.1 Minutia-based Matcher
We propose a local structure based minutia matcher. The local structure (“star”)
is similar to the K-plet used by Sharath et al. [15]. It consists of a central minutia
point mi and its k neighboring minutia points (mi1, mi2, ..., mik) within a local neigh-
borhood. The local neighborhood is defined by a region within distance dmax from
the central minutia mi. The k neighbors are chosen amongst all the minutiae within
distance dmax, drawn from all four quadrants. Choosing neighboring minutiae from
all four quadrants captures the local structure information accurately, and during exper-
iments we found it to be more effective than choosing the k nearest neighbors of the
central minutia. Figure 3.5 demonstrates a star with k = 6 neighbors. There exists a
trade-off while choosing the size of the local neighborhood (dmax): dmax should be small
enough to tolerate distortion; however, keeping dmax too small might not accumulate
enough evidence and hence may lead to false matches. k represents the number of
neighboring minutia points which form the “star” and its value can vary within
a predefined range bounded by kmin and kmax i.e. kmin ≤ k ≤ kmax. If for a given
minutia k < kmin then that minutia is not considered for matching. On the other
hand, if k > kmax then kmax neighbors are considered. The matching is performed in
two stages:
Figure 3.5: The schematic description of a “star” with k = 6 neighbors. The neighborsare chosen from all the four quadrants
1. Local matching : the “stars” corresponding to central minutia points are matched.
At the end of this stage pairs of matching minutia points are returned.
2. Consolidation step : During this step the matched minutia pairs are further
checked for consistency at global level.
3.3.1.1 Local Structure Matching
During this stage the “stars” in Q are matched with the “stars” in T . If two
stars are found to be matching then the minutiae representing their centers are also
considered to be matching. A star associated with a central minutia m and with k
neighboring minutiae is represented by a k element feature vector vm. Each element
vmi ∈ vm is a 3-tuple (ri, θi, φi) representing the radial parameters of the ith neighboring
minutia point with respect to the central minutia m.
vm = ((r1, θ1, φ1), ..., (rk, θk, φk))
where ri is the distance of the ith neighboring minutia from m, θi is the orientation
of the ith neighboring minutia with respect to m's orientation, and φi is the direction
difference between m's orientation and the direction of the edge connecting m and
mi. Throughout this chapter we shall refer to a “star” by its feature vector. The
“star” structure is invariant to rotation and translation, and thus no alignment
is required. The local structure matching problem thus reduces to matching two
ordered sequences
vm = (vm1, ..., vmk) = ((r1, θ1, φ1), ..., (rk, θk, φk)) and
vm′ = (vm′1, ..., vm′k′) = ((r′1, θ′1, φ′1), ..., (r′k′, θ′k′, φ′k′))
where vm represents a star in T and vm′ represents a star in Q.
The edit distance is used as a similarity measure between two “stars”. Edit
Distance between two sequences indicates a measure of similarity between them. The
edit distance is capable of capturing the impression variations (deletion of genuine
minutiae, insertion of spurious minutiae, and perturbation of minutia) associated with
different impressions of the same finger. The algorithm considers a query minutia (in
the query “star”) and a template minutia (in the template “star”) to be a mismatch
if their attributes (radius, radial angle and minutia direction with respect to the
central minutia) are not within a tolerance window. A penalty is associated with
every match and mismatch (Equation 3.2). The penalty associated with a match is
proportional to the disparity in the values of the attributes related to the two matched
minutiae. On the other hand, a large predefined penalty is associated with every mis-
match. The sum of all penalties associated with matches and mismatches between two
sequences defines the edit distance. Among the several possible transformations of
the query sequence into the template sequence, the string matching algorithm chooses
the transformation associated with the minimum cost (edit distance), computed via
dynamic programming.
An unmatched “star” vm′ ∈ Q is matched with all the unmatched “stars” in
T, and the “star” which returns the minimum edit distance is chosen as the best
match. Given two “stars”, vm = (vm1, ..., vmk) ∈ T and vm′ = (vm′1, ..., vm′k′) ∈ Q, of
lengths k and k′, respectively, the edit distance D(k, k′) is recursively defined by the
following equations:

D(i, j) = 0                                    if i = 0 or j = 0

D(i, j) = min{ D(i−1, j) + Ω,
               D(i, j−1) + Ω,
               D(i−1, j−1) + w(i, j) }         for 0 < i ≤ k and 0 < j ≤ k′        (3.1)
w(i, j) = α∆r + β∆θ + γ∆φ    if ∆r < tr, ∆θ < tθ and ∆φ < tφ
w(i, j) = Ω                  otherwise                                             (3.2)

where ∆r = |ri − r′j| / min(ri, r′j), ∆θ = |θi − θ′j| and ∆φ = |φi − φ′j|; α, β and γ are
weights associated with each component; tr, tθ and tφ are the parameters
of the tolerance window; and Ω is a pre-specified penalty for a mismatch.
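Equations 3.1 and 3.2 translate directly into a dynamic program; the following sketch uses illustrative placeholder values for the weights, tolerance window and penalty Ω, not the thesis's tuned parameters:

```python
OMEGA = 1000.0                       # mismatch penalty Omega (illustrative)
ALPHA, BETA, GAMMA = 1.0, 1.0, 1.0   # component weights (illustrative)
TR, TTH, TPHI = 0.3, 0.5, 0.5        # tolerance window (illustrative)

def w(a, b):
    """Per-pair cost of Equation 3.2 for neighbours a = (r, th, phi)
    and b = (r', th', phi')."""
    dr = abs(a[0] - b[0]) / min(a[0], b[0])
    dth = abs(a[1] - b[1])
    dphi = abs(a[2] - b[2])
    if dr < TR and dth < TTH and dphi < TPHI:
        return ALPHA * dr + BETA * dth + GAMMA * dphi
    return OMEGA

def edit_distance(vm, vm2):
    """Edit distance of Equation 3.1 between two 'star' feature vectors
    (ordered sequences of (r, th, phi) tuples)."""
    k, k2 = len(vm), len(vm2)
    # Border cells are 0, as in Equation 3.1.
    D = [[0.0] * (k2 + 1) for _ in range(k + 1)]
    for i in range(1, k + 1):
        for j in range(1, k2 + 1):
            D[i][j] = min(D[i - 1][j] + OMEGA,
                          D[i][j - 1] + OMEGA,
                          D[i - 1][j - 1] + w(vm[i - 1], vm2[j - 1]))
    return D[k][k2]
```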
Once the best match for vm′ is found, say vn = (vn1, ..., vnl), a dynamic
programming based sequence alignment technique ([53]) is used to find the number
of matching edges between vm′ and vn. Sequence alignment tries to find the best
alignment between an entire sequence S1 and another entire sequence S2. Consider
two sequences S1 = [GAATTCAGTTA] and S2 = [GGATCGA]. Gaps (_) are
inserted into S1 and S2 as required so that the aligned strings have the same length. A penalty
is associated with each gap and mismatch, and a reward is associated with each match
in the final alignment. Based on these rewards and penalties a final alignment score
is calculated. The optimal global alignment is the one having the maximum score.

S1′ = GAATTCAGTTA
S2′ = GGA_TC_G__A
There are two phases in the alignment process:
• Forward phase: The optimal alignment cost for every pair of prefixes is computed
(by using a recurrence relation) and stored in C, an (l + 1) × (k′ + 1) dimensional
matrix. C[i, j] denotes the best cost of aligning the two subsequences vm′[1...j]
(1 ≤ j ≤ k′) and vn[1...i] (1 ≤ i ≤ l). The base conditions are:

C[0, 0] = 0,  C[0, j] = C[0, j − 1] + gap score,  C[i, 0] = C[i − 1, 0] + gap score

The recurrence relation is:
C[i, j] = max{ C[i − 1, j − 1] + score(vm′[j], vn[i]),
               C[i − 1, j] + gap score,
               C[i, j − 1] + gap score }                                           (3.3)
score(vm′[j], vn[i]) = match score       if ∆r < tr, ∆θ < tθ and ∆φ < tφ
score(vm′[j], vn[i]) = nonmatch score    otherwise

where ∆r = |ri − r′j| / min(ri, r′j), ∆θ = |θi − θ′j| and ∆φ = |φi − φ′j|;
match score is the reward when a match is found; nonmatch score and gap score
are the penalties when a mismatch is found and when a gap is introduced, respectively;
tr, tθ and tφ are the parameters of the bounding box.
In other words, if we have an optimal alignment up to vm′[1..j] and vn[1..i] then
there are only three possibilities for what can happen next: i) vm′[j] and vn[i]
match, ii) a gap is introduced in vn, or iii) a gap is introduced in vm′. The use
of gaps takes care of the cases where there are spurious and missing minutiae.
• Traceback: This step constructs the optimal alignment by tracing back through the
matrix C any path from C[l, k′] to C[0, 0] that maximizes the cost.
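The forward phase and traceback can be sketched on character sequences, reproducing the classic example above (the actual matcher scores (r, θ, φ) tuples through the bounding box test instead of character equality; the scoring values here are illustrative):

```python
def needleman_wunsch(s1, s2, match=1, mismatch=0, gap=0):
    """Global alignment with a forward DP phase and a traceback phase.
    The default scores (match reward 1, mismatch/gap 0) reproduce the
    classic GAATTCAGTTA / GGATCGA example."""
    n, m = len(s1), len(s2)
    C = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):                 # base conditions
        C[i][0] = i * gap
    for j in range(m + 1):
        C[0][j] = j * gap
    for i in range(1, n + 1):              # forward phase
        for j in range(1, m + 1):
            sc = match if s1[i - 1] == s2[j - 1] else mismatch
            C[i][j] = max(C[i - 1][j - 1] + sc,
                          C[i - 1][j] + gap,
                          C[i][j - 1] + gap)
    # traceback from C[n][m] to C[0][0]
    a1, a2, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and C[i][j] == C[i - 1][j - 1] + \
                (match if s1[i - 1] == s2[j - 1] else mismatch):
            a1.append(s1[i - 1]); a2.append(s2[j - 1]); i -= 1; j -= 1
        elif i > 0 and C[i][j] == C[i - 1][j] + gap:
            a1.append(s1[i - 1]); a2.append('_'); i -= 1   # gap in s2
        else:
            a1.append('_'); a2.append(s2[j - 1]); j -= 1   # gap in s1
    return C[n][m], ''.join(reversed(a1)), ''.join(reversed(a2))
```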
Let Qmap be the hash map containing the mapping of each minutia m ∈ Q to its
corresponding star vm, and Tmap be the hash map containing the mapping of each
minutia m′ ∈ T to its corresponding star vm′:

Qmap = (m1, vm1), ..., (mx, vmx)
Tmap = (m′1, vm′1), ..., (m′y, vm′y)
where x, y denote the number of minutia points which are considered for matching
in Q, T respectively. The matching technique is summarized in algorithm 1.
Algorithm 1 Minutia-based Matching
Require: Qmap and Tmap
 1: let M be the hash map containing matched minutia pairs (initially empty)
 2: for all (m, vm) ∈ Qmap do
 3:   for all (m′, vm′) ∈ Tmap do
 4:     if m′ is already matched then
 5:       continue
 6:     end if
 7:     calculate the edit distance between vm and vm′
 8:   end for
 9:   choose the vm′ having the minimum edit distance with vm
10:   find the number of matching edges (nummatch) between vm and vm′ by using
      the sequence alignment technique
11:   if nummatch > minmatch then    ▷ minmatch is a predefined constant
12:     insert (m, m′) in M
13:   end if
14: end for
15: return M
3.3.1.2 Consolidation Step
During this step the matched minutiae pairs are validated at a global level. The
matched minutiae should be consistent in terms of global characteristics such as orien-
tation; all valid matches must have similar orientation differences if they come
from the same finger. We approximate the orientation difference between the fingerprints
by plotting a histogram with bins representing orientation differences, each bin 36
degrees wide. The bin with the maximum height and its immediate neighboring bins are
considered. The matched minutiae pairs in other bins are removed from the matched
list M. Let N2 be the number of matched pairs in M; the Level 2 match score S2
is calculated using the commonly used formula:

S2 = 2 N2 / (NQ + NT)                                                              (3.4)
where NQ and NT are the number of minutia points in Q and T, respectively. If
N2 ≥ 12, the two fingerprints (Q and T) are considered to be from the same finger;
otherwise the matching continues to Level 3.
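The consolidation step and Equation 3.4 can be sketched as follows (the assumption that each minutia carries its orientation, in degrees, as the last tuple element is an artifact of the sketch):

```python
def consolidate(matched_pairs, n_q, n_t, bin_deg=36):
    """Consolidation step: keep only those matched minutia pairs whose
    orientation difference falls in the tallest histogram bin or its
    immediate neighbours, then compute the Level 2 score of Eq. 3.4.
    Each pair is ((..., theta_q), (..., theta_t)), angles in degrees."""
    n_bins = 360 // bin_deg
    bins = [[] for _ in range(n_bins)]
    for mq, mt in matched_pairs:
        diff = (mq[-1] - mt[-1]) % 360     # orientation difference
        bins[int(diff // bin_deg)].append((mq, mt))
    best = max(range(n_bins), key=lambda b: len(bins[b]))
    keep = []
    for b in (best - 1, best, best + 1):   # tallest bin and its neighbours
        keep.extend(bins[b % n_bins])
    s2 = 2 * len(keep) / (n_q + n_t)       # Equation 3.4
    return keep, s2
```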
3.3.2 Level 3 features based Matcher
The matched minutiae pairs at Level 2 are further examined and Level 3 features
in their local neighborhood are compared. Thus, for a given pair of matched minutiae,
we compare the Level 3 features in their neighborhood, and the minutia correspondence
is recomputed based on the agreement of Level 3 features. The ridge contours and
pores are separately compared in the neighborhood. While comparing the Level 3
features, we need to consider the fact that the detected features will vary in different
impressions of the same finger, due to the degradation of image quality (because
of noise, skin deformation etc.). Thus, a decision-level OR rule is used to verify
the minutiae correspondences, i.e., if either the pores or the ridge contours agree then the
minutia correspondence is verified.
Localized matching is done to tolerate the effect of non-linear distortion. In [40]
it has been observed that given a sufficiently high resolution, the same quantity of
discriminative information could be extracted from fingerprint fragments by using
Level 3 data as can be obtained from Level 2 data extracted from the complete
fingerprint image. Thus, by using a localized window, sufficient information can be
obtained.
3.3.2.1 Pore Matching
Each pore is represented by a 3-tuple (x, y, θ), where x, y denote its location and
θ is the direction of the ridge at the location where the pore lies. [70], [67] and [29] have
used only the location information of pores for matching. Like minutiae, pores
maintain the same relative orientation to other pores within a fingerprint across
different impressions. In a fingerprint, pores are distributed over the ridges, and associating
direction information with pores provides additional information for matching.
We propose a novel approach for matching pores. Pores within a circular region
around the matched minutia points, obtained from minutia-based matcher, are used
for matching. Given a minutia m as the origin, the pores in a circular region Cm
around it are arranged in order of radial distance first; pores with the same radial
distance are then ordered by increasing radial angle (the angle between the direction
of the line segment joining a pore to the central minutia and the central minutia's
orientation). Suppose (m, m′) is a matching minutia pair; the pore information related
to them can be represented by mp and m′p, respectively:

mp = (p1, ..., pi) = ((r1, θ1, φ1), ..., (ri, θi, φi))
m′p = (p′1, ..., p′j) = ((r′1, θ′1, φ′1), ..., (r′j, θ′j, φ′j))

where i, j are the number of pores in Cm and Cm′, respectively; rk, θk, φk denote
the radial distance, orientation difference, and radial angle of the kth pore with
respect to the central minutia m; r′k, θ′k and φ′k have the same relationship with m′.
A dynamic programming based approach is used to find the edit distance, dpores
between the ordered pore strings associated with two matching minutiae; the same
edit distance formulation is used as in the case of minutiae. The
edit distance gives a measure of similarity and is tolerant to spurious and missing
pores.
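The radial ordering of pores around a matched minutia can be sketched as follows (the (x, y, θ) layouts for minutiae and pores are assumptions of the sketch; the edit distance itself is the one defined for minutiae):

```python
import math

def order_pores(minutia, pores, radius):
    """Order the pores inside the circular region C_m around a matched
    minutia: by radial distance first, then by radial angle, producing
    the (r, theta, phi) string fed to the edit-distance matcher.
    The minutia is (x, y, theta); pores are (x, y, theta) as extracted."""
    mx, my, mth = minutia
    out = []
    for (px, py, pth) in pores:
        r = math.hypot(px - mx, py - my)
        if r > radius:
            continue                       # outside the circular region C_m
        phi = (math.atan2(py - my, px - mx) - mth) % (2 * math.pi)  # radial angle
        theta = (pth - mth) % (2 * math.pi)                         # orientation diff
        out.append((r, theta, phi))
    out.sort(key=lambda p: (p[0], p[2]))   # radial distance, then radial angle
    return out
```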
3.3.2.2 Ridge Contour Matching
Ridge contours are observed to be more reliable than pores at low resolution (500
dpi). Figure (3.6) shows two impressions of a fingerprint at 500 ppi with prominent
ridge structure; the pores, however, are not found to be consistent. We concentrate
on localized matching for tolerating non-linear distortion; thus ridge contours are
matched in a w × w window around the matched minutiae pairs. Non-linear distortion
is a major concern when matching smaller structures, such as points along ridges
(ridge contours). Localized matching can only tolerate non-linear distortion to an
extent; to deal with the adverse effects of distortion (caused by skin plasticity) more
effort is needed. In fact, the matching technique should be robust to non-linear distortion.
Figure 3.6: Two impressions of the same fingerprint at 500 ppi.
A novel approach for ridge contour matching is proposed which uses Zernike moments
as shape features and Local Binary Patterns (LBP) as texture features. The
proposed approach combines the Zernike moments and LBP into a single set of
texture-shape features.
Shape Feature Extraction using Zernike Moments
Zernike moments constitute a powerful shape descriptor in terms of robustness and
description capability and have been employed in a wide range of applications such as
character recognition, object recognition, palm-print recognition, iris recognition and
face recognition [71, 8, 48, 17]. This shape descriptor has proved its superiority over
other moment functions [12, 43] with regard to description capability and robustness
to noise or deformations. Zernike moments are based on a set of complex polynomials
(Zernike polynomials) that form a complete orthogonal set over the interior of the
unit circle [72]. The Zernike function of order p and repetition q is defined in the
polar coordinate system (r, θ) as
Vpq(r, θ) = Rpq(r) · e^(iqθ) (3.5)

where i = √−1 and Rpq is the orthogonal real-valued radial polynomial defined as

Rpq(r) = Σ_{n=0}^{(p−|q|)/2} [ (−1)^n (p − n)! / ( n! ((p+|q|)/2 − n)! ((p−|q|)/2 − n)! ) ] r^(p−2n) (3.6)

where p − |q| is even and |q| ≤ p.
Zernike moments of an image are the projections of the image function onto these
orthogonal basis functions. The Zernike moment of order p and repetition q for an
image intensity function f(r, θ) over the polar coordinate space is:

Zpq = ((p + 1)/π) ∫₀^(2π) ∫₀^1 V*pq(r, θ) f(r, θ) r dr dθ (3.7)

where V*pq denotes the complex conjugate of Vpq. If N is the number of pixels along
each axis of the image, then Equation (3.7) can be written in the discrete form as

Zpq = ((p + 1)/(π(N − 1)²)) Σ_{x=1}^{N} Σ_{y=1}^{N} V*pq(r, θ) f(x, y) (3.8)
where r = √(x² + y²)/N and θ = tan⁻¹(y/x). The polar form of Zernike moments
suggests a square-to-circular image transformation [49], so that the Zernike polyno-
mials need to be computed only once for all pixels mapped to the same circle. This
transformation from square to circular region is shown in Figure 3.7. If the image
coordinate system (x,y) is defined with the origin at the center of the square pixel
grid, then the pixel coordinates of the transformed circular image can be represented
by γ, ξ where γ denotes the radius of the circle and ξ the position index of the pixel
on the circle. The normalized polar coordinates r, θ of the pixel (γ, ξ) are given by
r = 2γ/N, θ = πξ/(4γ) (3.9)
Figure 3.7: Schematic of square to circular transformation
From Equations 3.7 and 3.5, the Zernike moments of a pattern rotated by an angle a
around its center of mass are given in polar coordinates as

Z^a_pq = Zpq · e^(iqa) (3.10)
From Equation 3.10 we have |Z^a_pq| = |Zpq · e^(iqa)| = |Zpq|; thus the magnitudes
of Zernike moments are invariant to rotation. Due to the property of orthogonality,
the contribution of each moment is unique and independent. The Zernike moments
are calculated in a w × w window around the matched minutia pairs. The Zernike
feature vector with moment order n and repetition m is given by:

fZernike = [|ZM00|, |ZM01|, ..., |ZM(n−1)(m−1)|] (3.11)

where ZMmn denotes the Zernike moment of order n and repetition m.
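A numerical sketch of Equations (3.6), (3.8) and the rotation-invariance property follows. The simple centre-normalised mapping of the pixel grid onto the unit disc is an assumption made here for brevity; the thesis uses the square-to-circular transformation of [49]:

```python
import math
import numpy as np

def radial_poly(p, q, r):
    """R_pq(r) of Equation (3.6); requires p - |q| even and |q| <= p."""
    q = abs(q)
    out = np.zeros_like(r)
    for n in range((p - q) // 2 + 1):
        c = ((-1) ** n * math.factorial(p - n) /
             (math.factorial(n)
              * math.factorial((p + q) // 2 - n)
              * math.factorial((p - q) // 2 - n)))
        out = out + c * r ** (p - 2 * n)
    return out

def zernike_magnitudes(window, max_order):
    """|Z_pq| for all valid (p, q >= 0) up to max_order over an N x N window,
    following the discrete form of Equation (3.8).  Pixels are mapped onto the
    unit disc by plain centre normalisation (an assumption; the thesis uses
    the square-to-circular transformation of [49])."""
    N = window.shape[0]
    ys, xs = np.mgrid[0:N, 0:N].astype(float)
    x = (2 * xs - (N - 1)) / (N - 1)       # x, y in [-1, 1]
    y = (2 * ys - (N - 1)) / (N - 1)
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    inside = r <= 1.0                      # Zernike basis lives on the unit disc
    feats = []
    for p in range(max_order + 1):
        for q in range(p + 1):
            if (p - q) % 2:                # repetition must have p - |q| even
                continue
            V = radial_poly(p, q, r) * np.exp(1j * q * theta)
            Z = (p + 1) / (np.pi * (N - 1) ** 2) * np.sum(
                np.conj(V)[inside] * window[inside])
            feats.append(abs(Z))
    return np.array(feats)
```

Rotating the window only multiplies each Zpq by a phase factor e^(iqa), so the magnitude vector returned here is unchanged, which is exactly the invariance exploited by fZernike.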
Texture Feature Extraction using Local Binary Patterns
Local Binary Patterns (LBP) [55] is a very simple yet efficient multi-resolution
approach to gray-scale and rotation invariant texture description and has been
extensively used as a state-of-the-art feature extractor in face recognition [69]. The
basic LBP operator, introduced by Ojala et al. [55], labels the pixels of an image by
thresholding the n × n (n = 3) neighborhood of each pixel against the center value and
interpreting the result as a binary number. The histogram of the labels can then be
used as a texture descriptor. The basic LBP operator is illustrated in Figure 3.8.
Rotation invariance can be achieved by selecting the minimum possible value among
all circular rotations of the binary pattern as the label.
Figure 3.8: The basic LBP operator
Given an image f(x, y), a histogram of the labeled image fl(x, y) can be defined as

Hi = Σ_{x,y} I{fl(x, y) = i}, i = 0, ..., n − 1 (3.12)

in which n is the number of different labels produced by the LBP operator and

I{A} = 1 if A is true, 0 if A is false. (3.13)
This histogram contains information about the distribution of local micro-patterns,
such as edges, spots and flat areas, over the whole image. The rotation invariant
LBP histogram, fLBP, is calculated from a w × w square local region around each
minutia of a matched pair.
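The operator described above can be sketched as follows; the neighbour ordering and the histogram normalisation are implementation choices of this sketch, not prescribed by the text:

```python
import numpy as np

def lbp_label(patch):
    """Rotation-invariant LBP label of a 3x3 patch: threshold the 8 neighbours
    against the centre, read the bits circularly, and take the minimum value
    over all 8 circular rotations of the bit pattern."""
    c = patch[1, 1]
    # neighbours in circular (clockwise) order starting at the top-left
    nb = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if v >= c else 0 for v in nb]
    as_int = lambda b: sum(bit << k for k, bit in enumerate(b))
    return min(as_int(bits[k:] + bits[:k]) for k in range(8))

def lbp_histogram(img):
    """Normalised histogram of rotation-invariant LBP labels over a grayscale
    window (the texture descriptor of Equation (3.12))."""
    h = np.zeros(256)
    for yy in range(1, img.shape[0] - 1):
        for xx in range(1, img.shape[1] - 1):
            h[lbp_label(img[yy - 1:yy + 2, xx - 1:xx + 2])] += 1
    total = h.sum()
    return h / total if total else h
```

Because a 90° rotation of a patch only circularly shifts the neighbour ring, the min-over-rotations label, and hence the histogram, is unchanged by such rotations of the window.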
Fusion of LBP-Zernike features
The LBP-Zernike feature vector is obtained by combining the LBP and Zernike
feature vectors as follows:

• Normalize the LBP feature and the Zernike feature respectively by z-score
normalization.

• The LBP-Zernike feature is then given by the concatenation:

fLBP−Zernike = fLBP ∪ fZernike (3.14)
Finally, the similarity between two minutia points m, m′ is measured as the Euclidean
distance (dridges) between their respective fLBP−Zernike features.
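The fusion and distance computation amount to a few lines; the zero-variance guard in the normalisation is an implementation detail added here:

```python
import numpy as np

def zscore(v):
    """z-score normalisation; a zero-variance vector is only mean-centred."""
    v = np.asarray(v, dtype=float)
    s = v.std()
    return (v - v.mean()) / s if s > 0 else v - v.mean()

def d_ridges(f_lbp, f_zm, g_lbp, g_zm):
    """d_ridges: z-score normalise the LBP and Zernike vectors, concatenate
    them (Equation 3.14), and take the Euclidean distance."""
    f = np.concatenate([zscore(f_lbp), zscore(f_zm)])
    g = np.concatenate([zscore(g_lbp), zscore(g_zm)])
    return float(np.linalg.norm(f - g))
```

The z-score step keeps the two heterogeneous feature sets on a comparable scale before the distance is taken, so neither descriptor dominates.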
3.3.2.3 Fusion of Level 2 and Level 3 features
The distance measures obtained by matching pores and ridges in a local neighborhood
around matched minutia pairs are used to re-verify the minutia correspondences.
Two heuristically determined thresholds τpores and τridges are used for deciding
whether the pores and ridge contours, respectively, from two localized regions agree
or not. Thus, a minutia pair m, m′ matched at Level 2 is considered matched at
Level 3 if either of the following two conditions holds:

dridges < τridges
dpores < τpores
Thus at Level 3 the minutia correspondences are recomputed, and the number of
matched minutiae N3 at Level 3 is then used to calculate the Level 3 match score

S3 = 2 · N3 / (NQ + NT) (3.15)

where NQ and NT are the number of minutia points in Q and T, respectively. If
S3 > τ3 (τ3 is a heuristically determined threshold) then the fingerprints are
considered to be matched; otherwise they are declared non-matched.
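The Level 3 re-verification and match score can be sketched as below; representing each Level 2 pair by its local (d_pores, d_ridges) distances is an illustrative assumption of this sketch:

```python
def level3_match_score(pairs, n_q, n_t, tau_pores, tau_ridges):
    """Recompute the minutia correspondences at Level 3 and return S3
    (Equation 3.15).  `pairs` lists, for every minutia pair matched at
    Level 2, its local (d_pores, d_ridges) distances; a pair survives
    Level 3 verification if either distance is below its threshold."""
    n3 = sum(1 for d_pores, d_ridges in pairs
             if d_pores < tau_pores or d_ridges < tau_ridges)
    return 2.0 * n3 / (n_q + n_t)
```

A fingerprint pair is then declared a match when the returned S3 exceeds the heuristically chosen τ3.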
Chapter 4
EXPERIMENTAL RESULTS
This chapter describes the various experiments conducted and discusses the obtained
results. The proposed approach has been evaluated on three different databases.

4.1 Database

The evaluation and testing of the proposed approach have been done on three diverse
fingerprint databases: the Neurotechnology Database [3], the FVC2004 [5] DB3 Database
and the FVC2006 [1] DB2 Database. They are explained in detail below.
• Neurotechnology Database: The database consists of 51 different fingers with
8 impressions per finger, resulting in 408 images. The fingerprint samples were
scanned with an optical scanner (Cross Match Verifier 300) at 500 ppi. Sample
images from this database are shown in Figure 4.1.
Figure 4.1: Sample images from Neurotechnology Database

• FVC 2004 DB3 Database: The database consists of 100 different fingers with
8 impressions per finger, resulting in 800 images. The fingerprints were scanned
with a thermal sweeping sensor (FingerChip FCD4B14CB by Atmel) at 512 ppi.
The FVC 2004 databases are known to be difficult because of the perturbations
deliberately introduced during database collection [5]. Sample images from this
database are shown in Figure 4.2.
Figure 4.2: Sample images from FVC 2004, DB3 Database
• FVC 2006 DB2 Database: The database is 140 fingers wide and 12 samples per
finger in depth (1680 fingerprints in total). The fingerprints were scanned with an
optical sensor at 569 ppi. A heterogeneous population which includes manual
workers and elderly people was used to create the database [1]. The volunteers
were simply asked to place their fingers naturally on the acquisition device, and
no constraints were enforced to guarantee a minimum quality in the acquired
images. Figure 4.3 shows sample images from this database.
Figure 4.3: Sample images from FVC 2006, DB2 Database
4.2 Performance Evaluation
• Each sample in a database is matched against the remaining samples of the same
finger to compute the False Rejection Rate (FRR). The FRR is the fraction of
genuine fingerprints which are rejected and is calculated as follows:

FRR = (Number of genuine fingerprints rejected) / (Total number of genuine tests) (4.1)

If fingerprint g is matched against fingerprint h, the symmetric match (i.e., h
against g) is not executed, to avoid correlation in the scores. The total numbers
of genuine tests are 1428, 2800 and 9240 for the Neurotechnology database, FVC
2004 database and FVC 2006 database, respectively.
• For all databases, the first sample of each finger is matched against the first
sample of the remaining fingers in the same database to compute the False
Acceptance Rate (FAR). The FAR is the fraction of impostor fingerprints which
are accepted and is calculated as follows:

FAR = (Number of impostor fingerprints accepted) / (Total number of impostor tests) (4.2)

If the matching of g against h is performed, the symmetric one (i.e., h against g)
is not executed, to avoid correlation. The total numbers of impostor tests are
1275, 4950 and 9730 for the Neurotechnology database, FVC 2004 database
and FVC 2006 database, respectively.
• For each algorithm and for each database, the EER (equal error rate) and the
ROC(t) curve are used as performance indicators, where t denotes the acceptance
threshold. The ROC(t) curve is a plot of FRR against FAR for different values of
the threshold t. Given two algorithms and their ROC curves, the better algorithm
is the one whose ROC curve lies below the other's over a suitably large range of
threshold values t. The EER is the rate at which the FAR is equal to the FRR;
the lower the EER, the better the system. For each algorithm the impostor and
genuine score distributions are also reported.
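The test counts above follow from simple combinatorics (one unordered pair per pair of impressions of the same finger, and one per pair of distinct fingers' first impressions), and the FAR/FRR/EER definitions can be sketched empirically. The threshold sweep below is a coarse approximation (no ROC interpolation), and the score convention (accept when score > t) is an assumption of the sketch:

```python
from math import comb

# Sanity check on the test counts quoted above.
assert 51 * comb(8, 2) == 1428 and comb(51, 2) == 1275      # Neurotechnology
assert 100 * comb(8, 2) == 2800 and comb(100, 2) == 4950    # FVC 2004 DB3
assert 140 * comb(12, 2) == 9240 and comb(140, 2) == 9730   # FVC 2006 DB2

def far_frr(genuine, impostor, t):
    """FAR and FRR at acceptance threshold t (Equations 4.1 and 4.2); a
    comparison is accepted when its similarity score exceeds t."""
    frr = sum(s <= t for s in genuine) / len(genuine)
    far = sum(s > t for s in impostor) / len(impostor)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Approximate empirical EER: sweep candidate thresholds and return the
    mean of FAR and FRR at the threshold where their gap is smallest."""
    candidates = sorted(set(genuine) | set(impostor))
    _, rate = min((abs(far - frr), (far + frr) / 2.0)
                  for t in candidates
                  for far, frr in [far_frr(genuine, impostor, t)])
    return rate
```

With well-separated genuine and impostor score distributions the sweep finds a threshold where both error rates vanish, which is exactly the EER = 0 ideal against which the reported EERs are measured.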
4.3 Experiment 1

The first stage of the hierarchical matcher is the minutiae matching stage, and the
performance of the next stage depends on it. In this experiment the performance
of the proposed minutia matcher is compared with a very popular minutiae
matching approach by Sharat et al. [15], who used a graph based matching
algorithm, Coupled BFS (CBFS), to match local minutia structures called K-plets.
Figures 4.4, 4.5 and 4.6 compare the performance of the proposed minutiae matcher
with the CBFS-Kplet based matcher on the three databases mentioned above. From
the graphs it is evident that the proposed minutiae matcher performs better than the
CBFS-Kplet based approach on all three databases. Table 4.1 shows the performance
gain, in terms of EER, of the proposed minutiae matcher over the CBFS-Kplet based
matcher.
Database          Proposed minutia    CBFS-Kplet based    Improvement
                  matcher (EER %)     matcher (EER %)     in EER (%)
Neurotechnology        1.76                3.78              53.44
FVC 2004, DB3         10.20               12.70              19.68
FVC 2006, DB2          2.44                3.80              35.79

Table 4.1: Equal Error Rate (EER) comparison between proposed minutia matcher and CBFS-Kplet based matcher.
[ROC plot: False Acceptance Rate (FAR) % vs. False Rejection Rate (FRR) %; CBFS-Kplet EER = 3.78%, proposed minutia matcher EER = 1.76%]

Figure 4.4: ROC curves for proposed minutia matcher and CBFS-Kplet based matcher, on Neurotechnology Database
4.4 Experiment 2

This experiment compares the capacity of LBP and Zernike moments (ZM) to
retrieve information from ridge contours. Moment order (n) and repetition (m)
are significant parameters when ZM are used as features. With a high order, ZM
can carry finer details of an image but become more susceptible to noise. After
balancing computational complexity against retrieval performance, moments of
order n = 4, 8, 12, 16 are considered in our experiment.

The ROC curves for the three databases are shown in Figures 4.7, 4.8 and 4.9. The
best performance achieved by using LBP, ZM and the combination of both LBP and
ZM for each database is shown in the graphs. From the graphs it is evident that there
is a performance gain (in terms of EER) when ridge information is used in matching.
[ROC plot: FAR % vs. FRR %; proposed minutia matcher EER = 10.20%, CBFS-Kplet EER = 12.70%]

Figure 4.5: ROC curves for proposed minutia matcher and CBFS-Kplet based matcher, on FVC 2004, DB3 Database
The LBP performs much better than both ZM and even the LBP-ZM combination.
Moreover, the computational cost of the LBP operator is minimal, while that of ZM
is very high and increases with the order of the moments. The timing analysis is
shown in Table 4.3.
[ROC plot: FAR % vs. FRR %; CBFS-Kplet EER = 3.80%, proposed minutia matcher EER = 2.44%]

Figure 4.6: ROC curves for proposed minutia matcher and CBFS-Kplet based matcher, on FVC 2006, DB2 Database
4.5 Experiment 3

This experiment evaluates the proposed hierarchical matching algorithm on the
above mentioned databases. Of the three databases, only the Neurotechnology
database was found to be suitable for extracting pores; in the other two (FVC 2004
DB3 and FVC 2006 DB2) only ridge contours have been used as Level 3 features.
The ROC curves for the three databases are shown in Figures 4.10, 4.11 and 4.12.
Each figure compares the ROC curve of the proposed minutia matcher with that of
the proposed hierarchical matcher. As expected, the hierarchical matcher performs
better than the minutia matcher on all three databases. The improvement in EER
is shown in Table 4.2.
[ROC plot: FAR % vs. FRR %; minutia matcher EER = 1.76%; hierarchical matcher (minutia + LBP) EER = 1.05%; hierarchical matcher (minutia + LBP + ZM(4)) EER = 1.54%; hierarchical matcher (minutia + ZM(4)) EER = 1.72%]

Figure 4.7: ROC curves comparing the performance of LBP and ZM (Neurotechnology Database)
Database          Proposed minutia    Proposed hierarchical matcher (EER %)
                  matcher (EER %)     minutia & ridges   minutia & pores   minutia, pores & ridges
Neurotechnology        1.76                1.05               3.77              1.05
FVC 2004, DB3         10.20                8.80                -                 -
FVC 2006, DB2          2.44                2.40                -                 -

Table 4.2: Equal Error Rate (EER) comparison between proposed minutia matcher and proposed hierarchical matcher.
[ROC plot: FAR % vs. FRR %; minutia matcher EER = 10.20%; hierarchical matcher (minutia + ridges(LBP)) EER = 8.80%; hierarchical matcher (minutia + ridges(ZM16)) EER = 9.60%; hierarchical matcher (minutia + ridges(LBP+ZM12)) EER = 9.61%]

Figure 4.8: ROC curves comparing the performance of LBP and ZM (FVC 2004, DB3 Database)
[ROC plot, log-log scale: FAR % vs. FRR %; minutia matcher EER = 2.44%; hierarchical matcher (minutia + ridges(LBP)) EER = 2.40%; hierarchical matcher (minutia + ridges(ZM16)) EER = 2.45%; hierarchical matcher (minutia + ridges(LBP + ZM16)) EER = 2.44%]

Figure 4.9: ROC curves comparing the performance of LBP and ZM (FVC 2006, DB2 Database)
[ROC plot: FAR % vs. FRR %; minutia matcher EER = 1.76%; hierarchical matcher (minutia + ridges) EER = 1.05%; hierarchical matcher (minutia + pores) EER = 3.77%; hierarchical matcher (minutia + pores + ridges) EER = 1.05%]

Figure 4.10: ROC curves for the minutia matcher and hierarchical matcher on Neurotechnology Database
[ROC plot: FAR % vs. FRR %; minutia matcher EER = 10.2%, hierarchical matcher EER = 8.80%]

Figure 4.11: ROC curves for the minutia matcher and hierarchical matcher on FVC 2004, DB3 Database
[ROC plot, log-log scale: FAR % vs. FRR %; minutia matcher EER = 2.44%, hierarchical matcher EER = 2.40%]

Figure 4.12: ROC curves for the minutia matcher and hierarchical matcher on FVC 2006, DB2 Database
The hierarchical matcher results in a performance gain (in terms of EER) of
∼40%, ∼14% and ∼2% for the Neurotechnology, FVC 2004 DB3 and FVC 2006
DB2 databases, respectively. From Figure 4.10 it is observed that the contribution
of pores to improving (lowering) the EER is insignificant. The hierarchical matcher
performs better when ridges are used along with minutiae than when pores are used
along with minutiae. In fact, the hierarchical matcher performs with comparable
accuracy when only ridges are considered at Level 3 as when both ridges and pores
are considered.

The lowering of the EER can be attributed to the fact that both FAR and FRR are
lowered when a new threshold (τ3) is chosen at Level 3. This can be observed from the
distributions of the number of matched minutiae for the genuine and impostor cases.
Figures 4.13, 4.14 and 4.15 compare the number of minutiae matched by the minutia
matcher and the hierarchical matcher, for both impostor and genuine cases, on the
Neurotechnology, FVC 2004 DB3 and FVC 2006 DB2 databases, respectively. From
all the graphs it is evident that matching at Level 3 causes the number of matched
minutia pairs to decrease significantly for impostor cases compared to genuine cases.
4.6 Timing Analysis

The above experiments show that the use of Level 3 features provides additional
information which can be used to lower the error rates. But using additional features
also requires additional time, and if the time required is very large then the approach
might not be suitable for automatic matching. In this experiment we evaluate the
time required for matching Level 3 features. Table 4.3 compares the matching times
for the Level 3 features. The time required is least when LBP is used
[Histogram: frequency % vs. number of matching minutiae, for genuine and impostor cases, under the minutiae matcher and the hierarchical matcher]

Figure 4.13: Distribution of number of matched minutiae, for Genuine and Impostor cases, on Neurotechnology Database
[Histogram: frequency % vs. number of matching minutiae, for genuine and impostor cases, under the minutiae matcher and the hierarchical matcher]

Figure 4.14: Distribution of number of matched minutiae, for Genuine and Impostor cases, on FVC 2004, DB3 Database
[Histogram: frequency % vs. number of matching minutiae, for genuine and impostor cases, under the minutiae matcher and the hierarchical matcher]

Figure 4.15: Distribution of number of matched minutiae, for Genuine and Impostor cases, on FVC 2006, DB2 Database
for ridge matching; the LBP operator is well known as a fast, robust and efficient
feature extraction technique. Also, the time required for matching ridges increases
as the order of the Zernike moments increases. From this experiment we can conclude
that LBP and low-order Zernike moments are suitable for matching as far as match
time is concerned.
In [29] Jain et al. proposed a hierarchical matching system which uses an iterative
closest point (ICP) algorithm [13] for matching Level 3 features. The iterative
nature of ICP makes this technique unsuitable for automatic matching. Jain et al.
reported an average time of ∼45 seconds for matching Level 3 features, on a system
configuration comparable to the one used in our experiments. A direct comparison
between the match time of the proposed approach and that of Jain et al. cannot be
made, since the databases used are different; however, it gives an indication of the
average match time. The system configuration on which our experiments were run
is a 1.6 GHz Pentium 4 CPU with 1 GB of RAM.
Table 4.3: Comparison of matching time for Level 3 features.

Matching Technique           Average time for matching Level 3 features (seconds)
Proposed Approach (ZM4)       1.28
Proposed Approach (ZM8)       4.73
Proposed Approach (ZM12)     10.47
Proposed Approach (ZM16)     19.01
Proposed Approach (LBP)       1.21
Proposed Approach (pores)     0.10
Chapter 5
CONCLUSION & FUTURE WORK
5.1 Conclusion

A new hierarchical fingerprint matcher which utilizes Level 3 features (pores & ridges)
in conjunction with Level 2 features (minutiae) has been proposed in this thesis.
The novelty lies in the matching techniques used for the Level 2 features (minutiae)
and the Level 3 features (pores and ridge contours). We have concentrated on 500
ppi fingerprints, as the Federal Bureau of Investigation's (FBI) standard of fingerprint
resolution for Automated Fingerprint Identification Systems (AFIS) is 500 ppi. Our
approach was tested on three diverse fingerprint databases, and based on our
observations and the obtained results the following conclusions can be made:
• The use of Level 3 features (pores and ridge contours) provides complementary
information which can be used along with Level 2 features (minutiae) to lower
the error rates, namely FAR and FRR.
• At 500 ppi, the ridge contours were observed to be more reliable and prominent
features than the pores; the results obtained from our experiments also justify
this.
• Local Binary Patterns are an effective means of representing the shape and
texture information of ridge contours, even in poor quality images.
• The proposed hierarchical matcher has a matching time suitable for automated
fingerprint verification systems.
• Finally, we have tried to overcome the real challenges in fingerprint matching,
namely non-linear distortion, small overlap between query and template images,
error and noise introduced by feature extraction algorithms, and errors introduced
by registration and unfavorable skin conditions. Localized matching was used for
all feature types in order to minimize the effects of distortion. Also, rotationally
invariant structures (pores and minutiae) and features (ridge contours) are used,
so no alignment (registration) is required at any stage. The use of Level 3 features
is beneficial in deciding match/non-match, with increased accuracy, for fingerprints
with small overlap. The pores and minutiae are matched using an elastic string
matching algorithm which is capable of overcoming the errors introduced by
feature extraction algorithms. The experiments were conducted on standard
public databases: FVC 2004 DB3 (containing manually perturbed fingerprints),
FVC 2006 DB2 (containing fingerprints from a heterogeneous population which
included manual workers and elderly people) and the Neurotechnology Database
(a relatively good quality database). These three databases capture most of the
problems we have mentioned, and the fact that our proposed minutia matcher
performs better than a popular CBFS-Kplet [15] based minutia matcher shows
that our approach is capable of overcoming the mentioned problems to a large
extent.
5.2 Future Work

The proposed hierarchical matcher does not require registration or alignment of
fingerprints at any stage. This makes it suitable for fingerprint fragment (partial
fingerprint) comparison, as the major difficulty in partial fingerprint comparison is
aligning the partial fragment with a template fingerprint. Moreover, a partial
fingerprint carries insufficient information if only minutiae features are considered;
Level 3 features (especially ridge contour information) can supply the additional
information needed.
Bibliography
[1] Fourth International Fingerprint Verification Competition. http://bias.csr.unibo.it/fvc2006.

[2] The History of Fingerprints. http://www.onin.com/fp/fphistory.html.

[3] Neurotechnology Sample Database. http://www.neurotechnology.com/download.html.

[4] Ridges and Furrows - history and science of fingerprint identification, technology and legal issues. http://ridgesandfurrows.homestead.com/fingerprint.html.

[5] FVC2004 - Third International Fingerprint Verification Competition. http://bias.csr.unibo.it/fvc2004, 2004.

[6] CDEFFS: The ANSI/NIST Committee to Define an Extended Fingerprint Feature Set. http://fingerprint.nist.gov/standard/cdeffs/index.html, October 2006.

[7] The Thin Blue Line. http://www.policensw.com/info/fingerprints/finger06.html, October 2006.
[8] E. M. Arvacheh and H. R. Tizhoosh. Pattern Analysis using Zernike Moments. In Instrumentation and Measurement Technology Conference, pages 1574–1578, May 2005.

[9] D.R. Ashbaugh. Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. CRC Press, 1999.

[10] A.M. Bazen and S.H. Gerez. Segmentation of fingerprint images. In ProRISC 2001 Workshop on Circuits, Systems and Signal Processing, pages 276–280. Citeseer, 2001.

[11] A.M. Bazen, G.T.B. Verwaaijen, S.H. Gerez, L.P.J. Veelenturf, and B.J. van der Zwaag. A Correlation-Based Fingerprint Verification System. In Proc. of 11th Annual Workshop on Circuits Systems and Signal Processing (ProRISC), pages 205–213, 2000.

[12] S. O. Belkasim, M. Shridhar, and M. Ahmadi. Pattern recognition with moment invariants: a comparative study and new results. Pattern Recogn., 24(12):1117–1138, 1991.

[13] Paul J. Besl and Neil D. McKay. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell., 14(2):239–256, 1992.

[14] Ruud Bolle, Jonathan Connell, Sharanthchandra Pankanti, Nalini Ratha, and Andrew Senior. Guide to Biometrics. Springer-Verlag, 2003.

[15] Sharat Chikkerur, Alexander N. Cartwright, and Venu Govindaraju. K-plet and Coupled BFS: A Graph Based Fingerprint Representation and Matching Algorithm. In ICB, volume 3832 of Lecture Notes in Computer Science, pages 309–315. Springer, 2006.

[16] Sharat Chikkerur, Alexander N. Cartwright, and Venu Govindaraju. Fingerprint enhancement using STFT analysis. Pattern Recogn., 40(1):198–211, 2007.

[17] Anant Choksuriwong, Helene Laurent, Christophe Rosenberger, and Choubeila Maaoui. Object Recognition Using Local Characterisation and Zernike Moments. In Advanced Concepts for Intelligent Vision Systems, pages 108–115, 2005.

[18] Louis Coetzee and Elizabeth C. Botha. Fingerprint recognition in low quality images. Pattern Recognition, 26:1441–1460, 1993.

[19] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press and McGraw-Hill, 2001.

[20] S.C. Dass and A.K. Jain. Fingerprint-based recognition. Technometrics, 49(3):262–276, 2007.

[21] H. Faulds. On the skin-furrows of the hand. Nature, 22:605, 1880.

[22] Francis Galton. Fingerprints. Macmillan, London, 1892.

[23] International Biometric Group. Analysis of Level 3 Features at High Resolutions. http://level3tk.sourceforge.net/, 2008.
[24] L. Hong, Y. Wan, and A. Jain. Fingerprint image enhancement: algorithm and performance evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):777–789, 1998.

[25] Lin Hong, Yifei Wan, and Anil Jain. Fingerprint Image Enhancement: Algorithm and Performance Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):777–789, 1998.

[26] Nalini K. Ratha, Vinayaka D. Pandit, Ruud M. Bolle, and Vaibhav Vaish. Robust Fingerprint Authentication Using Local Structural Similarity. In Workshop on Applications of Computer Vision, pages 29–34, 2000.

[27] A. Jain and S. Pankanti. Fingerprint classification and matching. Michigan State University, 1999.

[28] A. K. Jain, A. Ross, and S. Prabhakar. Fingerprint Matching Using Minutiae and Texture Features. In Proc. Int. Conf. on Image Processing, pages 282–285, 2001.

[29] A.K. Jain, Y. Chen, and M. Demirkus. Pores and Ridges: High-Resolution Fingerprint Matching Using Level 3 Features. PAMI, 29(1):15–27, January 2007.

[30] A.K. Jain, L. Hong, S. Pankanti, and R. Bolle. An identity-authentication system using fingerprints. Proceedings of the IEEE, 85(9):1365–1388, 1997.

[31] A.K. Jain, S. Prabhakar, Lin Hong, and S. Pankanti. FingerCode: A filterbank for fingerprint representation and matching. In Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages –193, 1999.

[32] A.K. Jain, A. Ross, and S. Prabhakar. An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):4–20, 2004.

[33] Anil Jain, Yi Chen, and Meltem Demirkus. Pores and Ridges: Fingerprint Matching Using Level 3 Features. In ICPR '06: Proceedings of the 18th International Conference on Pattern Recognition, pages 477–480, Washington, DC, USA, 2006. IEEE Computer Society.

[34] Anil Jain, Lin Hong, and Ruud Bolle. On-Line Fingerprint Verification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):302–314, 1997.

[35] Anil K. Jain and David Maltoni. Handbook of Fingerprint Recognition. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2003.

[36] Tsai-Yang Jea. Minutiae-based partial fingerprint recognition. PhD thesis, Buffalo, NY, USA, 2005. Adviser: Venugopal Govindaraju.
[37] Xudong Jiang and Wei-Yun Yau. Fingerprint minutiae matching based on the local and global structures. In Proceedings of 15th International Conference on Pattern Recognition, volume 2, pages 1038–1041, 2000.

[38] K. Karu and A.K. Jain. Fingerprint Classification. Pattern Recognition, 29(3):389–404, 1996.

[39] M. Kawagoe and A. Tojo. Fingerprint pattern classification. Pattern Recognition, 17(3):295–303, 1984.

[40] K. Kryszczuk, A. Drygajlo, and P. Morier. Extraction of Level 2 and Level 3 Features for Fragmentary Fingerprints. In Proc. Second COST Action 275 Workshop, pages 83–88, 2004.

[41] K. Kryszczuk, P. Morier, and A. Drygajlo. Study of the Distinctiveness of Level 2 and Level 3 Features in Fragmentary Fingerprint Comparison. In Proc. of Biometric Authentication Workshop, pages 124–133, May 2004.

[42] Miroslav Králík and Ladislav Nejman. Fingerprints on artifacts and historical items: examples and documents. Journal of Ancient Fingerprints, August 2007.

[43] Simon X. Liao and Miroslaw Pawlak. On Image Analysis by Moments. IEEE Trans. Pattern Anal. Mach. Intell., 18(3):254–266, 1996.

[44] E. Locard. Les Pores et L'Identification Des Criminels. Biologica: Revue Scientifique de Medicine, 2:357–365, 1912.

[45] D. Maio and D. Maltoni. Direct gray-scale minutiae detection in fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):27–40, 1997.

[46] D. Maio, D. Maltoni, R. Cappelli, J.L. Wayman, and A.K. Jain. FVC2000: Fingerprint Verification Competition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3):402–412, 2002.

[47] B.M. Mehtre and B. Chatterjee. Segmentation of fingerprint images: a composite method. Pattern Recognition, 22(4):381–385, 1989.

[48] S. Mohammed, L. Sylvie, V. Vincent, and B. El-Mouldi. Face Detection by neural network trained with Zernike moments. In International Conference on Signal Processing, Robotics and Automation, pages 36–41, 2007.

[49] R. Mukundan and K. R. Ramakrishnan. Zernike Moments. In Moment Functions in Image Analysis: Theory and Applications, pages 57–69.
[50] N. J. Naccache and R. Shinghal. An investigation into the skeletonization approach of Hilditch. Pattern Recognition, 17(3):279–284, 1984.
[51] K. Nandakumar and A. K. Jain. Local correlation-based fingerprint matching. In Indian Conference on Computer Vision, Graphics and Image Processing, pages 503–508, 2004.
[52] Loris Nanni and Alessandra Lumini. Local binary patterns for a hybrid fingerprint matcher. Pattern Recognition, 41(11):3461–3466, 2008.
[53] S. B. Needleman and C. D. Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48(3):443–453, March 1970.
[54] H.v.d Nieuwendijk. Fingerprints. http://www.xs4all.nl/~dacty/minu.htm,October 2006.
[55] Timo Ojala, Matti Pietikäinen, and Topi Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:971–987, 2002.
[56] Sharath Pankanti, Salil Prabhakar, and Anil K. Jain. On the Individuality of Fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(8):1010–1025, 2002.
[57] T. Pavlidis. Algorithms for Graphics and Image Processing. WH Freeman & Co., New York, NY, USA, 1983.
[58] S. Ranade and A. Rosenfeld. Point Pattern Matching by Relaxation. Pattern Recognition, 12(4):269–275, 1980.
[59] Nalini K. Ratha, Kalle Karu, Shaoyun Chen, and Anil K. Jain. A Real-Time Matching System for Large Fingerprint Databases. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8):799–813, 1996.
[60] N. K. Ratha, S. Chen, and A. K. Jain. Adaptive flow orientation-based feature extraction in fingerprint images. Pattern Recognition, 28(11):1657–1672, 1995.
[61] N. K. Ratha, K. Karu, S. Chen, and A. K. Jain. A real-time matching system for large fingerprint databases. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8):799–813, 1996.
[62] M. Ray, P. Meenen, and R. Adhami. A novel approach to fingerprint pore extraction. In Southeastern Symposium on System Theory, pages 282–286, 2005.
[63] A. R. Roddy and J. D. Stosz. Fingerprint Features – Statistical Analysis and System Performance Estimates. In Proceedings of the IEEE, volume 85, pages 1390–1421, 1997.
[64] A. Ross, A. K. Jain, and J. Reisman. A hybrid fingerprint matcher. In Proceedings of Int. Conf. on Pattern Recognition, volume 3, pages 795–798, 2002.
[65] G. S. Sodhi and Jasjeet Kaur. The forgotten Indian pioneers of fingerprint science. Current Science, 88(1), January 2005.
[66] G.C. Stockman, S. Kopstein, and S. Benett. Matching Images to Models forRegistration and Object Detection via Clustering.
[67] Jonathan D. Stosz and Lisa A. Alyea. Automated System for Fingerprint Authentication Using Pores and Ridge Structure. In Proc. of Automatic Systems for the Identification and Inspection of Humans, volume 2277, pages 210–223, 1994.
[68] Vivek A. Sujan and Michael P. Mulqueen. Fingerprint identification using space invariant transforms. Pattern Recognition Letters, 23(5):609–619, 2002.
[69] Timo Ahonen, Abdenour Hadid, and Matti Pietikäinen. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28:2037–2041, 2006.
[70] Mayank Vatsa, Richa Singh, Afzel Noore, and Max M. Houck. Quality-augmented fusion of level-2 and level-3 fingerprint information using DSm theory. International Journal of Approximate Reasoning, 50(1):51–61, 2009.
[71] P. Ying-Han, T. B. J. Andrew, N. C. L. David, and F. S. Hiew. Palmprint Verification with Moments. Journal of Computer Graphics, Visualization and Computer Vision (WSCG), 12(1-3):325–332, February 2003.
[72] F. Zernike. Beugungstheorie des Schneidenverfahrens und seiner verbesserten Form, der Phasenkontrastmethode. Physica, 1:689–704, 1934.