Arab J Sci Eng (2013) 38:849–857
DOI 10.1007/s13369-012-0524-7
RESEARCH ARTICLE - COMPUTER ENGINEERING AND COMPUTER SCIENCE
Feature Selection Based Verification/Identification System Using Fingerprints and Palm Print
Muhanad M. Jazzar · Ghulam Muhammad
Received: 19 February 2011 / Accepted: 17 April 2011 / Published online: 9 January 2013
© King Fahd University of Petroleum and Minerals 2012
Abstract In this paper, we propose a multimodal biometric system with a high security level based on finger and palm prints. Verification/identification is performed by fusing features from different finger and palm prints. The features are extracted using the Zernike moments (ZM) invariant algorithm, and Fisher's ratio (F-ratio) is used to select the optimum moments. The experimental results show that the ring fingerprint, palm print, middle fingerprint, and thumb fingerprint give better performance than the other fingerprints. Applying the F-ratio to these four prints correctly recognizes all 50 individuals who participated in the experiments.
Keywords Multimodal biometric system · Fingerprints · Palm print · Zernike moments · Fisher ratio
M. M. Jazzar
National Information Center, Ministry of Interior, Riyadh, Kingdom of Saudi Arabia

G. Muhammad (B) · M. M. Jazzar
Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, PO Box 51178, Riyadh, Saudi Arabia
e-mail: [email protected]
1 Introduction
Fingerprint is the oldest biometric technology, and it was mostly used in criminal identification [1]. Biometrics refers to using the biological or behavioral characteristics of humans for automatic authentication or recognition. Fingerprint recognition to resolve the identity of a person can be divided into two types of problems: verification and identification. Verification refers to a problem where a test subject's fingerprint is compared against that of a specific subject in the enrolled fingerprint database to verify his/her claimed identity. In identification, on the other hand, a test subject's fingerprint is compared against all the subjects in the fingerprint database to determine the identity of the test subject.
Biological biometrics is based on direct measurement of a part of the body. Some examples are fingerprint, palm print, hand geometry, facial features, iris, retina, and DNA. Fingerprints and palm prints are stable throughout one's lifetime, unique, reliable, and easy to use [1].
This paper proposes a verification and identification system based on five different fingerprints and the palm print. The system comprises several steps. First, the person's hand image is acquired through a scanner, and the image is preprocessed to convert it into binary. Then features are extracted using Zernike moments. Next, the features of the different finger and palm prints are fused in an optimal way. To select optimal features, the Fisher ratio (F-ratio), which has a strong discriminative capability, is used. Finally, a decision is made using the minimum Euclidean distance.
The rest of the paper is organized as follows. A review of related work is presented in Sect. 2. Section 3 describes the proposed system, reviews Zernike moments, and explains the F-ratio for choosing the optimal features. Implementation, experimental results, and analysis are reported in Sect. 4.
Finally, Sect. 5 draws some conclusions with possible future work.
2 Literature Review
Several approaches are used to extract and compare fingerprint patterns; all of them are based on two basic classes of algorithms: ridge spectral pattern-based algorithms and minutiae-based algorithms. In the ridge spectral pattern-based algorithms, the image is divided into small square areas. The ridge wavelength, direction, and phase displacement of each area are encoded and used as a basis template. This kind of algorithm aligns and overlays segments of fingerprint images to determine their similarity [1].
Most fingerprint identification systems are based on minutiae-based algorithms, because a fingerprint image produces between 15 and 70 minutiae, depending on the portion of the captured image. These algorithms plot the relative position and type of the points where ridge lines branch apart or terminate. Mehtre [2] uses primitive methods to extract the ridges and obtain a skeleton image (thinning). Then, minutiae are extracted by counting the connection numbers. This method is not efficient enough, because it is affected by noise and depends on the quality of the image. Maio and Maltoni [3] present a novel approach to extract minutiae directly from gray-level fingerprint images. Their algorithm follows a ridge until it terminates or intersects with another ridge in order to detect the minutiae. This approach does not use steps like thinning or ridge extraction, but it still has some errors, mainly because some termination minutiae are detected as bifurcation minutiae. In particular, if a termination minutia is very close to another ridge line, the algorithm may skip the termination and intersect the adjacent ridge line. Jiang et al. improve the algorithm presented in [2] by using a variable step size and directional filtering [4]. This approach is similar to that in [2] but differs in filtering the selected image pixels that need to be smoothed. This speeds up the tracing while maintaining the tracing precision.
Moment-based image invariants have the invariance property against affine transformations including rotation, scaling, and translation [5]. However, computing higher-order moment invariants is quite complex, and reconstructing the image from them is very difficult. To address these problems, Amayeh et al. apply Zernike moments, which allow the image to be recovered from moment invariants based on the theory of orthogonal polynomials [6]. They apply Zernike moments to recognize hand shapes rather than prints. Abdel Qader et al. [7] use Zernike moments to extract features from fingerprints. Their approach is successful for limited data obtained from the FVC2002 DB1 database.
Wu et al. [8] use valley features for palm print based verification. This approach uses the bothat operation to extract the valleys from a very low-resolution palm image in different directions to form the valley features. Bothat is an operation in which the input print (or image) is subtracted from its closing with a structuring element. This operation emphasizes dark lines on a light background and is therefore beneficial for enhancing prints. Their experiments obtain 98 % accuracy in palm print verification. Kong and Zhang [9] use a competitive coding scheme and angular matching for palm print recognition. They use 2D Gabor filters to extract orientation information from palm lines and store it in a feature vector called the Competitive Code. This approach can make over 9,000 comparisons in about one second with a high acceptance rate of 98.4 % and a low false acceptance rate of 3 × 10⁻⁶ %. Goh et al. introduce some improvements over the method in [9] in order to reduce its computational complexity [10]. They adopt the wavelet transform to decompose the palm print image into a lower resolution. They show that, when integrated with a 2D Gabor filter, the wavelet transform gives a verification rate of 96.70 % with an improved speed of less than one second.
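As an illustration of the bothat (bottom-hat) operation described above, the following sketch uses SciPy's grayscale morphology; this is not the implementation of [8], and the structuring-element size is an arbitrary choice:

```python
import numpy as np
from scipy.ndimage import grey_closing

def bothat(image, size=3):
    """Bottom-hat: morphological closing minus the input image.

    Dark valleys on a light background become bright peaks, which is
    what makes the operation useful for extracting valley features.
    """
    return grey_closing(image, size=size) - image

# A light row with one dark "valley" pixel: bothat isolates it.
row = np.array([[9, 9, 9, 2, 9, 9, 9]], dtype=float)
print(bothat(row, size=(1, 3)))  # only the valley position responds
```

The closing fills the dark valley with the surrounding light value, so the subtraction leaves a bright response exactly where the valley was.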
Ferrer et al. [11] fuse hand geometry, palm print, and finger print texture for a multimodal biometric identification system. They use a support vector machine (SVM) as the verifier, and 2D Gabor phase encoding based features for texture. Fusion at the feature, score, and decision levels showed improved performance in their experiments. Guo and Ma [12] fuse the above three modalities based on multimodal image quality estimation. They tested the system on a database of 100 persons, and the experimental results justified the fusion. Zhu and Zhang [13] present a multimodal biometric identification system based on the finger geometry, knuckle print, and palm print features of the human hand. Decision-level AND-rule fusion is adopted, which improves the combined scheme.
Though some works exist on multimodal fusion for biometric verification, their performance deteriorates if the capturing device is not of high resolution. Also, very little work can be found on selecting optimal features for such verification.
3 The Proposed Method
The proposed system relies on a low-resolution scanning device for fingerprint and palm print acquisition, and it selects optimal features to give the best performance. The proposed verification/identification system is illustrated in Fig. 1. The system is composed of six steps. The first step is the acquisition of hand images (fingerprints and palm print). The second step converts the color images to binary images. Zernike moments (ZM) invariants are applied to extract features from the binary prints in the third step. The fourth step fuses the features and applies the F-ratio to choose the optimal ones. In the fifth step, the Euclidean distance between the test image features
Fig. 1 Flow chart of the proposed system
and the stored features is used to calculate the similarity. The final step makes a decision, based on the similarity, whether a claimant should be accepted or rejected.
3.1 Acquisition
The study in this paper is carried out on a database of 50 people, of whom 32 are female and 18 are male. They are of different nationalities: Saudi, Egyptian, Indonesian, Nigerian, Indian, Sri Lankan, Bangladeshi, and Filipino. Their ages range between 16 and 75 years. The hand prints are captured using a 2,400 × 4,800 CanoScan LiDE 100 scanner with the plate covered with black paper on the finger and palm side in order to reduce the effect of the light and remove the scanner's radiation impact (see Fig. 2). Each person had to place his/her right hand making sure that it fitted within the black paper. Four fingerprints (index, middle, ring, and pinkie) and one palm print were scanned in this way. Thumbs were scanned separately in order to get a clear image, because thumb prints cannot be captured together with the other prints in the same acquisition on a simple scanner. Six impressions of the hand and thumb were taken for each person, yielding 300 hand images (6 × 50) in the database.
Fig. 2 Fingerprint and palm print acquisition using scanner
3.2 Segmentation
We crop a region of interest (ROI), which is a small region centered at each fingerprint and the palm print. The ROI is defined as a w × w square, where w is in pixels. The ROI
sizes for the middle, ring, and pinkie fingerprints are 145 × 145, for the index fingerprint 152 × 152, for the thumb fingerprint 199 × 199, and for the palm print 909 × 909.
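ROI cropping of this kind reduces to simple array slicing. The sketch below is illustrative only; the center coordinates are hypothetical, since the paper does not specify how the print centers are located:

```python
import numpy as np

def crop_roi(image, center, w):
    """Crop the w x w region of interest centered at (row, col)."""
    r, c = center
    half = w // 2
    return image[r - half:r - half + w, c - half:c - half + w]

# Hypothetical hand scan and print center.
hand = np.zeros((1000, 1000), dtype=np.uint8)
roi = crop_roi(hand, center=(500, 500), w=145)  # middle/ring/pinkie ROI size
print(roi.shape)  # (145, 145)
```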
3.3 Preprocessing
After acquiring each color fingerprint and palm print, we convert the images into binary. While doing the conversion, some enhancement is also performed as follows [14]:

1. identify ridge-like regions by using a threshold, and normalize the image so that the ridge regions have zero mean and unit standard deviation,
2. determine ridge orientations by calculating image gradients, using a Gaussian filter to estimate the local ridge orientation at each point,
3. determine ridge frequency values across the image by using a median filter,
4. apply filters to enhance the ridge pattern,
5. binarize the ridge pattern.

The last step produces binary images as in Fig. 3.
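A much-simplified sketch of steps 1 and 5 is shown below. Steps 2–4 (the orientation, frequency, and filtering stages of [14]) are omitted, and the global normalization and zero threshold are simplifying assumptions of ours:

```python
import numpy as np

def normalize(image):
    """Step 1 (simplified): zero mean, unit standard deviation."""
    img = image.astype(float)
    return (img - img.mean()) / (img.std() + 1e-12)

def binarize(image, threshold=0.0):
    """Step 5 (simplified): threshold the normalized print to 0/1."""
    return (normalize(image) > threshold).astype(np.uint8)

ridge = np.array([[0, 255], [255, 0]])
print(binarize(ridge))  # bright pixels -> 1, dark pixels -> 0
```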
3.4 Feature Extraction
Moments and moment-based image invariants have been used extensively in image recognition [15]. Hu introduced seven moment-based image invariants in [5] that have the invariance property against affine transformations including rotation, scaling, and translation. However, computing the higher orders of Hu's moment invariants is quite complex, and reconstructing the image from Hu's invariants is also very difficult. To solve these problems, Mukundan and Ramakrishnan [16] introduced the concept of Zernike moments to recover the image from moment invariants based on the theory of orthogonal polynomials. Zernike moments have the invariance property against image rotation, scaling, and translation, which is very desirable for robust finger/palm print matching.
Zernike moments are based on a set of complex polynomials {V_nm(x, y)} that form a complete orthogonal set over the unit circle [23,24]. The basis functions are given by:

V_{nm}(x, y) = V_{nm}(r, \theta) = R_{nm}(r)\, e^{jm\theta}    (1)

where n is a non-negative integer; m is an integer subject to the constraints that n − |m| is even and |m| ≤ n; r = \sqrt{x^2 + y^2} is the length of the vector from the origin to the pixel (x, y); θ = arctan(y/x) is the angle between the vector r and the x-axis in the counterclockwise direction; and R_{nm}(r) is the Zernike radial polynomial, defined as follows:
Fig. 3 Prints after binarization: a index, b middle, c ring, d pinkie, e thumb, and f palm
R_{nm}(r) = \sum_{k=0}^{(n-|m|)/2} \frac{(-1)^k (n-k)!}{k!\,\left[\frac{n+|m|}{2}-k\right]!\,\left[\frac{n-|m|}{2}-k\right]!}\, r^{n-2k}    (2)

R_{nm}(r) = \sum_{k=0}^{(n-|m|)/2} \beta_{nmk}\, r^{n-2k}    (3)
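Eq. (2) translates directly into code; the following is a minimal sketch of the radial polynomial, with function and variable names of our own choosing:

```python
from math import factorial

def zernike_radial(n, m, r):
    """Radial polynomial R_nm(r) of Eq. (2); requires n - |m| even, |m| <= n."""
    m = abs(m)
    assert m <= n and (n - m) % 2 == 0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * r ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

print(zernike_radial(2, 0, 1.0))  # R_20(r) = 2r^2 - 1, so R_20(1) = 1.0
```

On the rim of the unit disk, R_nm(1) = 1 for every valid (n, m), which gives a quick sanity check.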
The two-dimensional Zernike moment of order n with repetition m for a function f(x, y) is defined as:

Z_{nm} = \frac{n+1}{\pi} \iint_{x^2+y^2 \le 1} f(x, y)\, V^{*}_{nm}(x, y)\, dx\, dy    (4)

The basis functions satisfy the orthogonality condition

\iint_{x^2+y^2 \le 1} V^{*}_{nm}(x, y)\, V_{pq}(x, y)\, dx\, dy = \frac{\pi}{n+1}\, \delta_{np}\, \delta_{mq}    (5)

where

\delta_{ab} = \begin{cases} 1 & \text{if } a = b \\ 0 & \text{otherwise} \end{cases}    (6)
To compute the Zernike moments of a digital image, the integrals are replaced with summations [15]:

M_{nm} = \frac{n+1}{\pi} \sum\sum_{x^2+y^2 \le 1} f(x, y)\, V^{*}_{nm}(x, y)    (7)

where V^{*}_{nm}(x, y) = V_{n,-m}(x, y) is the complex conjugate of V_{nm}(x, y).

To compute the Zernike moments of a given image, the center of mass of the object is taken as the origin. The magnitude of the Zernike moments is rotation invariant by definition. Taking the center of mass of the object as the origin of the coordinate system makes them translation invariant as well. Additionally, to provide scale invariance, the object is scaled to fit inside the unit circle.

Translation invariance is achieved by translating the original image f(x, y) to f(x + x̄, y + ȳ), where x̄ = m_{10}/m_{00} and ȳ = m_{01}/m_{00} are computed from the regular geometric moments m_{pq}.
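The centroid computation x̄ = m10/m00, ȳ = m01/m00 can be sketched as follows for a binary print (a minimal illustration, not the authors' code):

```python
import numpy as np

def centroid(image):
    """Center of mass (x_bar, y_bar) = (m10/m00, m01/m00) of a binary print."""
    ys, xs = np.nonzero(image)
    m00 = len(xs)                     # m00: number of foreground pixels
    return xs.sum() / m00, ys.sum() / m00

img = np.zeros((5, 5))
img[2, 3] = 1                         # single foreground pixel at x=3, y=2
print(centroid(img))  # (3.0, 2.0)
```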
The fundamental property of Zernike moments is their rotational invariance. If f(x, y) is rotated by an angle α, then the Zernike moment Z'_{nm} of the rotated image is obtained as:

Z'_{nm} = Z_{nm}\, e^{-jm\alpha}    (8)

Thus, the magnitudes of the Zernike moments can be used as rotation-invariant image features.
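Eqs. (7) and (8) can be checked numerically: the sketch below computes a discrete Zernike moment over the unit disk and verifies that its magnitude is unchanged by a 90° rotation. The pixel-to-unit-disk mapping is one common convention, assumed here:

```python
import numpy as np
from math import factorial

def radial(n, m, r):
    """Zernike radial polynomial R_nm of Eq. (2) (r may be an array)."""
    m = abs(m)
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * r ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

def zernike_moment(image, n, m):
    """Discrete Zernike moment of Eq. (7) over the unit disk."""
    N = image.shape[0]
    coords = (2 * np.arange(N) + 1 - N) / N   # pixel centers mapped to [-1, 1]
    x, y = np.meshgrid(coords, coords)
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    mask = r <= 1.0                           # keep pixels inside the unit disk
    conj_basis = radial(n, m, r) * np.exp(-1j * m * theta)  # V*_nm
    return (n + 1) / np.pi * np.sum(image[mask] * conj_basis[mask])

# Rotation invariance (Eq. 8): the magnitude survives a 90-degree rotation.
img = np.random.default_rng(0).random((32, 32))
z = zernike_moment(img, 2, 2)
z_rot = zernike_moment(np.rot90(img), 2, 2)
print(abs(abs(z) - abs(z_rot)) < 1e-9)  # True
```

Because the sampling grid is symmetric, a 90° rotation only permutes the summation terms and multiplies the moment by a unit-magnitude phase, so the magnitudes agree to floating-point precision.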
Applying Zernike moments up to order 10, we get 36 features for each finger and palm print. Thereby we have 36 × 6 = 216 features for each instance of a person. Not all of these features are independent of each other, nor are they all relevant to a particular task. Feature selection is an important aspect of any pattern recognition application, used to obtain optimal features from a larger set. Many types of feature selection methods have been proposed in the literature [17,18]. The F-ratio is widely used for its powerful discriminative capability [19]. The F-ratio takes the mean and variance of each feature. For a two-class problem, the F-ratio of the i-th dimension in the feature space can be expressed as follows:
f_i = \frac{(\mu_{1i} - \mu_{2i})^2}{\sigma_{1i}^2 + \sigma_{2i}^2}    (9)

where μ_{1i}, μ_{2i}, σ²_{1i}, and σ²_{2i} are the mean values and variances of the i-th feature for class 1 and class 2, respectively.
The maximum f_i over all the feature dimensions can be selected to describe a problem: a higher F-ratio implies a better feature for a given classification problem. For M users and N features, Eq. (9) produces C(M, 2) × N entries (rows × columns). The overall F-ratio for each feature is calculated using the column-wise means and variances as follows:

F_i = \frac{\mu_i^2}{\sigma_i^2}    (10)

where μ_i and σ²_i are the mean and variance of the F-ratios of all two-class combinations for the i-th feature.
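Eqs. (9) and (10) can be sketched as follows; the toy data and the small stabilizing constants are our own assumptions:

```python
import numpy as np
from itertools import combinations

def f_ratio_scores(X, labels):
    """Overall F-ratio per feature, following Eqs. (9)-(10).

    X: (samples, features) matrix; labels: class id per sample.
    For each pair of classes, f_i = (mu1 - mu2)^2 / (var1 + var2);
    the overall score per feature is mean^2 / variance of the f_i
    values down each column (one row per class pair).
    """
    classes = np.unique(labels)
    pair_rows = []
    for a, b in combinations(classes, 2):
        Xa, Xb = X[labels == a], X[labels == b]
        num = (Xa.mean(axis=0) - Xb.mean(axis=0)) ** 2
        den = Xa.var(axis=0) + Xb.var(axis=0) + 1e-12  # avoid divide-by-zero
        pair_rows.append(num / den)
    F = np.array(pair_rows)                 # C(M,2) x N entries
    return F.mean(axis=0) ** 2 / (F.var(axis=0) + 1e-12)

def select_top(X, labels, n):
    """Indices of the n features with the highest overall F-ratio."""
    return np.argsort(f_ratio_scores(X, labels))[::-1][:n]

# Toy data: 3 classes, 4 features; only feature 2 separates the classes.
y = np.repeat([0, 1, 2], 10)
X = np.zeros((30, 4))
X[:, 2] = y * 5.0 + np.tile([-0.1, 0.1], 15)  # strong shift + tiny spread
print(select_top(X, y, 1))  # [2]
```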
3.5 Similarity Measure (Classification)
It has been shown that in biometric recognition systems the highest performance is obtained using SVM classifiers [20,21], Bayes classifiers [22], and neural networks [23,24]. However, in our proposed method, the computationally inexpensive Euclidean distance is applied to compute the similarity between the test finger/palm prints and the stored finger/palm prints in the database.
If p = (p₁, p₂, . . . , p_n) and q = (q₁, q₂, . . . , q_n) are two feature vectors, then the Euclidean distance is defined as:

d(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}    (11)
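Eq. (11) and the minimum-distance decision rule can be sketched as follows (the template names are hypothetical):

```python
import numpy as np

def euclidean(p, q):
    """Eq. (11): Euclidean distance between two feature vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(np.sum((p - q) ** 2)))

def identify(test_vec, templates):
    """Minimum-distance rule: return the identity of the nearest template."""
    return min(templates, key=lambda name: euclidean(test_vec, templates[name]))

templates = {"subject_a": [0.0, 0.0], "subject_b": [3.0, 4.0]}
print(euclidean([0, 0], [3, 4]))        # 5.0
print(identify([2.9, 4.2], templates))  # subject_b
```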
3.6 Feature Fusion
In our proposed system, we work on feature-level fusion, where the features from different modalities are merged to produce a better result. We apply the F-ratio to select the most appropriate of all the features to produce the optimum result.
Different combinations of feature-level fusion are investigated in this paper. They are described as follows:

1. Fusion 1: Complete fusion of the 36 moments (features) of each finger (five different fingers) and the palm, resulting in 216 features, as shown in Fig. 4.
2. Fusion 2: Fusion of the 36 moments of each of the four best ranked prints, as shown in Fig. 5.
Fig. 4 Complete feature fusion: Fusion 1 in the text

Fig. 5 Feature fusion from the four best ranked prints: Fusion 2 in the text

Fig. 6 Feature fusion after applying the F-ratio to each of the four best ranked prints: Fusion 3 in the text
3. Fusion 3: First obtain the optimal 20 moments out of the 36 moments using the F-ratio for each of the four best ranked prints. Then fuse these optimal moments to get 80 features, as shown in Fig. 6.
Fig. 7 Feature fusion from the four best ranked prints followed by the F-ratio to obtain optimal features: Fusion 4 in the text
4. Fusion 4: First fuse the 36 × 4 moments of the four best ranked prints, and then apply the F-ratio to obtain the N optimal features, as illustrated in Fig. 7.
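The four fusion schemes all reduce to concatenating per-print moment vectors, optionally followed by F-ratio selection. A minimal sketch, with random stand-ins for the 36 Zernike moments per print and placeholder selection indices instead of a real F-ratio ranking:

```python
import numpy as np

def fuse(print_features):
    """Feature-level fusion: concatenate per-print moment vectors."""
    return np.concatenate([np.asarray(p) for p in print_features])

rng = np.random.default_rng(0)
moments = {name: rng.random(36)   # 36 Zernike moments per print (stand-ins)
           for name in ["index", "middle", "ring", "pinkie", "thumb", "palm"]}

# Fusion 1: all six prints -> 216 features.
fusion1 = fuse(moments.values())

# Fusion 4: fuse the best four prints (144 features), then keep N of them.
best_four = fuse(moments[k] for k in ["middle", "ring", "thumb", "palm"])
selected = np.arange(36)          # placeholder for F-ratio-chosen indices
fusion4 = best_four[selected]

print(fusion1.shape, best_four.shape, fusion4.shape)  # (216,) (144,) (36,)
```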
4 Experimental Results and Discussion
Fingerprints and palm prints of 50 people of different ages, sexes, and ethnicities are used in the experiments. We evaluate the systems using the True Acceptance Rate (TAR) and the False Rejection Rate (FRR), which are calculated as follows:

FRR = (Number of falsely rejected individuals / Total number of individuals) × 100 %

TAR = 100 % − FRR
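These definitions can be sketched directly; the example counts correspond to the ring-fingerprint row of Table 1:

```python
def tar_frr(false_rejects, total_individuals):
    """TAR and FRR in percent, as defined above."""
    frr = 100.0 * false_rejects / total_individuals
    return 100.0 - frr, frr

# Ring fingerprint: 15 of 50 individuals falsely rejected.
print(tar_frr(15, 50))  # (70.0, 30.0)
```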
Table 1 shows the TAR and FRR for each individual print. From the table we can see that the highest performance (TAR) is 70 %, achieved with the ring fingerprint, followed by 66 % with the palm print. The lowest performance is 50 %, with the pinkie fingerprint. Table 1 also compares the performance of the ZM-based method to a principal component analysis (PCA)-based method [25]. We choose a PCA-based method
Table 1  TAR and FRR of individual prints

          ZM-based method       PCA-based method
          TAR (%)   FRR (%)     TAR (%)   FRR (%)
Index       52        48          49        51
Middle      61        39          58        42
Ring        70        30          66        34
Pinkie      50        50          47        53
Thumb       61        39          59        41
Palm        66        34          62        38
for comparison because PCA is one of the most widely used techniques in recognition systems. In the PCA-based experiment, the highest 300 principal components are used for each print, and the Euclidean distance is used as the classifier. The table shows the superiority of the ZM-based method over the PCA-based method in terms of TAR (%).
Complete fusion (Fusion 1 and Fig. 4) is achieved by combining the feature vectors of all prints into a single feature vector yielding 216 features. Similarities are calculated between an individual and his/her template stored in the database using the Euclidean distance. The minimum distance is used to determine the similarity between the test pattern and the enrolled template. This experiment gives a TAR of 88 %, a huge gain in performance compared to the individual prints, where the maximum TAR is 70 % for the ring fingerprint (Table 1). Using the PCA-based method, Fusion 1 produces a total of 1,800 features and gives a TAR of 78 %, which is much less than that obtained using ZM. In the following experiments with Fusions 2–4, we use only the proposed ZM-based method.
4.1 Recognizing Using the Best Four Prints (Fusion 2)
As shown in Table 1, the middle, ring, thumb, and palm are the highest ranking prints in terms of TAR. We interlace their features into one matrix, producing 144 features (see Fig. 5). With this fusion, the TAR increases to 94 %.
4.2 Recognizing Using the Optimal N Moments from the Best Four Prints
In this experiment, we use the four best prints: middle, ring, thumb, and palm. Optimal features are extracted from the moments of these prints, using the F-ratio to select them. Two approaches, named Fusion 3 (Fig. 6) and Fusion 4 (Fig. 7), are used. With Fusion 3, the TAR increases to 98 %.
Using Fusion 4 with 40 optimal features (N = 40 in Fig. 7), the TAR remains the same (98 %) as with Fusion 3. The F-ratio chooses the optimal 40 moments as follows: 22.5 % middle-finger moments (9 moments), 25 % ring moments (10 moments),
Table 2  TAR (%) obtained using different types of fusion

Different fusion     TAR (%)
Fusion 1               88
Fusion 2               94
Fusion 3               98
Fusion 4 (N = 40)      98
Fusion 4 (N = 36)     100
Table 3  Execution time (in seconds) of the two methods for Fusion 1

                     Proposed ZM-based method    PCA-based method
Feature extraction          0.102                     0.212
Matching                    0.002                     0.014
32.5 % thumb moments (13 moments), and 20 % palm moments (8 moments).
We then reduce the number of optimal features to N = 36 in Fusion 4. In this experiment, the TAR increases to 100 %, which means that all the individuals are correctly accepted. Of the 36 moments, the middle finger contributes 8 moments (22.22 % of the total), the ring finger 9 moments (25 %), the thumb 13 moments (36.11 %), and the palm 6 moments (16.67 %). Table 2 summarizes the TAR (%) using the different types of fusion.
4.3 Execution Time Comparison
The proposed ZM-based method is compared with the PCA-based method on the Fusion 1 experiment in terms of execution time. The code is developed in MATLAB R2009b, and the machine is a dual-core 2.10 GHz. The PCA-based method involves singular point detection and a huge Euclidean distance matrix over 1,800 principal components. On the other hand, though ZM extraction is computationally expensive, the proposed method compensates for it by not involving singular point detection and by using a smaller Euclidean distance matrix over 216 moments. Table 3 shows the execution time in seconds for both methods.
Table 4  EER (%) obtained using different types of fusion

Different fusion     EER (%)
Fusion 1               0.34
Fusion 2               0.24
Fusion 3               0.20
Fusion 4 (N = 40)      0.18
Fusion 4 (N = 36)      0.17
4.4 Verification
For person verification, the system differentiates genuine prints from impostors' prints as the user provides his/her fingerprint and palm print images in support of his/her claimed identity. For this purpose, we calculate the Euclidean distance between the test pattern of the applicant and his/her template in the database. If the distance is below a threshold, verification is successful; otherwise, the subject is rejected. In the following subsections, we present the verification results of several different experiments.
In the verification, we use four samples from each subject as the enrollment template. We repeat the experiments many times with different thresholds. The remaining two samples are used to construct matching and non-matching sets and to estimate the FRR and the FAR (False Acceptance Rate). The FAR is defined as

FAR = (Number of falsely accepted impostors / Total number of impostors) × 100 %
The Equal Error Rate (EER) is the rate at which the FAR and FRR are equal. This is the most important measure of biometric system accuracy. In the experiments, we calculate the EER for the different types of fusion. Table 4 shows the EER obtained in the different experiments. Fusing all prints' moments (Fusion 1) gives EER = 0.34 %, which is comparatively high. Fusion 1 with the PCA-based method described earlier gives EER = 0.41 %.
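The EER computation over a threshold sweep can be sketched as follows; the score distributions below are synthetic stand-ins, separated widely enough that the resulting EER is near zero:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate from genuine and impostor distance scores.

    A claim is accepted when distance <= threshold, so FRR falls and
    FAR rises as the threshold grows; the EER sits where they cross.
    """
    best_gap, best_rate = np.inf, 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine > t)      # genuine users wrongly rejected
        far = np.mean(impostor <= t)    # impostors wrongly accepted
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

# Synthetic, well-separated distance scores: EER should be near zero.
rng = np.random.default_rng(2)
genuine = rng.normal(1.0, 0.5, 1000)
impostor = rng.normal(4.0, 0.5, 1000)
print(f"EER ~ {100 * eer(genuine, impostor):.2f} %")
```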
Verifying using the best four prints (Fusion 2) gives EER = 0.24 %, while using Fusion 3 we get an EER of 0.20 %. The best EER of 0.17 % is obtained using Fusion 4 with N = 36. Figure 8 shows the ROC graph for Fusion 4 with N = 36.
From the above results, we can see that fusing different fingerprints and the palm print improves biometric system accuracy. The ring fingerprint provides the best individual performance. This can be attributed to the fact that the ring finger is involved in less manual work, and therefore its print is less worn out.
The application of the F-ratio to select optimal features significantly improves identification and verification performance. In comparison to the method described in [7], where the F-ratio was not applied, our proposed method enhances
Fig. 8 ROC graph for Fusion 4 with N = 36. EER is 0.17 %
the TAR to 100 % (Fusion 4) from 88 % (Fusion 1) and 94 % (Fusion 2). It should be mentioned that Fusions 1 and 2 do not use the F-ratio.
5 Conclusion
We presented a new approach to fingerprint- and palm print-based identification and verification. The print features are extracted using Zernike moments, and different types of fusion are investigated. The F-ratio is used to select the optimum moments. The experimental results show that applying the F-ratio and the Euclidean distance on a medium-size database gives 100 % TAR and 0.17 % EER, which is considered very good. The evaluation of the proposed method on a larger database will be carried out in a future study.
Acknowledgments This work is supported by the Research Center of the College of Computer and Information Sciences under the Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia. The authors gratefully acknowledge Prof. George Bebis, Professor in the Dept. of Computer Science and Engineering, University of Nevada, Reno, and a visiting professor at King Saud University, Saudi Arabia, for his code on Zernike moments and advice throughout the work.
References
1. Krause, M.; Harold, F.T.: Information security management handbook. Auerbach Publications, New York (1999)
2. Mehtre, B.M.: Fingerprint image analysis for automatic identification. Mach. Vis. Appl. 6, 124–139 (1993)
3. Maio, D.; Maltoni, D.: Direct gray-scale minutiae detection in fingerprints. IEEE Trans. Pattern Anal. Mach. Intell. 19, 27–40 (1997)
4. Jiang, X.; Yau, W.Y.; Ser, W.: Minutiae extraction by adaptively tracing the gray level ridge of the fingerprint image. IEEE 6th International Conference on Image Processing, pp. 852–856 (1999)
5. Hu, M.K.: Visual pattern recognition by moment invariants. IRE Trans. Inform. Theory 2, 179–187 (1962)
6. Amayeh, G.; Bebis, G.; Erol, A.; Nicolescu, M.: Hand-based verification and identification using palm-finger segmentation and fusion. Comput. Vis. Image Underst. 113, 477–501 (2009)
7. Abdel Qader, H.; Ramli, A.R.; Al-Haddad, S.: Fingerprint recognition using Zernike moments. Int. Arab J. Inf. Technol. 4(4), 372–376 (2007)
8. Wu, X.Q.; Wang, K.Q.; Zhang, D.: Palmprint recognition using valley features. 4th International Conference on Machine Learning and Cybernetics, vol. 8, pp. 4881–4885 (2005)
9. Kong, W.K.; Zhang, D.: Competitive coding scheme for palmprint verification. 17th International Conference on Pattern Recognition, vol. 1, pp. 520–523 (2004)
10. Goh, M.; Connie, T.; Teoh, A.B.; Ngo, D.C.: A fast palm print verification system. International Conference on Computer Graphics, Imaging and Visualization, pp. 168–172 (2006)
11. Ferrer, M.A.; Morales, A.; Travieso, C.M.; Alonso, J.B.: Low cost multimodal biometric identification system based on hand geometry, palm, and finger print texture. The 41st Annual IEEE International Carnahan Conference on Security Technology, pp. 52–58 (2007)
12. Guo, L.; Ma, B.: Three modals biometric information fusion based on image quality estimation. The 2nd International Conference on Bioinformatics and Biomedical Engineering (ICBBE 2008), pp. 1193–1195 (2008)
13. Zhu, L.; Zhang, S.: Multimodal biometric identification system based on finger geometry, knuckle print and palm print. Pattern Recognit. Lett. 31, 1641–1649 (2010)
14. Fingerprint enhancement. http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/index.htm. Last access on May 18, 2010
15. Khotanzad, A.; Hong, Y.H.: Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 12, 489–498 (1990)
16. Mukundan, R.; Ramakrishnan, K.R.: Moment functions in image analysis: theory and applications. World Scientific Publishing, Singapore (1998)
17. Zhang, L.; Sun, G.; Guo, J.: Feature selection for pattern classification problem. The 4th International Conference on Computer and Information Technology, pp. 233–237 (2004)
18. Mitra, P.; Murthy, C.A.; Pal, S.K.: Unsupervised feature selection using feature similarity. IEEE Trans. Pattern Anal. Mach. Intell. 24(3), 301–312 (2002)
19. Ho, T.; Basu, M.: Complexity measures of supervised classification problems. IEEE Trans. Pattern Anal. Mach. Intell. 24(3), 289–300 (2002)
20. Yaghoubi, Z.; Faez, K.; Eliasi, M.; Eliasi, A.: Multimodal biometric inspired by cortex and support vector machine classifier. 2010 International Conference on Multimedia Computing and Information Technology, pp. 93–96 (2010)
21. Dinerstein, S.; Dinerstein, J.; Ventura, D.: Robust multi-modal biometric fusion via multiple SVMs. 2007 IEEE International Conference on Systems, Man and Cybernetics, pp. 1530–1535 (2007)
22. Yang, S.; Song, J.; Rajamani, H.; Cho, T.; Zhang, Y.; Mooney, R.: Fast and effective worm fingerprinting via machine learning. 2006 IEEE International Conference on Autonomic Computing, pp. 311–313 (2006)
23. Ooi, S.Y.; Teoh, A.B.J.; Ong, T.S.: Compatibility of biometric strengthening with probabilistic neural network. 2008 International Symposium on Biometrics and Security Technologies, pp. 1–6 (2008)
24. Zhang, D.; Zuo, W.: Computational intelligence-based biometric technologies. IEEE Comput. Intell. Mag. 2, 26–36 (2007)
25. Wang, Y.; Ao, X.; Du, Y.; Li, Y.: A fingerprint recognition algorithm based on principal component analysis. IEEE TENCON 2006, pp. 1–4 (2006)