
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS

Decision Fusion for Multimodal Biometrics Using Social Network Analysis

Padma Polash Paul, Marina L. Gavrilova, and Reda Alhajj

Abstract—This paper presents for the first time decision fusion for a multimodal biometric system using social network analysis (SNA). The main challenge in the design of biometric systems, at present, lies in the unavailability of high-quality data to ensure consistently high recognition results. Resorting to multimodal biometrics partially solves the problem; however, issues with dimensionality reduction, classifier selection, and aggregated decision making remain. The presented methodology successfully overcomes the problem through employing novel decision fusion using SNA. While several types of feature extractors can be used to reduce the dimension and identify significant features, we chose Fisher Linear Discriminant Analysis as one of the most efficient methods. Social networks are constructed based on similarity and correlation of features among the classes. The final classification result is generated based on two levels of decision fusion. At the first level, individual biometrics (face, ear, or signature) are classified using a matching score methodology; SNA is used to reinforce the confidence level of the classifier to reduce the error rate. At the second level, the outcomes of classification based on the individual biometrics are fused together to obtain the final decision.

Index Terms—Centrality measures, confidence level of classifiers, decision fusion, multimodal biometrics, social network analysis.

I. INTRODUCTION

A BIOMETRIC identification or verification system is an automatic pattern recognition system that provides the facilities to use physiological or behavioral characteristics of an individual for access control solutions [1]. Physiological biometric identifiers include fingerprints, hand geometry, ear patterns, eye patterns (iris and retina), facial features, and other physical characteristics. Behavioral identifiers include voice, signature, typing patterns, and others [2]. A multimodal biometric system is a relatively new approach that combines multiple biometric traits to overcome the problems of a unimodal biometric system, such as intraclass variability, interclass similarity, data quality, nonuniversality, sensitivity to noise, and other factors. It can improve the performance of a biometric system significantly. Furthermore, it can effectively reduce spoof attacks, increase the degrees of freedom for the system, improve population convergence, and reduce user ties for individual biometric traits [2]. Multimodal biometric systems have been shown to be more efficient and reliable than unimodal systems. Although the downside of such system deployment may include additional memory, processing time, and computational requirements, the advantages often make those systems the preferred choice for real-world large-scale authentication systems [3]. The main challenge in the design of highly accurate, secure, and commercially deployable biometric systems lies with the unavailability of high-quality data to ensure consistently high recognition results. Due to the nature of biometric data acquisition, facial, voice, or gait biometrics suffer from great variability due to different lighting conditions, distance to the surveillance camera, equipment, operator training, and so on. Resorting to a multimodal biometric partially solves the problem, providing the system with more data to rely on and employing intelligent data analysis methods. However, issues with dimensionality reduction and aggregated decision making remain largely unresolved. This paper successfully overcomes this challenge through employing novel feature extraction and, for the first time, utilizing social network analysis (SNA) to assist with biometric recognition. While biometric data can be easily changed or manipulated, it requires far more resources or time to change the sphere of interests, employment, and the social environment of an individual. This paper presents a multimodal biometric system using two physiological biometrics (face, ear) and one behavioral biometric (signature). The main novelty and contribution of the paper is to reduce the error rate of recognition and enhance the security of the biometric system through applying multimodal fusion combined with SNA.

Manuscript received April 1, 2013; accepted August 3, 2013. This work was supported in part by a Natural Sciences and Engineering Research Council of Canada Discovery Grant, in part by University Research Grants Committee Seed Grants, and in part by an Alberta Innovates Technology Futures Scholarship. This paper was recommended by Associate Editor V. Piuri.

The authors are with the Computer Science Department, University of Calgary, Calgary, AB T2N1N4, Canada (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMC.2014.2331920
As far as the authors are aware, this is the first comprehensive research study on social network (SN) application in multimodal biometric research. The multibiometric scheme is applied both to the individual biometric traits and at the final decision level. SNA is used to improve the confidence level of the individual biometric trait classifier. Finally, the results of face, ear, and signature classification are combined using a weighted score based on the confidence level of each biometric trait and classifier.
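The paper does not state the combination formula at this point; purely as an illustrative sketch, a confidence-weighted score fusion could look like the following. The function name, example match scores, and confidence weights are assumptions, not values from the paper.

```python
# Illustrative only: function name, example match scores, and
# confidence weights below are assumed, not taken from the paper.

def fuse_weighted_scores(scores, confidences):
    """Confidence-weighted average of per-trait match scores.

    scores      -- dict: trait -> match score in [0, 1]
    confidences -- dict: trait -> classifier confidence (positive)
    """
    total = sum(confidences[t] for t in scores)
    # Normalize confidences to weights summing to 1, then average.
    return sum(scores[t] * confidences[t] / total for t in scores)

fused = fuse_weighted_scores(
    {"face": 0.92, "ear": 0.78, "signature": 0.85},
    {"face": 0.90, "ear": 0.60, "signature": 0.75},
)
print(round(fused, 4))  # 0.8593
```

A trait whose classifier is trusted more (here, face) pulls the fused score toward its own match score.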

An effective fusion scheme for multiple biometric traits is highly important in the development of a multimodal biometric system. Ross et al. [4] in 2006 noted that the goal of fusion is to determine the best set of experts in a given problem domain and devise an appropriate function that can optimally combine the decisions rendered by the individual experts.

2168-2216 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

Fig. 1. Block diagram of the proposed multimodal biometric system with the SNA.

Fusion in a multibiometric system integrates different pieces of evidence at different levels. It can be subdivided into two main categories: prematching fusion and post-matching fusion [6]–[9]. Prematching biometric fusion can be further categorized as sensor level fusion (dealing with the raw data) and feature level fusion (dealing with the features obtained from the raw data) [6], [7]. Post-matching biometric fusion typically belongs to one of three categories: the matching score level, the rank level, and the decision level. In the case of matching score fusion, multiple classifiers output a set of match scores that are fused to generate a single scalar score [8], [9]. Rank level fusion takes into account the position of each person in the ranked list produced by each individual classifier. For example, the Borda count (BC) method may be used to make the final decision based on the rank of each identity [4]. In decision level fusion, the final acceptance or rejection of an identity is decided based on the combination of outcomes obtained from the individual biometric classifiers, using techniques such as majority voting [10].
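As a minimal illustration of decision level fusion by majority voting, each unimodal classifier outputs an identity label and the most frequent label wins. The labels and function name below are hypothetical, not from the paper.

```python
from collections import Counter

# Decision level fusion by majority voting: each unimodal classifier
# emits one identity label; the most frequent label is the decision.
# Labels are hypothetical examples.

def majority_vote(decisions):
    """Return the identity label output by the most classifiers."""
    return Counter(decisions).most_common(1)[0][0]

print(majority_vote(["person_7", "person_7", "person_3"]))  # person_7
```

With an odd number of classifiers (three traits here), a two-way tie over the top label cannot occur.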

The proposed multimodal biometric system uses face, ear, and signature as biometric traits. We develop a twofold multimodal biometric scheme. First, we apply the multimodal approach for each of the biometric traits. Next, we combine the recognition outcomes from the three chosen biometric traits (face, ear, and signature) to arrive at the final decision. SNA is utilized at both stages of the algorithm to arrive at the decision fusion. As in a standard biometric system, we apply feature extraction prior to classification. The SN is constructed from the database, and SN metric values are incorporated into decision making to obtain the final system verification result. The SN establishes the overall relationship of an individual with the other persons in the database. The existence of social connections among individuals motivates the use of the SN-based classifier for the proposed multimodal system. The details are explained in the methodology section. Fig. 1 shows the block diagram of the proposed multibiometric system with the twofold multimodal scheme.

The rest of the paper is organized as follows. Section II is a background study on multimodal biometrics. Section III describes the proposed methodology and introduces the idea of using SNA. It details the SN construction and the proposed fusion method. In Section IV, experimental results are presented. Finally, Section V concludes the paper with a look into some future research directions.

II. BACKGROUND OF MULTIMODAL BIOMETRICS

Although storing biometric traits and their use for authentication purposes have been topics of research for the last twenty years, it was not until very recently that the combination of a number of different traits for person authentication was considered. Several approaches have been proposed and developed for multimodal biometric authentication systems, including the fundamental 1998 work of Hong and Jain [13]. This was the first bimodal approach, combining a PCA-based face and minutiae-based fingerprint identification system with a fusion method at the decision level [13]. In 2000, Frischholz and Dieckmann [14] presented the BioID system, an example of the first commercial multimodal system combining facial, voice, and lip movement modules.

In 2003, Fierrez-Aguilar et al. [15] suggested a multimodal approach relying on an appearance-based face verification system, a minutiae-based fingerprint verification system, and an HMM signature verification system, fused at the score level. Ross and Jain [9] further developed a multimodal system for face, fingerprint, and hand geometry, with matching score fusion based on sum rules, decision trees, and linear discriminant functions [9]. They continued to experiment with other multimodal system architectures and different fusion methods. An alternative multimodal system based on face, fingerprint, and hand geometry with fusion at the score level was presented in [20].

A novel approach of combining learning strategies and studying how to augment the performance of a multimodal biometric system has been recently reported in [18]. In that paper, the chaotic neural network paradigm and dimensionality-reduction techniques were utilized to improve the performance of a multimodal biometric system based on face, signature, and ear biometric traits. An investigation of advanced mathematical models, which were able to achieve over a 99% recognition rate while keeping the FRR below a 1% threshold, was reported in [19]. In this recent work, methods such as BC, the Markov chain model, and fuzzy fusion were utilized for a multimodal biometric rank level system.

A comprehensive review of all new developments linking artificial intelligence, fuzzy logic, pattern matching, and learning methods to modern state-of-the-art biometric research can be found in the recently released IGI book [46] as well as the Springer book chapter [16]. Another interesting direction has been reported in a 2012 issue of the IEEE Robotics and Automation Magazine [17], which links advanced multimodal biometric research with authentication and security aspects in virtual worlds.

From the previous discussion, it can be concluded that many multimodal biometric systems with various methods and strategies have been proposed over the last decade to achieve higher recognition performance. In this context, we have also observed that, although PCA remains a popular choice for feature extraction, the application of Fisher linear discriminant analysis (FLDA) with an extended confidence level for face, ear, and signature in the context of a multimodal system has not been investigated.

The decision fusion method can be applied to a variety of classification outputs. However, the training and testing time complexity of the classifier should be taken into account. Another important factor is the recognition rate of a classifier. When classifiers of varied performance are used in a multimodal system, the decision fusion outcome may not be optimal if one classifier clearly outperforms the others. In this paper, we have developed a method that selects the optimal classifier and provides a confidence level of the classifier based on the relationship among different individual biometrics. Increasing the confidence level of the classifier improves the classification accuracy. In other words, it decreases the false acceptance rate (FAR) and improves the true positive rate.

According to the literature review on the topic, there is no reported research that has focused on investigating how studying SN traits might augment the performance of a multibiometric system. To the best of our knowledge, this paper presents for the first time a methodology based on SN traits that are used to augment multimodal biometric system performance.

III. PROPOSED MULTIMODAL BIOMETRIC SYSTEM

In this section, the proposed multimodal biometric system is described. An overview of the proposed algorithm and the feature extraction method based on FLDA is given. The two final subsections outline the SN construction, analysis, and fusion of decisions using SNA.

A. Proposed Algorithm for Multimodal Biometrics

As we mentioned earlier, we have designed the proposed system to reduce the error rates. To meet this goal, it is important to improve the classification accuracy for each of the biometric traits.

The first step of any biometric system is feature extraction. FLDA is generally known as the Fisherimage feature extraction method (Fisherface, for face biometrics, is another common term). For biometric feature extraction, FLDA has been used in [19], [34], and [41]. FLDA is a combination of principal component analysis (PCA) and linear discriminant analysis (LDA) [41]. The FLDA projection is used to extract the features, and those extracted features are used for training and testing.

After the feature extraction, a similarity matrix for the training set is computed to construct the SN. From the similarity matrix, correlation coefficients are computed and a threshold is applied to obtain the network adjacency matrix.

For the test set, features are extracted using FLDA, and the k-nearest neighbor (k-NN) classifier is applied. From this classifier, the top three matches and their matching scores are recorded. For each of the matched persons, we next compute the similarity values with the training set samples. We then replace the corresponding entries of the training similarity matrix with the test similarity scores for each individual. Thus, three similarity matrices are obtained for each test sample, and we construct the SNs for the given test person. All three networks are used to calculate the SN metrics. For decision making, the differences between the training metric scores and the test metric scores are used.

The above process is performed for each individual biometric trait. Thus, for each single biometric trait, the same procedure is applied to obtain the matching score and the SN metric values. These values are used to make the final classification decision. Fig. 2 shows the block diagram of the proposed system.

Fig. 2. Block diagram of the proposed multimodal biometric system. (a) Multimodal approach for each biometric trait. (b) Multimodal approach to fuse the decisions of two different biometric traits.

B. Fisherimage for Feature Extraction

In 1991, Turk and Pentland [43] introduced the Eigenface method in their fundamental work on face recognition. They used PCA to extract features for the face detection and recognition system. From that work, PCA became a standard tool for modern data analysis in the fields of biometrics and image processing, to name a few. For complex datasets, PCA is a simple, nonparametric method to extract the features [44].

In 1936, Fisher [45] introduced a statistical method to discriminate features for classifying flowers. In his article "The Use of Multiple Measurements in Taxonomic Problems," he introduced a method to maximize the difference between the classes of flowers. Building on the idea of Fisher [45], Belhumeur et al. [41] established the Fisherface (FLDA) method for face recognition. The method is less susceptible to different illumination conditions, which is a downside of the Eigenface method. They designed a class-specific linear projection method for face recognition to maximize the interclass variation and minimize the intraclass similarity [41]. The Eigenface method finds the total variation of the data regardless of the class specification [43]. Using the Fisherface method for the proposed multimodal system allows us to perform discriminative, class-specific feature extraction. Because of the class specificity and discriminability of its feature extraction, we have used the Fisherface method instead of Eigenface as a tool to find the best features. The Fisherface method uses both PCA and LDA to produce a subspace projection matrix, similar to that used in the Eigenface method, and is shown to be superior to both in diverse application domains [41], [43]–[45].

Let c be the number of classes for each biometric trait, let Γ be the training set, and let Γ_k denote a facial image belonging to a specific class X_i. We can compute the scatter matrices representing the within-class (S_W), between-class (S_B), and total (S_T) distributions of the training set through the image space [41]

S_W = \sum_{i=1}^{c} \sum_{\Gamma_k \in X_i} (\Gamma_k - \Psi_i)(\Gamma_k - \Psi_i)^T    (1)

S_B = \sum_{i=1}^{c} |X_i| (\Psi_i - \Psi)(\Psi_i - \Psi)^T    (2)

S_T = \sum_{n=1}^{M} (\Gamma_n - \Psi)(\Gamma_n - \Psi)^T.    (3)

In the above formulas, \Psi = \frac{1}{M} \sum_{n=1}^{M} \Gamma_n is the average image vector of the entire training set, and \Psi_i = \frac{1}{|X_i|} \sum_{\Gamma_k \in X_i} \Gamma_k is the average of each individual class X_i.

After applying PCA on the total scatter matrix S_T, we obtain a projection matrix U_pca and choose the top M − c principal components. This projection matrix is used to reduce the dimensionality of the within-class scatter matrix before computing the top c − 1 eigenvectors U_fld, according to [41]

U_fld = \arg\max_U \frac{\left| U^T U_{pca}^T S_B U_{pca} U \right|}{\left| U^T U_{pca}^T S_W U_{pca} U \right|}.    (4)

The method then proceeds to compute the matrix U_ff to project a face image onto a reduced space of c − 1 dimensions, where the between-class scatter is maximized for all c classes, while the within-class scatter is minimized for each class X_i

U_ff = U_fld U_pca.    (5)

Once the U_ff matrix has been constructed, it is used to generate the training and the test feature sets, which serve as features for the k-NN classifier. The following section describes how the k-NN classifier is used to generate the matching score and to report the top matches in the database.
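A hedged sketch of the Fisherface pipeline of (1)-(5) follows: PCA down to M − c components, then the top c − 1 discriminant directions. The synthetic data, the function name, and the use of a pseudo-inverse to solve the scatter-ratio eigenproblem are our assumptions, not the authors' implementation.

```python
import numpy as np

def fisherface_projection(X, y):
    """Sketch of the Fisherface projection of (1)-(5): PCA to M - c
    components, then the top c - 1 discriminant directions.
    X: (M, d) images as row vectors; y: (M,) class labels.
    Returns a (d, c - 1) projection matrix."""
    classes = np.unique(y)
    M, c = len(X), len(classes)
    Xc = X - X.mean(axis=0)
    # PCA via SVD of the centered data; keep the top M - c components.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    U_pca = Vt[: M - c].T                   # (d, M - c)
    Z = Xc @ U_pca                          # data in the PCA subspace
    # Within- and between-class scatter in the PCA subspace, as in (1)-(2).
    Sw = np.zeros((M - c, M - c))
    Sb = np.zeros((M - c, M - c))
    for cls in classes:
        Zi = Z[y == cls]
        mi = Zi.mean(axis=0)
        Sw += (Zi - mi).T @ (Zi - mi)
        Sb += len(Zi) * np.outer(mi, mi)    # Z is centered, so Psi = 0 here
    # Top c - 1 eigenvectors of pinv(Sw) @ Sb maximize the ratio in (4).
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[: c - 1]
    U_fld = evecs[:, order].real            # (M - c, c - 1)
    return U_pca @ U_fld                    # (d, c - 1)

rng = np.random.default_rng(0)
offsets = np.repeat(rng.normal(size=(3, 50)) * 3.0, 4, axis=0)
X = rng.normal(size=(12, 50)) + offsets     # 3 classes, 4 samples each
y = np.repeat([0, 1, 2], 4)
U_ff = fisherface_projection(X, y)
print(U_ff.shape)  # (50, 2)
```

With c = 3 classes the projected feature space has c − 1 = 2 dimensions, matching the discussion around (5).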

C. k-Nearest Neighbor (k-NN) Classifier for Recognition

The k-nearest neighbor (k-NN) classifier is one of the simplest and most discriminative classifiers known in pattern recognition. The k-NN classifier uses topological similarity in the feature space for object recognition [11]. It uses a majority vote of the neighbors to establish the output, where k is a small positive integer. When k = 1, the method uses the similarity between the two objects. It is better to use odd values of k to avoid voting ties. Assume X_i and Y_j are two samples that each contain n features. Equations (6) and (7) represent the feature sets of the biometric template

X_i = \{x_1, x_2, \ldots, x_n\}    (6)

Y_j = \{y_1, y_2, \ldots, y_n\}.    (7)

In the above equations, the indices i and j denote the ith training and jth test samples from the model. The similarity S between the samples X_i and Y_j can be calculated from a distance measure using

S_{i,j} = 1 − d(X_i, Y_j)    (8)

where d(X_i, Y_j) denotes the distance function. Different distance functions are used in the k-NN classifier: the similarity value can be calculated using a Euclidean distance, a cosine distance, a city block distance, or others [11]. For the Euclidean distance, the following well-known equation is used:

d(X_i, Y_j) = \sqrt{\sum_{f=1}^{n} (x_f − y_f)^2}.    (9)
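Equations (8) and (9) can be sketched in a few lines. The assumption that features are scaled so that distances stay within [0, 1] (keeping the similarity nonnegative) is ours, not stated at this point in the paper.

```python
import math

def euclidean_distance(x, y):
    """Equation (9): Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def similarity(x, y):
    """Equation (8): similarity as 1 - distance (assumes features are
    scaled so that distances stay within [0, 1])."""
    return 1.0 - euclidean_distance(x, y)

print(round(similarity([0.2, 0.4, 0.1], [0.2, 0.1, 0.5]), 4))  # 0.5
```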

IV. DECISION FUSION USING SNA

SNA is a powerful tool that analyzes relationships in terms of graph theory, with nodes and links also known as actors and relations, respectively. SNA can easily emphasize the relative role and importance of each actor in the network [12]. In recent years, SNA has been applied to individual actors in a variety of research areas across different domains of science and art. In this paper, we propose to apply a data-mining-based technique to improve the accuracy and performance of a multimodal biometric system using SNA. However, building the SN among the individuals is one of the important and challenging tasks. In this paper, we propose a robust algorithm to construct a SN to assist the analysis in a biometric authentication system. The SN of the biometric data is constructed using similarity and correlation (relationships) among the persons (actors).

A. Social Network Construction for Biometrics

A number of data-mining-based approaches can be used for the SN construction of training and testing biometric templates. We have used the extracted features of individual biometric traits to construct the network in two steps. In the first step, a correlation similarity measure is calculated for the individual biometric features of each person. In the second step, links among the actors (individuals) are assigned if the normalized (range 0 to 1) correlation coefficient similarity value is greater than a threshold.



1) Network Construction for Training: First, we calculate Euclidean distances among the features of each biometric trait. Distance values are then converted into similarity values and are normalized to the range of zero to one. Correlation coefficients are then calculated from the similarity values. If a correlation coefficient value is greater than the threshold, a link is established between the two actors. The steps of the SN construction for the biometric training set are given in Algorithm 1.

Algorithm 1 Social Network Construction for Training
Step 1: Compute the distance values for each sample of the training set.
Step 2: Convert the distance values into similarity values and normalize to the [0..1] range.
Step 3: Find the correlation coefficients of the matrix.
Step 4: Compute the adjacency matrix from the coefficient matrix using the specified threshold.
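A possible reading of Algorithm 1 in code is sketched below. The choices of Euclidean distance, max-based normalization, Pearson correlation (via `np.corrcoef`), and a 0.5 threshold are our assumptions for illustration; the paper does not fix these details here.

```python
import numpy as np

def build_training_network(features, threshold=0.5):
    """Sketch of Algorithm 1. features: (M, d) extracted feature
    vectors, one row per training sample. Returns a binary adjacency
    matrix linking correlated samples."""
    # Step 1: pairwise Euclidean distances between training samples.
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Step 2: convert distances to similarities, normalized to [0, 1].
    sim = 1.0 - dist / dist.max()
    # Step 3: correlation coefficients of the similarity matrix rows.
    corr = np.corrcoef(sim)
    # Step 4: threshold the coefficients to get the adjacency matrix.
    adj = (corr > threshold).astype(int)
    np.fill_diagonal(adj, 0)                # no self-links
    return adj

rng = np.random.default_rng(1)
A = build_training_network(rng.normal(size=(6, 8)))
print(A.shape, bool((A == A.T).all()))  # (6, 6) True
```

Because correlation is symmetric, the resulting adjacency matrix describes an undirected network of actors.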

2) Network Construction for Test Data: The k-nearest neighbor algorithm (k-NN) computes the distances among the samples to find the smallest distance from all the samples for the final decision. It provides a ranking of the classification result based on the similarity (distance). In the proposed method, we have taken the N top matches of the classifier. The SN construction steps are given in Algorithm 2.

Algorithm 2 Social Network Construction for Testing
Step 1: Select the top N matched persons (samples) from the k-NN classifier.
Step 2: Replace the row and column of the similarity measure found from the training data set using the similarity measures of the given test sample and all samples in the training data set.
Step 3: Find the adjacency matrix of the new similarity matrix using the Social Network Construction for Training algorithm (Algorithm 1).
Step 4: Repeat this process for each of the N samples found from the classifier.

The algorithm outcome is the SN for each of the first N matches of the given sample found using the k-NN classifier.

B. SNA for Biometric Classification

Once the SN is constructed for training and testing, analy-sis is conducted to improve the confidence of the classifier.Formally, a Social Network (G) is represented as a graphwith vertex (V), edge (E) and weight (W), denoted asG = (V, E, W). Note that V is the set of actors (individuals)of the network, E is the relationship based on the similar-ity value (W). Build network is a 1-mode network becausethe actor is a person |V|×|V| and the relation weight is asimilarity value between two actors. After successful con-struction of the network, different metric operations can beutilized to obtain the knowledge about actors. The main met-rics used in proposed system are betweenness, eigenvector and

Fig. 3. Example of social network where the training set and the test personare used to calculate the centrality values. Dotted lines are the links to otherperson of the network.

degree centrality (DC). Centrality reflects the importance of the individual actors in the network.

The following example (depicted in Fig. 3) shows a SN built based on the features of Person 1 (P1), Person 2 (P2), and Person 3 (P3). From the network, it is found that test Person 1 (shown in a shaded circle) is related to Person 2, but P1 from the training set is related to other samples of the training set. In a traditional classifier, classes from the different clusters could not be used for the decision. However, the SN can overcome this problem and thus improve the confidence of the classifier. In this paper, we have used three samples for each person. The centrality values for each person preserve the relationships among the features in the network. If k-NN misses the direct distance, a centrality value from the SN training tells the classifier about the feature similarity and correlation. A classifier establishes the relationships among the classes and tries to separate different classes. Once the training process is done, the classifier can no longer modify the relationships within a class. In the SNA-based classifier, the decision is made based on the SN. Fig. 3 shows an example of a SN for the training set.

To improve the confidence of the classifier, betweenness centrality (BC), DC, eigenvector centrality (EC), and clustering coefficient (CC) measures are used. The following sections present definitions and calculation formulas for these centrality values. A single centrality value may not be sufficient to identify the relationships among the individuals; this is why it is necessary to utilize multiple centrality values to establish the SN classifier.

1) Betweenness Centrality: Sociologist Freeman developed the concept of BC [21]. It was introduced to find the most important node of the SN [22]. If a SN is defined as a graph G(V, E), the betweenness of a vertex v of the network can be computed using

Betweenness(v) = \sum_{p \ne v \ne q} \sigma_{pq}(v) / \sigma_{pq}.   (10)

In the above equation, \sigma_{pq} is the total number of shortest paths from node p to node q, and \sigma_{pq}(v) is the number of those paths that pass through v. The betweenness is calculated for each person in the network constructed from the training dataset.


2) Degree Centrality: In 1979, Freeman [24] defined the DC as a count of the number of edges incident upon a given node. Sade in 1989 proposed a new centrality measure named k-path centrality [26]. DC is the special case of k-path centrality when k is equal to one [25]. DC can be computed using the adjacency matrix of the network. Assume A is the adjacency matrix of the computed network. Then (11) can be used to compute the DC [24]

deg(v) = \sum_{j} A_{vj}.   (11)

In the biometric SN, DC defines how a person is linked to others.

3) Eigenvector Centrality: In 1972, Bonacich [27] suggested that the eigenvector could be a good network centrality measure. This measure can be computed from the largest eigenvalue of an adjacency matrix [37]. The method assigns relative scores to all nodes in the network; each assigned score depends on the scores of the high-scoring and the low-scoring neighbors [28]. For a Social Network G(V, E), if |V| is the number of vertices and A is the adjacency matrix, the EC of node v can be computed using (12) and (13) [27]

x_v = (1/\lambda) \sum_{t \in M(v)} x_t = (1/\lambda) \sum_{t \in G} a_{v,t} x_t   (12)

where M(v) is the set of neighbors of node v and \lambda is a constant. \lambda can be computed from

Ax = \lambda x.   (13)

In the EC measure, the eigenvector for the highest eigenvalue is chosen.
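A standard way to obtain the eigenvector of the largest eigenvalue in (12)-(13) is power iteration. The minimal sketch below assumes a connected, non-bipartite network (otherwise the iteration can oscillate between the two extreme eigenvalues) and a dense list-of-lists adjacency matrix.

```python
def eigenvector_centrality(A, iters=200):
    """Power-iteration sketch of EC: repeatedly apply the adjacency
    matrix A and renormalise; the iterate converges to the eigenvector
    of the largest eigenvalue for a connected, non-bipartite network."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(y) or 1.0  # rescale so the largest entry is 1
        x = [v / m for v in y]
    return x
```

On a triangle with one pendant node attached, the two symmetric triangle nodes receive equal scores, the hub scores highest, and the pendant lowest.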

4) Clustering Coefficient: Watts and Strogatz introduced the CC [29] in the context of the SNA. The CC is the ratio of the number of edges between a vertex's neighbors to the total possible number of edges between the vertex's neighbors [30]. It is a measure of the degree to which actors in a network tend to cluster together. The clustering coefficient C(G) of a graph G is the average over the CCs of its nodes [33]. This coefficient can be computed using

C(G) = (1/|V|) \sum_{v \in V} c(v)   (14)

where V is the set of actors and c(v) is the local CC of actor v, i.e., c(v) = 2e(v)/(deg(v)(deg(v) - 1)), with e(v) the number of edges between the neighbors of v.
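Equation (14), with c(v) taken as the ratio described above, can be sketched as follows; the dict-of-neighbors representation is an assumption of this sketch.

```python
def clustering_coefficient(adj):
    """C(G) of (14): average over the local coefficients
    c(v) = 2 * e(v) / (deg(v) * (deg(v) - 1)), where e(v) counts the
    edges among the neighbours of v; adj maps node -> neighbour list."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # c(v) = 0 when fewer than two neighbours
        e_v = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        total += 2.0 * e_v / (k * (k - 1))
    return total / len(adj)
```

A triangle, where every neighbour pair is connected, gives C(G) = 1; a star, whose leaves share no edges, gives C(G) = 0.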

C. Decision Fusion of Multimodal Biometrics

The most important part of the proposed method is the decision fusion. Fig. 4 presents the flowchart for making the final matching decision from the matching score and the metric values. In that figure, P1, P2, and P3 are the person identifiers. k-NN stands for the matching score or similarity measure normalized in the range of zero to one. BC, CC, EC, and DC are the differences between the training and test BC, CC, EC, and DC values. Each row represents the values for one person. The rank of each person is based on the similarity value. The classifier outputs the person with the highest similarity value in the table column labeled k-NN. For example, the input face and the input ear are classified as P2 and P3, respectively, based on

Fig. 4. Decision fusion process using matching scores and metric values ofthe social network.

the value of k-NN, since 0.0 and 0.85 are the highest similarity values. For the metric values, smaller differences result in more confident feedback to the classifier. In the proposed method, we have used not only the classifier rank for the same class, but also the relative measure for different classes. Using the SN metric values provides the flexibility of using both inter- and intra-class relative measures for classification.

In Fig. 4, the highlighted values are the smallest values for BC, EC, DC, and CC. The person with the highest k-NN score gets one point. For the metric values, the person with the smallest score is assigned one point. In this example, for the face, person P2 receives one point for the k-NN score, one for the CC score, and one for the EC score. Person P3 gets one point for the BC and one for the DC. Person P1 does not get any points. Finally, person P2 receives three points, P3 receives two, and P1 receives none. P2 wins because of the highest score. Similarly, for the ear biometric, P3 gets one point and P2 gets four points. If two persons obtain identical scores, then the person with the higher similarity score wins.

For the final decision-making, we combine the matrices for the face and the ear. From the example, P3 is assigned the final score of four and P2 gets the final score of six. Based on the experiments, we found that the proposed SNA fusion can generate final results that are highly accurate even when the classifier decision is wrong. The SNA also provides feedback to the classifier to improve the confidence level.
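The voting rule described above can be sketched as follows. The score values in the usage example are hypothetical stand-ins, not the actual entries of Fig. 4, and the function names are assumptions of this sketch.

```python
def trait_points(knn, metric_diffs):
    """One point to the person with the highest k-NN similarity, and one
    point per SN metric to the person with the smallest train/test
    difference (the per-trait voting rule of Fig. 4)."""
    points = {p: 0 for p in knn}
    points[max(knn, key=knn.get)] += 1
    for diffs in metric_diffs.values():
        points[min(diffs, key=diffs.get)] += 1
    return points

def fuse(traits):
    """Sum the per-trait points; ties go to the higher similarity."""
    total, best_sim = {}, {}
    for knn, metric_diffs in traits:
        for p, pts in trait_points(knn, metric_diffs).items():
            total[p] = total.get(p, 0) + pts
            best_sim[p] = max(best_sim.get(p, 0.0), knn[p])
    return max(total, key=lambda p: (total[p], best_sim[p]))
```

With hypothetical face scores where P2 wins k-NN, CC, and EC while P3 wins BC and DC, the face vote is P2: 3, P3: 2, P1: 0, matching the walkthrough above.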

V. EXPERIMENTATION

To ensure successful training, the database selection and the preprocessing steps are necessary. For methodology testing, we utilize a virtual database that contains data from several different unimodal biometric databases of face, ear, and signature. The creation of the virtual database is based on the assumption that the different biometric traits of the same person are unique and independent.

A. Experiment Data

For the face dataset creation, FERET [42], VidTIMIT [35],and Olivetti Research Lab Database [36] were chosen. The


facial recognition technology (FERET) database is a widely used database for the evaluation of face-recognition systems. The database was collected at George Mason University and the U.S. Army Research Laboratory facilities, USA [42]. The images of the FERET database were collected using a 35 mm film camera with Kodak Ultra color film. Images were transferred onto a CD-ROM through the multiresolution technique of Kodak. Color images were then converted into gray-scale images of resolution 256 × 384 and made publicly available for the testing of face recognition systems [42]. The FERET database has 1199 subjects, where each subject has multiple facial images.

The VidTIMIT database consists of 43 subjects. This is a video database. Images for each subject are taken from different viewpoints. To generate the virtual database, we have selected images that include different partial views of the subject. The videos are stored as sequences of JPEG images of 512 × 384 resolution [35].

The Olivetti Research Lab Database, also known as the AT&T database of faces, contains a set of face images of 40 subjects. The database was used in the context of a face and speech recognition project; there are ten varied images for each of the 40 subjects. The images were taken at different times with various lighting, facial expressions (open/closed eyes, smiling/not smiling), and facial details (glasses/no glasses). The size of each image is 92 × 112 pixels, with 256 gray levels per pixel [36].

For ear dataset generation, two databases, the University of Science and Technology Beijing (USTB) Image Database I and Database II [38], were used to generate the virtual multimodal Face-Ear database. Database I contains 66 subjects, with three grayscale images taken for each subject. Database II contains 77 subjects, where each subject has three 300 × 400 grayscale images [38].

For signature data samples, we have used the University of Rajshahi signature database, RUSign [31]. RUSign consists of 500 signatures, with ten signatures for each of 50 individuals. Signatures were collected between 2001 and 2005 using an Epson scanner. High-pass filtering was used to remove some noise from the input images [32].

We have taken a variety of databases and randomly combined them to generate the virtual database. Only a portion of randomly selected images from the facial databases was used, since the face databases contain more images than the ear and signature databases. Similarly, ear images were also randomly selected, since their number was higher than the number of signature images. To generate the virtual database, all images from the signature database were used. Table I shows the virtual database information for the five sets of virtual databases. A sample of the virtual multimodal biometric database is shown in Fig. 5.

B. Implementation and Design

We have designed the multibiometric system using MATLAB 2009b and C# on an Intel Core i7 2.2 GHz Windows 7 Enterprise workstation. A menu- and toolbar-driven graphical user interface (GUI) has been developed using

TABLE I. SAMPLES TAKEN RANDOMLY FROM DIFFERENT UNIMODAL DATABASES FOR VIRTUAL MULTIMODAL DATABASE GENERATION

Fig. 5. Samples from the virtual multimodal database. Samples are randomly selected from different databases to create an individual identity.

C#, which supports both the 32-bit and the 64-bit versions of Windows. The user interface has several mini-tools to generate the virtual database, preprocess the database, change the classification parameters for the SN construction, adapt the threshold for the network, and more. The multimodal virtual database is preprocessed and saved as a standard MATLAB database file with the .mat extension. Each biometric trait is scaled to a 75 × 50 grayscale bitmap image. A specially designed GUI includes an option to select the database. As soon as the database is connected, the system automatically retrieves all the dimension information and the number of samples from the database. The interface also supports a configuration window for the performance analysis. The developed system has the capability of automatically processing biometric samples of different resolutions. The user can also input the number of folds for the cross-validation process in the configuration window. To improve the training and the testing process, tenfold cross validation of the dataset is used. All the results presented are obtained using tenfold cross validation. The system can automatically create the datasets for the ten folds to use them for both training and testing. The top ten matches are also displayed for the analysis. The system's


Fig. 6. User interface of the multimodal biometric system designed in C# and MATLAB. The designed interface automatically connects to MATLAB and the virtual database after system initialization.

input and output are configurable; thus the user can add, edit, and remove the workspaces for different configurations of the system. Fig. 6 shows the user interface.

C. Performance Measure of the Biometric System

The system is tested on classification accuracy and convergence of the SN centrality measures. The results of unimodal face, ear, and signature with and without the SN confidence level are also compared. For the final experiment, tenfold cross validation is used on the virtual multimodal database employing the k-NN (k = 1) classifier and the SN confidence level. An average result obtained from sets 1 to 5 combined (SET 01-05) is used for the reporting.

The false acceptance rate (FAR) and the false rejection rate (FRR) are the two important biometric performance characteristics. A goal of the biometric system is to decide whether a person is genuine or an imposter [13]. This means that for each type of decision, true and false are the two possible outcomes. The FAR is the probability of an impostor being accepted as a genuine individual [13]. The FRR is the probability of a genuine individual being rejected as an impostor [13]. The genuine acceptance rate (GAR) is also used to evaluate biometric system performance. The GAR is defined as 1 − FRR [13]. An ROC curve is often used to plot the performance of a biometric system, where GAR is represented along the y-axis and FAR along the x-axis.
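Under the definitions above, the three rates can be computed from lists of genuine and impostor match scores at a fixed decision threshold; the function name and the accept-if-at-or-above convention are assumptions of this sketch.

```python
def rates(genuine, impostor, threshold):
    """FAR, FRR, and GAR at one decision threshold, assuming a score
    at or above the threshold is accepted as genuine.
    genuine/impostor: lists of match scores normalised to [0, 1]."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    gar = 1.0 - frr  # GAR = 1 - FRR
    return far, frr, gar
```

Sweeping the threshold and plotting GAR against FAR yields the ROC curve described above.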

D. Performance Analysis

1) Convergence of Social Network: Once the SN is constructed, it is very important to check the distribution of the values for each person of the training set. Uniqueness of the centrality measures is important to provide feedback to the classifier. Since we need unique centrality values for each person to ensure proper recognition, we have constructed the network for different threshold values of the correlation coefficients. This process is similar to the training of a classifier. For lower correlation coefficient values, different persons may get similar centrality values. The network construction that uses 60% of the correlation coefficients results in better centrality convergence. From

Fig. 7. Distribution of betweenness centrality for the training data; values are normalized in the range from zero to one.

Fig. 8. Distribution of the degree centrality for the training data; values are normalized in the range from zero to one.

the convergence of each centrality measure, it can be established that the SN can be used as a classifier. This enhances the confidence in the classifier output. The convergence of the SN centrality values is shown in Figs. 7-10. In these figures, the y-axis represents the centrality values of the constructed network and the x-axis represents the number of persons in the training set. Different centrality values have varied distributions of uniqueness for different thresholds.

The betweenness value specifies how a person in a network is related to other persons. Once the training image is replaced with the test image, the person of the same class becomes more important for that network. For BC, the convergence can be improved by selecting more correlation coefficients. BC preserves uniqueness for 65% of the correlation coefficients. The distribution of the betweenness is shown in Fig. 7.

DC measures the person’s link to the training image. Itkeeps the links with the same class and replaces the links with


Fig. 9. Distribution of eigenvector centrality for the training data; values are normalized in the range from zero to one.

other classes with the null value. Fig. 8 shows the computed DC values for the sample training set. Four different thresholds are used to calculate the centrality measures. For 30% and 45% of the correlation coefficients, the DC values change only slightly, and it is difficult to tell two centrality values apart. When the threshold is higher, the differences among individuals become more visible. On the contrary, for BC, a higher threshold may sometimes result in worse convergence.

EC keeps the optimal convergence for 65% of the correlation coefficient values. Fig. 9 shows the EC distribution for different correlation coefficient thresholds. For the convergence, either a smaller or a higher threshold may result in similar centrality values.

For the CC, convergence is good for both 60% and 65% of the coefficient values. The CCs for different threshold values are shown in Fig. 10.

From the analysis of the metric values, we found that different metrics result in a better distribution of values for different threshold values. We found that the BC measure is best at 65% of the correlation values. For CC, DC, and EC, 60% of the correlation values results in the best distribution of the SN metric values.

2) Recognition Performance: From the experiments, we observe that the proposed multimodal biometric system, coupled with the SNA, results in better performance compared against both the unimodal and the multimodal biometric systems without SNA. The detailed experimental comparison in terms of FAR and GAR for individual and multimodal approaches with and without SNA is shown in Figs. 11-14.

For face recognition using Fisherface and k-NN, the classification accuracy of the system is 92%. If we improve the classifier confidence using the SNA and combine the confidence level with the classifier ranks, the final performance is improved by 2% with zero tolerance of errors. If the error tolerance is increased, we can obtain close to 100% accuracy with 12% error tolerance without employing the SNA. Applying SNA, the error rate can be reduced by 4% with the same

Fig. 10. Distribution of clustering coefficients for the training data; values are normalized in the range from zero to one.

Fig. 11. ROC curves for Face versus Face + SNA method. The improvement is shown before and after enhancing the confidence of the classifier.

100% accuracy. Similarly, for ear as an individual biometric trait, the error rate decreases by 2%. Finally, for the signature biometric, the error rate is decreased by 1%, with the classification accuracy improved by 4%.

For the multimodal decision fusion approach for face, ear, and signature, the classification accuracy is increased by 8% and the error rate is decreased by 8% using the SN.

It has been established that different fusion methods can be used for decision fusion. In this paper, we have utilized a simple voting method for the decision fusion. Our main goal was to establish that SNA can be used for decision fusion and to improve the confidence level of a classifier; thus, we did not need to use more advanced voting. We have compared the performance of the decision fusion based on the proposed SNA method against the state-of-the-art highest rank (HR) and Borda count methods. Fig. 15 shows the comparison of the different fusion methods and the SNA fusion.

As can be seen from Fig. 15, the SNA-based decision fusion gives better recognition performance than both the Borda count


Fig. 12. ROC curves for Ear versus Ear + SNA method. The improvement is shown before and after enhancing the confidence of the classifier.

Fig. 13. ROC curves for Signature versus Signature + SNA method. The improvement is shown before and after enhancing the confidence of the classifier.

and the HR methods. It achieves 100% GAR at 5% FAR, while the other methods achieve this result only at 12% FAR.

The SN provides the relative measures among the persons' discriminating features. Although the Fisherface-based feature extraction loses some features, it still shows acceptable performance. If the full feature set is taken, the performance of the decision fusion is improved by 2%, at the expense of a 60% increase in time complexity and a 75% increase in memory. EC, BC, DC, and CC all have linear time complexity. Thus, the time complexity of SNA in the biometric decision fusion is O(n).

E. Statistical t-Test of the Performance

Statistical testing of the recognition performance was performed on 2250 images. Faces, ears, and signatures were randomly selected from the different databases. One-tailed, paired t-tests were performed to study the statistical significance of the accuracy improvement after using the SN, using tenfold cross validation. The feature sets tested were Sa = FACE + SNA, Sb = EAR + SNA, Sc = SIGNATURE + SNA, and Sd = FACE + EAR + SIGNATURE + SNA. We tested the

Fig. 14. ROC curves for improved performance analysis of the proposed multimodal biometric system. Results for Face + SNA versus Ear + SNA versus Signature + SNA versus Face + Ear + Signature + SNA.

Fig. 15. ROC curves for biometric fusion using social network analysis, Borda Count, and Highest Rank methods.

TABLE II. STATISTICAL T-TEST RESULTS FOR TENFOLD CROSS VALIDATION

significance of differences between the Sa-Sd, Sb-Sd, and Sc-Sd pairs. The differences in the test scores exhibited normal distributions. A significance level of α = 0.05 was used. Table II shows the t-values and the p-values for the different performance measures. We have accepted all measures because in every case p < 0.05. The result indicates that the feature set Sd = FACE + EAR + SIGNATURE + SNA has a significant performance gain over the other feature sets Sa, Sb, and Sc.
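The t statistic of such a paired test on per-fold accuracies can be sketched as below; the p-value step is omitted, since it requires a t-distribution table, and the example accuracies in the test are hypothetical, not the paper's measured values.

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """t statistic of a paired t-test on matched samples, e.g. per-fold
    accuracies of two feature sets: t = mean(d) / (stdev(d) / sqrt(n)),
    where d are the pairwise differences and stdev is the sample
    standard deviation."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))
```

The resulting t value is compared against the critical value of the t distribution with n − 1 degrees of freedom at the chosen significance level.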

VI. CONCLUSION

In this paper, a robust algorithm for a multimodal biometric system using SNA and confidence-based decision fusion is


presented. An algorithm for virtual SN construction is used to generate the test cases. The performance of the algorithm is studied under various configurations of the multimodal biometric system. The proposed methodology reduces the FARs for both single biometric traits and multimodal biometrics when the SNA is employed. In the decision fusion scheme, each decision is made after the improvement of the classifier confidence. This paper introduces the concept of the SN classifier that can independently classify an actor from the relationships among actors. We have tested the possibility of using the SN classifier as a supporting classifier. It can be used to improve the confidence level of other classifiers regardless of their nature.

Future directions for improvement include the study of different approaches for centrality analysis in the SN. Experimenting with other types of biometric fusion in the context of the confidence-based SN classifier is also worth looking into.

REFERENCES

[1] S. Prabhakar, S. Pankanti, and A. K. Jain, “Biometric recognition: Security and privacy concerns,” IEEE Secur. Privacy, vol. 1, no. 2, pp. 33–42, Mar./Apr. 2003.
[2] A. K. Jain, P. Flynn, and A. Ross, Handbook of Biometrics. New York, NY, USA: Springer, 2007.
[3] M. P. Down and R. J. Sands, “Biometrics: An overview of the technology, challenges and control considerations,” Inf. Syst. Control J., vol. 4, pp. 53–56, 2004.
[4] A. Ross, K. Nandakumar, and A. K. Jain, Handbook of Multibiometrics. New York, NY, USA: Springer-Verlag, 2006.
[5] A. K. Jain and A. Ross, “Fingerprint mosaicking,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., vol. 4. Orlando, FL, USA, 2002, pp. 4064–4067.
[6] A. Ross and R. Govindarajan, “Feature level fusion using hand and face biometrics,” in Proc. SPIE 2nd Conf. Biometric Technol. Human Identif., Orlando, FL, USA, 2004, pp. 196–204.
[7] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, “Comparison and combination of ear and face images in appearance-based biometrics,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1160–1165, Sep. 2003.
[8] G. L. Marcialis and F. Roli, “Fingerprint verification by fusion of optical and capacitive sensors,” Pattern Recognit. Lett., vol. 25, no. 11, pp. 1315–1322, 2004.
[9] A. Ross and A. K. Jain, “Information fusion in biometrics,” Pattern Recognit. Lett., vol. 24, no. 13, pp. 2115–2125, 2003.
[10] T. Kinnunen, V. Hautamäki, and P. Fränti, “Fusion of spectral feature sets for accurate speaker identification,” in Proc. 9th Conf. Speech Comput., 2004, pp. 361–365.
[11] G. Shakhnarovich, T. Darrell, and P. Indyk, Nearest-Neighbor Methods in Learning and Vision. Cambridge, MA, USA: MIT Press, 2005.
[12] U. Brandes and C. Pich, “Centrality estimation in large networks,” Int. J. Bifurcation Chaos, vol. 17, no. 7, pp. 2303–2318, 2007.
[13] L. Hong and A. K. Jain, “Integrating faces and fingerprints for personal identification,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, pp. 1295–1307, Dec. 1998.
[14] R. Frischholz and U. Dieckmann, “BioID: A multimodal biometric identification system,” Computer, vol. 33, no. 2, pp. 64–68, Feb. 2000.
[15] J. Fierrez-Aguilar, J. Ortega-Garcia, D. Garcia-Romero, and J. Gonzalez-Rodriguez, “A comparative evaluation of fusion strategies for multimodal biometric verification,” in Proc. 4th Int. Conf. Audio- and Video-Based Biometric Person Authentication, Lecture Notes in Computer Science, vol. 2688. Guildford, U.K., 2003, pp. 830–837.
[16] M. L. Gavrilova and M. Monwar, “Pattern recognition and biometric fusion,” in Pattern Recognition, Machine Intelligence and Biometrics (PRMIB), K.-S. Fu and P. Wang, Eds. Springer, 2011, pp. 657–674.
[17] R. Yampolskiy and M. Gavrilova, “Artimetrics: Biometrics for artificial entities,” IEEE Robot. Autom. Mag., vol. 19, no. 4, pp. 48–58, Dec. 2012.
[18] K. Ahmadian and M. Gavrilova, “On-demand chaotic neural network for broadcast scheduling problem,” J. Supercomput., vol. 59, pp. 811–829, Feb. 2012.
[19] M. Monwar and M. Gavrilova, “A novel fuzzy multimodal information fusion technology for human biometric traits identification,” in Proc. 10th IEEE Int. Conf. Cognit. Inf. Cognit. Comput. (ICCI*CC), Banff, AB, Canada, 2011, pp. 112–119.
[20] A. K. Jain, K. Nandakumar, and A. Ross, “Score normalization in multimodal biometric systems,” Pattern Recognit., vol. 38, no. 12, pp. 2270–2285, 2005.
[21] M. E. J. Newman, Networks: An Introduction. Oxford, U.K.: Oxford Univ. Press, 2010.
[22] L. Freeman, “A set of measures of centrality based upon betweenness,” Sociometry, vol. 40, no. 1, pp. 35–41, 1977.
[23] U. Brandes, “A faster algorithm for betweenness centrality,” J. Math. Sociol., vol. 25, no. 2, pp. 163–177, 2001.
[24] L. C. Freeman, “Centrality in social networks conceptual clarification,” Soc. Netw., vol. 1, no. 3, pp. 215–239, 1979.
[25] S. P. Borgatti and M. G. Everett, “A graph-theoretic perspective on centrality,” Soc. Netw., vol. 28, no. 4, pp. 466–484, 2006.
[26] D. S. Sade, “Sociometrics of Macaca mulatta III: N-path centrality in grooming networks,” Soc. Netw., vol. 11, no. 3, pp. 273–292, 1989.
[27] P. Bonacich, “Factoring and weighting approaches to clique identification,” J. Math. Sociol., vol. 2, no. 1, pp. 113–120, 1972.
[28] P. Bonacich, “Some unique properties of eigenvector centrality,” Soc. Netw., vol. 29, no. 4, pp. 555–564, Oct. 2007.
[29] P. W. Holland and S. Leinhardt, “Transitivity in structural models of small groups,” Comput. Group Stud., vol. 2, no. 2, pp. 107–124, 1971.
[30] R. D. Luce and A. D. Perry, “A method of matrix analysis of group structure,” Psychometrika, vol. 14, no. 1, pp. 95–116, 1949.

[31] RUSign—University of Rajshahi, Bangladesh Signature Database, 2005.
[32] R. C. Gonzalez and P. Wintz, Digital Image Processing, 2nd ed. Upper Saddle River, NJ, USA: Pearson, 2002.
[33] T. Schank and D. Wagner, “Approximating clustering coefficient and transitivity,” J. Graph Algorithms Appl., vol. 9, no. 2, pp. 265–275, 2005.
[34] M. Monwar and M. L. Gavrilova, “Multimodal biometric system using rank-level fusion approach,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 4, pp. 867–878, Aug. 2009.

[35] C. Sanderson and K. K. Paliwal, “Polynomial features for robust face authentication,” in Proc. IEEE Int. Conf. Image Process. (ICIP), vol. 3, 2002, pp. 997–1000.
[36] F. Samaria and A. Harter, “Parameterization of a stochastic model for human face identification,” in Proc. 2nd IEEE Workshop Appl. Comput. Vis., Sarasota, FL, USA, 1994, pp. 138–142.
[37] C. Perpinan, “Compression neural networks for feature extraction: Application to human recognition from ear images,” M.S. thesis, Faculty of Informatics, Tech. Univ. Madrid, Madrid, Spain, 1995.
[38] (2007, Apr.). USTB Ear Database [Online]. Available: http://www.ustb.edu.cn/resb/
[39] L. Yuan and M. Zhi-chun, “Ear recognition based on 2D images,” presented at the 1st IEEE Int. Conf. Biometrics Theory, Appl., Syst., Washington, DC, USA, 2007.
[40] N. Kawasaki, “Parametric study of thermal and chemical nonequilibrium nozzle flow,” M.S. thesis, Dept. Electron. Eng., Osaka Univ., Osaka, Japan, 1993.
[41] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711–720, Jul. 1997.
[42] P. J. Phillips, H. Wechsler, and P. Rauss, “The FERET database and evaluation procedure for face-recognition algorithms,” Image Vis. Comput., vol. 16, no. 5, pp. 295–306, 1998.
[43] M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Cogn. Neurosci., vol. 3, no. 1, pp. 71–86, 1991.
[44] D. R. Wilson and T. R. Martinez, “The general inefficiency of batch training for gradient descent learning,” Neural Netw., vol. 16, no. 10, pp. 1429–1451, 2003.
[45] R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Ann. Eugenics, vol. 7, no. 2, pp. 179–188, 1936.
[46] M. Gavrilova and M. Monwar, Multi-Modal Biometrics and Intelligent Image Processing for Security Systems. Hershey, PA, USA: IGI Global, 2013.


Padma Polash Paul received the B.Sc. degree in computer science and engineering from the University of Rajshahi, Rajshahi, Bangladesh, in 2006, and the M.Phil. degree in computer science from the City University of Hong Kong, Hong Kong, in 2010. He is currently pursuing the Ph.D. degree with the Department of Computer Science, University of Calgary, Calgary, AB, Canada.

He joined the Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, Bangladesh, as a Faculty Member in 2007. He taught there for a year before starting higher studies. He has authored over 25 international journal and conference papers and book chapters.

Marina L. Gavrilova received the Diploma (Hons.) from Lomonosov Moscow State University, Moscow, Russia, and the Ph.D. degree from the University of Calgary, Calgary, AB, Canada.

She holds a tenured Associate Professor position with the Department of Computer Science, University of Calgary. She is the Founder and Co-Director of two research laboratories: the Biometric Technologies Laboratory and the SPARCS Laboratory for Spatial Analysis in Computational Sciences. Her current research interests include computational geometry, image processing, optimization, and biometric modeling. Her publications include over 150 journal and conference papers, edited special issues, books, and book chapters.

Dr. Gavrilova is a Founding Editor-in-Chief of the Springer Transactions on Computational Sciences journal, and serves on the editorial boards of The Visual Computer, the International Journal of Biometrics, the Journal of Supercomputing, and seven other journals. She created the International Conference on Computational Science and Its Applications series and the CGA Workshop series, and served as Co-Chair of the WADS, CW, WSCG, and ISVD conferences. Her research was featured in newspapers and on TV, including Science Digest, the “Live Sciences!” exhibit at the National Museum of Civilization, Canada, and the Discovery Channel, Canada.

Reda Alhajj received the B.Sc. degree in computer engineering from the Middle East Technical University (METU), Ankara, Turkey, in 1988. After completing the B.Sc. with distinction from METU, he was offered a full scholarship to join the graduate program in Computer Engineering and Information Sciences at Bilkent University, Ankara, where he received the M.Sc. and Ph.D. degrees in 1990 and 1993, respectively.

He is currently a Professor with the Department of Computer Science, University of Calgary, Calgary, AB, Canada. He has published over 280 papers in refereed international journals and conferences.

Dr. Alhajj served on the program committees of several international conferences, including IEEE ICDE, IEEE ICDM, IEEE IAT, and SIAM DM. He also served as a Guest Editor of several special issues and is currently the Program Chair of IEEE IRI 2009, CaSoN 2009, ASONAM 2009, ISCIS 2009, MS 2009, and OSINT-WM 2009. He is on the editorial boards of several journals and an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS. He has a research group of ten Ph.D. and eight M.Sc. students researching primarily in the areas of biocomputing and biodata analysis, data mining, multiagent systems, schema integration and reengineering, social networks, and XML. He received the Outstanding Achievements in Supervision Award from the Faculty of Graduate Studies at the University of Calgary.

