Research Article
Deeply Learned Pose Invariant Image Analysis with Applications in 3D Face Recognition

Naeem Ratyal,1,2 Imtiaz Ahmad Taj,2 Muhammad Sajid,1,2 Anzar Mahmood,1 Sohail Razzaq,3 Saadat Hanif Dar,4 Nouman Ali,4 Muhammad Usman,1 Mirza Jabbar Aziz Baig,1 and Usman Mussadiq1

1 Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur-10250, AJK, Pakistan
2 Vision and Pattern Recognition Systems Research Group, Capital University of Science and Technology, Islamabad-45750, Pakistan
3 Department of Electrical Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad-22060, Pakistan
4 Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-10250, AJK, Pakistan

Correspondence should be addressed to Muhammad Sajid; sajid [email protected]

Received 19 February 2019; Accepted 27 May 2019; Published 17 June 2019

Academic Editor: Bogdan Smolka

Copyright © 2019 Naeem Ratyal et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Face recognition aims to establish the identity of a person based on facial characteristics and is a challenging problem due to the complex nature of the facial manifold. A wide range of face recognition applications are based on classification techniques, where a class label is assigned to the test image belonging to an unknown class. In this paper, a pose invariant deeply learned multiview 3D face recognition approach is proposed, which aims to address two problems: face alignment, and face recognition through identification and verification setups. The proposed alignment algorithm is capable of handling frontal as well as profile face images.
It employs a nose tip heuristic based pose learning approach to estimate the acquisition pose of the face, followed by coarse-to-fine nose tip alignment using L2 norm minimization. The whole face is then aligned through a transformation using knowledge learned from the nose tip alignment. Inspired by the intrinsic facial symmetry of the Left Half Face (LHF) and Right Half Face (RHF), deeply learned (d) Multi-View Average Half Face (d-MVAHF) features are employed for face identification using a deep convolutional neural network (dCNN). For face verification, a d-MVAHF-Support Vector Machine (d-MVAHF-SVM) approach is employed. The performance of the proposed methodology is demonstrated through extensive experiments performed on four databases: GavabDB, Bosphorus, UMB-DB, and FRGC v2.0. The results show that the proposed approach yields superior performance compared to existing state-of-the-art methods.

1. Introduction

Face recognition is the ability of a biometric system to scan, store, and identify [1] human faces. It has become a fertile field for researchers due to various applications in computer vision and pattern recognition [2]. It has found numerous real-time applications in access control, security, surveillance, criminal identification, fraud detection, and even human-computer interaction [3, 4] based on machine learning algorithms [5]. The advantages of face recognition over other biometric modalities are noninvasive acquisition, social acceptance, and appropriateness for noncooperative scenarios [6]. The term face identification is coined for matching a subject's face template with every face template in the gallery [7]. On the other hand, face verification aims to verify a face template against the claimed identity [8]. The challenging problems of face recognition include Pose, Expression, and Illumination (PIE) variations [9].
Despite the great success made over the past years, robust face recognition using 2D intensity images remains a significantly challenging problem in the presence of PIE variations [10]. Recently, with the rapid growth of 3D acquisition, imaging, visualization, and reconstruction [11, 12] techniques, 3D shape analysis and processing tasks such as surface registration and shape retrieval have been extensively studied. 3D shapes make a wide array of new kinds of applications possible, e.g., 3D biometrics, 3D medical imaging, 3D remote sensing, virtual reality, augmented reality, and 3D human–machine interaction [13]. As a special application of 3D shape analysis and processing, the key issue of 3D face recognition has also been widely addressed and identified to be much more robust to varying poses and illumination changes [14]. Therefore, pose invariant face recognition using 3D models is proving to be promising, especially in the case of in-depth pose variations along the x-, y-, and z-axes under unconstrained real-world acquisition scenarios [15]. Encouraged by these lines of evidence, many 3D face recognition approaches have evolved and been experimented with in the last few years, as given in the work of Bowyer et al. [16] and the literature reviews [17–20].

Hindawi, Mathematical Problems in Engineering, Volume 2019, Article ID 3547416, 21 pages. https://doi.org/10.1155/2019/3547416

Figure 1: Subject Bosphorus bs000: (a) LPF, (b) RPF, (c) FF, (d) face showing facial symmetry in (e) LHF and RHF.

The existing 3D face recognition approaches can be grouped into holistic, local feature-based, and hybrid domains [20]. Under the holistic paradigm, Principal Component Analysis (PCA) [21, 22], Linear Discriminant Analysis (LDA) [21, 23], and Independent Component Analysis (ICA) [24] are based on subspace learning. Local feature-based methods employ features from local descriptive points, curves, and regions. Among point based methods, a curvelet transform based study is proposed in the research work [18] to find salient points on the face images for building multiscale local surfaces. Similarly, facial key points are detected exploiting the meshSIFT algorithm in the point based methods [19, 25]. A prominent curve based method employing a Riemannian framework is presented in the paper [26], whereas the study [27] is a representative region based approach for the occlusion and missing data handling problem. Hybrid approaches employ both holistic and local feature-based methods [28] or a combination of 2D and 3D images in the face recognition process [14].

The most crucial stage in any 3D face recognition algorithm is face alignment, and the resulting accuracy primarily depends on the robustness of the alignment module [29]. In the alignment phase, facial features are transformed such that they can be reliably matched. A few 3D alignment techniques [29–34] existing in the literature are based on Iterative Closest Point (ICP) [30], Intrinsic Coordinate System (ICS) [31], Simulated Annealing (SA) [32], and the Average Face Model (AFM) [29].

Solutions enabling recognition of subjects captured under arbitrary poses are now attracting increasing interest. Profile image based face recognition [35] is an example of such a case, where left or right profile face images are used in the recognition process. Left profile face (LPF) images are defined as face images where a subject presents his/her face rotated -90° (please see Figure 1(a)), whereas in right profile face (RPF) images the subject's face is rotated at +90° around the vertical axis in the xz plane [36] (Figure 1(b)). Please note that frontal face (FF) images are captured with subjects facing towards the camera at different angles, as shown in Figure 1(c). The face alignment and recognition of LPF, RPF, and FF images is a challenging problem in 2D intensity images. In the proposed study, our aim is to present a novel 3D face alignment and recognition algorithm to deal with FF, LPF, and RPF images. For face identification, Multi-View Whole Face (MVWF) images are synthesized to integrate real 3D facial feature information that boosts the face recognition accuracy. Motivated by the intrinsic symmetry of a face [37] (Figure 1(d)) exhibited by the LHF and RHF images (Figure 1(e)), Multi-View LHF (MVLHF) images and Multi-View RHF (MVRHF) images are combined into Multi-View Average Half Face (MVAHF) images. Subsequently, d-MVAHF features are extracted and employed in the face recognition process using a dCNN. For a comparative evaluation, experimental results are also reported for d-MVWF, d-MVLHF, and d-MVRHF features on all four databases.
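The symmetry-based averaging of the two halves can be sketched as follows. This is a minimal illustration assuming the halves are stored as aligned depth maps of identical shape; the paper's actual MVAHF synthesis operates on aligned 3D face images, and the variable names here are hypothetical:

```python
import numpy as np

def average_half_face(lhf: np.ndarray, rhf: np.ndarray) -> np.ndarray:
    """Fuse a left half face with a mirrored right half face.

    lhf, rhf: 2D depth maps of the two face halves, already aligned and of
    identical shape. The RHF is flipped horizontally so that symmetric
    points overlap, then the two halves are averaged.
    """
    assert lhf.shape == rhf.shape, "halves must be aligned to the same grid"
    rhf_mirrored = np.fliplr(rhf)       # exploit left/right facial symmetry
    return (lhf + rhf_mirrored) / 2.0   # Average Half Face (AHF)

# Toy example: a 2x3 'depth map' and its mirror image.
lhf = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
rhf = np.fliplr(lhf)                    # a perfectly symmetric right half
ahf = average_half_face(lhf, rhf)
print(np.allclose(ahf, lhf))            # a symmetric pair averages to the LHF itself
```

For a perfectly symmetric face the average reproduces either half; for real faces the averaging suppresses asymmetric noise in the two halves.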

For performance evaluation of the proposed approach, four benchmark databases, namely, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], have been used in this study. These databases carry pose and expression variations and are commonly utilized for developing 3D face recognition algorithms. For example, the algorithms presented in the state-of-the-art studies [17, 41–43] employed the FRGC v2.0 database, while those presented in Refs. [19, 26, 28, 44–46] are based on both the FRGC v2.0 and GavabDB databases. Similarly, the studies [26, 28, 47] employed the GavabDB, Bosphorus, and FRGC v2.0 databases, while the study [27] used UMB-DB to evaluate the algorithm. The main contributions and novelty of the proposed algorithm are as follows.

(1) The first contribution of this study is a novel 3D alignment algorithm that can deal with neutral and expressive FF, LPF, and RPF images in face recognition applications. The proposed algorithm differs from conventional alignment approaches in two aspects: (i) it does not align two face images to each other; rather, it is capable of aligning a standalone probe face image (PFI); and (ii) it employs a nose tip heuristic based pose learning approach. The pose learning approach first estimates the acquisition pose of the PFI. Subsequently, an L2 norm minimization based coarse-to-fine alignment approach is employed that initially aligns the nose tip of the PFI. This is followed by a transformation step to align the whole facial surface in a single 3D rotation. The proposed algorithm is referred to as the pose learning-based coarse-to-fine (PCF) alignment algorithm in the rest of the study.

(2) The second contribution is a novel deeply learned approach for image analysis with applications in 3D face recognition. The proposed d-MVAHF-based face identification and d-MVAHF-SVM-based face verification approach employs face images oriented at 0°, 10°, 20°, and 30°. For a comparative evaluation, the proposed approach is tested using d-MVWF images, d-MVLHF images, and d-MVRHF images oriented at 0°, ±10°, ±20°, and ±30°; 0°, -10°, -20°, and -30°; and 0°, 10°, 20°, and 30°, respectively. The proposed algorithm is also validated using deeply learned multiview LPF (d-MVLPF) and deeply learned multiview RPF (d-MVRPF) images oriented at 0°, -10°, -20°, and -30°, and 0°, 10°, 20°, and 30°, respectively.

(3) The third contribution is (i) the study of the role of pose learning and nose tip alignment in reducing the computational complexity of the PCF alignment algorithm for face recognition applications, and (ii) the computational complexity analysis of d-MVAHF-based face recognition compared to d-MVWF-based face recognition for biometric applications.

The rest of the study is organized as follows. Related work is presented in Section 2. Section 3 deals with the details of the proposed 3D alignment and face recognition algorithm. Experiments and results are given in Section 4, whereas the discussion and conclusions are presented in Sections 5 and 6, respectively.

2. Related Work

For a thorough survey of research in the area of 3D face recognition and its applications, the reader is referred to the studies [16, 48]. The related work in the context of both components of the current study, i.e., 3D face alignment and 3D face recognition, is discussed separately.

2.1. 3D Face Alignment Algorithms. A review of the existing 3D face alignment algorithms, namely, ICP [30], ICS [31], SA [32], and AFM [29], is given as follows.

The ICP [30, 34] based algorithm aligns two 3D faces by iteratively minimizing the distance between them. Limitations of ICP include the need for an initial coarse alignment and slow convergence. These drawbacks limit the applicability of the ICP technique to the verification setup only, where the PFI is to be aligned to the claimed identity image alone [15]. They become an issue in the identification setup, where a probe is to be aligned to the whole gallery. The second method, alignment to an ICS [31], mainly involves localization of landmarks on the 3D facial images, comparison of the landmarks to corresponding points on the ICS, and a transformation phase to finish the alignment. The downside of this method is low accuracy in the localization of landmarks, especially for face images with pose and expression variations. The single alignment event required to align a probe to the ICS makes this technique appropriate for identification as well as verification scenarios [15]. The SA [32] algorithm employs a stochastic technique using a local search based approach, and its drawback is excessive time consumption [33]. Similar to ICP, this method is suitable for the verification setup only. In AFM [29] based alignment, the AFM is constructed by demarcating and averaging landmarks on the facial images, and the probe image is aligned to the AFM only once. This aspect empowers the method to be used as an alignment technique in both face identification and verification setups [15]. A significant disadvantage of the AFM based method is the probe image's less accurate alignment to the AFM due to the loss of spatial information involved in the averaging process [15]. Another fast and effective face alignment method is proposed in the work of Wang et al. [34] to place each face model in a standard position and orientation. It does not align a probe image to every image in the gallery; therefore, it can be employed for both face verification and identification efficiently. This alignment method is based on the facial symmetry plane, which is determined using PCA and ICP. Based on the normal of the symmetry plane, the nose tip, and the nose bridge direction, six degrees of freedom are fixed in a 3D face to obtain a standard alignment posture.
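As a concrete illustration of the rigid-alignment core that ICP iterates, the sketch below computes the optimal rotation and translation between two point sets with known correspondences (the Kabsch/Procrustes least-squares step; a full ICP would re-estimate nearest-neighbour correspondences before each such step). This is an illustrative sketch, not the implementation used in any of the cited works:

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Best rigid transform (R, t) mapping src -> dst, given Nx3 point
    sets with known one-to-one correspondences. This is the closed-form
    least-squares step that ICP repeats after each correspondence search."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = dst_c - R @ src_c
    return R, t

# Sanity check: recover a known rotation about the z-axis plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.random.default_rng(0).normal(size=(50, 3))
R, t = rigid_align(pts, pts @ R_true.T + np.array([1.0, 2.0, 3.0]))
print(np.allclose(R, R_true))   # True
```

The sign correction via D guards against the reflection case, which would otherwise mirror the face instead of rotating it.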

2.2. 3D Face Recognition Algorithms. A review of the 3D face recognition methods, from the perspective of their role in developing multiview and fusion based face recognition algorithms, is presented as follows.

The study [49] proposed to synthesize various facial variations by utilizing a morphable model that augments the existing training set comprising a single frontal 2D image of each subject. The morphable model face is a vector space representation based parametric model of faces. In the vector space, any convex combination of shape and texture vectors describes a human face. For a single face image, 3D shape, pose, texture, illumination, etc. are automatically estimated by the algorithm. The recognition task is realized by measuring the Mahalanobis score between the fitting model and the shape and texture parameters of the models contained in the gallery. The authors performed identification experiments on two publicly available databases, namely, FERET and CMU-PIE, and achieved recognition rates of 95.9% and 95%, respectively.

The study [50] proposed a 3D face recognition algorithm where a PCA based 3D face synthesis approach is employed to generate new faces based on a reference face model. The approach preserves important 3D size information present in the input face and achieves better alignment of facial points using 3D scaling of the generic reference face. The algorithm uses "one minus the cosine of the angles" among the PCA model parameters as the matching score. The experiments were performed using the FRGC face database, and 92-96% verification rates at 0.001 FAR and rank-1 identification accuracies between 94% and 95% were obtained.
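The matching score quoted above can be stated compactly: for two PCA parameter vectors it is one minus their cosine similarity, so identical parameter directions score 0 and orthogonal ones score 1. A minimal sketch with hypothetical variable names:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """'One minus the cosine of the angle' between two PCA
    model-parameter vectors; smaller means a better match."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

a = np.array([1.0, 2.0, 3.0])
print(cosine_distance(a, 2 * a))   # same direction -> ~0.0
print(cosine_distance(np.array([1.0, 0.0]),
                      np.array([0.0, 1.0])))   # orthogonal -> 1.0
```

Because the score depends only on the angle between parameter vectors, it is invariant to their overall scale.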

The study [51] proposed a fully automatic face recognition system using multiview 2.5D facial images. The approach employs a feature extractor using the directional maximum to find the nose tip and pose angle simultaneously. Face images are recognized using an ICP based approach corresponding to the best located nose tip. The experiments were performed on the MSU and UND databases, obtaining 96.2% and 97% identification rates, respectively.

The study [34] proposed a Collective Shape Difference Classifier (CSDC) based approach using summed confidence as a similarity measure. The authors computed a Signed Shape Difference Map (SSDM) between two aligned 3D faces as an intermediate representation for the comparison of facial shapes. They used three types of features to encode the characteristics and local similarity between them. They constructed three strong classifiers from the most discriminative local facial features by boosting and training them as weak classifiers. The experiments were carried out on FRGC v2.0, yielding verification rates better than 97.9% at 0.001 FAR and rank-1 recognition rates above 98%.

A study based on the fusion of results acquired from several overlapping facial regions is proposed in the paper [15], employing decision level fusion (majority voting). A PCA-LDA based method was used for the extraction of features, whereas the likelihood ratio was used as the matching criterion to classify the individual regions. The author conducted experiments using the FRGC v2.0 3D database to evaluate the efficacy of the algorithm and reported a 99% rank-1 recognition rate and a 94.6% verification rate at 0.1% FAR, respectively.

Another fusion based study is given in the paper [52], equipped with an approach where match scores of each subject were combined for both 2D albedo and depth images. Experimental results are reported employing PCA, LDA, and Nonnegative Matrix Factorization (NMF) based subspace methods and Elastic Bunch Graph Matching (EBGM). Among the experiments, the best results were reported for sum rule based score level fusion. The authors achieved 89% recognition accuracy on a database of 261 subjects.

A recent region based study [27] proposed a method to handle occlusions covering the facial surface, employing two databases containing facial images with realistic occlusions. The authors addressed two problems, namely, missing data handling and occlusions, and improved the classification accuracy at score level using the product rule. In the experiments, 100% classification results were obtained for neutral subsets, whereas in the same study the pose, expression, and occlusion subsets achieved relatively low classification accuracies.

The study [53] proposed a facial recognition system (FRS) which employed fusion of three face classifiers using feature and match score level fusion methods. The features used by the classifiers were extracted at facial contours around the inner eye corners and the nose tip. The classification task was performed in LDA subspace by using a Euclidean distance based 1-NN classifier. Experiments were performed on a coregistered 2D-3D image database acquired from 116 subjects, and a rank-1 recognition rate of 99.09% was obtained by the authors.

A prominent algorithm based on the fusion of 2D and 3D features is proposed in the study [54], which uses PCA employing canonical correlation analysis (CCA) to learn the mapping between a 2D image and its respective 3D scan. The algorithm is capable of classifying a probe image (whether it is 2D or 3D) by matching it to a gallery image modeled by the fusion of 2D and 3D modalities containing features from both sides. The authors performed experiments using a database of 115 subjects, which contains neutral and expressive pairs of 2D images and 3D scans. They employed a Euclidean distance classifier for the classification and obtained 55% classification accuracy using the CCA algorithm alone. Their results improved to 85% by using the CCA-PCA algorithm.

The study [17] is a representative work of region based face recognition methods. The study proposed the use of a facial representation based on the dual-tree complex wavelet transform (DT-CWT) and six subregions. In this study, an NN classifier was employed in the classification stage, and the authors achieved an identification rate of 98.6% for neutral faces on the FRGC v2.0 database. Similarly, a verification rate of 99.53% at 0.1% FAR was obtained for neutral faces on the same database.

A recent circular region based study [47] proposed an effective 3D face keypoint detection and matching framework using three principal curvature measures. The local shape of the face around each 3D keypoint was comprehensively described by histograms of the principal curvature measures. Similarity comparison between facial surfaces was established by matching local shape descriptors through a sparse representation based reconstruction method and score level fusion. The evaluation of the algorithm was performed on the GavabDB, FRGC v2.0, and Bosphorus databases, obtaining 100% (neutral subset), 99.6% (neutral subset), and 98.6% (pose subset) recognition rates, respectively.

The proposed study is focused on aligning the PFI employing the PCF alignment algorithm. It aims to enhance classification accuracies using complementary information obtained from d-MVAHF-based features acquired from synthesized MVAHF images. The results obtained from the proposed methodology are better than those of the state-of-the-art studies [17, 19, 27, 41–44] in terms of all the evaluation criteria employed by these studies.

3. Materials and Methods

The proposed system consists of face alignment, identification, and verification components implemented through the PCF alignment algorithm, d-MVAHF, and d-MVAHF-SVM-based methodologies, respectively. The following sections explain the proposed algorithm in detail.

3.1. The Proposed PCF Alignment Algorithm. An illustration of the PCF alignment algorithm is presented in Figure 2(a). It employs a nose tip heuristic in the pose learning step and aligns the PFI in the xz, yz, and xy planes separately. The procedure to determine the nose tip is described in the following paragraphs.

Figure 2: The proposed framework: (a) PCF alignment algorithm, (b) d-MVAHF-based face identification algorithm, (c) d-MVAHF-SVM-based face verification algorithm.

Figure 3: Examples of incorrectly detected nose tips on (a, b) ears, (c) lips area, (d) z-axis noise, (e) forehead hairs. Nose templates: (f) frontal, (g) left, (h) right.

3.1.1. Nose Tip Detection Technique. Nose tip detection is a specific facial feature detection problem in depth images. The study [55] proposed a nose tip detection technique for FF images based on histogram initialization and triangle fitting and obtained a detection rate of 99.43% on the FRGC v2.0 database. In contrast to the study [55], the proposed study marks the nose tip as the point on the face captured nearest to the 3D scanner, and it is used to localize, align, and crop the PFI. Several problems were faced in detecting the nose tip, as follows.

One of the problems was incorrect nose tip detection in LPF or RPF images, where it was detected on the ears or some other facial parts, as shown on the ear of the RPF of subject GavabDB cara26_derecha and the ear of the LPF of subject GavabDB cara26_izquierda in Figures 3(a) and 3(b), respectively. In order to handle this problem, the PFI was first classified as FF, LPF, or RPF using a convolutional neural network (CNN), and then the nose tip was detected employing a different strategy for each of FF, LPF, and RPF. The CNN was trained for a three-class problem on the FF, LPF, or RPF classification task. The PFI was used as input to the CNN, which produced an N dimensional vector as the output, where N is the number of classes. The CNN architecture comprised two convolutional layers, each followed by batch normalization and max pooling stages. The CNN also included two fully connected layers at the end; the first one contained 1024 units, while the second fully connected layer, with three units, served as the output layer with the softmax function. The architecture of the CNN for a PFI is shown in Figure 4. The CNN classifies the PFI of size h × w as FF, LPF, or RPF using the final feature vector S^p = (S^p_1, S^p_2, ..., S^p_{h_p w_p}) computed for the layer p. Based on the classification of the PFI, the nose tip is determined as follows.
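The forward pass of such a three-class pose classifier can be sketched as below. The layer shapes are illustrative placeholders: the paper specifies two convolution/batch-normalization/max-pooling stages and fully connected layers of 1024 and 3 units, but the kernel sizes, input size, and (random, untrained) weights here are assumptions made only so the sketch is self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["FF", "LPF", "RPF"]

def conv2d(x, k):
    """Naive 'valid' 2D convolution of a single-channel image x with kernel k."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def batchnorm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def maxpool2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_pose(depth_img):
    """Two conv -> batch-norm -> max-pool stages, then FC(1024), FC(3), softmax."""
    x = depth_img
    for _ in range(2):                                  # two convolutional stages
        x = np.maximum(conv2d(x, rng.normal(size=(3, 3)) * 0.1), 0.0)
        x = maxpool2(batchnorm(x))
    f = x.ravel()
    w1 = rng.normal(size=(1024, f.size)) * 0.01         # FC layer, 1024 units
    w2 = rng.normal(size=(3, 1024)) * 0.01              # output layer, 3 units
    probs = softmax(w2 @ np.maximum(w1 @ f, 0.0))
    return CLASSES[int(np.argmax(probs))], probs

label, probs = classify_pose(rng.normal(size=(32, 32)))
print(label, probs.sum())   # one of FF/LPF/RPF; probabilities sum to 1
```

With random weights the predicted label is of course arbitrary; the sketch only shows the data flow from depth image to the three-way softmax decision.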

(1) For FF images, the facial point at the minimum distance from the 3D scanner along the z-axis is marked as the nose tip.

(2) For LPF images, the facial point having the minimum coordinate value along the x-axis (xmin) is defined as the nose tip.

(3) For RPF images, the facial point having the maximum coordinate value along the x-axis (xmax) is marked as the nose tip.
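These three per-class rules can be applied directly to a point cloud. A minimal sketch, assuming the face is an N×3 array of (x, y, z) points with the scanner looking down the z-axis, so that "nearest to the scanner" means minimum z (the coordinate convention is an assumption):

```python
import numpy as np

def nose_tip(points: np.ndarray, pose: str) -> np.ndarray:
    """Select the nose tip from an Nx3 (x, y, z) point cloud
    according to the pose class predicted by the CNN."""
    if pose == "FF":                  # nearest point to the scanner along z
        idx = np.argmin(points[:, 2])
    elif pose == "LPF":               # leftmost point: minimum x
        idx = np.argmin(points[:, 0])
    elif pose == "RPF":               # rightmost point: maximum x
        idx = np.argmax(points[:, 0])
    else:
        raise ValueError(f"unknown pose class: {pose}")
    return points[idx]

pts = np.array([[0.0, 0.0, 5.0],
                [-3.0, 1.0, 6.0],
                [4.0, -1.0, 7.0],
                [1.0, 2.0, 2.0]])    # the z = 2.0 point is nearest the scanner
print(nose_tip(pts, "FF"))           # -> [1. 2. 2.]
print(nose_tip(pts, "RPF"))          # -> [ 4. -1.  7.]
```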

Another problem in the nose tip detection process was incorrect detection of the nose tip in those subjects that were captured with faces leaning forward or backward. In leaning forward faces, the nose tip was detected on the forehead, whereas in leaning backward faces it was detected on the chin or lips area (see Figure 3(c) for subject FRGC v2.0 04233d510). Similarly, noise scenarios played an adverse role in detecting the nose tip. For example, in some of the face images, z-axis noise occurring in the face acquisition process was marked as the nose tip, as shown in Figure 3(d) for the subject FRGC v2.0 04217d461. Another such scenario involved female subjects, where hairs on the forehead or spread around the neck or ears were marked as the nose tip, as shown in Figure 3(e) for the subject FRGC v2.0 04470d297.

Such problems were handled by searching for the nose tip in an approximate Region of Interest (ROI). The ROI on the already classified FF, LPF, or RPF images was determined by measuring two features: (i) the maximum value of the depth map histogram and (ii) the maximum value of the correlation coefficient of Normalized Cross Correlation (NCC). The former feature was measured using the z, -x, and x depth map histograms for FF, LPF, and RPF, in the respective order, whereas the latter was measured by correlating the corresponding frontal, left, or right oriented nose templates (please see Figures 3(f), 3(g), and 3(h) for subjects GavabDB cara26_frontal2, izquierda, and derecha, respectively) with the FF, LPF, or RPF images. The nose templates were selected from ten randomly chosen subjects (five male and five female) from the GavabDB database on the basis of satisfactory experimental results. For measuring the depth map histograms and correlation coefficient values, the PFI was rotated between 40° and -40° with a step size of -40° around the x-axis, adjusting the y-axis orientation from 40° to -40° with the same step size, resulting in nine facial orientations. The intuition behind this strategy is to search for an upright position of the face, because for such a position the maximum number of depth values accumulates in a single bin of the depth map histogram and the correlation coefficient of the NCC returns the maximum value among all nine facial positions. Consequently, the nose tip was correctly detected as the point captured nearest to the 3D scanner within an approximate ROI.
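The first of the two ROI features, the depth-histogram peak, can be sketched as follows. This is an illustrative reduction of the nine-orientation search: the NCC template-matching feature is omitted, the rotation is shown about the x-axis only, and the toy "face" is a tilted planar sheet, all assumptions made for brevity:

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix about the x-axis."""
    r = np.deg2rad(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(r), -np.sin(r)],
                     [0.0, np.sin(r),  np.cos(r)]])

def histogram_peak(points, bin_width=0.25):
    """Height of the tallest fixed-width bin of the z (depth) histogram."""
    edges = np.arange(-8.0, 8.0 + bin_width, bin_width)
    counts, _ = np.histogram(points[:, 2], bins=edges)
    return counts.max()

def best_upright_angle(points, angles=(-40, 0, 40)):
    """Pick the candidate rotation whose depth histogram is most
    concentrated, i.e. the most upright-looking orientation."""
    return max(angles, key=lambda a: histogram_peak(points @ rot_x(a).T))

# Toy face: a flat z = const sheet tilted by 40 degrees; un-tilting it
# (rotating by -40) packs the depth values back into very few bins.
rng = np.random.default_rng(1)
sheet = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500),
                         np.full(500, 2.0)])
tilted = sheet @ rot_x(40).T
print(best_upright_angle(tilted))   # -> -40
```

An upright surface concentrates its depth values in few histogram bins, which is exactly the cue the paper's nine-orientation search exploits.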

The proposed algorithm correctly detected the nose tipsof face images from GavabDB Bosphorus UMB-DB andFRGC v20 databases including all those cases where the nose

Mathematical Problems in Engineering 7

Figure 4: Illustration of the CNN for the FF, LPF, and RPF classification task.

Figure 5: Incorrectly detected nose tips without employing the proposed nose tip detection technique (bar chart: number of subjects with incorrectly detected nose tips per facial region, i.e., forehead, lips, chin, LPF, and RPF, for the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases).

tip was incorrectly detected at the forehead, lips, chin, LPF, or RPF, as detailed in Figure 5.

3.1.2. Face Alignment Algorithm. It was mentioned at the start of this section that the PCF alignment algorithm aligns the PFI in the xz, yz, and xy planes separately. The alignment in the xz and yz planes employs L2 norm minimization calculated between the nose tip and the 3D scanner. The alignment in the xy plane employs a different strategy based on L2 norm minimization calculated between the LHF image and the flipped RHF image.

In order to explain the PCF alignment algorithm in the xz and yz planes, the PFI is shown in Figure 6 with three nose tip positions, 1, 2, and 3, in each plane separately. Intuitively, it can be observed in Figure 6 that the face image is aligned when the nose tip is set in line with the optic axis of the 3D scanner at position 1. Conversely, when the nose tip is not in line with the optic axis, at position 2 or 3, the face image is not aligned. It can also be observed in Figure 6 that the L2 norm at nose tip position 1 is a perpendicular from the nose tip to the 3D scanner, which is not the case at positions 2 and 3. The perpendicular distance from a point to a line is always the shortest, which leads to the conclusion that when the PFI is aligned at position 1, the L2 norm attains its minimum, shorter than the corresponding L2 norms at positions 2 and 3. Therefore, alignment of the PFI causes an essential reduction in the L2 norm computed between the nose tip and the 3D scanner. The L2 norm

between nose tip position 1, $N(m_1, n_1)$, and the 3D scanner point $S(m_0, n_0)$ is calculated as given in equation (1):

$d_2 = \sqrt{(m_1 - m_0)^2 + (n_1 - n_0)^2}$  (1)

3.1.3. Alignment in xz Plane

(1) Pose Learning. First of all, the capture pose of the probe face image is learned to determine whether to rotate it clockwise or anticlockwise to align it at minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated clockwise by -1∘ and the corresponding L2 norm is measured between the nose tip and the 3D scanner. For example, a nose tip oriented at -1∘ or 30∘ is rotated clockwise to -2∘ or 29∘, respectively, to measure the L2 norm. It is notable that a negative angle of rotation (e.g., -2∘) turns a probe face image (Figure 7(a)) clockwise in the xz and yz planes and anticlockwise in the xy plane, as shown in Figures 7(b)–7(d).

As a result of the clockwise rotation, if the L2 norm decreases (Figure 8(a)), the probe face image is classified as a left oriented face image (LOFI) (Figure 8(c)). Similarly, if the L2 norm increases (Figure 8(b)), the probe face image is classified as a right oriented face image (ROFI), as shown in Figure 8(d). Please note that if the nose tip is rotated by 1∘ instead of -1∘, a decrease in L2 norm classifies the probe face image as a ROFI, whereas an increase classifies it as a LOFI. In this study we set this parameter to -1∘.
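A minimal sketch of this pose-learning test, under the illustrative assumption that the nose tip is described by its angle off the optic axis and its radius from the rotation centre, with the scanner placed on the optic axis (the specific distances are placeholders, not the paper's calibration):

```python
import numpy as np

def l2_norm(theta_deg, r=1.0, scanner_dist=10.0):
    """Equation (1): L2 norm between the nose tip and the scanner point.
    The nose tip sits at angle theta_deg off the optic axis, at radius r
    from the rotation centre; the scanner lies on the optic axis."""
    t = np.radians(theta_deg)
    x, z = r * np.sin(t), r * np.cos(t)
    return np.hypot(x - 0.0, z - scanner_dist)

def learn_pose(theta_deg):
    """Rotate only the nose tip clockwise by -1 degree; a decreased L2
    norm classifies the probe as a left oriented face image (LOFI), an
    increased one as a right oriented face image (ROFI)."""
    return "LOFI" if l2_norm(theta_deg - 1.0) < l2_norm(theta_deg) else "ROFI"
```

The yz-plane pose learning of Section 3.1.4 follows the same pattern with LUFI/LDFI labels.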


Figure 6: PCF alignment algorithm showing an aligned image at minimum L2 norm in the (a) xz and (b) yz planes (nose tip positions 1, 2, and 3 marked relative to the optic axis).

Figure 7: (a) 3D scan of subject FRGC v2.0 04233d396, rotated at -2∘ in the (b) xz, (c) yz, and (d) xy planes.

Figure 8: (a, b) Pose learning in the xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0 04221d553; (e, f) subject GavabDB cara1 izquierda and derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LOFIs captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; they are aligned at -1∘, -2∘, -3∘, -4∘, ±5∘ (both are reachable from 25∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned at fine level in step 3.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) and (-30∘ + 30∘ = 0∘), or between (-1∘ + 0∘ = -1∘) and (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both cases. However, the nose tips of ROFIs captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, ±5∘ (both are reachable from -25∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned at fine level in step 3.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LPFs captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, ±5∘ (both are reachable from -85∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned at fine level in step 3.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of RPFs captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, ±5∘ (both are reachable from 85∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned at fine level in step 3.

Please note that for a ROFI captured at -25∘, a LOFI captured at 25∘, an LPF captured at -85∘, or a RPF captured at 85∘, the nose tip can get aligned at either 5∘ or -5∘, because the minimum L2 norm is equal at both orientations; in this study we have aligned the nose tip at 5∘. The face images captured at ±75∘, ±65∘, . . ., ±5∘ are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, or RPF is rotated in the range of -5∘ to 5∘ with a step size of 1∘. This means that a nose tip aligned at -5∘ is rotated between ((-5∘) + (-5∘) = -10∘) and ((-5∘) + (5∘) = 0∘) to catch the 0∘ position; on the other hand, a nose tip aligned at 5∘ is rotated between ((5∘) + (-5∘) = 0∘) and ((5∘) + (5∘) = 10∘) to catch the 0∘ position. After aligning the nose tip at 0∘, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 1.3∘, then the whole face image is rotated by 1.3∘ and is finally aligned in the xz plane.
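The coarse-then-fine schedule above (steps of 10∘, then 1∘, then 0.1∘, each stage centred on the previous stage's minimum) can be sketched as a generic search; `l2_at` is a hypothetical callback returning the L2 norm with the nose tip rotated to a candidate angle:

```python
import numpy as np

def coarse_to_fine_align(l2_at, start_deg, coarse_range, coarse_step):
    """Three-stage search for the nose tip angle minimising the L2 norm:
    coarse steps (e.g. -10 deg over 0..-30 for a LOFI), then 1 deg over
    +/-5, then 0.1 deg over +/-1, each stage centred on the previous
    minimum."""
    def sweep(center, lo, hi, step):
        angles = center + np.arange(lo, hi + step / 2.0, step)
        return angles[np.argmin([l2_at(a) for a in angles])]

    a = sweep(start_deg, coarse_range[0], coarse_range[1], coarse_step)
    a = sweep(a, -5.0, 5.0, 1.0)    # fine: catch the 0-degree position
    a = sweep(a, -1.0, 1.0, 0.1)    # finer: final alignment
    return round(float(a), 1)
```

For example, a LOFI captured at 27∘ would be swept through 27∘, 17∘, 7∘, -3∘ before the two fine stages settle on the minimum.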

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image already aligned in the xz plane is learned first, to align it at a minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) by -1∘ and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that if the nose tip is rotated by 1∘ instead of -1∘, a decrease in L2 norm classifies a probe face image as a LUFI, whereas an increase classifies it as a LDFI. In this study we set this parameter to -1∘.

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0∘ to +30∘ downwards (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30∘, the nose tip is rotated between -30∘ and 0∘; if it is captured at an orientation of -1∘, the nose tip is rotated between -1∘ and 29∘. In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LUFIs captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ do not pass through the 0∘ position; they are aligned at 1∘, 2∘, 3∘, 4∘, ±5∘ (both are reachable from -25∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned at fine level in step 3.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0∘ to -30∘ upwards (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30∘ or 1∘, the nose tip is rotated between 30∘ and 0∘, or between 1∘ and -29∘, respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both cases. However, the nose tips of LDFIs captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ are aligned at -1∘, -2∘, -3∘, -4∘, ±5∘ (both are reachable from 25∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned at fine level in step 3. It is worth mentioning that the face images captured at ±25∘, ±15∘, and ±5∘ are handled


Table 1: Acquisition pose of the face and the respective alignment positions, given in bold in the original (all values in degrees). Each row lists the poses reached from a given capture angle in steps of ∓10∘; the value in each row closest to 0∘ is the position at which the nose tip settles after the coarse search.

RPF / LOFI / LDFI                                LPF / ROFI / LUFI
90  80  70  60  50  40  30  20  10   0    |    -90  -80  -70  -60  -50  -40  -30  -20  -10   0
89  79  69  59  49  39  29  19   9  -1    |    -89  -79  -69  -59  -49  -39  -29  -19   -9   1
88  78  68  58  48  38  28  18   8  -2    |    -88  -78  -68  -58  -48  -38  -28  -18   -8   2
87  77  67  57  47  37  27  17   7  -3    |    -87  -77  -67  -57  -47  -37  -27  -17   -7   3
86  76  66  56  46  36  26  16   6  -4    |    -86  -76  -66  -56  -46  -36  -26  -16   -6   4
85  75  65  55  45  35  25  15   5  -5    |    -85  -75  -65  -55  -45  -35  -25  -15   -5   5
84  74  64  54  44  34  24  14   4  -6    |    -84  -74  -64  -54  -44  -34  -24  -14   -4   6
83  73  63  53  43  33  23  13   3  -7    |    -83  -73  -63  -53  -43  -33  -23  -13   -3   7
82  72  62  52  42  32  22  12   2  -8    |    -82  -72  -62  -52  -42  -32  -22  -12   -2   8
81  71  61  51  41  31  21  11   1  -9    |    -81  -71  -61  -51  -41  -31  -21  -11   -1   9

Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda and derecha.

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5∘ to 5∘ with a step size of 1∘ to catch the 0∘ position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding

to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5∘ to +5∘ with a step size of 1∘ around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position $P_{ij}$. In order to rule out outliers due to z-axis noise, only pixel values less than a threshold $T$ are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

$P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases}$  (2)

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1∘ to +1∘ with a step size of 0.1∘ using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
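The outlier-gated L2 comparison of equation (2) can be sketched as follows, assuming the two half-face depth images are numpy arrays sampled on the same grid:

```python
import numpy as np

def masked_l2(lhf, rhf_flipped, threshold):
    """Equation (2): gate out z-axis outliers above `threshold`, then
    compute the L2 norm between the LHF image and the flipped RHF image
    at the same grid positions."""
    l = np.where(lhf > threshold, 0.0, lhf)
    r = np.where(rhf_flipped > threshold, 0.0, rhf_flipped)
    return float(np.linalg.norm(l - r))
```

The candidate z-axis rotation giving the smallest `masked_l2` value is kept as the coarse xy-plane alignment.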

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation and facial irregularities were smoothed through low pass filtering. The aligned whole face images were then rotated at 0∘, ±10∘, ±20∘, and ±30∘ to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘ around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values at the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) The facial feature information carried by a half face image is similar to that of the flipped other half face image due to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded by rotating a whole face image at -10∘, -20∘, and -30∘; similarly, the LHF region is occluded by rotating it at 10∘, 20∘, and 30∘. The occluded face regions contribute poorly to the face recognition process, while the computational complexity of processing whole faces is twofold. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions that are less visible in frontal view images. Figure 10 readily shows this complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
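The averaging step that synthesizes an AHF image from an LHF image and a flipped RHF image might look as follows; this is a sketch under the assumption that missing grid points are marked with NaN:

```python
import numpy as np

def synthesize_ahf(lhf, rhf):
    """Average an LHF depth image with the flipped RHF image where both
    overlap, and retain the non-overlapping single-sided depth values,
    yielding one averaged-half-face (AHF) image. NaN marks grid points
    missing from a scan."""
    rhf_f = np.fliplr(rhf)                       # mirror the right half
    has_l, has_r = ~np.isnan(lhf), ~np.isnan(rhf_f)
    ahf = np.where(has_l & has_r, (lhf + rhf_f) / 2.0, np.nan)
    ahf[has_l & ~has_r] = lhf[has_l & ~has_r]    # complementary regions
    ahf[~has_l & has_r] = rhf_f[~has_l & has_r]
    return ahf
```

Applying this at each of the four orientations yields the set of MVAHF images at 0∘, 10∘, 20∘, and 30∘.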

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set $A^k = \{a_1^k, a_2^k, a_3^k, \ldots, a_n^k\}$ of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets in the respective order. The matrix S has a negative polarity, reflecting that lower matching scores represent a higher level of similarity between the probe and gallery images and vice versa. This step produced four matching-score matrices $S_j$, one for each of the normalized d-MVAHF feature vectors corresponding to the AHF images oriented at 0∘, 10∘, 20∘, and 30∘.

(3) Each of the matching-score matrices $S_j$ was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are $\max(S_j^{row})$ and $\min(S_j^{row})$, respectively, then the normalized scores are computed as given in equation (3):

$S_j^{row} = \dfrac{S_j^{row} - \min(S_j^{row})}{\max(S_j^{row}) - \min(S_j^{row})}$  (3)

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix $S^{row}$, as given in equation (4):

$S^{row} = \sum_{j=1}^{4} w_j S_j^{row}$  (4)


Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images at 0∘, 10∘, 20∘, and 30∘; (b) LHF images at 0∘, -10∘, -20∘, and -30∘.

where $w_j$ represents the weight assigned to the jth MVAHF image, computed from the recognition accuracies obtained with the MVAHF images as given in equation (5):

$w_j = \dfrac{r_j}{\sum_{j=1}^{4} r_j}$  (5)

where $r_j$ represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can be used in the test phase as follows: a given PFI is first converted into MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘; each of these MVAHF images is then classified against the gallery, leading to four recognition accuracies, which are subsequently used to compute the weights in equation (5). This procedure is the same as employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0∘ is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix $S^{row}$ was again normalized as $S'^{row}$ using the min-max rule as given in equation (3).
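Equations (3)-(5) amount to a row-wise min-max normalisation followed by an accuracy-weighted sum; a compact sketch:

```python
import numpy as np

def minmax_rows(S):
    """Equation (3): min-max normalise each row of a matching-score
    matrix to the interval [0, 1]."""
    lo = S.min(axis=1, keepdims=True)
    hi = S.max(axis=1, keepdims=True)
    return (S - lo) / (hi - lo)

def fuse_scores(score_mats, accuracies):
    """Equations (4)-(5): weight each view's normalised score matrix by
    its relative recognition accuracy, sum, and renormalise."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                                   # equation (5)
    fused = sum(wj * minmax_rows(S) for wj, S in zip(w, score_mats))
    return minmax_rows(fused)                         # renormalised S'
```

In practice `score_mats` holds the four m × n matrices for the 0∘, 10∘, 20∘, and 30∘ views and `accuracies` their per-view recognition rates.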

(5) The normalized matching scores obtained from $S'^{row}$ were utilized in the Softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, an SVM aims to employ a hyperplane wx + b = 0 having maximum margins, termed the optimal separating hyperplane (OSH), that separates the training vectors of two classes, $(x_1, y_1), \ldots, (x_i, y_i)$, where $x_i \in R^n$ and $y_i \in \{1, -1\}$, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, subject to the constraints $y_i[(w \cdot x_i) + b] \ge 1 - \xi_i$, $\xi_i \ge 0$ for $i = 1, \ldots, k$.

$\phi(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{k} \xi_i$  (6)

where $\xi_i$ are slack variables used to penalize errors if the data are not linearly separable and C is the regularization constant. The sign of the following OSH surface function can then be used to classify a test point:

$f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b$  (7)

where $a_i \ge 0$ are the Lagrangian multipliers of the corresponding support vectors and $b$ is determined by the above-mentioned optimization problem. In equation (7), $K$ is the kernel trick used to transform nonseparable data onto a higher dimensional space where it becomes linearly separable by a hyperplane, $x_i$ is the ith training sample, and $x$ is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the kernel is of the form given in equation (8), where $\sigma^2$ is the spread of the RBF:

$K(x, x_i) = \exp\left[-\dfrac{\|x - x_i\|^2}{2\sigma^2}\right]$  (8)
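Equations (7) and (8) can be written out directly; in the sketch below the support vectors, labels, multipliers, and bias are illustrative placeholders for what SVM training would produce, not values from the paper:

```python
import numpy as np

def rbf_kernel(x, xi, sigma=1.0):
    """Equation (8): radial basis function kernel."""
    return np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2))

def osh_decision(x, support_vectors, labels, alphas, b=0.0, sigma=1.0):
    """Equation (7): the sign of the OSH surface function classifies a
    test point (+1 for one class, -1 for the other)."""
    f = sum(a * y * rbf_kernel(x, sv, sigma)
            for sv, y, a in zip(support_vectors, labels, alphas)) + b
    return 1 if f >= 0 else -1
```

An off-the-shelf trainer (e.g., an RBF-kernel SVM from a standard library) would supply the support vectors, multipliers, and bias consumed here.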

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the Cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

$d_{MahCos}(s, t) = -\dfrac{m \cdot n}{|m||n|} = -\dfrac{\sum_{i=1}^{N} m_i n_i}{\sqrt{\sum_{i=1}^{N} m_i^2}\sqrt{\sum_{i=1}^{N} n_i^2}} = -\dfrac{\sum_{i=1}^{N} (s_i/\sigma_i)(t_i/\sigma_i)}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2}\sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}}$  (9)

where $m_i = s_i/\sigma_i$, $n_i = t_i/\sigma_i$, and $\sigma_i$ is the standard deviation of the ith dimension. In this case higher similarity yields a higher score. Thus, the actual MahCos score is computed as given in equation (10):

$D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t)$  (10)
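A direct transcription of equations (9) and (10), with `sigma` holding the per-dimension standard deviations:

```python
import numpy as np

def mahcos_score(s, t, sigma):
    """Equations (9)-(10): Cosine score computed in the Mahalanobis
    space; sigma holds the per-dimension standard deviations. After the
    1 - d transform, higher similarity yields a higher score."""
    m, n = s / sigma, t / sigma
    d = -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))
    return 1.0 - d                      # equation (10)
```

Identical vectors score 2.0 and orthogonal vectors score 1.0 under this convention.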

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second


neutral image of the whole gallery G. The scores were computed by using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) to populate rows 1 to 4 of a training score matrix T. Each element $t_{ij}$ represents the score computed between the d-MVAHF feature vectors of image i and image j, where $i, j \in 1, 2, \ldots, G$. The elements $t_{ij}$ (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores $t_{ij}$ (for i ≠ j) represent imposter scores. The genuine scores (e.g., $t_{11}$) and the imposter scores (e.g., $t_{1G}$) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores, referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) were used to populate rows 1 to 4 of the probe score matrix P, with 4 × 1 dimensional score vectors, one genuine and G−1 imposter (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
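The split of the training score matrix T into genuine (i = j) and imposter (i ≠ j) 4-dimensional score vectors described above can be sketched as:

```python
import numpy as np

def training_vectors(T):
    """Split a 4 x G x G training-score tensor (four orientations, G
    gallery subjects) into G genuine 4-d score vectors (diagonal,
    i == j) and G^2 - G imposter vectors (off-diagonal, i != j)."""
    _, G, _ = T.shape
    genuine = np.array([T[:, i, i] for i in range(G)])
    imposter = np.array([T[:, i, j]
                         for i in range(G) for j in range(G) if i != j])
    return genuine, imposter
```

For G = 20 this yields the 20 genuine and 380 imposter training vectors quoted in the text.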

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithm. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section, along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai frontal1" and "carai frontal2", are captured under frontal view. Another two are taken while the subject is looking up or down at angles of +35∘ or -35∘, named "carai arriba" and "carai abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles of +90∘ or -90∘ and are named "carai derecha" and "carai izquierda", respectively. The three nonneutral

images, "carai gesto", "carai risa", and "carai sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expression, occlusion, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing the valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing criterion for evaluating the alignment accuracy of face images. One method that can be employed is human judgment, but the human judgment method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this


Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (unaligned L2 norms for the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 images versus the aligned values).

Figure 12: Example 3D face images, (a)-(r): original (rows 1, 3) and aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and the mentioned method is a promising automatic criterion to check alignment accuracy.

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images: GavabDB cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; UMB-DB (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and FRGC v2.0 (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2

norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments on the four databases are given as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

Proposed       Rank-1 identification rates                                                         Verification
methodology    FF           Rotated looking up   Rotated looking down   LPF          RPF           rates
               U     W      U     W              U     W                U     W      U     W
d-MVWF         96.7  100    96.7  100            95.1  98.4             -     -      -     -        100
d-MVLHF        95.1  98.4   93.4  96.7           93.4  96.7             91.8  95.1   -     -        96.7
d-MVRHF        93.4  96.7   95.1  98.4           91.8  95.1             -     -      80.3  83.6     98.4
d-MVAHF        96.7  100    96.7  100            95.1  98.4             -     -      -     -        100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

Proposed       Bosphorus                                                              UMB-DB
methodology    FF           YR¹ < 90∘            YR = 90∘       Overall               Frontal face
                            (525 images)         (210 images)   (1365 images)
               U     W      U     W              U     W        U     W               U     W
d-MVWF         97.1  100    92.2  95.4           -     -        93.1  96.0            96.5  99.3
d-MVLHF        95.2  98.1   91.4  94.5           84.3  87.1     91.8  94.9            93.7  97.2
d-MVRHF        96.2  99.0   91.0  94.1           -     -        91.3  94.4            94.4  97.9
d-MVAHF        97.1  100    92.2  95.4           -     -        93.1  96.0            96.5  99.3
¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery to follow the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as a second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.
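The N vs N rank-1 protocol used in these experiments (one gallery template per subject, each probe assigned the identity of its nearest gallery template) can be sketched as below. The feature vectors stand in for the deep d-MVAHF features; all data are synthetic.

```python
import numpy as np

def rank1_identification(gallery, gallery_ids, probes, probe_ids):
    """Fraction of probes whose nearest gallery entry carries the correct
    subject label (rank-1 identification rate)."""
    # Pairwise Euclidean distances: probes x gallery
    d = np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=2)
    predicted = gallery_ids[np.argmin(d, axis=1)]
    return float(np.mean(predicted == probe_ids))

rng = np.random.default_rng(1)
centers = rng.normal(size=(5, 8))            # 5 subjects, 8-D toy features
gallery = centers + rng.normal(0, 0.05, centers.shape)   # one scan per subject
probes = np.repeat(centers, 3, axis=0) + rng.normal(0, 0.05, (15, 8))
gid = np.arange(5)
pid = np.repeat(gid, 3)

print(rank1_identification(gallery, gid, probes, pid))  # close to 1.0 on easy data
```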

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. For subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates, respectively, for the FRGC v2.0 database.
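Points on curves like those in Figure 13 can be computed from a probe-by-gallery similarity matrix as sketched below. The scores are synthetic, not the paper's data; the CMC value at rank r is the fraction of probes whose true identity appears within the top r ranked gallery entries, and each ROC point pairs a false accept rate (FAR) with a verification rate at one threshold.

```python
import numpy as np

def cmc(scores, probe_ids, gallery_ids, max_rank):
    # Rank gallery entries by descending score and record, for each probe,
    # the (0-based) rank at which the true identity first appears.
    order = np.argsort(-scores, axis=1)
    ranked_ids = gallery_ids[order]
    first_hit = np.argmax(ranked_ids == probe_ids[:, None], axis=1)
    return [float(np.mean(first_hit < r)) for r in range(1, max_rank + 1)]

def roc_point(genuine, impostor, threshold):
    far = float(np.mean(impostor >= threshold))   # false accept rate
    vr = float(np.mean(genuine >= threshold))     # verification rate
    return far, vr

rng = np.random.default_rng(2)
gallery_ids = np.arange(10)
probe_ids = np.arange(10)
scores = rng.normal(0.3, 0.1, (10, 10))
scores[probe_ids, probe_ids] += 1.0       # genuine pairs score much higher

print(cmc(scores, probe_ids, gallery_ids, max_rank=3))
gen = scores[probe_ids, probe_ids]
imp = scores[~np.eye(10, dtype=bool)]
print(roc_point(gen, imp, 0.8))
```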

4.4. Computational Complexity Analysis. Computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(∑_{j=1}^{n} y_{j−1} x_j² y_j z_j²). Here, n represents the number of convolutional layers, y_{j−1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log n). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, SVM only takes into account the global matching scores, resulting in lower computation time.
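The convolutional-layer complexity term in point (2) can be made concrete with a rough operation count. The layer shapes below follow the standard AlexNet configuration; treat them as an assumption rather than the exact network dimensions used in the paper.

```python
# Each tuple: (input channels y_{j-1}, filter size x_j, filters y_j, output size z_j)
layers = [
    (3, 11, 96, 55),
    (96, 5, 256, 27),
    (256, 3, 384, 13),
    (384, 3, 384, 13),
    (384, 3, 256, 13),
]

def conv_cost(layers):
    # sum_j y_{j-1} * x_j^2 * y_j * z_j^2 multiply-accumulate operations
    return sum(y_in * x * x * y_out * z * z for (y_in, x, y_out, z) in layers)

print(f"{conv_cost(layers):,} multiply-accumulates")  # about 1.08 billion
```

This roughly 10^9-operation count is why feature extraction dominates the timings in Table 4 relative to the SVM decision.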

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with the existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

| Preprocessing | MVAHF synthesis | Feature extraction | Classification (identification) | Classification (verification) | Total (identification) | Total (verification) |
|---|---|---|---|---|---|---|
| 0.451 | 0.089 | 1.024 | 0.029 | 0.021 | 1.593 | 1.585 |

Table 5: Recognition accuracy comparison (%) for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

GavabDB (rank-1 identification and verification rates):

| Algorithm | FF | Rotated looking up | Rotated looking down | LPF | RPF | Verification rates |
|---|---|---|---|---|---|---|
| Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] |
| Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] |
| Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - |
| Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 |
| Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 |
| Proposed d-MVWF / d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 |

Bosphorus and UMB-DB (rank-1 identification rates):

| Algorithm | FF | YR¹ < 90° | YR = 90° | Overall | UMB-DB FF |
|---|---|---|---|---|---|
| Existing | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27] |
| Existing | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39] |
| Existing | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | - |
| Proposed d-MVLHF | 98.1 | 94.5 | 87.1² | 94.9 | 97.2 |
| Proposed d-MVRHF | 99.0 | 94.1 | - | 94.4 | 97.9 |
| Proposed d-MVWF / d-MVAHF | 100 | 95.4 | - | 96.0 | 99.3 |

¹ YR is yaw rotation (along the y-axis in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5 Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracy comparison (%) for the proposed and existing approaches using the FRGC v2.0 database.

| Rates (%) | [17] | [41] | [42] | [43] | [47] | [62] | [63] | d-MVLHF | d-MVRHF | d-MVWF / d-MVAHF |
|---|---|---|---|---|---|---|---|---|---|---|
| Face identification | 98.7 | 96.1 | 93.8 | 98.0 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8 |
| Face verification | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6 |

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.
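The coarse-to-fine search over 35 (3 + 11 + 21) candidate rotations can be sketched as below for a single (yaw) plane. The step schedule and the symmetry-based L2 criterion are assumptions for demonstration; the paper's own criterion and spans may differ.

```python
import numpy as np

def rotate_y(points, deg):
    """Rotate an (n, 3) point cloud about the y-axis by deg degrees."""
    t = np.radians(deg)
    r = np.array([[np.cos(t), 0, np.sin(t)],
                  [0, 1, 0],
                  [-np.sin(t), 0, np.cos(t)]])
    return points @ r.T

def coarse_to_fine_angle(nose_region, criterion):
    best = 0.0
    # 3 coarse, 11 medium, 21 fine candidate angles (35 evaluations in total),
    # each pass searching around the previous best estimate.
    for span, steps in [(30.0, 3), (10.0, 11), (1.0, 21)]:
        candidates = best + np.linspace(-span, span, steps)
        best = min(candidates, key=lambda a: criterion(rotate_y(nose_region, a)))
    return best

# Toy criterion: a frontal nose ridge is symmetric about the x = 0 plane, so
# penalize the L2 norm of the x-coordinates (an assumed stand-in measure).
criterion = lambda pts: np.linalg.norm(pts[:, 0])

nose = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]])   # frontal toy nose ridge
posed = rotate_y(nose, 20.0)                           # simulate 20 deg yaw
est = coarse_to_fine_angle(posed, criterion)
print(round(est, 1))  # about -20.0, undoing the simulated yaw
```

Only this single best angle is then applied to the full point cloud, which is what keeps the whole-face step at one rotation per plane.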

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1 FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) Comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because more weight is assigned to the better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.), whereas the later layers tend to learn high level features, like shapes and objects, based on the low level features.
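As an illustration of the weighting idea in point (4), the sketch below fuses per-view match scores with fixed weights favoring the better performing views. The actual weights are defined by the paper's equation (5); the values here are placeholders.

```python
import numpy as np

# Assumed weights: views closer to frontal weighted higher (placeholder values).
view_weights = np.array([0.4, 0.3, 0.2, 0.1])

def fused_score(per_view_scores):
    """Weighted fusion of match scores from the 0, 10, 20, 30 deg MVAHF views."""
    return float(np.dot(view_weights, per_view_scores))

scores = np.array([0.9, 0.8, 0.6, 0.4])
unweighted = float(np.mean(scores))
weighted = fused_score(scores)
print(weighted > unweighted)  # the stronger views dominate the fused score
```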
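The downsampling approach mentioned in point (7) can be sketched as plain block averaging of a gallery depth map down to the probe resolution; this is a simple stand-in for a proper resampling filter, and the sizes are illustrative.

```python
import numpy as np

def downsample(depth, factor):
    """Reduce a 2D depth map by averaging factor-by-factor blocks."""
    h, w = depth.shape
    h2, w2 = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = depth[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

gallery_depth = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 depth map
probe_res = downsample(gallery_depth, 4)                  # match a 2x2 probe
print(probe_res.shape)  # (2, 2)
```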

6 Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to aligning the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A multi-modal approach for face modeling and recognition, PhD dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


Mathematical Problems in Engineering

Figure 1: Subject Bosphorus bs000: (a) LPF; (b) RPF; (c) FF; (d) face showing facial symmetry in (e) LHF and RHF.

make a wide array of new kinds of applications possible, e.g., 3D biometrics, 3D medical imaging, 3D remote sensing, virtual reality, augmented reality, and 3D human–machine interaction [13]. As a special application of 3D shape analysis and processing, the key issue of 3D face recognition has also been widely addressed and identified to be much more robust to varying poses and illumination changes [14]. Therefore, pose invariant face recognition using 3D models is proving to be promising, especially in the case of in-depth pose variations along the x-, y-, and z-axes under the unconstrained acquisition scenarios of the real world [15]. Encouraged by these lines of evidence, many 3D face recognition approaches have evolved and been evaluated in the last few years, as given in the work of Bowyer et al. [16] and the literature reviews [17–20].

The existing 3D face recognition approaches can be grouped into holistic, local feature-based, and hybrid domains [20]. Under the holistic paradigm, Principal Component Analysis (PCA) [21, 22], Linear Discriminant Analysis (LDA) [21, 23], and Independent Component Analysis (ICA) [24] are based on subspace learning. Local feature-based methods employ features from local descriptive points, curves, and regions. Among point based methods, a curvelet transform based study is proposed in the research work [18] to find salient points on the face images for building multiscale local surfaces. Similarly, facial key points are detected exploiting the meshSIFT algorithm in the point based methods [19, 25]. A prominent curve based method employing a Riemannian framework is presented in the paper [26], whereas the study [27] is a representative region based approach for the occlusions and missing data handling problem. Hybrid approaches employ both holistic and local feature-based methods [28] or a combination of 2D and 3D images in the face recognition process [14].

The most crucial stage in any 3D face recognition algorithm is face alignment, and the resulting accuracy primarily depends on the robustness of the alignment module [29]. In the alignment phase, facial features are transformed such that they can be reliably matched. A few 3D alignment techniques [29–34] existing in the literature are based on Iterative Closest Point (ICP) [30], Intrinsic Coordinate System (ICS) [31], Simulated Annealing (SA) [32], and Average Face Model (AFM) [29].

Solutions enabling recognition of subjects captured under arbitrary poses are now attracting increasing interest. Profile image based face recognition [35] is an example of such a case, where left or right profile face images are used in the recognition process. Left profile face (LPF) images are defined as the face images where a subject presents his/her face rotated at -90∘ (please see Figure 1(a)), whereas in right profile face (RPF) images the subject's face is rotated at +90∘ around the vertical axis in the xz plane [36] (Figure 1(b)). Please note that frontal face (FF) images are captured with subjects facing towards the camera at different angles, as shown in Figure 1(c). The face alignment and recognition of LPF, RPF, and FF is a challenging problem in 2D intensity images. In the proposed study, our aim is to present a novel 3D face alignment and recognition algorithm to deal with FF, LPF, and RPF images. For face identification, Multi-View Whole Face (MVWF) images are synthesized to integrate real 3D facial feature information that boosts the face recognition accuracy. Motivated by the intrinsic symmetry of a face [37] (Figure 1(d)) exhibited by the LHF and RHF images (Figure 1(e)), Multi-View LHF (MVLHF) images and Multi-View RHF (MVRHF) images are combined into Multi-View Average Half Face (MVAHF) images. Subsequently, d-MVAHF features are extracted and employed in the face recognition process using a dCNN. For a comparative evaluation, experimental results are also reported for d-MVWF, d-MVLHF, and d-MVRHF features on all four databases.

For performance evaluation of the proposed approach, four benchmark databases, namely, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], have been used in this study. These databases carry pose and expression variations and are commonly utilized for developing 3D face recognition algorithms. For example, the algorithms presented in state-of-the-art studies [17, 41–43] employed the FRGC v2.0 database, while those presented in Refs. [19, 26, 28, 44–46] are based on both the FRGC v2.0 and GavabDB databases. Similarly, the studies [26, 28, 47] employed the GavabDB, Bosphorus, and FRGC v2.0 databases, while the study [27] used UMB-DB to evaluate the algorithm. The main contributions and novelty of the proposed algorithm are as follows.

(1) The first contribution of this study is a novel 3D alignment algorithm that can deal with neutral and expressive FF, LPF, and RPF images in face recognition applications. The proposed algorithm differs from the conventional alignment approaches in two aspects: (i) it does not align two face images to each other; rather, it is capable of aligning a standalone probe face image (PFI); and (ii) it employs a nose tip heuristic based pose learning approach. The pose learning approach first estimates the acquisition pose of the PFI. Subsequently, an L2 norm minimization based coarse to fine alignment approach is employed that initially aligns the nose tip of the PFI. This is followed by a transformation step to align the whole facial surface in a single 3D rotation. The proposed algorithm is referred to as the pose learning-based coarse to fine (PCF) alignment algorithm in the rest of the study.

(2) The second contribution is a novel deeply learned approach for image analysis with applications in 3D face recognition. The proposed d-MVAHF-based face identification and d-MVAHF-SVM-based face verification approach employs face images oriented at 0∘, 10∘, 20∘, and 30∘. For a comparative evaluation, the proposed approach is tested using d-MVWF images, d-MVLHF images, and d-MVRHF images oriented at 0∘, ±10∘, ±20∘, and ±30∘; 0∘, -10∘, -20∘, and -30∘; and 0∘, 10∘, 20∘, and 30∘, respectively. The proposed algorithm is also validated using deeply learned multiview LPF (d-MVLPF) and deeply learned multiview RPF (d-MVRPF) images oriented at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘, respectively.

(3) The third contribution is (i) the study of the role of pose learning and nose tip alignment in reducing the computational complexity of the PCF alignment algorithm for face recognition applications and (ii) the computational complexity analysis of d-MVAHF-based face recognition compared to d-MVWF-based face recognition for biometric applications.

The rest of the study is organized as follows. Related work is presented in Section 2. Section 3 deals with the details of the proposed 3D alignment and face recognition algorithm. Experiments and results are given in Section 4, whereas the discussion and conclusions are presented in Sections 5 and 6, respectively.

2. Related Work

For a thorough survey of research in the area of 3D face recognition and applications, the reader is referred to the studies [16, 48]. The related work in the context of both components of the current study, i.e., 3D face alignment and 3D face recognition, is discussed separately.

2.1. 3D Face Alignment Algorithms. A review of the existing 3D face alignment algorithms, namely, ICP [30], ICS [31], SA [32], and AFM [29], is given as follows.

The ICP [30, 34] based algorithm aligns two 3D faces by minimizing the distance between them iteratively. Limitations of ICP include the need for an initial coarse alignment and slow convergence. The drawbacks of the ICP technique limit its applicability to only the verification setup, where the PFI is to be aligned to the claimed identity image only [15]. They become an issue in the identification setup, where a probe is to be aligned to the whole gallery. The second method, alignment to an ICS [31], mainly involves localization of landmarks on 3D facial images, comparison of landmarks to corresponding points on the ICS, and a transformation phase to finish the alignment. The downside of this method is low accuracy in localization of landmarks, especially for face images with pose and expression variations. A single alignment event, required for a probe to align it to the ICS, makes this technique appropriate for identification as well as verification scenarios [15]. The SA [32] algorithm employs a stochastic technique using a local search based approach, and its drawback is excessive time consumption [33]. Similar to ICP, this method is suitable for the verification setup only. In AFM [29] based alignment, the AFM is constructed by demarcating and averaging landmarks on the facial images, and the probe image is aligned to the AFM only once. This aspect empowers the method to be used as an alignment technique in both face identification and verification setups [15]. A significant disadvantage of the AFM based method is the probe image's less accurate alignment to the AFM due to the loss of spatial information in the averaging process [15]. Another fast and effective face alignment method is proposed in the work of Wang et al. [34] to place each face model in a standard position and orientation. It does not align a probe image to every image in the gallery; therefore, it can be employed for both face verification and identification efficiently. This alignment method is based on the facial symmetry plane, which is determined using PCA and ICP. Based on the normal of the symmetry plane, the nose tip, and the nose bridge direction, six degrees of freedom are fixed in a 3D face to obtain a standard alignment posture.

2.2. 3D Face Recognition Algorithms. A review of the 3D face recognition methods, from the perspective of their role in developing multiview and fusion based face recognition algorithms, is presented as follows.

The study [49] proposed to synthesize the various facial variations by utilizing a morphable model that augments the existing training set comprising a single frontal 2D image of each subject. The morphable model face is a vector space representation based parametric model of the faces. In the vector space, any convex combination of shape and texture vectors describes a human face. For a single face image, 3D shape, pose, texture, illumination, etc. are automatically estimated by the algorithm. The recognition task is realized by measuring the Mahalanobis score between the fitting model and the shape and texture parameters of the models contained in the gallery. The authors performed identification experiments on two publicly available databases, namely, FERET and CMU-PIE, and achieved recognition rates of 95.9% and 95%, respectively.

The study [50] proposed a 3D face recognition algorithm where a PCA based 3D face synthesis approach is employed to generate new faces based on a reference face model. The approach preserves important 3D size information present in the input face and achieves better alignment of facial points using 3D scaling of the generic reference face. The algorithm


uses "one minus the cosine of the angles" among the PCA model parameters as the matching score. The experiments were performed using the FRGC face database, and 92–96% verification rates at 0.001 FAR and rank-1 identification accuracies between 94% and 95% were obtained.

The study [51] proposed a fully automatic face recognition system using multiview 2.5D facial images. The approach employs a feature extractor using the directional maximum to find the nose tip and pose angle simultaneously. Face images are recognized using an ICP based approach corresponding to the best located nose tip. The experiments were performed on the MSU and the UND databases, obtaining 96.2% and 97% identification rates, respectively.

The study [34] proposed a Collective Shape Difference Classifier (CSDC) based approach using a summed confidence as a similarity measure. The authors computed the Signed Shape Difference Map (SSDM) between two aligned 3D faces as an intermediate representation for the comparison of facial shapes. They used three types of features to encode the characteristics and local similarity between them. They constructed three strong classifiers by boosting the most discriminative local facial features trained as weak classifiers. The experiments were carried out on FRGC v2.0, yielding verification rates better than 97.9% at 0.001 FAR and rank-1 recognition rates above 98%.

A study based on fusion of results acquired from several overlapping facial regions is proposed in the paper [15], employing decision level fusion (majority voting). A PCA-LDA based method was used for extraction of features, whereas the likelihood ratio was used as the matching criterion to classify the individual regions. The author conducted experiments using the FRGC v2.0 3D database to evaluate the efficacy of the algorithm and reported a 99% rank-1 recognition rate and a 94.6% verification rate at 0.1% FAR, respectively.

Another fusion based study is given in the paper [52], equipped with an approach where match scores of each subject were combined for both 2D albedo and depth images. Experimental results are reported by employing PCA, LDA, and Nonnegative Matrix Factorization (NMF) based subspace methods and Elastic Bunch Graph Matching (EBGM). Among the experiments, the best results were reported for sum rule based score level fusion. The authors achieved 89% recognition accuracy on a database of 261 subjects.

A recent region based study [27] proposed a method to handle occlusions covering the facial surface, employing two databases containing facial images with realistic occlusions. The authors addressed two problems, namely, missing data handling and occlusions, and improved the classification accuracy at score level using the product rule. In the experiments, 100% classification results were obtained for neutral subsets, whereas in the same study, the pose, expression, and occlusion subsets achieved relatively low classification accuracies.

The study [53] proposed a facial recognition system (FRS) which employed fusion of three face classifiers using feature and match score level fusion methods. The features used by the classifiers were extracted at facial contours around the inner eye corners and the nose tip. The classification task was performed in LDA subspace by using a Euclidean distance based

1NN classifier. Experiments were performed on a coregistered 2D-3D image database acquired from 116 subjects, and a rank-1 recognition rate of 99.09% was obtained by the authors.

A prominent algorithm based on fusion of 2D and 3D features is proposed in the study [54], which uses PCA employing canonical correlation analysis (CCA) to learn the mapping between a 2D image and its respective 3D scan. The algorithm is capable of classifying a probe image (whether it is 2D or 3D) by matching it to a gallery image modeled by fusion of the 2D and 3D modalities containing features from both sides. The authors performed experiments using a database of 115 subjects, which contains neutral and expressive pairs of 2D images and 3D scans. They employed a Euclidean distance classifier for the classification and obtained 55% classification accuracy using the CCA algorithm alone. Their results were improved to 85% by using the CCA-PCA algorithm.

The study [17] is a representative work of region based face recognition methods. The study proposed the use of a facial representation based on the dual-tree complex wavelet transform (DT-CWT) and six subregions. In this study, an NN classifier was employed in the classification stage, and the authors achieved an identification rate of 98.6% for neutral faces on the FRGC v2.0 database. Similarly, a verification rate of 99.53% at 0.1% FAR was obtained for neutral faces on the same database.

A recent circular region based study [47] proposed an effective 3D face keypoint detection and matching framework using three principal curvature measures. The local shape of the face around each 3D keypoint was comprehensively described by histograms of the principal curvature measures. Similarity comparison between facial surfaces was established by matching local shape descriptors through a sparse representation based reconstruction method and score level fusion. The evaluation of the algorithm was performed on the GavabDB, FRGC v2.0, and Bosphorus databases, obtaining 100% (neutral subset), 99.6% (neutral subset), and 98.6% (pose subset) recognition rates, respectively.

The proposed study is focused on aligning the PFI employing the PCF alignment algorithm. It targets enhancing classification accuracies using complementary information obtained from d-MVAHF-based features acquired from synthesized MVAHF images. The results obtained from our proposed methodology are better than those of the state-of-the-art studies [17, 19, 27, 41–44] in terms of all the evaluation criteria employed by these studies.

3. Materials and Methods

The proposed system consists of face alignment, identification, and verification components implemented through the PCF alignment algorithm, d-MVAHF, and d-MVAHF-SVM-based methodologies, respectively. The following sections explain the proposed algorithm in detail.

3.1. The Proposed PCF Alignment Algorithm. An illustration of the PCF alignment algorithm is presented in Figure 2(a). It employs a nose tip heuristic in the pose learning step and aligns the PFI in the xz, yz, and xy planes separately. The procedure to determine the nose tip is described in the following paragraphs.


Figure 2: The proposed framework: (a) PCF alignment algorithm; (b) d-MVAHF-based face identification algorithm; (c) d-MVAHF-SVM-based face verification algorithm.


Figure 3: Examples of incorrectly detected nose tips on (a, b) ears, (c) lips area, (d) z-axis noise, and (e) forehead hairs. Nose templates: (f) frontal, (g) left, (h) right.

3.1.1. Nose Tip Detection Technique. Nose tip detection is a specific facial feature detection problem in depth images. The study [55] proposed a nose tip detection technique for FF images based on histogram initialization and triangle fitting and obtained a detection rate of 99.43% on the FRGC v2.0 database. In contrast to the study [55], the proposed study marks the nose tip as the nearest captured point from the 3D scanner to the face, and the nose tip is used to localize, align, and crop the PFI. Several problems were faced in detecting the nose tip, as follows.

One of the problems was incorrect nose tip detection in LPF or RPF images, where it was detected on the ears or some other facial parts, as shown on the ear of the RPF of subject GavabDB cara26 derecha and the ear of the LPF of subject GavabDB cara26 izquierda in Figures 3(a) and 3(b), respectively. In order to handle this problem, the PFI was first classified as FF, LPF, or RPF using a convolutional neural network (CNN), and then the nose tip was detected employing three different strategies for each of FF, LPF, or RPF. The CNN was trained on a three-class problem for the FF, LPF, or RPF classification task. The PFI was used as input to the CNN, which produced an N dimensional vector as the output, where N is the number of classes. The CNN architecture comprised two convolutional layers, followed by batch normalization and max pooling stages. The CNN also included two fully connected layers at the end: the first one contained 1024 units, while the second fully connected layer, with three units, served as the output layer with the softmax function. The architecture of the CNN for a PFI is shown in Figure 4. The CNN classifies the PFI of size h × w as FF, LPF, or RPF using the final feature vector S = {S^p_1, S^p_2, ..., S^p_(h_p × w_p)} computed for the layer p. Based on the classification of the PFI, the nose tip is determined as follows.

(1) For FF images, the facial point at the minimum distance from the 3D scanner along the z-axis is marked as the nose tip.

(2) For LPF, the facial point having the minimum coordinate value along the x-axis (xmin) is defined as the nose tip.

(3) For RPF, the facial point having the maximum coordinate value along the x-axis (xmax) is marked as the nose tip.
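As an illustrative sketch (not the authors' implementation), the three pose-dependent selection rules above can be written over a list of (x, y, z) facial points; the sign convention that a smaller z coordinate means a point closer to the 3D scanner is an assumption:

```python
def detect_nose_tip(points, pose):
    """Select the nose tip from a list of (x, y, z) facial points
    according to the CNN-predicted pose class.

    Assumption: smaller z means closer to the 3D scanner.
    """
    if pose == "FF":    # rule (1): nearest point to the scanner along z
        return min(points, key=lambda p: p[2])
    if pose == "LPF":   # rule (2): minimum x coordinate (xmin)
        return min(points, key=lambda p: p[0])
    if pose == "RPF":   # rule (3): maximum x coordinate (xmax)
        return max(points, key=lambda p: p[0])
    raise ValueError("pose must be 'FF', 'LPF', or 'RPF'")
```

The rules are deliberately independent of each other, so misclassification of the pose class is the only way the wrong rule can fire; this is why the CNN classification step precedes them.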

Another problem of the nose tip detection process was incorrect detection of the nose tip in those subjects which were captured with leaning forward or backward faces. In the leaning forward faces, the nose tip was detected on the forehead, whereas in leaning backward faces it was detected on the chin or lips area (see Figure 3(c) for subject FRGC v2.0 04233d510). Similarly, noise scenarios played an adverse role in detecting the nose tip. For example, in some of the face images, the z-axis noise occurring in the face acquisition process was marked as the nose tip, as shown in Figure 3(d) for the subject FRGC v2.0 04217d461. Another such scenario concerned female subjects, where hairs on the forehead or spread around the neck or ears were marked as the nose tip, as shown in Figure 3(e) for the subject FRGC v2.0 04470d297.

Such problems were handled by searching for the nose tip in an approximate Region of Interest (ROI). The ROI on the already classified FF, LPF, or RPF images was determined by measuring two features: (i) the maximum value of the depth map histogram and (ii) the maximum value of the correlation coefficient of Normalized Cross Correlation (NCC). The former feature was measured using the z, -x, and x depth map histograms for FF, LPF, and RPF, in the respective order, whereas the latter was measured by correlating the corresponding frontal, left, or right oriented nose templates (please see Figures 3(f), 3(g), and 3(h) for subject GavabDB cara26 frontal2, izquierda, and derecha, respectively) with the FF, LPF, or RPF images. The nose templates were selected from ten randomly chosen subjects (five male and five female) from the GavabDB database based on satisfactory experimental results. For measuring the depth map histograms and correlation coefficient values, the PFI was rotated between 40∘ and -40∘ with a step size of -40∘ around the x-axis, adjusting the y-axis orientation from 40∘ to -40∘ with the same step size, resulting in nine facial orientations. The intuition behind this strategy is to search for an upright position of the face, because for such a position the maximum number of depth values accumulate into a single bin of the depth map histogram, and the correlation coefficient of the NCC returns the maximum value among all nine facial positions. Consequently, the nose tip was correctly detected as the nearest captured point from the 3D scanner to the face using an approximate ROI.
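The two ROI cues can be sketched as below. This is a simplified illustration (flattened 1D inputs, a hypothetical bin count of 64), not the authors' code; in the full method both features would be evaluated at each of the nine candidate orientations and the orientation maximizing them would be kept:

```python
def ncc(patch, template):
    """Correlation coefficient of normalized cross correlation (NCC)
    between a patch and a nose template of equal length."""
    mp = sum(patch) / len(patch)
    mt = sum(template) / len(template)
    dp = [x - mp for x in patch]
    dt = [y - mt for y in template]
    num = sum(x * y for x, y in zip(dp, dt))
    den = (sum(x * x for x in dp) * sum(y * y for y in dt)) ** 0.5
    return num / den if den else 0.0

def histogram_peak(depths, bins=64):
    """Maximum bin count of a depth histogram; for an upright face
    most depth values accumulate into a single bin."""
    lo, hi = min(depths), max(depths)
    if lo == hi:                 # all depths identical: one full bin
        return len(depths)
    width = (hi - lo) / bins
    counts = [0] * bins
    for d in depths:
        counts[min(int((d - lo) / width), bins - 1)] += 1
    return max(counts)
```

Both measures peak at the upright orientation: NCC because the template matches best, and the histogram peak because depth values concentrate into one bin.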

The proposed algorithm correctly detected the nose tips of face images from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, including all those cases where the nose



Figure 4: Illustration of the CNN for the FF, LPF, and RPF classification task.

[Figure 5 is a bar chart: number of subjects with incorrectly detected nose tips per facial region (forehead, lips, chin, LPF, RPF) for the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases.]

Figure 5: Incorrectly detected nose tips without employing the proposed nose tip detection technique.

tip was incorrectly detected at the forehead, lips, chin, LPF, or RPF, as detailed in Figure 5.

3.1.2. Face Alignment Algorithm. It was mentioned at the start of this section that the PCF alignment algorithm aligns the PFI in the xz, yz, and xy planes separately. The alignment in the xz and yz planes employs L2 norm minimization calculated between the nose tip and the 3D scanner. The alignment in the xy plane employs a different strategy, based on L2 norm minimization calculated between the LHF image and the flipped RHF image.
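A minimal sketch of the xy-plane cost, assuming the face is given as a rectangular depth map (a list of rows) and that the LHF and RHF are simply the left and right halves of that map; for a perfectly symmetric, upright face the norm is zero:

```python
def xy_alignment_cost(depth_map):
    """L2 norm between the left half face and the horizontally
    flipped right half face of a depth map (list of rows).
    Assumed minimal when the face is correctly rotated in the xy plane."""
    w = len(depth_map[0])
    half = w // 2
    total = 0.0
    for row in depth_map:
        lhf = row[:half]
        rhf_flipped = row[w - half:][::-1]
        total += sum((a - b) ** 2 for a, b in zip(lhf, rhf_flipped))
    return total ** 0.5
```

An xy-plane (roll) search would evaluate this cost over candidate rotations of the depth map and keep the minimum, mirroring the nose-tip-based L2 searches used in the other two planes.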

In order to explain the PCF alignment algorithm in the xz and yz planes, the PFI is shown in Figure 6 with three nose tip positions, 1, 2, and 3, in both planes separately. Intuitively, it can be observed in Figure 6 that the face image is aligned when the nose tip is set in line with the optic axis of the 3D scanner, at position 1. Conversely, when it is not in line with the optic axis of the 3D scanner, at position 2 or 3, the face image is not aligned. It can also be observed in Figure 6 that the L2 norm at nose tip position 1 is a perpendicular from the nose tip to the 3D scanner, which is not the case at nose tip positions 2 and 3. The perpendicular distance from a point to a line is always the shortest, which leads to the conclusion that when the PFI is aligned at position 1, the L2 norm is the minimum, shorter than the corresponding values of the L2 norms at positions 2 and 3. Therefore, alignment of the PFI causes an essential reduction in the L2 norm computed between the nose tip and the 3D scanner. The L2 norm

between nose tip position 1, N(m1, n1), and the 3D scanner point S(m0, n0) is calculated as given in equation (1).

d2 = √((m1 − m0)² + (n1 − n0)²) (1)
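In code, equation (1) is the ordinary Euclidean (L2) distance between the two points; a one-line sketch:

```python
import math

def l2_norm(nose_tip, scanner):
    """Equation (1): L2 norm between nose tip N(m1, n1)
    and the 3D scanner point S(m0, n0)."""
    (m1, n1), (m0, n0) = nose_tip, scanner
    return math.hypot(m1 - m0, n1 - n0)
```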

3.1.3. Alignment in xz Plane

(1) Pose Learning. First of all, the capture pose of the probe face image is learned to determine whether to rotate it clockwise or anticlockwise to align it at minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated clockwise at -1∘, and the corresponding L2 norm is measured between the nose tip and the 3D scanner. For example, a nose tip oriented at -1∘ or 30∘ is rotated clockwise to -2∘ or 29∘, respectively, to measure the L2 norm. It is notable that a negative angle of rotation (e.g., -2∘) turns a probe face image (Figure 7(a)) clockwise in the xz and yz planes and anticlockwise in the xy plane, as shown in Figures 7(b)–7(d).

As a result of the clockwise rotation, if the L2 norm is decreased (Figure 8(a)), the probe face image is classified as a left oriented face image (LOFI) (Figure 8(c)). Similarly, if the L2 norm is increased (Figure 8(b)), the probe face image is classified as a right oriented face image (ROFI), as shown in Figure 8(d). Please note that if the nose tip is rotated at 1∘ instead of -1∘, a decrease in L2 norm classifies the probe face image as ROFI, whereas an increase in L2 norm classifies it as LOFI. In this study, we adjust this parameter at -1∘.
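The pose learning step can be sketched as below. The geometric conventions (scanner point on the positive z-axis; a nose tip at angle θ from the optic axis moving to θ + a under a rotation by a) are assumptions made for illustration; the classification rule itself, decrease in L2 norm after the -1∘ probe rotation means LOFI and increase means ROFI, follows the text:

```python
import math

def rotate_xz(point, angle_deg):
    """Rotate an (x, z) point about the origin so that a point at
    angle theta from the +z axis moves to theta + angle_deg."""
    a = math.radians(angle_deg)
    x, z = point
    return (x * math.cos(a) + z * math.sin(a),
            -x * math.sin(a) + z * math.cos(a))

def learn_pose_xz(nose_tip, scanner, probe_deg=-1.0):
    """Rotate only the nose tip by -1 degree and classify the probe
    face image from the change in L2 norm to the scanner point."""
    before = math.dist(nose_tip, scanner)
    after = math.dist(rotate_xz(nose_tip, probe_deg), scanner)
    return "LOFI" if after < before else "ROFI"
```

Only the nose tip is rotated during this probe step, so the decision costs a single distance comparison rather than a transformation of the whole facial surface.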



Figure 6: PCF alignment algorithm showing an aligned image at minimum L2 norm in the (a) xz and (b) yz planes.


Figure 7: (a) 3D scan of subject FRGC v2.0 04233d396 rotated in the (b) xz, (c) yz, and (d) xy planes at -2∘.


Figure 8: (a, b) Pose learning in the xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0 04221d553; (e, f) subject GavabDB cara1 izquierda/derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases, the nose tip is aligned at 0∘, corresponding to minimum L2 norm. However, the nose tips of LOFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; therefore, they are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) and (-30∘ + 30∘ = 0∘) or between (-1∘ + 0∘ = -1∘) and (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to minimum L2 norm, in both cases. However, the nose tips of ROFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to minimum L2 norm. However, the nose tips of LPF captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to minimum L2 norm. However, the nose tips of RPF captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

Please note that for a ROFI captured at -25°, a LOFI captured at 25°, an LPF captured at -85°, or a RPF captured at 85°, the nose tip can get aligned at 5° or -5° because the minimum L2 norm is equal at both orientations; however, we have aligned the nose tip at 5° in this study. The face images captured at ±75°, ±65°, ..., ±5° are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, and RPF is rotated in the range of -5° to 5° with a step size of 1°. This means that a nose tip aligned at -5° is rotated between ((-5°) + (-5°) = -10°) and ((-5°) + (5°) = 0°) to catch the 0° position. On the other hand, a nose tip aligned at 5° is rotated between ((5°) + (-5°) = 0°) and ((5°) + (5°) = 10°) to catch the 0° position. After aligning the nose tip at 0°, it is rotated in the range of -1° to 1° with a step size of 0.1° to achieve an accurate final alignment at a minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 1.3°, then the whole face image is rotated at 1.3° and is finally aligned in the xz plane.
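The coarse-to-fine nose-tip search described above can be sketched as an exhaustive rotate-and-score loop with shrinking step sizes (10°, then 1°, then 0.1°), keeping at each pass the rotation that minimizes the L2 norm. The following is only a minimal illustration, not the authors' implementation: the point cloud, the scanner distance, and the depth-based L2 norm are simplified stand-ins.

```python
import math

SCANNER_DISTANCE = 5.0  # assumed distance from the head centre to the scanner

def rotate_xz(points, angle_deg):
    """Rotate 3D points about the y-axis, i.e., in the xz plane."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]

def l2_norm(points):
    """L2 norm of the scanner-to-point depths; minimal when the nose
    region points straight at the scanner."""
    return math.sqrt(sum((SCANNER_DISTANCE + z) ** 2 for (_, _, z) in points))

def coarse_to_fine_align(points):
    """Three passes (10, 1, and 0.1 degree steps); each keeps the rotation
    with the minimum L2 norm, mirroring the coarse and fine phases."""
    total_angle = 0.0
    for step, span in ((10.0, 90.0), (1.0, 5.0), (0.1, 1.0)):
        n = int(round(span / step))
        candidates = [i * step for i in range(-n, n + 1)]
        best = min(candidates, key=lambda a: l2_norm(rotate_xz(points, a)))
        points = rotate_xz(points, best)
        total_angle += best
    return total_angle, points

# a tiny synthetic "nose tip" region captured at a 23.4 degree yaw
nose = [(0.0, 0.0, -1.0), (0.05, 0.0, -0.98), (-0.05, 0.0, -0.98)]
captured = rotate_xz(nose, 23.4)
angle, aligned = coarse_to_fine_align(captured)
# angle cancels the capture pose: approximately -23.4 degrees
```

The three passes accumulate -20°, -3°, and -0.4° for this example, reproducing the coarse alignment at a multiple of 10° followed by the two fine refinements described in the text.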

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image aligned in the xz plane is learned first in order to align it at a minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) at -1°, and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that, rotating the nose tip at 1° instead of -1°, a decrease in the L2 norm classifies a probe face image as LUFI, whereas an increase classifies it as LDFI. In this study, we adjust this parameter at -1°.
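This pose-learning probe reduces to a single comparison of L2 norms before and after a -1° trial rotation. A minimal sketch follows; the rotation convention, the scanner distance, and the toy nose-tip point are illustrative assumptions, not the paper's code.

```python
import math

SCANNER_DISTANCE = 5.0  # assumed head-centre-to-scanner distance

def rotate_yz(points, angle_deg):
    """Rotate 3D points about the x-axis, i.e., in the yz plane."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(x, c * y - s * z, s * y + c * z) for (x, y, z) in points]

def l2_norm(points):
    return math.sqrt(sum((SCANNER_DISTANCE + z) ** 2 for (_, _, z) in points))

def learn_pose_yz(nose_points, trial_angle=-1.0):
    """If the trial rotation decreases the L2 norm, the probe is classified
    as LDFI; otherwise as LUFI (under the sign convention assumed here)."""
    return ("LDFI" if l2_norm(rotate_yz(nose_points, trial_angle))
            < l2_norm(nose_points) else "LUFI")

nose = [(0.0, 0.0, -1.0)]
looking_down = rotate_yz(nose, 20.0)   # pitched one way
looking_up = rotate_yz(nose, -20.0)    # pitched the other way
```

A face pitched one way moves closer to the frontal pose under the -1° trial (norm decreases), while one pitched the other way moves further from it (norm increases), which is exactly the decision rule in the text.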

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0° to +30° downwards (anticlockwise) with a step size of 10°, and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30°, the nose tip is rotated between -30° and 0°. If a LUFI is captured at an orientation of -1°, the nose tip is rotated between -1° and 29°. In both cases the nose tip is aligned at 0°, corresponding to the minimum L2 norm. However, the nose tips of the LUFI captured at -29°, -28°, -27°, -26°, -25°, -24°, -23°, -22°, and -21° do not pass through the 0° position; they are aligned at 1°, 2°, 3°, 4°, ±5°, -4°, -3°, -2°, and -1°, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0° to -30° upwards (clockwise) with a step size of -10°, and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30° or 1°, the nose tip is rotated between 30° to 0° and 1° to -29°, respectively. The nose tip is aligned at 0°, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of the LDFI captured at 29°, 28°, 27°, 26°, 25°, 24°, 23°, 22°, and 21° are aligned at -1°, -2°, -3°, -4°, ±5°, +4°, +3°, +2°, and +1°, respectively (please see Table 1), and are aligned in step 3 at fine level. It is worth mentioning that the face images captured at ±25°, ±15°, and ±5° are handled


Table 1: Acquisition pose of the face and respective alignment positions (all values in degrees). Each row lists the positions visited by the nose tip during coarse alignment; the final alignment position, printed in bold in the original, is shown here in [brackets].

LPF / LOFI / LDFI                          RPF / ROFI / LUFI
90 80 70 60 50 40 30 20 10 [0]             -90 -80 -70 -60 -50 -40 -30 -20 -10 [0]
89 79 69 59 49 39 29 19 9 [-1]             -89 -79 -69 -59 -49 -39 -29 -19 -9 [1]
88 78 68 58 48 38 28 18 8 [-2]             -88 -78 -68 -58 -48 -38 -28 -18 -8 [2]
87 77 67 57 47 37 27 17 7 [-3]             -87 -77 -67 -57 -47 -37 -27 -17 -7 [3]
86 76 66 56 46 36 26 16 6 [-4]             -86 -76 -66 -56 -46 -36 -26 -16 -6 [4]
85 75 65 55 45 35 25 15 [5] -5             -85 -75 -65 -55 -45 -35 -25 -15 -5 [5]
84 74 64 54 44 34 24 14 [4] -6             -84 -74 -64 -54 -44 -34 -24 -14 [-4] 6
83 73 63 53 43 33 23 13 [3] -7             -83 -73 -63 -53 -43 -33 -23 -13 [-3] 7
82 72 62 52 42 32 22 12 [2] -8             -82 -72 -62 -52 -42 -32 -22 -12 [-2] 8
81 71 61 51 41 31 21 11 [1] -9             -81 -71 -61 -51 -41 -31 -21 -11 [-1] 9

Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda/derecha.

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5° to 5° with a step size of 1° to catch the 0° position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1° to 1° with a step size of 0.1° to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5° to +5° with a step size of 1° around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane, and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position P_ij. In order to rule out the outliers due to z-axis noise, only pixel values less than a threshold T are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

\[ P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases} \tag{2} \]

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1° to +1° with a step size of 0.1°, using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
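The outlier handling of equation (2) amounts to zeroing depth values above the threshold T before accumulating squared differences between the LHF grid and the flipped, shifted RHF grid. A small sketch under assumed toy grids (the threshold value and grids are illustrative):

```python
import math

def thresholded(value, t):
    """Equation (2): depth values above the noise threshold T are zeroed."""
    return value if value <= t else 0.0

def masked_l2(lhf, rhf_flipped, t):
    """L2 norm between two registered half-face depth grids, computed on
    thresholded pixel values at the same grid positions P_ij."""
    total = 0.0
    for row_l, row_r in zip(lhf, rhf_flipped):
        for a, b in zip(row_l, row_r):
            total += (thresholded(a, t) - thresholded(b, t)) ** 2
    return math.sqrt(total)

lhf = [[1.0, 2.0], [3.0, 99.0]]      # 99.0 models a z-axis noise spike
rhf = [[1.0, 2.5], [3.0, 1.0]]
norm = masked_l2(lhf, rhf, t=10.0)   # the spike is zeroed, not matched
```

Without the threshold, the single spike would dominate the norm and could pull the coarse search toward a wrong rotation; with it, only plausible facial depths contribute.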

3.2. d-MVAHF-Based 3D Face Recognition

For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0°, ±10°, ±20°, and ±30° to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0°, -10°, -20°, and -30° and at 0°, 10°, 20°, and 30° around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can also be shifted along MVLHF images equally). Subsequently, facial depth values on the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0°, 10°, 20°, and 30°. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) Facial feature information carried by a half face image is similar to that of the flipped other half face image, due to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded by rotating a whole face image at -10°, -20°, and -30°; similarly, the LHF region is occluded by rotating the whole face image at 10°, 20°, and 30°. The occluded face regions contribute poorly to the face recognition process, while the computational complexity of processing whole face images is twofold. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
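The averaging-with-retention step can be sketched as a cell-wise merge of the two registered half-face depth grids, where None marks grid cells with no data in one half. This is an illustrative reconstruction, not the authors' code:

```python
def synthesize_ahf(lhf, flipped_rhf):
    """Average depth values where both halves overlap; keep the single
    available value in nonoverlapping regions so complementary facial
    information is retained (None = no data at that grid cell)."""
    ahf = []
    for row_l, row_r in zip(lhf, flipped_rhf):
        merged = []
        for a, b in zip(row_l, row_r):
            if a is not None and b is not None:
                merged.append((a + b) / 2.0)              # overlap: average
            else:
                merged.append(a if a is not None else b)  # keep what exists
        ahf.append(merged)
    return ahf

lhf = [[1.0, None], [2.0, 4.0]]
rhf = [[3.0, 5.0], [None, 6.0]]
ahf = synthesize_ahf(lhf, rhf)
# ahf == [[2.0, 5.0], [2.0, 5.0]]
```

The overlapping cells are averaged while each half's exclusive cells pass through unchanged, which is why an AHF carries more complete global information than either half alone.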

3.2.1. d-MVAHF-Based Face Identification Algorithm

An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of the size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set A^k = {a^k_1, a^k_2, a^k_3, ..., a^k_n} of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, in the respective order. The matrix S has a negative polarity, reflecting that lower values of matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices S_j, one for each of the normalized d-MVAHF feature vectors corresponding to AHF images oriented at 0°, 10°, 20°, and 30°.

(3) Each of the matching-score matrices S_j was normalized before fusion in the f8 layer of the AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are max(S_j^row) and min(S_j^row), respectively, then the normalized scores are computed as given in equation (3):

\[ S_j^{row} = \frac{S_j^{row} - \min(S_j^{row})}{\max(S_j^{row}) - \min(S_j^{row})} \tag{3} \]

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix S^row, as given in equation (4):

\[ S^{row} = \sum_{j=1}^{4} w_j S_j^{row} \tag{4} \]


Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images at 0°, 10°, 20°, and 30°; (b) LHF images at 0°, -10°, -20°, and -30°.

where w_j represents the weight assigned to the jth MVAHF image, computed from the recognition accuracies obtained from the MVAHF images as given in equation (5):

\[ w_j = \frac{r_j}{\sum_{j=1}^{4} r_j} \tag{5} \]

where r_j represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can be used in the test phase as follows: a given PFI is first converted into MVAHF images oriented at 0°, 10°, 20°, and 30°; each of these MVAHF images is then classified against the gallery, leading to four recognition accuracies, which are subsequently used to compute the weights in equation (5). This procedure is the same as employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0° is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix S^row was again normalized as S′^row using the min-max rule given in equation (3).

(5) The normalized matching scores obtained from S′^row were utilized in the Softmax layer of the AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.
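Steps (3) and (4) above, row-wise min-max normalization (equation (3)) followed by accuracy-weighted fusion (equations (4) and (5)), can be sketched as follows; the toy matrices and accuracy values are illustrative, not measured results:

```python
def min_max_rows(matrix):
    """Row-wise min-max normalization of a matching-score matrix (eq. (3))."""
    normed = []
    for row in matrix:
        lo, hi = min(row), max(row)
        span = (hi - lo) or 1.0   # guard for constant rows
        normed.append([(v - lo) / span for v in row])
    return normed

def fuse_scores(score_matrices, accuracies):
    """Fuse per-view score matrices with weights w_j = r_j / sum(r),
    as in equations (4) and (5)."""
    total = float(sum(accuracies))
    weights = [r / total for r in accuracies]
    rows = len(score_matrices[0])
    cols = len(score_matrices[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for w, mat in zip(weights, score_matrices):
        for i, row in enumerate(min_max_rows(mat)):
            for j, v in enumerate(row):
                fused[i][j] += w * v
    return fused

# four per-orientation probe-vs-gallery score matrices (1 probe, 3 gallery)
mats = [[[0.2, 0.8, 0.5]], [[0.1, 0.9, 0.3]],
        [[0.4, 0.6, 0.5]], [[0.0, 1.0, 0.2]]]
accs = [96.7, 96.7, 95.1, 98.4]   # per-view rank-1 accuracies
fused = fuse_scores(mats, accs)
```

Because the weights sum to 1, a gallery entry that every view ranks best (lowest score, here gallery 0) stays at 0 after fusion, while disagreements between views are settled in proportion to each view's accuracy.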

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm

For a binary classification problem such as face verification, an SVM aims to employ a hyperplane w · x + b = 0 having maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes (x_1, y_1), ..., (x_i, y_i), where x_i ∈ R^n and y_i ∈ {1, -1}, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, with constraints y_i[(w · x_i) + b] ≥ 1 - ξ_i, ξ_i ≥ 0 for i = 1, ..., k:

\[ \Phi(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{k} \xi_i \tag{6} \]

where ξ_i are slack variables used to penalize errors if the data are not linearly separable, and C is the regularization constant. The sign of the following OSH surface function can now be used to classify a test point:

\[ f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b \tag{7} \]

where a_i ≥ 0 are the Lagrangian multipliers of the corresponding support vectors, and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel function used to transform nonseparable data onto a higher dimensional space, where the data become linearly separable by a hyperplane; x_i is the ith training sample and x is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the RBF kernel is of the form given in equation (8), where σ² is the spread of the RBF:

\[ K(x, x_i) = \exp\left[-\frac{\|x - x_i\|^2}{2\sigma^2}\right] \tag{8} \]
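Equations (7) and (8) can be sketched directly as a decision function over support vectors with an RBF kernel. The toy support set, multipliers, and σ below are illustrative placeholders, not trained values:

```python
import math

def rbf_kernel(x, xi, sigma):
    """Equation (8): K(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-sq / (2.0 * sigma * sigma))

def osh_sign(x, support, labels, alphas, b, sigma):
    """Sign of the OSH surface function of equation (7)."""
    f = sum(y * a * rbf_kernel(x, s, sigma)
            for s, y, a in zip(support, labels, alphas)) + b
    return 1 if f >= 0.0 else -1

support = [(0.0,), (2.0,)]   # one genuine-like, one imposter-like point
labels = [1, -1]
alphas = [1.0, 1.0]
bias, sigma = 0.0, 1.0
```

A test point near (0.0,) is pulled toward the positive class by the nearer kernel response, and one near (2.0,) toward the negative class, which is all the sign test of equation (7) does.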

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the Cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

\[ d_{MahCos}(s, t) = -\frac{m \cdot n}{|m||n|} = -\frac{\sum_{i=1}^{N} (m_i n_i)}{\sqrt{\sum_{i=1}^{N} (m_i)^2}\sqrt{\sum_{i=1}^{N} (n_i)^2}} = -\frac{\sum_{i=1}^{N} ((s_i/\sigma_i)(t_i/\sigma_i))}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2}\sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}} \tag{9} \]

where m_i = s_i/σ_i, n_i = t_i/σ_i, and σ_i is the standard deviation of the ith dimension. In this case, higher similarity yields a higher score. Thus, the actual MahCos score is computed as given in equation (10):

\[ D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t) \tag{10} \]
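Equations (9) and (10) translate into a few lines: divide each dimension by its standard deviation, take the negated cosine, then flip it so that higher similarity gives a higher score. A sketch follows; the σ values are illustrative:

```python
import math

def mahcos_score(s, t, sigma):
    """D_MahCos(s, t) = 1 - d_MahCos(s, t), equations (9)-(10); ranges over
    [0, 2], with 2 for identical directions in Mahalanobis space."""
    m = [si / sd for si, sd in zip(s, sigma)]
    n = [ti / sd for ti, sd in zip(t, sigma)]
    dot = sum(a * b for a, b in zip(m, n))
    norm_m = math.sqrt(sum(a * a for a in m))
    norm_n = math.sqrt(sum(b * b for b in n))
    d = -dot / (norm_m * norm_n)   # equation (9)
    return 1.0 - d                 # equation (10)

sigma = [1.0, 2.0]
same = mahcos_score([1.0, 2.0], [1.0, 2.0], sigma)        # identical -> 2.0
orthogonal = mahcos_score([1.0, 0.0], [0.0, 1.0], sigma)  # orthogonal -> 1.0
```

Scaling by σ_i simply whitens each dimension (assuming a diagonal covariance) before the ordinary cosine comparison; the final 1 - d turns the negated cosine back into a similarity score.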

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second


neutral image of the whole gallery G. The scores were computed by using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between d-MVAHF feature vectors of image i and image j, where i, j ∈ 1, 2, ..., G. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² - G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, with one genuine and G - 1 imposter 4 × 1 dimensional probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
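The assembly of training vectors described above, diagonal entries of T giving genuine 4 × 1 vectors and off-diagonal entries giving imposter vectors, can be sketched with a toy 2-subject gallery (the score values are illustrative):

```python
def build_training_vectors(score_mats):
    """Stack per-orientation G x G score matrices into 4-dimensional
    vectors: genuine where i == j, imposter where i != j."""
    g = len(score_mats[0])
    genuine, imposter = [], []
    for i in range(g):
        for j in range(g):
            vec = [mat[i][j] for mat in score_mats]
            (genuine if i == j else imposter).append(vec)
    return genuine, imposter

# four orientations, G = 2 subjects; low score = similar (genuine)
t = [[[0.1, 0.9], [0.8, 0.2]]] * 4
genuine, imposter = build_training_vectors(t)
```

For a gallery of G subjects this yields G genuine and G² - G imposter vectors, matching the counts (20 and 380 for G = 20) given in the text.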

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as rank-1 identification rate and verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40] are reviewed in the following section, along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai frontal1" and "carai frontal2", are captured under frontal view. Another two are taken where a subject is looking up or down at angles +35° or -35°, named "carai arriba" and "carai abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90° or -90°, and are named "carai derecha" and "carai izquierda", respectively. The three nonneutral

images, "carai gesto", "carai risa", and "carai sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expression, occlusion, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of the size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments

Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing evaluation criterion for the alignment accuracy of face images. One method that can be employed is human judgment, but the human judgment method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this


Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (L2 norm, 0 to 1, versus subjects 1 to 5; series: unaligned GavabDB, unaligned Bosphorus, unaligned UMB-DB, unaligned FRGC v2.0, and aligned).

Figure 12: Example 3D face images: original (rows 1, 3) and aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar and that the mentioned method is a promising automatic criterion to check alignment accuracy.

The minimized and normalized L2 norms for five unaligned images of subjects, GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444, are shown in Figure 11. Figure 12 depicts example original as well as aligned face images: from GavabDB, cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; from Bosphorus, (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; from UMB-DB, (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and from FRGC v2.0, (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments

The protocols and results of the face recognition experiments using the four databases are given as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

                      Rank-1 identification rates                                              Verification
Proposed methodology  FF          Rotated looking up  Rotated looking down  LPF         RPF          rates
                      U     W     U     W             U     W               U     W     U     W
d-MVWF                96.7  100   96.7  100           95.1  98.4            -           -            100
d-MVLHF               95.1  98.4  93.4  96.7          93.4  96.7            91.8  95.1  -            96.7
d-MVRHF               93.4  96.7  95.1  98.4          91.8  95.1            -           80.3  83.6   98.4
d-MVAHF               96.7  100   96.7  100           95.1  98.4            -           -            100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

                      Bosphorus                                                          UMB-DB
Proposed methodology  FF          YR¹ < 90°     YR¹ = 90°     Overall                    Frontal face
                                  (525 images)  (210 images)  (1365 images)
                      U     W     U     W       U     W       U     W                    U     W
d-MVWF                97.1  100   92.2  95.4    -             93.1  96                   96.5  99.3
d-MVLHF               95.2  98.1  91.4  94.5    84.3  87.1    91.8  94.9                 93.7  97.2
d-MVRHF               96.2  99    91    94.1    -             91.3  94.4                 94.4  97.9
d-MVAHF               97.1  100   92.2  95.4    -             93.1  96                   96.5  99.3
¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For the identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For the evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, to follow the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image, along with "frontal1", in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database

Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database

For the evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set comprises one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For the evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate the SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database.

in Figure 13(a) and by receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis

The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as \(O(\sum_{j=1}^{n} y_{j-1} x_j^2 y_j z_j^2)\). Here, n represents the number of convolutional layers, y_{j-1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of the AlexNet mentioned above, along with the complexity of the SVM classifier, which is of the order of O(log n). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction, using AlexNet with its own classifier for face identification, is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification, whereas the SVM only takes into account the global matching scores, resulting in a lower computation time.
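The convolutional-layer term in item (2) can be evaluated numerically. For instance, AlexNet's first convolutional layer (3 input channels, 11 × 11 filters, 96 filters, 55 × 55 output map) contributes y_0 · x_1² · y_1 · z_1² multiply-accumulate operations; a small sketch:

```python
def conv_cost(layers):
    """Sum of y_{j-1} * x_j^2 * y_j * z_j^2 over the convolutional layers,
    the dominant term of the feature-extraction complexity."""
    return sum(y_in * (x * x) * y_out * (z * z)
               for (y_in, x, y_out, z) in layers)

# (input channels, filter size, filters, output map size) for AlexNet C1
c1 = [(3, 11, 96, 55)]
cost = conv_cost(c1)   # 3 * 121 * 96 * 3025 = 105,415,200 operations
```

Extending the layer list to C2 through C5 gives the full feature-extraction cost, which dwarfs both the O(m) alignment pass and the SVM scoring step, consistent with the timing breakdown in Table 4.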

4.5. Comparison with Existing Algorithms

The performance of the proposed approach is compared with the existing state-of-the-art earlier studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images, along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing  MVAHF synthesis  Feature extraction  Classification                         Total
                                                    Face recognition  Face verification   Face recognition  Face verification
0.451          0.089            1.024               0.029             0.021                1.593             1.585

Table 5: Recognition accuracies (%) comparison for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

                  GavabDB rank-1 identification rates                  Verification  Bosphorus rank-1 identification rates       UMB-DB
Algorithms        FF        Rotated    Rotated    LPF       RPF        rates         FF        YR¹ < 90°  YR¹ = 90°  Overall      FF
                            looking up looking down
Existing          100 [44]  98.4 [44]  96.7 [44]  93.4 [44] 81.9 [44]  82.3 [59]     100 [27]  81.6 [61]  45.7 [61]  88.6 [61]   98.7 [27]
                  100 [46]  98.4 [46]  99.2 [46]  86.9 [26] 70.5 [26]  95.1 [60]     100 [62]  84.1 [62]  47.1 [62]  91.1 [62]   98 [39]
                  100 [47]  100 [47]   98.4 [47]  93.4 [28] 78.7 [28]  -             -         94.8 [63]  57.1 [47]  92.8 [47]   -
Proposed
d-MVLHF           98.4      96.7       96.7       95.1²     -          96.7          98.1      94.5       87.1²      94.9        97.2
d-MVRHF           96.7      98.4       95.1       -         83.6²      98.4          99        94.1       -          94.4        97.9
d-MVWF / d-MVAHF  100       100        98.4       95.1      83.6       100           100       95.4       -          96          99.3
¹YR is yaw rotation (along the y-axis in the xz plane).
²LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, whereas the study [47] is discussed in the preceding paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the preceding paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] focused on a DT-CWT and LDA based face recognition approach. The study [41] employed isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, which saves computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies (%) comparison for the proposed and existing approaches using the FRGC v2.0 database.

Algorithm           Face identification   Face verification
[17]                98.7                  99.5
[41]                96.1                  97.7
[42]                93.8                  95.4
[43]                98                    98.3
[47]                99.6                  -
[62]                98.7                  -
[63]                99.8                  -
d-MVLHF             97.9                  97.6
d-MVRHF             96.8                  96.4
d-MVWF/d-MVAHF      99.8                  99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of only the nose tip, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.
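The saving described in point (3) can be written as a back-of-the-envelope cost model. The counts follow the text (35 candidate rotations per plane, a 0.3-million-point scan); the function names are illustrative.

```python
# Cost model for nose-tip-first alignment versus whole-face candidate search.
POINTS = 300_000      # depth points in one 3D face scan (from the text)
CANDIDATES = 35       # 3 + 11 + 21 coarse-to-fine candidate rotations

def cost_whole_face_search():
    # Rotating every depth point for every candidate rotation.
    return POINTS * CANDIDATES

def cost_nose_tip_first():
    # Rotate only the nose tip (one point) through all candidates, then
    # apply the single learned rotation to the whole scan once.
    return 1 * CANDIDATES + POINTS * 1

print(cost_whole_face_search())   # 10500000 point rotations
print(cost_nose_tip_first())      # 300035 point rotations
```

The two totals reproduce the ratio described above: roughly 35 times fewer point rotations when the nose tip is aligned first.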

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a computational cost reduced by 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of seven MVWF images into four MVAHF

images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) A comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, on the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments on the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement results from assigning larger weights to better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integrating the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured under the mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it is expected to perform better than the existing approaches on low resolution PFIs as well. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines and dots), whereas the later layers tend to learn high level features, such as shapes and objects, based on the low level features.
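Equation (5) is not reproduced in this excerpt, so the weight assignment of point (4) can only be sketched generically: a weighted score-level fusion across the four MVAHF views, with weights and identity scores that are purely illustrative, not the paper's learned values.

```python
# Generic weighted score-level fusion: better performing views get larger
# weights. The weights and scores are illustrative, not equation (5).
def fuse(view_scores, weights):
    """Weighted sum of per-view identity scores; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    n_ids = len(view_scores[0])
    return [sum(w * s[i] for w, s in zip(weights, view_scores))
            for i in range(n_ids)]

# Scores of three gallery identities from four views (0°, 10°, 20°, 30°).
view_scores = [
    [0.60, 0.30, 0.10],   # 0° view
    [0.55, 0.35, 0.10],   # 10° view
    [0.40, 0.45, 0.15],   # 20° view
    [0.35, 0.50, 0.15],   # 30° view
]
weights = [0.4, 0.3, 0.2, 0.1]   # hypothetical: frontal views weighted higher

fused = fuse(view_scores, weights)
best = max(range(len(fused)), key=fused.__getitem__)
print(best, [round(f, 3) for f in fused])
```

Because the frontal views carry larger weights here, the fused ranking follows them even when the more rotated views disagree, which is the intended effect of weighting the better performing views.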

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face, (ii) an L2 norm minimization based coarse to fine approach for nose tip alignment, and (iii) a transformation step to align the whole face image incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images, (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment, (iii) it is computationally very efficient due to alignment of the nose tip first, (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition, (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies, (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images, (vii) the weight assignment strategy significantly enhanced the recognition rates, (viii) deeply learned facial features possess more discriminative power compared to handcrafted features, (ix) experimental results show that

the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies, (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition, and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear Enhanced Fisher Discriminant Analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.


[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Heidelberg, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of FGR 2006, the 7th International Conference on Automatic Face and Gesture Recognition, pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


other; rather, it is capable of aligning a standalone probe face image (PFI), and (ii) it employs a nose tip heuristic based pose learning approach. The pose learning approach first estimates the acquisition pose of the PFI. Subsequently, an L2 norm minimization based coarse to fine alignment approach is employed that initially aligns the nose tip of the PFI. This is followed by a transformation step to align the whole facial surface in a single 3D rotation. The proposed algorithm is referred to as the pose learning-based coarse to fine (PCF) alignment algorithm in the rest of the study.

(2) The second contribution is a novel deeply learned approach for image analysis with applications in 3D face recognition. The proposed d-MVAHF-based face identification and d-MVAHF-SVM-based face verification approach employs face images oriented at 0°, 10°, 20°, and 30°. For a comparative evaluation, the proposed approach is tested using d-MVWF images, d-MVLHF images, and d-MVRHF images oriented at 0°, ±10°, ±20°, and ±30°; 0°, -10°, -20°, and -30°; and 0°, 10°, 20°, and 30°, respectively. The proposed algorithm is also validated using deeply learned multiview LPF (d-MVLPF) and deeply learned multiview RPF (d-MVRPF) images oriented at 0°, -10°, -20°, and -30° and 0°, 10°, 20°, and 30°, respectively.

(3) The third contribution is (i) the study of the role of pose learning and nose tip alignment in reducing the computational complexity of the PCF alignment algorithm for face recognition applications and (ii) the computational complexity analysis of d-MVAHF-based face recognition compared to d-MVWF based face recognition for biometric applications.

The rest of the study is organized as follows. Related work is presented in Section 2. Section 3 deals with the details of the proposed 3D alignment and face recognition algorithms. Experiments and results are given in Section 4, whereas the discussion and conclusions are presented in Sections 5 and 6, respectively.

2. Related Work

For a thorough survey of research in the area of 3D face recognition and its applications, the reader is referred to the studies [16, 48]. The related work in the context of both components of the current study, i.e., 3D face alignment and 3D face recognition, is discussed separately.

2.1. 3D Face Alignment Algorithms. A review of the existing 3D face alignment algorithms, namely, ICP [30], ICS [31], SA [32], and AFM [29], is given as follows.

ICP [30, 34] based algorithms align two 3D faces by minimizing the distance between them iteratively. Limitations of ICP include the need for an initial coarse alignment and slow convergence. These drawbacks limit the applicability of the ICP technique to the verification setup only, where the PFI is to be aligned to the claimed identity image alone [15]. They

become an issue in the identification setup, where a probe must be aligned to the whole gallery. The second method, alignment to an ICS [31], mainly involves localization of landmarks on 3D facial images, comparison of the landmarks to corresponding points on the ICS, and a transformation phase to finish the alignment. The downside of this method is low accuracy in the localization of landmarks, especially for face images with pose and expression variations. A single alignment event, required to align a probe to the ICS, makes this technique appropriate for identification as well as verification scenarios [15]. The SA [32] algorithm employs a stochastic technique using a local search based approach, and its drawback is excessive time consumption [33]. Similar to ICP, this method is suitable for the verification setup only. In AFM [29] based alignment, the AFM is constructed by demarcating and averaging landmarks on the facial images, and the probe image is aligned to the AFM only once. This aspect empowers the method to be used as an alignment technique in both face identification and verification setups [15]. A significant disadvantage of the AFM based method is the probe image's less accurate alignment to the AFM due to the loss of spatial information involved in the averaging process [15]. Another fast and effective face alignment method is proposed in the work of Wang et al. [34] to place each face model in a standard position and orientation. It does not align a probe image to every image in the gallery; therefore, it can be employed efficiently for both face verification and identification. This alignment method is based on the facial symmetry plane, which is determined using PCA and ICP. Based on the normal of the symmetry plane, the nose tip, and the nose bridge direction, six degrees of freedom are fixed in a 3D face to obtain a standard alignment posture.
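The iterate-match-transform loop of ICP can be sketched in two dimensions under simplifying assumptions (nearest-neighbour correspondences, closed-form 2-D rotation estimate); this is an illustrative toy, not the implementation of [30], where real 3-D ICP uses 3x3 rotations but follows the same loop.

```python
# Minimal 2-D ICP sketch: match points, estimate a rigid transform, repeat.
import math

def icp_step(src, dst):
    # 1) nearest-neighbour correspondences
    pairs = [(p, min(dst, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
             for p in src]
    # 2) best rigid transform: centroid alignment + closed-form 2-D rotation
    n = len(pairs)
    cx_s = sum(p[0] for p, _ in pairs) / n
    cy_s = sum(p[1] for p, _ in pairs) / n
    cx_d = sum(q[0] for _, q in pairs) / n
    cy_d = sum(q[1] for _, q in pairs) / n
    num = sum((p[0]-cx_s)*(q[1]-cy_d) - (p[1]-cy_s)*(q[0]-cx_d) for p, q in pairs)
    den = sum((p[0]-cx_s)*(q[0]-cx_d) + (p[1]-cy_s)*(q[1]-cy_d) for p, q in pairs)
    th = math.atan2(num, den)
    c, s = math.cos(th), math.sin(th)
    # 3) rotate about the source centroid, then translate onto the dst centroid
    return [(c*(x-cx_s) - s*(y-cy_s) + cx_d, s*(x-cx_s) + c*(y-cy_s) + cy_d)
            for x, y in src]

dst = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
# src is dst rotated by 10 degrees and shifted; ICP should undo this.
a = math.radians(10)
src = [(math.cos(a)*x - math.sin(a)*y + 0.5, math.sin(a)*x + math.cos(a)*y - 0.3)
       for x, y in dst]

for _ in range(10):                 # iterate until convergence
    src = icp_step(src, dst)
err = max(math.dist(p, q) for p, q in zip(src, dst))
print(round(err, 6))
```

The toy converges quickly only because the initial misalignment is small and the correspondences are unambiguous, which is exactly the initial-coarse-alignment limitation noted above.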

2.2. 3D Face Recognition Algorithms. A review of the 3D face recognition methods, from the perspective of their role in developing multiview and fusion based face recognition algorithms, is presented as follows.

The study [49] proposed to synthesize the various facial variations by utilizing a morphable model that augments the existing training set comprising a single frontal 2D image of each subject. The morphable model face is a vector space representation based parametric model of faces: in the vector space, any convex combination of shape and texture vectors describes a human face. For a single face image, the 3D shape, pose, texture, illumination, etc. are automatically estimated by the algorithm. The recognition task is realized by measuring the Mahalanobis score between the fitted model and the shape and texture parameters of the models contained in the gallery. The authors performed identification experiments on two publicly available databases, namely, FERET and CMU-PIE, and achieved recognition rates of 95.9% and 95%, respectively.

The study [50] proposed a 3D face recognition algorithm where a PCA based 3D face synthesis approach is employed to generate new faces based on a reference face model. The approach preserves important 3D size information present in the input face and achieves better alignment of facial points using 3D scaling of the generic reference face. The algorithm

4 Mathematical Problems in Engineering

uses "one minus the cosine of the angles" among the PCA model parameters as the matching score. The experiments were performed using the FRGC face database; 92-96% verification rates at 0.001 FAR and rank-1 identification accuracies between 94% and 95% were obtained.

The study [51] proposed a fully automatic face recognition system using multiview 2.5D facial images. The approach employs a feature extractor using the directional maximum to find the nose tip and pose angle simultaneously. Face images are recognized using an ICP based approach corresponding to the best located nose tip. The experiments were performed on the MSU and the UND databases, obtaining 96.2% and 97% identification rates, respectively.

The study [34] proposed a Collective Shape Difference Classifier (CSDC) based approach using a summed confidence as a similarity measure. The authors computed a Signed Shape Difference Map (SSDM) between two aligned 3D faces as an intermediate representation for comparison of facial shapes. They used three types of features to encode the characteristics and local similarity between them, and constructed three strong classifiers by boosting and training the most discriminative local facial features as weak classifiers. The experiments were carried out on FRGC v2.0, yielding verification rates better than 97.9% at 0.001 FAR and rank-1 recognition rates above 98%.

A study based on the fusion of results acquired from several overlapping facial regions is proposed in the paper [15], employing decision level fusion (majority voting). A PCA-LDA based method was used for feature extraction, whereas the likelihood ratio was used as the matching criterion to classify the individual regions. The author conducted experiments using the FRGC v2.0 3D database to evaluate the efficacy of the algorithm and reported a 99% rank-1 recognition rate and a 94.6% verification rate at 0.1% FAR, respectively.

Another fusion based study is given in the paper [52], equipped with an approach where the match scores of each subject were combined for both 2D albedo and depth images. Experimental results are reported employing PCA, LDA, and Nonnegative Matrix Factorization (NMF) based subspace methods and Elastic Bunch Graph Matching (EBGM). Among the experiments, the best results were reported for sum rule based score level fusion. The authors achieved 89% recognition accuracy on a database of 261 subjects.

A recent region based study [27] proposed a method to handle occlusions covering the facial surface, employing two databases containing facial images with realistic occlusions. The authors addressed two problems, namely, missing data handling and occlusions, and improved the classification accuracy at score level using the product rule. In the experiments, 100% classification results were obtained for neutral subsets, whereas in the same study the pose, expression, and occlusion subsets achieved relatively low classification accuracies.

The study [53] proposed a facial recognition system (FRS) which employed fusion of three face classifiers using feature and match score level fusion methods. The features used by the classifiers were extracted at facial contours around the inner eye corners and the nose tip. The classification task was performed in LDA subspace using a Euclidean distance based 1-NN classifier. Experiments were performed on a coregistered 2D-3D image database acquired from 116 subjects, and a rank-1 recognition rate of 99.09% was obtained by the authors.

A prominent algorithm based on fusion of 2D and 3D features is proposed in the study [54], which uses PCA employing canonical correlation analysis (CCA) to learn the mapping between a 2D image and its respective 3D scan. The algorithm is capable of classifying a probe image (whether 2D or 3D) by matching it to a gallery image modeled by fusion of the 2D and 3D modalities, containing features from both sides. The authors performed experiments using a database of 115 subjects which contains neutral and expressive pairs of 2D images and 3D scans. They employed a Euclidean distance classifier for the classification and obtained 55% classification accuracy using the CCA algorithm alone. Their results improved to 85% by using the CCA-PCA algorithm.

The study [17] is a representative work of region based face recognition methods. The study proposed the use of a facial representation based on the dual-tree complex wavelet transform (DT-CWT) and six subregions. In this study, an NN classifier was employed in the classification stage, and the authors achieved an identification rate of 98.6% for neutral faces on the FRGC v2.0 database. Similarly, a verification rate of 99.53% at 0.1% FAR was obtained for neutral faces on the same database.

A recent circular region based study [47] proposed an effective 3D face keypoint detection and matching framework using three principal curvature measures. The local shape of the face around each 3D keypoint was comprehensively described by histograms of the principal curvature measures. Similarity comparison between facial surfaces was established by matching the local shape descriptors through a sparse representation based reconstruction method and score level fusion. The evaluation of the algorithm was performed on the GavabDB, FRGC v2.0, and Bosphorus databases, obtaining 100% (neutral subset), 99.6% (neutral subset), and 98.6% (pose subset) recognition rates, respectively.

The proposed study is focused on aligning the PFI employing the PCF alignment algorithm. It aims to enhance classification accuracies using complementary information obtained from d-MVAHF-based features acquired from synthesized MVAHF images. The results obtained from our proposed methodology are better than the state-of-the-art studies [17, 19, 27, 41-44] in terms of all the evaluation criteria employed by these studies.

3. Materials and Methods

The proposed system consists of face alignment, identification, and verification components, implemented through the PCF alignment algorithm, d-MVAHF, and d-MVAHF-SVM-based methodologies, respectively. The following sections explain the proposed algorithm in detail.

3.1. The Proposed PCF Alignment Algorithm. An illustration of the PCF alignment algorithm is presented in Figure 2(a). It employs a nose tip heuristic in the pose learning step and aligns the PFI in the xz, yz, and xy planes separately. The procedure to determine the nose tip is described in the following paragraphs.


Figure 2: The proposed framework. (a) PCF alignment algorithm; (b) d-MVAHF-based face identification algorithm; (c) d-MVAHF-SVM-based face verification algorithm.

Figure 3: Examples of incorrectly detected nose tips on (a, b) ears, (c) lips area, (d) z-axis noise, (e) forehead hairs. Nose templates: (f) frontal, (g) left, (h) right.

3.1.1. Nose Tip Detection Technique. Nose tip detection is a specific facial feature detection problem in depth images. The study [55] proposed a nose tip detection technique for FF images based on histogram initialization and triangle fitting and obtained a detection rate of 99.43% on the FRGC v2.0 database. In contrast to the study [55], the proposed study marks the nose tip as the point captured nearest to the 3D scanner; the nose tip is then used to localize, align, and crop the PFI. Several problems were faced in detecting the nose tip, as follows.

One of the problems was incorrect nose tip detection in LPF or RPF images, where it was detected on the ears or some other facial parts, as shown on the ear of the RPF of subject GavabDB cara26 derecha and the ear of the LPF of subject GavabDB cara26 izquierda in Figures 3(a) and 3(b), respectively. In order to handle this problem, the PFI was first classified as FF, LPF, or RPF using a convolutional neural network (CNN), and then the nose tip was detected employing a different strategy for each of FF, LPF, and RPF. The CNN was trained for a three-class problem for the FF, LPF, or RPF classification task. The PFI was used as input to the CNN, which produced an N dimensional vector as the output, where N is the number of classes. The CNN architecture comprised two convolutional layers, each followed by batch normalization and max pooling stages. The CNN also included two fully connected layers at the end; the first contained 1024 units, while the second fully connected layer, with three units, performed as the output layer with the softmax function. The architecture of the CNN for a PFI is shown in Figure 4. The CNN classifies the PFI of size $h \times w$ as FF, LPF, or RPF using the final feature vector $S = \{S^{p}_{1}, S^{p}_{2}, \ldots, S^{p}_{h_{p} w_{p}}\}$ computed for the layer $p$. Based on the classification of the PFI, the nose tip is determined as follows.

(1) For FF images, the facial point at the minimum distance from the 3D scanner along the z-axis is marked as the nose tip.

(2) For LPF, the facial point having the minimum coordinate value along the x-axis (xmin) is defined as the nose tip.

(3) For RPF, the facial point having the maximum coordinate value along the x-axis (xmax) is marked as the nose tip.
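The three class-specific rules above can be condensed into a small helper. This is an illustrative sketch only, assuming the facial surface is an N x 3 NumPy array of (x, y, z) points and that the point nearest the scanner has the minimum z value; the function name and array layout are ours, not the paper's:

```python
import numpy as np

def detect_nose_tip(points, face_class):
    """Pick the nose tip from an N x 3 point cloud (x, y, z) according to the
    CNN-predicted pose class: FF (frontal), LPF (left profile), or RPF (right
    profile). For FF we take the point nearest the 3D scanner, assumed here
    to be the one with minimum z."""
    if face_class == 'FF':
        idx = np.argmin(points[:, 2])   # nearest captured point to the scanner
    elif face_class == 'LPF':
        idx = np.argmin(points[:, 0])   # minimum x-coordinate (x_min)
    elif face_class == 'RPF':
        idx = np.argmax(points[:, 0])   # maximum x-coordinate (x_max)
    else:
        raise ValueError("face_class must be 'FF', 'LPF', or 'RPF'")
    return points[idx]
```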

Another problem of the nose tip detection process was incorrect detection of the nose tip in those subjects captured with faces leaning forward or backward. In the leaning forward faces, the nose tip was detected on the forehead, whereas in leaning backward faces it was detected on the chin or lips area (see Figure 3(c) for subject FRGC v2.0 04233d510). Similarly, noise scenarios played an adverse role in detecting the nose tip. For example, in some of the face images, the z-axis noise occurring in the face acquisition process was marked as the nose tip, as shown in Figure 3(d) for the subject FRGC v2.0 04217d461. Another such scenario concerned female subjects, where hairs on the forehead or spread around the neck or ears were marked as the nose tip, as shown in Figure 3(e) for the subject FRGC v2.0 04470d297.

Such problems were handled by searching for the nose tip in an approximate Region of Interest (ROI). The ROI on the already classified FF, LPF, or RPF images was determined by measuring two features: (i) the maximum value of the depth map histogram and (ii) the maximum value of the correlation coefficient of Normalized Cross Correlation (NCC). The former feature was measured using the z, -x, and x depth map histograms for FF, LPF, and RPF, in the respective order, whereas the latter was measured by correlating the corresponding frontal, left, or right oriented nose templates (please see Figures 3(f), 3(g), and 3(h) for subjects GavabDB cara26 frontal2, izquierda, and derecha, respectively) with the FF, LPF, or RPF images. The nose templates were randomly selected from ten randomly chosen subjects, five male and five female, from the GavabDB database on satisfactory experimental results. For measuring the depth map histograms and correlation coefficient values, the PFI was rotated between 40∘ and -40∘ with a step size of -40∘ around the x-axis, adjusting the y-axis orientation from 40∘ to -40∘ with the same step size, resulting in nine facial orientations. The intuition behind this strategy is to search for an upright position of the face, because for such a position the maximum number of depth values accumulates into a single bin of the depth map histogram and the correlation coefficient of the NCC returns the maximum value among all nine facial positions. Consequently, the nose tip was correctly detected as the point captured nearest to the 3D scanner within an approximate ROI.
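The nine-orientation upright-pose search can be sketched with the depth-histogram feature alone (the NCC template feature is omitted for brevity); the rotation helpers, the fixed histogram range, and the function names are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def rotation_x(deg):
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rotation_y(deg):
    t = np.radians(deg)
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [0, 1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

def best_upright_pose(points, bins=64):
    """Among the nine coarse orientations (x and y rotations of -40, 0, +40
    degrees) return the pair whose depth (z) histogram has the fullest bin:
    an upright face piles the most depth values into a single bin. A fixed
    histogram range keeps a numerically flat surface in one bin."""
    r = np.linalg.norm(points, axis=1).max()
    best, best_score = (0, 0), -1
    for ax in (-40, 0, 40):
        for ay in (-40, 0, 40):
            rotated = points @ (rotation_y(ay) @ rotation_x(ax)).T
            hist, _ = np.histogram(rotated[:, 2], bins=bins, range=(-r, r))
            if hist.max() > best_score:
                best_score, best = hist.max(), (ax, ay)
    return best
```

An upright face concentrates depth values into one histogram bin, so the orientation with the fullest bin approximates the upright pose; the nose tip is then taken as the nearest captured point inside the ROI.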

The proposed algorithm correctly detected the nose tips of face images from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, including all those cases where the nose


Figure 4: Illustration of the CNN for the FF, LPF, and RPF classification task.

Figure 5: Incorrectly detected nose tips without employing the proposed nose tip detection technique: number of subjects with an incorrectly detected nose tip per facial region (forehead, lips, chin, LPF, RPF) for the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases.

tip was incorrectly detected at the forehead, lips, chin, LPF, or RPF, as detailed in Figure 5.

3.1.2. Face Alignment Algorithm. It was mentioned at the start of this section that the PCF alignment algorithm aligns the PFI in the xz, yz, and xy planes separately. The alignment in the xz and yz planes employs L2 norm minimization calculated between the nose tip and the 3D scanner. The alignment in the xy plane employs a different strategy, based on L2 norm minimization calculated between the LHF image and the flipped RHF image.

In order to explain the PCF alignment algorithm in the xz and yz planes, the PFI is shown in Figure 6 with three nose tip positions, 1, 2, and 3, in both planes separately. Intuitively, it can be observed in Figure 6 that the face image is aligned when the nose tip is set in line with the optic axis of the 3D scanner, at position 1. Conversely, when it is not in line with the optic axis of the 3D scanner, at position 2 or 3, the face image is not aligned. It can also be observed in Figure 6 that the L2 norm at nose tip position 1 is a perpendicular from the nose tip to the 3D scanner, which is not the case at nose tip positions 2 and 3. The perpendicular distance from a point to a line is always the shortest, which leads to the conclusion that when the PFI is aligned at position 1, the L2 norm is the minimum, shorter than the corresponding L2 norms at positions 2 and 3. Therefore, alignment of the PFI causes an essential reduction in the L2 norm computed between the nose tip and the 3D scanner. The L2 norm between nose tip position 1, $N(m_1, n_1)$, and the 3D scanner point $S(m_0, n_0)$ is calculated as given in equation (1):

$d_2 = \sqrt{(m_1 - m_0)^2 + (n_1 - n_0)^2}$  (1)

3.1.3. Alignment in xz Plane

(1) Pose Learning. First of all, the capture pose of the probe face image is learned to determine whether to rotate it clockwise or anticlockwise to align it at the minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated clockwise by -1∘, and the corresponding L2 norm is measured between the nose tip and the 3D scanner. For example, a nose tip oriented at -1∘ or 30∘ is rotated clockwise to -2∘ or 29∘, respectively, to measure the L2 norm. It is notable that a negative angle of rotation (e.g., -2∘) turns a probe face image (Figure 7(a)) clockwise in the xz and yz planes and anticlockwise in the xy plane, as shown in Figures 7(b)-7(d).

As a result of the clockwise rotation, if the L2 norm decreases (Figure 8(a)), the probe face image is classified as a left oriented face image (LOFI) (Figure 8(c)). Similarly, if the L2 norm increases (Figure 8(b)), the probe face image is classified as a right oriented face image (ROFI), as shown in Figure 8(d). Please note that, rotating the nose tip by 1∘ instead of -1∘, a decrease in the L2 norm classifies the probe face image as ROFI, whereas an increase in the L2 norm classifies it as LOFI. In this study we set this parameter to -1∘.
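The pose learning probe can be sketched as follows, assuming the scanner lies on the optic axis and the nose tip is rotated about the face's own origin; the names and toy geometry are illustrative, not from the paper:

```python
import numpy as np

def rotate_xz(p, deg):
    """Rotate a 3D point by `deg` degrees in the xz plane (about the y-axis);
    a negative angle is the 'clockwise' probe rotation of the text."""
    t = np.radians(deg)
    out = np.asarray(p, dtype=float).copy()
    out[0] = p[0] * np.cos(t) + p[2] * np.sin(t)
    out[2] = -p[0] * np.sin(t) + p[2] * np.cos(t)
    return out

def learn_pose_xz(nose_tip, scanner, probe_deg=-1.0):
    """Pose learning step: rotate only the nose tip by the small probe angle
    and compare L2 norms to the scanner. A decrease means a left oriented
    face image (LOFI); an increase means a right oriented one (ROFI)."""
    before = np.linalg.norm(np.asarray(nose_tip, dtype=float) - scanner)
    after = np.linalg.norm(rotate_xz(nose_tip, probe_deg) - scanner)
    return 'LOFI' if after < before else 'ROFI'
```

Swapping `probe_deg` to +1∘ flips the two labels, exactly as the text notes.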


Figure 6: PCF alignment algorithm showing an aligned image at minimum L2 norm in the (a) xz and (b) yz planes.

Figure 7: (a) 3D scan of subject FRGC v2.0 04233d396 rotated in the (b) xz, (c) yz, and (d) xy planes at -2∘.

Figure 8: (a, b) Pose learning in the xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0 04221d553; (e, f) subject GavabDB cara1 izquierda, derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LOFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; therefore, they are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) and (-30∘ + 30∘ = 0∘) or between (-1∘ + 0∘ = -1∘) and (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both cases. However, the nose tips of ROFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LPF captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of RPF captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

Please note that, for a ROFI captured at -25∘, a LOFI captured at 25∘, an LPF captured at -85∘, or a RPF captured at 85∘, the nose tip can get aligned at 5∘ or -5∘, because the minimum L2 norm is equal at both orientations; we have aligned the nose tip at 5∘ in this study. The face images captured at ±75∘, ±65∘, . . ., ±5∘ are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, or RPF is rotated in the range of -5∘ to 5∘ with a step size of 1∘. This means that a nose tip aligned at -5∘ is rotated between ((-5∘) + (-5∘) = -10∘) and ((-5∘) + (5∘) = 0∘) to catch the 0∘ position, whereas a nose tip aligned at 5∘ is rotated between ((5∘) + (-5∘) = 0∘) and ((5∘) + (5∘) = 10∘) to catch the 0∘ position. After aligning the nose tip at 0∘, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 1.3∘, then the whole face image is rotated by 1.3∘ and is finally aligned in the xz plane.
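The coarse, fine, and finest stages of steps (2)-(3) amount to a three-level grid search over the rotation angle. A minimal sketch under the same toy geometry as before (scanner on the optic axis, nose tip rotated about the face origin; names are ours):

```python
import numpy as np

def rotate_xz(p, deg):
    # Rotate a 3D point by `deg` degrees in the xz plane (about the y-axis).
    t = np.radians(deg)
    return np.array([p[0] * np.cos(t) + p[2] * np.sin(t),
                     p[1],
                     -p[0] * np.sin(t) + p[2] * np.cos(t)])

def align_nose_tip_xz(nose_tip, scanner, coarse_angles):
    """Coarse-to-fine search for the rotation angle minimizing the L2 norm
    between the rotated nose tip and the scanner: class-specific coarse
    angles in 10-degree steps, then -5..+5 in 1-degree steps, then -1..+1
    in 0.1-degree steps around the running best angle."""
    def refine(center, offsets):
        cands = [center + o for o in offsets]
        dists = [np.linalg.norm(rotate_xz(nose_tip, a) - scanner)
                 for a in cands]
        return cands[int(np.argmin(dists))]

    best = refine(0.0, coarse_angles)                 # coarse, step 10
    best = refine(best, np.arange(-5.0, 5.1, 1.0))    # fine, step 1
    best = refine(best, np.arange(-1.0, 1.01, 0.1))   # finest, step 0.1
    return best
```

For a LOFI the coarse angles would be 0∘ to -30∘ in -10∘ steps, for an LPF 0∘ to +90∘ in 10∘ steps, and so on for the other classes.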

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image aligned in the xz plane is learned first, to align it at a minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) by -1∘, and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that, rotating the nose tip by 1∘ instead of -1∘, a decrease in the L2 norm classifies a probe face image as LUFI, whereas an increase in the L2 norm classifies it as LDFI. In this study we set this parameter to -1∘.

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0∘ to +30∘ downwards (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30∘, the nose tip is rotated between -30∘ and 0∘; if captured at -1∘, it is rotated between -1∘ and 29∘. In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LUFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ do not pass through the 0∘ position; they are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0∘ to -30∘ upwards (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30∘ or 1∘, the nose tip is rotated between 30∘ and 0∘ or between 1∘ and -29∘, respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both cases. However, the nose tips of LDFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level. It is worth mentioning that the face images captured at ±25∘, ±15∘, and ±5∘ are handled


Table 1: Acquisition poses of the face and the respective alignment positions, given in bold in the original (all values in degrees; in each row, the alignment position is the entry nearest 0∘).

LPF / LOFI / LDFI                        RPF / ROFI / LUFI
90 80 70 60 50 40 30 20 10  0      -90 -80 -70 -60 -50 -40 -30 -20 -10  0
89 79 69 59 49 39 29 19  9 -1      -89 -79 -69 -59 -49 -39 -29 -19  -9  1
88 78 68 58 48 38 28 18  8 -2      -88 -78 -68 -58 -48 -38 -28 -18  -8  2
87 77 67 57 47 37 27 17  7 -3      -87 -77 -67 -57 -47 -37 -27 -17  -7  3
86 76 66 56 46 36 26 16  6 -4      -86 -76 -66 -56 -46 -36 -26 -16  -6  4
85 75 65 55 45 35 25 15  5 -5      -85 -75 -65 -55 -45 -35 -25 -15  -5  5
84 74 64 54 44 34 24 14  4 -6      -84 -74 -64 -54 -44 -34 -24 -14  -4  6
83 73 63 53 43 33 23 13  3 -7      -83 -73 -63 -53 -43 -33 -23 -13  -3  7
82 72 62 52 42 32 22 12  2 -8      -82 -72 -62 -52 -42 -32 -22 -12  -2  8
81 71 61 51 41 31 21 11  1 -9      -81 -71 -61 -51 -41 -31 -21 -11  -1  9

Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda, derecha.

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5∘ to 5∘ with a step size of 1∘ to catch the 0∘ position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5∘ to +5∘ with a step size of 1∘ around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane, and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position $P_{ij}$. In order to rule out outliers due to z-axis noise, only pixel values less than a threshold $T$ are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

$P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases}$  (2)

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1∘ to +1∘ with a step size of 0.1∘, using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
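The xy-plane matching criterion can be sketched as a mismatch function between the two half faces; the zero-above-threshold masking follows equation (2), while the function name and grid layout are our own illustrative choices:

```python
import numpy as np

def symmetry_mismatch(lhf, rhf, threshold):
    """L2 mismatch between the LHF depth map and the horizontally flipped RHF
    depth map on the same grid. Depth values above the threshold T are zeroed
    first, per equation (2), to rule out z-axis noise outliers."""
    flipped = np.fliplr(rhf)
    a = np.where(lhf > threshold, 0.0, lhf)
    b = np.where(flipped > threshold, 0.0, flipped)
    return float(np.linalg.norm(a - b))
```

The coarse step evaluates this mismatch for z-axis rotations from -5∘ to +5∘ in 1∘ steps and keeps the angle with the smallest value; the fine step repeats the search in 0.1∘ steps.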

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering; the facial holes were then filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0∘, ±10∘, ±20∘, and ±30∘ to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘ around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values on the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained, to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) The facial feature information carried by a half face image is similar to that of the flipped other half face image, due to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded by rotating a whole face image at -10∘, -20∘, and -30∘; similarly, the LHF region is occluded by rotating the whole face image at 10∘, 20∘, and 30∘. The occluded face regions contribute poorly to the face recognition process, while processing whole faces doubles the computational complexity of the system. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
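The averaging that produces an AHF image from a half-face pair can be sketched as follows; treating a depth of 0 as "no data" is our own simplification for illustrating how the nonoverlapping, complementary regions are retained:

```python
import numpy as np

def synthesize_ahf(lhf, rhf):
    """Average an LHF depth map with the horizontally flipped RHF map on the
    same grid. Where only one half carries data (depth 0 taken as 'no data'
    in this sketch), that half's value is kept, so the complementary,
    nonoverlapping facial regions survive in the AHF image."""
    flipped = np.fliplr(rhf)
    overlap = (lhf > 0) & (flipped > 0)
    return np.where(overlap, (lhf + flipped) / 2.0, lhf + flipped)
```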

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size $h \times w$ is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second-to-last fully connected layer, followed by a normalization process. The output of layer $k$ is a set $A_k = \{a_{k1}, a_{k2}, a_{k3}, \ldots, a_{kn}\}$ of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, in the respective order. The matrix S has a negative polarity, reflecting that lower matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices $S_j$, one for each of the normalized d-MVAHF feature vectors corresponding to AHF images oriented at 0∘, 10∘, 20∘, and 30∘.

(3) Each of the matching-score matrices $S_j$ was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are $\max(S_{j,row})$ and $\min(S_{j,row})$, respectively, then the normalized scores are computed as given in equation (3):

$S_{j,row} = \frac{S_{j,row} - \min(S_{j,row})}{\max(S_{j,row}) - \min(S_{j,row})}$  (3)

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix $S_{row}$, as given in equation (4):

$S_{row} = \sum_{j=1}^{4} w_j S_{j,row}$  (4)


Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images at 0∘, 10∘, 20∘, and 30∘; (b) LHF images at 0∘, -10∘, -20∘, and -30∘.

where $w_j$ represents the weight assigned to the jth MVAHF image, computed from the recognition accuracies obtained from the MVAHF images as given in equation (5):

$w_j = \frac{r_j}{\sum_{j=1}^{4} r_j}$  (5)

where $r_j$ represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can also be used in the test phase: a given PFI is first converted into MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘; each of these MVAHF images is then classified against the gallery, leading to four recognition accuracies which are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0∘ is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix $S_{row}$ was again normalized as $S'_{row}$ using the min-max rule as given in equation (3).

(5) The normalized matching scores obtained from S'^row were utilized in the softmax layer of the AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.
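The normalization and fusion pipeline of steps (3)-(5) can be sketched in a few lines of Python. This is a minimal illustration with toy score matrices; the matrix sizes and accuracy values are assumptions, not values from the experiments.

```python
# Sketch of steps (3)-(5): per-row min-max normalization of the four
# matching-score matrices, followed by accuracy-weighted fusion.

def minmax_row(row):
    """Map one row of raw matching scores to [0, 1] (equation (3)).
    Assumes the row has distinct minimum and maximum values."""
    lo, hi = min(row), max(row)
    return [(s - lo) / (hi - lo) for s in row]

def fuse_scores(score_matrices, accuracies):
    """Accuracy-weighted fusion of normalized score matrices (equations (4)-(5))."""
    weights = [r / sum(accuracies) for r in accuracies]          # equation (5)
    normalized = [[minmax_row(row) for row in m] for m in score_matrices]
    n_rows, n_cols = len(score_matrices[0]), len(score_matrices[0][0])
    return [[sum(w * normalized[j][i][k] for j, w in enumerate(weights))
             for k in range(n_cols)] for i in range(n_rows)]     # equation (4)

# Toy example: four 2x3 score matrices (one per MVAHF view at 0, 10, 20, 30 deg)
mats = [[[0.2, 0.8, 0.5], [0.1, 0.9, 0.4]] for _ in range(4)]
acc = [0.98, 0.97, 0.95, 0.93]   # illustrative per-view recognition accuracies
fused = fuse_scores(mats, acc)
print(fused[0])
```

Because the weights of equation (5) sum to one, each fused row stays inside [0, 1].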

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, SVM aims to employ a hyperplane w·x + b = 0 having maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes (x_1, y_1), ..., (x_k, y_k), where x_i ∈ R^n and y_i ∈ {1, -1}, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, with constraints y_i[(w·x_i) + b] ≥ 1 - ξ_i, ξ_i ≥ 0 for i = 1, ..., k:

\[ \phi(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{k} \xi_i \tag{6} \]

where ξ_i are slack variables used to penalize errors if the data are not linearly separable and C is the regularization constant. Now the sign of the following OSH surface function can be used to classify a test point:

\[ f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b \tag{7} \]

where a_i ≥ 0 are the Lagrangian multipliers corresponding to the support vectors and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel trick used to transform nonseparable data onto a higher dimensional space where they become linearly separable by a hyperplane, x_i is the ith training sample, and x is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the kernel is of the form given in equation (8), where σ² is the spread of the RBF:

\[ K(x, x_i) = \exp\left[-\frac{\|x - x_i\|^2}{2\sigma^2}\right] \tag{8} \]
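Equations (7) and (8) can be sketched directly. The support vectors, labels, and Lagrange multipliers below are made-up values rather than a trained model; in the actual system they would come from SVM training on the MahCos score vectors.

```python
# Sketch of the OSH decision function (equation (7)) with an RBF kernel
# (equation (8)). All numeric values are illustrative assumptions.
import math

def rbf_kernel(x, xi, sigma):
    """K(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2)), equation (8)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-sq / (2 * sigma ** 2))

def osh_decision(x, support, labels, alphas, b, sigma):
    """Sign of f(x) = sum_i y_i a_i K(x, x_i) + b, equation (7)."""
    f = sum(y * a * rbf_kernel(x, xi, sigma)
            for xi, y, a in zip(support, labels, alphas)) + b
    return 1 if f >= 0 else -1

support = [[0.0, 0.0], [1.0, 1.0]]   # hypothetical support vectors
labels = [1, -1]
alphas = [1.0, 1.0]
print(osh_decision([0.1, 0.1], support, labels, alphas, b=0.0, sigma=1.0))
```

A test point near the positive support vector is assigned class 1; one near the negative support vector is assigned class -1.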

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

\[ d_{\mathrm{MahCos}}(s, t) = -\frac{m \cdot n}{|m|\,|n|} = -\frac{\sum_{i=1}^{N} m_i n_i}{\sqrt{\sum_{i=1}^{N} m_i^2}\,\sqrt{\sum_{i=1}^{N} n_i^2}} = -\frac{\sum_{i=1}^{N} (s_i/\sigma_i)(t_i/\sigma_i)}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2}\,\sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}} \tag{9} \]

where m_i = s_i/σ_i, n_i = t_i/σ_i, and σ_i is the standard deviation of the ith dimension. In this formulation, higher similarity yields a higher score. Thus the actual MahCos score is computed as given in equation (10):

\[ D_{\mathrm{MahCos}}(s, t) = 1 - d_{\mathrm{MahCos}}(s, t) \tag{10} \]
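Equations (9) and (10) translate directly into code. The feature vectors and per-dimension standard deviations below are toy values; in the proposed system, s and t would be d-MVAHF feature vectors extracted by AlexNet.

```python
# Sketch of the MahCos score (equations (9)-(10)): cosine score computed in
# the Mahalanobis space, where each dimension is scaled by its standard
# deviation sigma_i. All numeric inputs are illustrative.
import math

def mahcos_score(s, t, sigma):
    """D_MahCos(s, t) = 1 - d_MahCos(s, t), with m_i = s_i/sigma_i, n_i = t_i/sigma_i."""
    m = [si / sg for si, sg in zip(s, sigma)]
    n = [ti / sg for ti, sg in zip(t, sigma)]
    dot = sum(mi * ni for mi, ni in zip(m, n))
    d = -dot / (math.sqrt(sum(mi * mi for mi in m)) *
                math.sqrt(sum(ni * ni for ni in n)))   # equation (9)
    return 1 - d                                        # equation (10)

sigma = [1.0, 2.0, 0.5]
print(mahcos_score([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], sigma))  # identical vectors give 2.0
```

Identical vectors yield the maximum score of 2 (cosine -(-1) inverted by equation (10)), while diametrically opposed vectors yield 0, matching the "higher similarity yields higher score" convention.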

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second neutral image of the whole gallery G. The scores were computed by using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there are G × G (400) total score vectors: G (20) genuine and G² - G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vector of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, with 4 × 1 dimensional one genuine and G - 1 imposter probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
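The assembly of the genuine and imposter training vectors described above can be sketched as follows. `score(i, j, k)` is a hypothetical stand-in for the MahCos score between the d-MVAHF features of gallery images i and j at the kth orientation pair; it is not part of the original pipeline's code.

```python
# Sketch of building the 4 x 1 training score vectors from a G x G score
# matrix per orientation pair: i == j columns are genuine, i != j imposter.

def build_training_vectors(G, score):
    """Collect genuine (i == j) and imposter (i != j) 4-dimensional vectors."""
    genuine, imposter = [], []
    for i in range(G):
        for j in range(G):
            vec = [score(i, j, k) for k in range(4)]  # rows 1-4 of matrix T
            (genuine if i == j else imposter).append(vec)
    return genuine, imposter

# Toy score function: high when an image is compared with itself
toy = lambda i, j, k: 0.9 if i == j else 0.1
gen, imp = build_training_vectors(20, toy)
print(len(gen), len(imp))  # 20 genuine and 380 imposter vectors for G = 20
```

For G = 20 this reproduces the counts quoted in the text: G × G = 400 vectors in total, G = 20 genuine and G² - G = 380 imposter.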

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40] are reviewed in the following section, along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai frontal1" and "carai frontal2", are captured under frontal view. Another two are taken where a subject is looking up or down at angles +35° or -35°, named "carai arriba" and "carai abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90° or -90° and are named "carai derecha" and "carai izquierda", respectively. The three nonneutral images, "carai gesto", "carai risa", and "carai sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expression, occlusion, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 subjects [27], including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing the valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing evaluation criterion to assess the alignment accuracy of face images. One method that can be employed is human judgment, but the human judgment method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this study.


Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (normalized L2 norm versus subjects 1-5; series: unaligned GavabDB, unaligned Bosphorus, unaligned UMB-DB, unaligned FRGC v2.0, and aligned).

Figure 12: Example 3D face images (a)-(r): original (rows 1, 3) and aligned (rows 2, 4).

It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and that the mentioned method is a promising automatic criterion to check alignment accuracy.

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images: GavabDB cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; UMB-DB (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and FRGC v2.0 (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.
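The L2 norm minimization criterion can be sketched as below. The nose-tip coordinates are illustrative assumptions; the idea is that among candidate poses, the one whose nose tip lies at the minimum L2 norm from the scanner origin is taken as aligned.

```python
# Sketch of the L2 norm evaluation used to judge alignment: compute the
# Euclidean distance of the nose tip from the scanner origin for each
# candidate pose and pick the minimizer. Coordinates are toy values.
import math

def l2_norm(point):
    """Euclidean (L2) distance of a 3D point from the origin."""
    return math.sqrt(sum(c * c for c in point))

def best_alignment(candidate_nose_tips):
    """Index of the candidate pose minimizing the nose-tip L2 norm."""
    norms = [l2_norm(p) for p in candidate_nose_tips]
    return norms.index(min(norms))

tips = [(3.0, 1.0, 80.0), (0.5, 0.2, 78.0), (2.0, 2.5, 79.0)]
print(best_alignment(tips))  # the second pose has the smallest norm
```

In the actual PCF algorithm this minimization is performed over the coarse-to-fine rotation steps described earlier, first for the nose tip alone and then transferred to the whole face.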

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

| Proposed methodology | FF (U/W) | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W) | RPF (U/W) | Verification rates |
| d-MVWF  | 96.7/100  | 96.7/100  | 95.1/98.4 | -         | -         | 100  |
| d-MVLHF | 95.1/98.4 | 93.4/96.7 | 93.4/96.7 | 91.8/95.1 | -         | 96.7 |
| d-MVRHF | 93.4/96.7 | 95.1/98.4 | 91.8/95.1 | -         | 80.3/83.6 | 98.4 |
| d-MVAHF | 96.7/100  | 96.7/100  | 95.1/98.4 | -         | -         | 100  |

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

| Proposed methodology | Bosphorus FF (U/W) | YR¹ < 90°, 525 images (U/W) | YR = 90°, 210 images (U/W) | Overall, 1365 images (U/W) | UMB-DB frontal face (U/W) |
| d-MVWF  | 97.1/100  | 92.2/95.4 | -         | 93.1/96.0 | 96.5/99.3 |
| d-MVLHF | 95.2/98.1 | 91.4/94.5 | 84.3/87.1 | 91.8/94.9 | 93.7/97.2 |
| d-MVRHF | 96.2/99.0 | 91.0/94.1 | -         | 91.3/94.4 | 94.4/97.9 |
| d-MVAHF | 97.1/100  | 92.2/95.4 | -         | 93.1/96.0 | 96.5/99.3 |

¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, to follow the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as a second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation for the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


Figure 13: (a) CMC and (b) ROC curves showing the weighted (w) and unweighted (u) face identification and verification rates, respectively, for the FRGC v2.0 database (curves: d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF, each weighted and unweighted).

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).
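How a verification rate at a fixed FAR (0.1% in the reported ROC results) can be read off from genuine and imposter score lists is sketched below. The score lists are synthetic, and the thresholding rule is one common convention, not necessarily the exact procedure used in the study.

```python
# Sketch: pick the score threshold that admits at most `far` of the imposter
# scores, then report the fraction of genuine scores accepted at it.

def verification_rate_at_far(genuine, imposter, far=0.001):
    """Fraction of genuine scores above the threshold implied by `far`."""
    imp = sorted(imposter, reverse=True)
    k = int(far * len(imp))                 # imposters allowed above threshold
    threshold = imp[k] if k < len(imp) else imp[-1]
    return sum(g > threshold for g in genuine) / len(genuine)

genuine = [0.9, 0.8, 0.95, 0.7]
imposter = [0.2, 0.3, 0.1, 0.4, 0.25]
print(verification_rate_at_far(genuine, imposter))
```

Sweeping `far` over a range of values and plotting the resulting verification rates yields a ROC curve of the kind shown in Figure 13(b).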

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(∑_{j=1}^{n} y_{j-1} x_j² y_j z_j²). Here n represents the number of convolutional layers, y_{j-1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of the AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log(n)). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a personal computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time taken after feature extraction by the AlexNet with its own classifier in face identification is higher than that of the SVM classifier used in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification, whereas the SVM only takes into account the global matching scores, resulting in lower computation time.
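The per-layer cost formula of item (2), O(∑_j y_{j-1} x_j² y_j z_j²), counts one multiplication per filter tap per output location. A small sketch, with illustrative (not exact AlexNet) layer specifications:

```python
# Sketch of the convolutional-layer cost of item (2): for each layer with
# y_prev input channels, x-by-x filters, y filters, and a z-by-z output map,
# the multiply count is y_prev * x^2 * y * z^2. Layer specs are assumptions.

def conv_cost(layers):
    """Total multiplications across layers given as (y_prev, x, y, z) tuples."""
    return sum(y_prev * x * x * y * z * z for y_prev, x, y, z in layers)

# Hypothetical AlexNet-like first two convolutional layers
layers = [(3, 11, 96, 55), (96, 5, 256, 27)]
print(conv_cost(layers))
```

Summing such per-layer counts over all n convolutional layers gives the dominant term of the feature extraction cost, which is why that stage dwarfs the O(log n) SVM classification step.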

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

| Preprocessing | MVAHF synthesis | Feature extraction | Classification (identification / verification) | Total (identification / verification) |
| 0.451 | 0.089 | 1.024 | 0.029 / 0.021 | 1.593 / 1.585 |

Table 5: Recognition accuracies (%) comparison for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

| Algorithms | GavabDB FF | GavabDB rotated looking up | GavabDB rotated looking down | GavabDB LPF | GavabDB RPF | GavabDB verification | Bosphorus FF | Bosphorus YR¹ < 90° | Bosphorus YR = 90° | Bosphorus overall | UMB-DB FF |
| Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27] |
| Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39] |
| Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | - |
| Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2 |
| Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 | 99.0 | 94.1 | - | 94.4 | 97.9 |
| Proposed d-MVWF / d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 | 100 | 95.4 | - | 96.0 | 99.3 |

¹YR is yaw rotation (along the y-axis in the xz plane).
²LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition, with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is possible because of the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies (%) comparison for the proposed and existing approaches using the FRGC v2.0 database.

| | Existing [17] | [41] | [42] | [43] | [47] | [62] | [63] | Proposed d-MVLHF | d-MVRHF | d-MVWF / d-MVAHF |
| Face identification | 98.7 | 96.1 | 93.8 | 98 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8 |
| Face verification | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6 |

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of only the nose tip, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) A comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16%, and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of the noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to the better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that the integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) The experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.), whereas the later layers tend to learn high level features, like shapes and objects, built on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm was employed using the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to aligning the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) the experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images so that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Heidelberg, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.



uses "one minus the cosine of the angles" among the PCA model parameters as the matching score. The experiments were performed using the FRGC face database, and verification rates of 92–96% at 0.001 FAR and rank-1 identification accuracies between 94% and 95% were obtained.

The study [51] proposed a fully automatic face recognition system using multiview 2.5D facial images. The approach employs a feature extractor using the directional maximum to find the nose tip and pose angle simultaneously. Face images are recognized using an ICP based approach corresponding to the best located nose tip. The experiments were performed on the MSU and UND databases, obtaining 96.2% and 97% identification rates, respectively.

The study [34] proposed a Collective Shape Difference Classifier (CSDC) based approach using a summed confidence as a similarity measure. The authors computed a Signed Shape Difference Map (SSDM) between two aligned 3D faces as a mediate depiction for comparison of facial shapes. They used three types of features to encode the characteristics and local similarity between them, and constructed three strong classifiers from the most discriminative local facial features by boosting and training them as weak classifiers. The experiments were carried out on FRGC v2.0, yielding verification rates better than 97.9% at 0.001 FAR and rank-1 recognition rates above 98%.

A study based on fusion of results acquired from several overlapping facial regions is proposed in the paper [15], employing decision level fusion (majority voting). A PCA-LDA based method was used for extraction of features, whereas the likelihood ratio was used as the matching criterion to classify the individual regions. The author conducted experiments using the FRGC v2.0 3D database to evaluate the efficacy of the algorithm and reported a 99% rank-1 recognition rate and a 94.6% verification rate at 0.1% FAR, respectively.

Another fusion based study is given in the paper [52], equipped with an approach where match scores of each subject were combined for both 2D albedo and depth images. Experimental results are reported employing PCA, LDA, and Nonnegative Matrix Factorization (NMF) based subspace methods, as well as Elastic Bunch Graph Matching (EBGM). Among the experiments, the best results were reported for sum rule based score level fusion. The authors achieved 89% recognition accuracy on a database of 261 subjects.

A recent region based study [27] proposed a method to handle occlusions covering the facial surface, employing two databases containing facial images with realistic occlusions. The authors addressed two problems, namely, missing data handling and occlusions, and improved the classification accuracy at score level using the product rule. In the experiments, 100% classification results were obtained for neutral subsets, whereas in the same study the pose, expression, and occlusion subsets achieved relatively low classification accuracies.

The study [53] proposed a facial recognition system (FRS) which employed fusion of three face classifiers using feature and match score level fusion methods. The features used by the classifiers were extracted at facial contours around the inner eye corners and the nose tip. The classification task was performed in LDA subspace using a Euclidean distance based 1-NN classifier. Experiments were performed on a coregistered 2D-3D image database acquired from 116 subjects, and a rank-1 recognition rate of 99.09% was obtained by the authors.

A prominent algorithm based on fusion of 2D and 3D features is proposed in the study [54], which uses PCA employing canonical correlation analysis (CCA) to learn the mapping between a 2D image and its respective 3D scan. The algorithm is capable of classifying a probe image (whether 2D or 3D) by matching it to a gallery image modeled by fusion of 2D and 3D modalities containing features from both sides. The authors performed experiments using a database of 115 subjects which contains neutral and expressive pairs of 2D images and 3D scans. They employed a Euclidean distance classifier for the classification and obtained 55% classification accuracy using the CCA algorithm alone. Their results were improved to 85% by using the CCA-PCA algorithm.

The study [17] is a representative work of region based face recognition methods. The study proposed the use of a facial representation based on the dual-tree complex wavelet transform (DT-CWT) and six subregions. An NN classifier was employed in the classification stage, and the authors achieved an identification rate of 98.6% for neutral faces on the FRGC v2.0 database. Similarly, a verification rate of 99.53% at 0.1% FAR was obtained for neutral faces on the same database.

A recent circular region based study [47] proposed an effective 3D face keypoint detection and matching framework using three principal curvature measures. The local shape of the face around each 3D keypoint was comprehensively described by histograms of the principal curvature measures. Similarity comparison between facial surfaces was established by matching local shape descriptors through a sparse representation based reconstruction method and score level fusion. The evaluation of the algorithm was performed on the GavabDB, FRGC v2.0, and Bosphorus databases, obtaining 100% (neutral subset), 99.6% (neutral subset), and 98.6% (pose subset) recognition rates, respectively.

The proposed study is focused on aligning the PFI employing the PCF alignment algorithm. It aims to enhance classification accuracies using complementary information obtained from d-MVAHF-based features acquired from synthesized MVAHF images. The results obtained from our proposed methodology are better than those of the state-of-the-art studies [17, 19, 27, 41–44] in terms of all the evaluation criteria employed by these studies.

3. Materials and Methods

The proposed system consists of face alignment, identification, and verification components, implemented through the PCF alignment algorithm, d-MVAHF, and d-MVAHF-SVM-based methodologies, respectively. The following sections explain the proposed algorithm in detail.

3.1. The Proposed PCF Alignment Algorithm. An illustration of the PCF alignment algorithm is presented in Figure 2(a). It employs a nose tip heuristic in the pose learning step and aligns the PFI in the xz, yz, and xy planes separately. The procedure to determine the nose tip is described in the following paragraphs.

Figure 2: The proposed framework. (a) PCF alignment algorithm; (b) d-MVAHF-based face identification algorithm; (c) d-MVAHF-SVM-based face verification algorithm.

Figure 3: Examples of incorrectly detected nose tips on (a, b) ears, (c) lips area, (d) z-axis noise, (e) forehead hairs; nose templates: (f) frontal, (g) left, (h) right.

3.1.1. Nose Tip Detection Technique. Nose tip detection is a specific facial feature detection problem in depth images. The study [55] proposed a nose tip detection technique for FF images based on histogram initialization and triangle fitting and obtained a detection rate of 99.43% on the FRGC v2.0 database. In contrast to the study [55], the proposed study marks the nose tip as the nearest captured point from the 3D scanner to the face, and it is used to localize, align, and crop the PFI. Several problems were faced in detecting the nose tip, as follows.

One of the problems was incorrect nose tip detection in LPF or RPF images, where it was detected on the ears or some other facial parts, as shown on the ear of the RPF of subject GavabDB cara26 derecha and the ear of the LPF of subject GavabDB cara26 izquierda in Figures 3(a) and 3(b), respectively. In order to handle this problem, the PFI was first classified as FF, LPF, or RPF using a convolutional neural network (CNN), and then the nose tip was detected employing a different strategy for each of FF, LPF, and RPF. The CNN was trained for the three-class FF/LPF/RPF classification task. The PFI was used as input to the CNN, which produced an N dimensional vector as the output, where N is the number of classes. The CNN architecture comprised two convolutional layers, each followed by batch normalization and max pooling stages, and two fully connected layers at the end: the first contained 1024 units, while the second, with three units, acted as the output layer with the softmax function. The architecture of the CNN for a PFI is shown in Figure 4. The CNN classifies the PFI of size h × w as FF, LPF, or RPF using the final feature vector S = (S_p1, S_p2, ..., S_p(h_p w_p)) computed for layer p. Based on the classification of the PFI, the nose tip is determined as follows.

(1) For FF images, the facial point at the minimum distance from the 3D scanner along the z-axis is marked as the nose tip.

(2) For LPF, the facial point having the minimum coordinate value along the x-axis (xmin) is defined as the nose tip.

(3) For RPF, the facial point having the maximum coordinate value along the x-axis (xmax) is marked as the nose tip.
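The three pose-dependent selection rules can be sketched in a few lines. This is a hypothetical helper, not the paper's code: the point cloud layout and axis sign conventions are assumptions (scanner looking down the +z axis at an N x 3 array of (x, y, z) points).

```python
import numpy as np

def detect_nose_tip(points, pose):
    """Select the nose tip from an N x 3 point cloud (x, y, z) according to
    the pose class predicted by the CNN: 'FF', 'LPF', or 'RPF'.
    Illustrative sketch; axis conventions are assumed, not taken from the paper."""
    if pose == "FF":    # frontal face: point nearest to the scanner along z
        idx = np.argmin(points[:, 2])
    elif pose == "LPF": # left profile: minimum x coordinate (xmin)
        idx = np.argmin(points[:, 0])
    elif pose == "RPF": # right profile: maximum x coordinate (xmax)
        idx = np.argmax(points[:, 0])
    else:
        raise ValueError("pose must be 'FF', 'LPF', or 'RPF'")
    return points[idx]
```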

Another problem of the nose tip detection process was incorrect detection of the nose tip in those subjects which were captured with leaning forward or backward faces. In the leaning forward faces the nose tip was detected on the forehead, whereas in leaning backward faces it was detected on the chin or lips area (see Figure 3(c) for subject FRGC v2.0 04233d510). Similarly, noise scenarios played an adverse role in detecting the nose tip. For example, in some of the face images the z-axis noise occurring in the face acquisition process was marked as the nose tip, as shown in Figure 3(d) for the subject FRGC v2.0 04217d461. Another such scenario involved female subjects, where hairs on the forehead or spread around the neck or ears were marked as the nose tip, as shown in Figure 3(e) for the subject FRGC v2.0 04470d297.

Such problems were handled by searching for the nose tip in an approximate Region of Interest (ROI). The ROI on the already classified FF, LPF, or RPF images was determined by measuring two features: (i) the maximum value of the depth map histogram and (ii) the maximum value of the correlation coefficient of Normalized Cross Correlation (NCC). The former feature was measured using the z, -x, and x depth map histograms for FF, LPF, and RPF, respectively, whereas the latter was measured by correlating the corresponding frontal, left, or right oriented nose templates (please see Figures 3(f), 3(g), and 3(h) for subject GavabDB cara26 frontal2, izquierda, and derecha, respectively) with the FF, LPF, or RPF images. The nose templates were selected from ten randomly chosen subjects, five male and five female, from the GavabDB database on satisfactory experimental results. For measuring the depth map histograms and correlation coefficient values, the PFI was rotated between 40° and -40° with a step size of -40° around the x-axis, adjusting the y-axis orientation from 40° to -40° with the same step size, resulting in nine facial orientations. The intuition behind this strategy is to search for an upright position of the face, because for such a position the maximum number of depth values accumulates into a single bin of the depth map histogram and the correlation coefficient of the NCC returns the maximum value among all nine facial positions. Consequently, the nose tip was correctly detected as the nearest captured point from the 3D scanner to the face using an approximate ROI.
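The histogram-peak part of this upright-position search can be sketched as follows. This is an illustrative approximation only: it shows just the first feature (the depth map histogram peak), omits the NCC template matching, and the rotation conventions, bin count, and quantization step are assumptions rather than the paper's settings.

```python
import numpy as np

def rotation_xyz(ax_deg, ay_deg):
    """Rotation about x by ax_deg, then about y by ay_deg (assumed order)."""
    ax, ay = np.radians(ax_deg), np.radians(ay_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    return Ry @ Rx

def most_upright_pose(points, angles=(40, 0, -40), bins=32):
    """Among the nine x/y orientations (40 deg to -40 deg, step -40 deg),
    return the one whose depth histogram has the tallest bin: depth values
    pile into a single bin when the face is upright."""
    best, best_peak = None, -1
    for ax in angles:
        for ay in angles:
            z = (points @ rotation_xyz(ax, ay).T)[:, 2]
            z = np.round(z, 6)  # quantize depths, as a depth map would
            peak = np.histogram(z, bins=bins)[0].max()
            if peak > best_peak:
                best_peak, best = peak, (ax, ay)
    return best
```

For a face tilted by -40° about the x-axis, the search recovers the (40°, 0°) correction, since that orientation stacks all depths into one histogram bin.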

Figure 4: Illustration of the CNN for the FF, LPF, and RPF classification task.

The proposed algorithm correctly detected the nose tips of face images from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, including all those cases where the nose tip would otherwise be incorrectly detected at the forehead, lips, chin, LPF, or RPF, as detailed in Figure 5.

Figure 5: Number of subjects with incorrectly detected nose tips, by facial region (forehead, lips, chin, LPF, RPF) and database (GavabDB, Bosphorus, UMB-DB, FRGC v2.0), without employing the proposed nose tip detection technique.

3.1.2. Face Alignment Algorithm. It was mentioned at the start of this section that the PCF alignment algorithm aligns the PFI in the xz, yz, and xy planes separately. The alignment in the xz and yz planes employs L2 norm minimization calculated between the nose tip and the 3D scanner. The alignment in the xy plane employs a different strategy, based on L2 norm minimization calculated between the LHF image and the flipped RHF image.
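The xy-plane (LHF versus flipped RHF) criterion can be sketched as a simple cost function on a depth map. This is an illustrative sketch, not the paper's implementation: the depth map representation, the known split column, and the function name are assumptions.

```python
import numpy as np

def symmetry_cost(depth, nose_col):
    """L2 norm between the left-half face and the horizontally flipped
    right-half face of a depth map, split at the nose tip column.
    The in-plane (xy) roll angle minimizing this cost aligns the face,
    since a frontal upright face is approximately bilaterally symmetric."""
    # use equal-width halves on both sides of the nose tip column
    w = min(nose_col, depth.shape[1] - nose_col - 1)
    lhf = depth[:, nose_col - w:nose_col]
    rhf = depth[:, nose_col + 1:nose_col + 1 + w]
    return np.linalg.norm(lhf - np.fliplr(rhf))
```

In use, the PFI would be rotated in the xy plane over a small angular range and the rotation with the smallest `symmetry_cost` retained.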

In order to explain the PCF alignment algorithm in the xz and yz planes, the PFI is shown in Figure 6 with three nose tip positions, 1, 2, and 3, in each plane separately. Intuitively, it can be observed in Figure 6 that the face image is aligned when the nose tip is set in line with the optic axis of the 3D scanner, at position 1. Conversely, when it is not in line with the optic axis, at position 2 or 3, the face image is not aligned. It can also be observed that the L2 norm at nose tip position 1 is a perpendicular from the nose tip to the 3D scanner, which is not the case at positions 2 and 3. The perpendicular distance from a point to a line is always the shortest, which leads to the conclusion that when the PFI is aligned at position 1, the L2 norm is minimal, shorter than the corresponding L2 norms at positions 2 and 3. Therefore, alignment of the PFI causes an essential reduction in the L2 norm computed between the nose tip and the 3D scanner. The L2 norm

between nose tip position 1, N(m₁, n₁), and the 3D scanner point S(m₀, n₀) is calculated as given in equation (1):

d₂ = √((m₁ − m₀)² + (n₁ − n₀)²)  (1)
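Equation (1) transcribes directly to code; the helper name below is hypothetical:

```python
import numpy as np

def l2_norm(nose_tip, scanner):
    """Equation (1): Euclidean (L2) distance between the nose tip N(m1, n1)
    and the 3D scanner point S(m0, n0)."""
    m1, n1 = nose_tip
    m0, n0 = scanner
    return np.sqrt((m1 - m0) ** 2 + (n1 - n0) ** 2)
```

For example, a nose tip at (3, 4) and a scanner point at the origin give a norm of 5.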

3.1.3. Alignment in the xz Plane

(1) Pose Learning. First of all, the capture pose of the probe face image is learned to determine whether to rotate it clockwise or anticlockwise to align it at the minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated clockwise by -1° and the corresponding L2 norm is measured between the nose tip and the 3D scanner. For example, a nose tip oriented at -1° or 30° is rotated clockwise to -2° or 29°, respectively, to measure the L2 norm. It is notable that a negative angle of rotation (e.g., -2°) turns a probe face image (Figure 7(a)) clockwise in the xz and yz planes and anticlockwise in the xy plane, as shown in Figures 7(b)–7(d).

As a result of the clockwise rotation, if the L2 norm decreases (Figure 8(a)), the probe face image is classified as a left oriented face image (LOFI) (Figure 8(c)). Similarly, if the L2 norm increases (Figure 8(b)), the probe face image is classified as a right oriented face image (ROFI), as shown in Figure 8(d). Please note that if the nose tip is instead rotated by 1° rather than -1°, a decrease in the L2 norm classifies the probe face image as a ROFI, whereas an increase classifies it as a LOFI. In this study, we set this parameter to -1°.
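The pose learning rule can be sketched in 2D as follows. This is a minimal sketch under stated assumptions: the scanner is placed on the z-axis, the nose tip is rotated about the origin, and the sign conventions are chosen here so that a decreasing norm maps to LOFI; they are illustrative, not the paper's exact geometry.

```python
import numpy as np

def rotate2d(p, deg):
    """Rotate a 2D point about the origin by deg degrees (counterclockwise
    for positive deg, under this sketch's convention)."""
    a = np.radians(deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ p

def learn_pose(nose_tip, scanner, probe_deg=-1.0):
    """Pose learning in the xz plane: rotate only the nose tip by -1 degree
    and compare the L2 norm to the scanner (equation (1)) before and after.
    A decrease classifies the probe as LOFI, an increase as ROFI."""
    d_before = np.linalg.norm(nose_tip - scanner)
    d_after = np.linalg.norm(rotate2d(nose_tip, probe_deg) - scanner)
    return "LOFI" if d_after < d_before else "ROFI"
```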

8 Mathematical Problems in Engineering

31

2

Optic axis

(a)

2

1

3

Optic axis

(b)

Figure 6 PCF alignment algorithm showing an aligned image at minimum L2 norm in (a) xz (b) yz plane


Figure 7: (a) 3D scan of subject FRGC v2.0 04233d396 rotated in the (b) xz, (c) yz, and (d) xy planes at -2∘.


Figure 8: (a, b) Pose learning in the xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0 04221d553; (e, f) subject GavabDB cara1 izquierda/derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LOFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; therefore, they are aligned at -1∘, -2∘, -3∘, -4∘, -5∘ (or +5∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) to (-30∘ + 30∘ = 0∘) and (-1∘ + 0∘ = -1∘) to (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of ROFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘ (or -5∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LPF captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘ (or -5∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iv) RPF: the nose tip of an RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. If an RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of RPF captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘ (or +5∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

Please note that, for a ROFI captured at -25∘, a LOFI captured at 25∘, an LPF captured at -85∘, or an RPF captured at 85∘, the nose tip can get aligned at 5∘ or -5∘ because the minimum L2 norm is equal at both orientations; however, we have aligned the nose tip at 5∘ in this study. The face images captured at ±75∘, ±65∘, …, ±5∘ are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, or RPF is rotated in the range of -5∘ to 5∘ with a step size of 1∘. This means that a nose tip aligned at -5∘ is rotated between ((-5∘) + (-5∘) = -10∘) and ((-5∘) + (5∘) = 0∘) to catch the 0∘ position. On the other hand, a nose tip aligned at 5∘ is rotated between ((5∘) + (-5∘) = 0∘) and ((5∘) + (5∘) = 10∘) to catch the 0∘ position. After aligning the nose tip at 0∘, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 1.3∘, then the whole face image is rotated by 1.3∘ and is finally aligned in the xz plane.
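The coarse-to-fine schedule above (10∘ steps, then 1∘, then 0.1∘, each time keeping the angle of minimum L2 norm) can be sketched as a nested search. This is a simplified stand-in, not the paper's implementation: the measured L2 norm is modelled here by a proxy that is monotone in the absolute residual pose, so the search simply minimises the residual angle:

```python
def align_angle(capture_angle, steps=(10.0, 1.0, 0.1)):
    """Coarse-to-fine search for the rotation that zeroes the capture
    pose. At each resolution, candidate rotations around the current
    estimate are scored and the minimiser is kept."""
    def l2_proxy(rotation):          # stand-in for the measured L2 norm
        return abs(capture_angle + rotation)

    rotation = 0.0
    for step in steps:
        # sweep candidates spanning +/- 9 steps around the estimate
        candidates = [rotation + k * step for k in range(-9, 10)]
        rotation = min(candidates, key=l2_proxy)
    return rotation
```

For a face captured at 28.7∘ the search returns roughly -28.7∘: coarse alignment lands at -30∘, the 1∘ pass refines to -29∘, and the 0.1∘ pass reaches -28.7∘.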

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image aligned in the xz plane is learned first, to align it at a minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) by -1∘ and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that, rotating the nose tip by 1∘ instead of -1∘, a decrease in the L2 norm classifies a probe face image as a LUFI, whereas an increase classifies it as an LDFI. In this study we set this parameter to -1∘.

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0∘ to +30∘ downwards (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30∘, the nose tip is rotated between -30∘ and 0∘. If a LUFI is captured at an orientation of -1∘, the nose tip is rotated between -1∘ and 29∘. In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LUFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ do not pass through the 0∘ position. They are aligned at 1∘, 2∘, 3∘, 4∘, 5∘ (or -5∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) LDFI: the nose tip of an LDFI is rotated in the range of 0∘ to -30∘ upwards (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For an LDFI captured at an orientation of 30∘ or 1∘, the nose tip is rotated between 30∘ to 0∘ and 1∘ to -29∘, respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of LDFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘ (or +5∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level. It is worth mentioning that the face images captured at ±25∘, ±15∘, and ±5∘ are handled


Table 1: Acquisition pose of the face and respective alignment positions (marked in square brackets; given in bold in the original), all values in degrees. Each row lists the positions the nose tip passes through during coarse rotation.

LPF / LOFI / LDFI:
90 80 70 60 50 40 30 20 10 [0]
89 79 69 59 49 39 29 19 9 [-1]
88 78 68 58 48 38 28 18 8 [-2]
87 77 67 57 47 37 27 17 7 [-3]
86 76 66 56 46 36 26 16 6 [-4]
85 75 65 55 45 35 25 15 [5] -5
84 74 64 54 44 34 24 14 [4] -6
83 73 63 53 43 33 23 13 [3] -7
82 72 62 52 42 32 22 12 [2] -8
81 71 61 51 41 31 21 11 [1] -9

RPF / ROFI / LUFI:
-90 -80 -70 -60 -50 -40 -30 -20 -10 [0]
-89 -79 -69 -59 -49 -39 -29 -19 -9 [1]
-88 -78 -68 -58 -48 -38 -28 -18 -8 [2]
-87 -77 -67 -57 -47 -37 -27 -17 -7 [3]
-86 -76 -66 -56 -46 -36 -26 -16 -6 [4]
-85 -75 -65 -55 -45 -35 -25 -15 -5 [5]
-84 -74 -64 -54 -44 -34 -24 -14 [-4] 6
-83 -73 -63 -53 -43 -33 -23 -13 [-3] 7
-82 -72 -62 -52 -42 -32 -22 -12 [-2] 8
-81 -71 -61 -51 -41 -31 -21 -11 [-1] 9


Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda/derecha.

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5∘ to 5∘ with a step size of 1∘ to catch the 0∘ position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5∘ to +5∘ with a step size of 1∘ around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position $P_{ij}$. In order to rule out outliers due to z-axis noise, only pixel values less than a threshold $T$ are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

$$P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases} \quad (2)$$

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1∘ to +1∘ with a step size of 0.1∘ using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
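The mirrored half-face comparison with the z-noise threshold of equation (2) can be sketched as follows. The function name and the assumption that the nose tip column is central are illustrative; values above T are excluded from the norm, matching the effect of zeroing them in equation (2):

```python
import numpy as np

def halfface_l2(depth, threshold):
    """Split a depth image at the (assumed central) nose-tip column,
    flip the right half, and compute the L2 norm over overlapping
    pixels whose values do not exceed the z-noise threshold T."""
    h, w = depth.shape
    mid = w // 2
    left = depth[:, :mid]
    right_flipped = np.fliplr(depth[:, w - mid:])
    valid = (left <= threshold) & (right_flipped <= threshold)
    return np.linalg.norm(left[valid] - right_flipped[valid])
```

A perfectly symmetric face yields a norm of zero, so sweeping the in-plane rotation and keeping the minimiser coarsely aligns the face in the xy plane.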

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. The facial holes were then filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were rotated at 0∘, ±10∘, ±20∘, and ±30∘ to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘ around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can also be shifted along MVLHF images equally). Subsequently, facial depth values on the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) Facial feature information carried by a half face image is similar to that of the flipped other half face image, due to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded by rotating a whole face image at -10∘, -20∘, and -30∘; similarly, the LHF region is occluded by rotating the whole face image at 10∘, 20∘, and 30∘. The occluded face regions contribute poorly to the face recognition process, while the computational complexity of the system is two-fold. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
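The averaging of a left half face with the flipped right half can be sketched as below. This is a simplified illustration, not the paper's implementation: non-overlapping regions are represented as NaN so that a NaN-aware mean retains the complementary information from whichever half has data:

```python
import numpy as np

def synthesize_ahf(lhf, rhf):
    """Average a left-half-face depth image with the horizontally
    flipped right half; NaN marks regions with no data, so lone
    values from either half are retained in the AHF image."""
    rhf_flipped = np.fliplr(rhf)
    stacked = np.stack([lhf, rhf_flipped])
    return np.nanmean(stacked, axis=0)
```

Applying this at each of the four orientations yields the set of MVAHF images described above.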

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set $A_k = \{a_{k1}, a_{k2}, a_{k3}, \ldots, a_{kn}\}$ of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, in the respective order. The matrix S has a negative polarity, reflecting that lower values of matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices S_j, one for each of the normalized d-MVAHF feature vectors corresponding to AHF images oriented at 0∘, 10∘, 20∘, and 30∘.

(3) Each of the matching-score matrices S_j was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row specific values of the raw matching scores are max(S_{j,row}) and min(S_{j,row}), respectively, then the normalized scores are computed as given in equation (3):

$$S'_{j,row} = \frac{S_{j,row} - \min(S_{j,row})}{\max(S_{j,row}) - \min(S_{j,row})} \quad (3)$$

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix S_row, as given in equation (4):

$$S_{row} = \sum_{j=1}^{4} w_j S_{j,row} \quad (4)$$



Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images; (b) LHF images.

where w_j represents the weight assigned to the jth MVAHF image, computed from the recognition accuracies obtained from the MVAHF images as given in equation (5):

$$w_j = \frac{r_j}{\sum_{j=1}^{4} r_j} \quad (5)$$

where r_j represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can be used in the test phase, as a given PFI is first converted into MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. Each of the mentioned MVAHF images is then classified against the gallery, leading to four recognition accuracies which are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0∘ is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix S_row was again normalized as S′_row using the min-max rule as given in equation (3).

(5) The normalized matching scores obtained from S′_row were utilized in the Softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.
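Steps (3) and (4) above can be sketched as follows; the function names and toy score matrices are illustrative assumptions, while the formulas follow equations (3)–(5):

```python
import numpy as np

def minmax_rows(scores):
    """Equation (3): min-max normalise each row of a matching-score
    matrix to the interval [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    return (scores - lo) / (hi - lo)

def fuse_scores(score_matrices, accuracies):
    """Equations (4)-(5): weighted score-level fusion of the four
    per-orientation matrices; the weights are the per-view recognition
    accuracies normalised to sum to one."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()
    return sum(wj * minmax_rows(s) for wj, s in zip(w, score_matrices))
```

The view with the highest recognition accuracy thus contributes the largest share to the fused matching-score matrix.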

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, SVM aims to employ a hyperplane $w \cdot x + b = 0$ having maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes $(x_1, y_1), \ldots, (x_i, y_i)$, where $x_i \in R^n$ and $y_i \in \{1, -1\}$, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, with constraints $y_i[(w \cdot x_i) + b] \ge 1 - \xi_i$, $\xi_i \ge 0$ for $i = 1, \ldots, k$:

$$\Phi(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{k} \xi_i \quad (6)$$

where ξ_i are slack variables used to penalize errors if the data are not linearly separable and C is the regularization constant. Now, the sign of the following OSH surface function can be used to classify a test point:

$$f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b \quad (7)$$

where $a_i \ge 0$ are the corresponding support vector Lagrangian multipliers and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel trick used to transform nonseparable data onto a higher dimensional space where they become linearly separable by a hyperplane, $x_i$ is the ith training sample, and x is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; it is of the form given in equation (8), where $\sigma^2$ is the spread of the RBF:

$$K(x, x_i) = \exp\left[-\frac{\|x - x_i\|^2}{2\sigma^2}\right] \quad (8)$$
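Equation (8) can be sketched directly (the function name is an assumption for illustration):

```python
import numpy as np

def rbf_kernel(x, xi, sigma):
    """Equation (8): RBF kernel exp(-||x - xi||^2 / (2 sigma^2)).
    Returns 1.0 for identical vectors and decays with distance."""
    x, xi = np.asarray(x, float), np.asarray(xi, float)
    return np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2))
```

The spread σ controls how quickly similarity decays with feature-space distance, which is the parameter tuned alongside C when training the RBF SVM.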

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the Cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

$$d_{MahCos}(s, t) = -\frac{m \cdot n}{|m||n|} = -\frac{\sum_{i=1}^{N} m_i n_i}{\sqrt{\sum_{i=1}^{N} m_i^2}\sqrt{\sum_{i=1}^{N} n_i^2}} = -\frac{\sum_{i=1}^{N} (s_i/\sigma_i)(t_i/\sigma_i)}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2}\sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}} \quad (9)$$

where $m_i = s_i/\sigma_i$, $n_i = t_i/\sigma_i$, and $\sigma_i$ is the standard deviation of the ith dimension. In this case higher similarity yields a higher score. Thus, the actual MahCos score is computed as given in equation (10):

$$D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t) \quad (10)$$
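Equations (9) and (10) can be sketched as follows; the function name is an assumption, and the per-dimension standard deviations are passed in explicitly:

```python
import numpy as np

def mahcos_score(s, t, sigma):
    """Equations (9)-(10): cosine score in the Mahalanobis space,
    where each dimension is scaled by its standard deviation.
    D = 1 - d ranges from 0 (opposite) to 2 (identical)."""
    m = np.asarray(s, float) / sigma
    n = np.asarray(t, float) / sigma
    d = -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))
    return 1.0 - d
```

Identical vectors score 2.0 and orthogonal vectors score 1.0, consistent with higher similarity yielding a higher score.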

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second


neutral image of the whole gallery G. The scores were computed by using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, …, G}. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) were used to populate rows 1 to 4 of the probe score matrix P, with 4 × 1 dimensional one genuine and G − 1 imposter probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
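The arrangement of genuine and imposter training vectors described above can be sketched as follows; the array layout (a 4 × G × G stack of per-orientation score matrices) and function name are illustrative assumptions:

```python
import numpy as np

def build_training_vectors(T):
    """Split a stack of per-orientation score matrices T (shape
    4 x G x G, one G x G matrix per orientation 0/10/20/30 deg) into
    genuine (i == j) and imposter (i != j) 4-dimensional SVM training
    vectors, as described for the training score matrix T."""
    _, G, _ = T.shape
    genuine = [T[:, i, i] for i in range(G)]
    imposter = [T[:, i, j] for i in range(G) for j in range(G) if i != j]
    return np.array(genuine), np.array(imposter)
```

For a gallery of G = 20 this yields 20 genuine and 380 imposter vectors, matching the counts quoted in the text.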

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section along with the description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB [36] database contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai frontal1" and "carai frontal2", are captured under frontal view. Another two are taken where a subject is looking up or down at angles +35∘ or -35∘, named "carai arriba" and "carai abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90∘ or -90∘ and are named "carai derecha" and "carai izquierda", respectively. The three nonneutral images, "carai gesto", "carai risa", and "carai sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expressions, occlusions, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. A Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publically available license based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing the valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing evaluation criterion to assess the alignment accuracy of face images. One method that can be employed is human judgment, but the human judgment method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this



Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment.


Figure 12: Example 3D face images: original (rows 1, 3); aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and that the mentioned method is a promising automatic criterion to check alignment accuracy.

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images from GavabDB: cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus: (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; UMB-DB: (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and FRGC v2.0: (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) rank-1 identification rates and verification rates (%) using the GavabDB database.

| Proposed methodology | FF (U/W) | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W) | RPF (U/W) | Verification rate |
|---|---|---|---|---|---|---|
| d-MVWF | 96.7/100 | 96.7/100 | 95.1/98.4 | - | - | 100 |
| d-MVLHF | 95.1/98.4 | 93.4/96.7 | 93.4/96.7 | 91.8/95.1 | - | 96.7 |
| d-MVRHF | 93.4/96.7 | 95.1/98.4 | 91.8/95.1 | - | 80.3/83.6 | 98.4 |
| d-MVAHF | 96.7/100 | 96.7/100 | 95.1/98.4 | - | - | 100 |

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

| Proposed methodology | Bosphorus FF (U/W) | YR¹ < 90∘, 525 images (U/W) | YR = 90∘, 210 images (U/W) | Overall, 1365 images (U/W) | UMB-DB frontal face (U/W) |
|---|---|---|---|---|---|
| d-MVWF | 97.1/100 | 92.2/95.4 | - | 93.1/96.0 | 96.5/99.3 |
| d-MVLHF | 95.2/98.1 | 91.4/94.5 | 84.3/87.1 | 91.8/94.9 | 93.7/97.2 |
| d-MVRHF | 96.2/99.0 | 91.0/94.1 | - | 91.3/94.4 | 94.4/97.9 |
| d-MVAHF | 97.1/100 | 92.2/95.4 | - | 93.1/96.0 | 96.5/99.3 |

¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, in line with the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. For the subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves



Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates, respectively, for the FRGC v2.0 database.

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as $O(\sum_{j=1}^{n} y_{j-1} x_j^2 y_j z_j^2)$. Here n represents the number of convolutional layers, $y_{j-1}$ is the number of input channels of the jth layer, $y_j$ is the number of filters of the jth layer, $x_j$ is the spatial size of the filters, and $z_j$ denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of $O(\log n)$. The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification, whereas the SVM only takes into account the global matching scores, resulting in lower computation time.
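As an illustration of item (2), the per-layer term $y_{j-1} x_j^2 y_j z_j^2$ can be summed over AlexNet's five convolutional layers; the layer shapes below follow the standard AlexNet configuration [56], not values stated in this paper.

```python
# (input channels y_{j-1}, filter size x_j, filters y_j, output size z_j)
# for AlexNet's five convolutional layers (Krizhevsky et al. [56]).
ALEXNET_CONV = [
    (3, 11, 96, 55),
    (96, 5, 256, 27),
    (256, 3, 384, 13),
    (384, 3, 384, 13),
    (384, 3, 256, 13),
]

def conv_cost(layers):
    """Multiply count summed over convolutional layers:
    sum_j y_{j-1} * x_j^2 * y_j * z_j^2."""
    return sum(y_prev * x * x * y * z * z for y_prev, x, y, z in layers)

total = conv_cost(ALEXNET_CONV)  # roughly 1.08e9 multiplies per image
```

The convolutional stage dominating the total cost is consistent with feature extraction being the most expensive stage above.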

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing: 0.451; MVAHF synthesis: 0.089; feature extraction: 1.024; classification: 0.029 (face identification) / 0.021 (face verification); total: 1.593 (face identification) / 1.585 (face verification).
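Reading the per-stage digits of Table 4 as seconds with three decimal places, the totals are consistent with the sum of the shared stages plus the setup-specific classification time (a quick sanity check, not code from the paper):

```python
# Stage times in seconds as read from Table 4.
preprocessing, synthesis, features = 0.451, 0.089, 1.024
classify_id, classify_verif = 0.029, 0.021

total_id = preprocessing + synthesis + features + classify_id
total_verif = preprocessing + synthesis + features + classify_verif
# round(total_id, 3) -> 1.593, round(total_verif, 3) -> 1.585
```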

Table 5: Recognition accuracies comparison for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

Column order per row: GavabDB rank-1 identification rates (FF, rotated looking up, rotated looking down, LPF, RPF); GavabDB verification rate; Bosphorus rank-1 identification rates (FF, YR¹ < 90°, YR = 90°, overall); UMB-DB rank-1 identification rate (FF). All values in %.

Existing:
100 [44], 98.4 [44], 96.7 [44], 93.4 [44], 81.9 [44]; verification 82.3 [59]; Bosphorus 100 [27], 81.6 [61], 45.7 [61], 88.6 [61]; UMB-DB 98.7 [27]
100 [46], 98.4 [46], 99.2 [46], 86.9 [26], 70.5 [26]; verification 95.1 [60]; Bosphorus 100 [62], 84.1 [62], 47.1 [62], 91.1 [62]; UMB-DB 98 [39]
100 [47], 100 [47], 98.4 [47], 93.4 [28], 78.7 [28]; verification -; Bosphorus -, 94.8 [63], 57.1 [47], 92.8 [47]; UMB-DB -

Proposed:
d-MVLHF: GavabDB 98.4, 96.7, 96.7, 95.1², -; verification 96.7; Bosphorus 98.1, 94.5, 87.1², 94.9; UMB-DB 97.2
d-MVRHF: GavabDB 96.7, 98.4, 95.1, -, 83.6²; verification 98.4; Bosphorus 99, 94.1, -, 94.4; UMB-DB 97.9
d-MVWF / d-MVAHF: GavabDB 100, 100, 98.4, 95.1, 83.6; verification 100; Bosphorus 100, 95.4, -, 96; UMB-DB 99.3

¹ YR is yaw rotation (along the y-axis in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm; the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent accuracies are attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies comparison (%) for the proposed and existing approaches using the FRGC v2.0 database.

Face identification — existing: 98.7 [17], 96.1 [41], 93.8 [42], 98 [43], 99.6 [47], 98.7 [62], 99.8 [63]; proposed: d-MVLHF 97.9, d-MVRHF 96.8, d-MVWF / d-MVAHF 99.8.
Face verification — existing: 99.5 [17], 97.7 [41], 95.4 [42], 98.3 [43], - [47]; proposed: d-MVLHF 97.6, d-MVRHF 96.4, d-MVWF / d-MVAHF 99.6.

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns only the nose tip, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. The whole face image is then aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Note that aligning the whole face instead of only the nose tip at the cost of 35 rotations would be computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to aligning the nose tip prior to the whole face image.
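The 3 + 11 + 21 rotation budget corresponds to a coarse-to-fine one-dimensional search that narrows the step size down to 0.1°. The sketch below is our illustration for a single plane, with an assumed sweep schedule chosen to reproduce the 3/11/21 candidate counts; the cost function stands in for the paper's L2-norm-minimization criterion.

```python
import numpy as np

def rotate2d(points, theta_deg):
    """Rotate 2-D points (n x 2) about the origin by theta_deg degrees."""
    t = np.radians(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return points @ rot.T

def coarse_to_fine_angle(nose_points, cost, sweeps=((10.0, 1), (1.0, 5), (0.1, 10))):
    """Coarse-to-fine search for the alignment angle in one plane.

    Each sweep refines the best angle of the previous one; with the
    schedule above the candidate counts are 2k + 1 = 3, 11, and 21,
    i.e. the 35 rotations mentioned in the text.  `cost` scores a
    candidate orientation of the nose-tip neighbourhood.
    """
    best = 0.0
    for step, k in sweeps:
        candidates = [best + i * step for i in range(-k, k + 1)]
        best = min(candidates, key=lambda a: cost(rotate2d(nose_points, a)))
    return best
```

Once the best angle is found on the small nose-tip point set, the whole face is rotated exactly once, which is the source of the efficiency gain described above.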

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database.

The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a 71% reduction in computational cost. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) Comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, on the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments on the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to better performing MVAHF images (see equation (5)).

(5) Experimental results suggest that integrating the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because the discriminating information present in high resolution face images is unavailable. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are enhanced into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it is expected to perform better than the existing approaches on low resolution PFIs as well. This is because the initial layers of dCNNs can effectively learn low level features encountered in low resolution images (for example, lines and dots), whereas the later layers tend to learn high level features such as shapes and objects based on the low level features.
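The roughly 71% saving quoted in item (2) follows directly from the view counts (a back-of-the-envelope check): four half-face images carry the data of two whole faces, versus seven whole faces for d-MVWF.

```python
whole_faces_mvwf = 7             # views at 0, +/-10, +/-20, +/-30 degrees
whole_face_equiv_mvahf = 4 / 2   # four half faces = two whole faces

saving = 1 - whole_face_equiv_mvahf / whole_faces_mvwf
# saving = 5/7, i.e. about 71%
```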

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face, (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment, and (iii) a transformation step to align the whole face image incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal as well as profile face images, (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment, (iii) it is computationally very efficient due to aligning the nose tip first, (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition, (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies, (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images, (vii) the weight assignment strategy significantly enhanced the recognition rates, (viii) deeply learned facial features possess more discriminative power compared to handcrafted features, (ix) the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies, (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition, and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images so that the computational complexity of the system is further reduced and overall system performance is enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W, (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx, (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html, and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.
[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear Enhanced Fisher Discriminant Analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.
[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.
[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.
[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.
[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.
[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.
[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124–131, Australia, February 2018.
[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1–5, Tunisia, March 2018.
[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.
[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.
[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.
[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.
[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.
[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.
[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.
[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.
[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.
[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.
[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.
[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.
[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.
[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201–204, France, July 2003.
[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.
[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.
[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.
[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.
[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.
[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.
[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.
[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.


[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.
[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.
[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.
[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.
[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.
[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113–2119, Spain, November 2011.
[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.
[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.
[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.
[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.
[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.
[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.
[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.
[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.
[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.
[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.
[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.
[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.
[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.
[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.
[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.
[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.
[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.
[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.
[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.
[59] M. H. Mahoor, A multi-modal approach for face modeling and recognition, PhD dissertation, 2008.
[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.
[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.
[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.
[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.
[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.
[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.




[Figure 2 diagram: (a) the probe face image is preprocessed in the xy, xz, and yz planes; (b) LHF, AHF, and RHF views are synthesized into MVWF, MVLHF, MVAHF, and MVRHF image sets at 0°, 10°, 20°, and 30° and matched against gallery images through a softmax layer; (c) d-MVAHF feature vectors extracted from layer 7 of AlexNet form the training score matrix T and probe score matrix P fed to the SVM. Rotation schedules include 0° to −30° (LDFI, LOFI), 0° to +30° (LUFI, ROFI), 0° to +90° (LPF), 0° to −90° (RPF), and fine sweeps of −5° to +5° and −1° to +1°.]

Figure 2: The proposed framework. (a) PCF alignment algorithm. (b) d-MVAHF-based face identification algorithm. (c) d-MVAHF-SVM-based face verification algorithm.


Figure 3: Examples of incorrectly detected nose tips on (a, b) ears, (c) lips area, (d) z-axis noise, (e) forehead hairs. Nose templates: (f) frontal, (g) left, (h) right.

3.1.1. Nose Tip Detection Technique. Nose tip detection is a specific facial feature detection problem in depth images. The study [55] proposed a nose tip detection technique for FF images based on histogram initialization and triangle fitting and obtained a detection rate of 99.43% on the FRGC v2.0 database. In contrast to the study [55], the proposed study marks the nose tip as the captured point on the face nearest to the 3D scanner and uses it to localize, align, and crop the PFI. Several problems were faced in detecting the nose tip, as follows.

One of the problems was incorrect nose tip detection in LPF or RPF images, where it was detected on the ears or some other facial parts, as shown on the ear of the RPF of subject GavabDB cara26 derecha and the ear of the LPF of subject GavabDB cara26 izquierda in Figures 3(a) and 3(b), respectively. In order to handle this problem, the PFI was first classified as FF, LPF, or RPF using a convolutional neural network (CNN), and then the nose tip was detected employing three different strategies for each of FF, LPF, or RPF. The CNN was trained for a three-class problem, namely, the FF, LPF, or RPF classification task. The PFI was used as input to the CNN, which produced an N dimensional vector as the output, where N is the number of classes. The CNN architecture was comprised of two convolutional layers, followed by batch normalization and max pooling stages. The CNN also included two fully connected layers at the end; the first one contained 1024 units, while the second fully connected layer, with three units, performed as the output layer with the softmax function. The architecture of the CNN for a PFI is shown in Figure 4. The CNN classifies the PFI of size $h \times w$ as FF, LPF, or RPF using the final feature vector $S = \{S^p_1, S^p_2, \ldots, S^p_{h_p w_p}\}$ computed for the layer $p$. Based on the classification of the PFI, the nose tip is determined as follows.

(1) For FF images, the facial point at the minimum distance from the 3D scanner along the z-axis is marked as the nose tip.

(2) For LPF, the facial point having the minimum coordinate value along the x-axis (xmin) is defined as the nose tip.

(3) For RPF, the facial point having the maximum coordinate value along the x-axis (xmax) is marked as the nose tip.
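The three view-specific rules above can be sketched in a few lines of numpy. The axis conventions assumed here (z measuring distance to the scanner, so the frontal nose tip has minimum z; the function name and its `view` argument are illustrative, not from the paper) are assumptions made for the sketch:

```python
import numpy as np

def detect_nose_tip(points, view):
    """Pick the nose tip from an N x 3 point cloud by view class.

    `view` is 'FF', 'LPF', or 'RPF', as predicted by the CNN classifier.
    Assumes z stores the distance from the scanner, so the frontal nose
    tip is the point with minimum z (nearest to the scanner).
    """
    if view == 'FF':    # nearest captured point to the scanner along z
        return points[np.argmin(points[:, 2])]
    if view == 'LPF':   # minimum x coordinate (xmin)
        return points[np.argmin(points[:, 0])]
    if view == 'RPF':   # maximum x coordinate (xmax)
        return points[np.argmax(points[:, 0])]
    raise ValueError(f"unknown view class: {view}")
```

With a toy cloud of four points, each rule selects a different candidate depending on the predicted view class.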

Another problem of the nose tip detection process was incorrect detection of the nose tip in those subjects which were captured with leaning forward or backward faces. In the leaning forward faces the nose tip was detected on the forehead, whereas in leaning backward faces it was detected on the chin or lips area (see Figure 3(c) for subject FRGC v2.0 04233d510). Similarly, noise scenarios played an adverse role in detecting the nose tip. For example, in some of the face images the z-axis noise occurring in the face acquisition process was marked as the nose tip, as shown in Figure 3(d) for the subject FRGC v2.0 04217d461. Another such scenario concerned female subjects, where hairs on the forehead or spread around the neck or ears were marked as the nose tip, as shown in Figure 3(e) for the subject FRGC v2.0 04470d297.

Such problems were handled by searching for the nose tip in an approximate Region of Interest (ROI). The ROI on the already classified FF, LPF, or RPF images was determined by measuring two features: (i) the maximum value of the depth map histogram and (ii) the maximum value of the correlation coefficient of Normalized Cross Correlation (NCC). The former feature was measured using z, -x, and x depth map histograms for the FF, LPF, and RPF in the respective order, whereas the latter was measured by correlating the corresponding frontal, left, or right oriented nose templates (please see Figures 3(f), 3(g), and 3(h) for subject GavabDB cara26 frontal2, izquierda, and derecha, respectively) with the FF, LPF, or RPF images. The nose templates were selected from ten randomly chosen subjects, five male and five female, from the GavabDB database on satisfactory experimental results. For measuring the depth map histograms and correlation coefficient values, the PFI was rotated between 40∘ and -40∘ with a step size of -40∘ around the x-axis, adjusting the y-axis orientation from 40∘ to -40∘ with the same step size, resulting in nine facial orientations. The intuition behind this strategy is to search for an upright position of the face, because for such a position the maximum number of depth values accumulate into a single bin of the depth map histogram and the correlation coefficient of the NCC returns the maximum value among all nine facial positions. Consequently, the nose tip was correctly detected as the captured point nearest to the 3D scanner using an approximate ROI.

The proposed algorithm correctly detected the nose tips of face images from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, including all those cases where the nose



Figure 4: Illustration of the CNN for the FF, LPF, and RPF classification task.


Figure 5: Incorrectly detected nose tips without employing the proposed nose tip detection technique.

tip was incorrectly detected at the forehead, lips, chin, LPF, or RPF, as detailed in Figure 5.

3.1.2. Face Alignment Algorithm. It was mentioned at the start of this section that the PCF alignment algorithm aligns the PFI in the xz, yz, and xy planes separately. The alignment in the xz and yz planes employs L2 norm minimization calculated between the nose tip and the 3D scanner. The alignment in the xy plane employs a different strategy based on L2 norm minimization calculated between the LHF image and the flipped RHF image.

In order to explain the PCF alignment algorithm in the xz and yz planes, the PFI is shown in Figure 6 with three nose tip positions 1, 2, and 3 in both planes separately. Intuitively, it can be observed in Figure 6 that the face image is aligned when the nose tip is set in line with the optic axis of the 3D scanner at position 1. Conversely, when it is not in line with the optic axis of the 3D scanner, at position 2 or 3, the face image is not aligned. It can be observed in Figure 6 that the L2 norm at nose tip position 1 is a perpendicular from the nose tip to the 3D scanner, which is not the case at nose tip positions 2 and 3. The perpendicular distance from a point to a line is always the shortest, which leads to the conclusion that when the PFI is aligned at position 1, the L2 norm is at its minimum, shorter than the corresponding L2 norms at positions 2 and 3. Therefore, alignment of the PFI causes an essential reduction in the L2 norm computed between the nose tip and the 3D scanner. The L2 norm between nose tip position 1, $N(m_1, n_1)$, and the 3D scanner point $S(m_0, n_0)$ is calculated as given in equation (1):

$d_2 = \sqrt{(m_1 - m_0)^2 + (n_1 - n_0)^2}$ (1)

3.1.3. Alignment in xz Plane

(1) Pose Learning. First of all, the capture pose of the probe face image is learned to determine whether to rotate it clockwise or anticlockwise to align it at the minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated clockwise by -1∘ and the corresponding L2 norm is measured between the nose tip and the 3D scanner. For example, a nose tip oriented at -1∘ or 30∘ is rotated clockwise to -2∘ or 29∘, respectively, to measure the L2 norm. It is notable that a negative angle of rotation (e.g., -2∘) turns a probe face image (Figure 7(a)) clockwise in the xz and yz planes and anticlockwise in the xy plane, as shown in Figures 7(b)-7(d).

As a result of the clockwise rotation, if the L2 norm decreases (Figure 8(a)), the probe face image is classified as a left oriented face image (LOFI) (Figure 8(c)). Similarly, if the L2 norm increases (Figure 8(b)), the probe face image is classified as a right oriented face image (ROFI), as shown in Figure 8(d). Please note that, rotating the nose tip by 1∘ instead of -1∘, a decrease in L2 norm classifies the probe face image as ROFI, whereas an increase in L2 norm classifies it as LOFI. In this study we set this parameter to -1∘.
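The decrease/increase test above can be sketched as follows. The rotation-sign and axis conventions in this sketch (and which sign of x corresponds to a left oriented face) are assumptions chosen to make the example self-consistent, since the paper's clockwise/anticlockwise convention depends on the scanner frame:

```python
import numpy as np

def rotate_xz(point, deg):
    """Rotate a 2D point (x, z) about the origin by `deg` degrees."""
    t = np.radians(deg)
    r = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return r @ point

def classify_orientation(nose_tip_xz, scanner_xz, probe_deg=-1.0):
    """Pose learning in the xz plane: rotate only the nose tip by -1 degree
    and compare L2 norms to the scanner point. A decrease means LOFI, an
    increase means ROFI (the label-to-sign mapping is an assumption)."""
    before = np.linalg.norm(nose_tip_xz - scanner_xz)
    after = np.linalg.norm(rotate_xz(nose_tip_xz, probe_deg) - scanner_xz)
    return 'LOFI' if after < before else 'ROFI'
```

With the scanner on the z-axis, a nose tip that moves toward the axis under the -1∘ probe rotation is classified one way, and its mirror image the other way.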



Figure 6: PCF alignment algorithm showing an aligned image at minimum L2 norm in the (a) xz and (b) yz planes.


Figure 7: (a) 3D scan of subject FRGC v2.0 04233d396 rotated in the (b) xz, (c) yz, and (d) xy planes at -2∘.


Figure 8: (a, b) Pose learning in the xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0 04221d553; (e, f) subject GavabDB cara1 izquierda/derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of the LOFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; therefore, they are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) to (-30∘ + 30∘ = 0∘) and (-1∘ + 0∘ = -1∘) to (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of the ROFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of the LPF captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of the RPF captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

Please note that for a ROFI captured at -25∘, a LOFI captured at 25∘, an LPF captured at -85∘, or a RPF captured at 85∘, the nose tip can get aligned at 5∘ or -5∘, because the minimum L2 norm is equal at both orientations; however, we have aligned the nose tip at 5∘ in this study. The face images captured at ±75∘, ±65∘, . . ., ±5∘ are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, and RPF is rotated in the range of -5∘ to 5∘ with a step size of 1∘. This means that a nose tip aligned at -5∘ is rotated between ((-5∘) + (-5∘) = -10∘) and ((-5∘) + (5∘) = 0∘) to catch the 0∘ position. On the other hand, a nose tip aligned at 5∘ is rotated between ((5∘) + (-5∘) = 0∘) and ((5∘) + (5∘) = 10∘) to catch the 0∘ position. After aligning the nose tip at 0∘, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 1.3∘, then the whole face image is rotated to 1.3∘ and is finally aligned in the xz plane.
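The coarse and fine stages amount to a three-stage 1D minimization: 10∘ steps, then 1∘ steps in a ±5∘ window, then 0.1∘ steps in a ±1∘ window. A generic sketch, where `score` is a hypothetical callback returning the L2 norm measured at a candidate nose tip rotation (not an API from the paper):

```python
import numpy as np

def best_angle(score, candidates):
    """Return the candidate angle that minimizes the score function."""
    return min(candidates, key=score)

def coarse_to_fine_align(score, coarse_range):
    """Three-stage angular search mirroring the paper's procedure:
    coarse 10-degree steps over `coarse_range`, then 1-degree steps in
    [-5, 5] around the coarse optimum, then 0.1-degree steps in [-1, 1]
    around that. `score` maps an angle to the L2 norm being minimized."""
    a = best_angle(score, np.arange(coarse_range[0], coarse_range[1] + 1, 10.0))
    a = best_angle(score, a + np.arange(-5.0, 5.0 + 1e-9, 1.0))
    a = best_angle(score, a + np.arange(-1.0, 1.0 + 1e-9, 0.1))
    return round(a, 1)
```

For a score with a unique minimum at 13∘ and a ±30∘ coarse range, the search lands at 13.0 after the second stage and stays there.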

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image aligned in the xz plane is learned first, to align it at a minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) by -1∘ and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that, rotating the nose tip by 1∘ instead of -1∘, a decrease in L2 norm classifies a probe face image as LUFI, whereas an increase in L2 norm classifies it as LDFI. In this study we set this parameter to -1∘.

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0∘ to +30∘ downwards (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30∘, the nose tip is rotated between -30∘ and 0∘. If a LUFI is captured at an orientation of -1∘, the nose tip is rotated between -1∘ and 29∘. In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of the LUFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ do not pass through the 0∘ position; they are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0∘ to -30∘ upwards (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30∘ or 1∘, the nose tip is rotated between 30∘ to 0∘ and 1∘ to -29∘, respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of the LDFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level. It is worth mentioning that the face images captured at ±25∘, ±15∘, and ±5∘ are handled


Table 1: Acquisition pose of the face and respective alignment positions, given in bold in the original (all values in degrees). Column groups as in the original header: LPF and LOFI/LDFI (left half); RPF and ROFI/LUFI (right half). The bold formatting marking the alignment positions within each row was lost in extraction.

90 80 70 60 50 40 30 20 10 0   |   -90 -80 -70 -60 -50 -40 -30 -20 -10 0
89 79 69 59 49 39 29 19 9 -1   |   -89 -79 -69 -59 -49 -39 -29 -19 -9 1
88 78 68 58 48 38 28 18 8 -2   |   -88 -78 -68 -58 -48 -38 -28 -18 -8 2
87 77 67 57 47 37 27 17 7 -3   |   -87 -77 -67 -57 -47 -37 -27 -17 -7 3
86 76 66 56 46 36 26 16 6 -4   |   -86 -76 -66 -56 -46 -36 -26 -16 -6 4
85 75 65 55 45 35 25 15 5 -5   |   -85 -75 -65 -55 -45 -35 -25 -15 -5 5
84 74 64 54 44 34 24 14 4 -6   |   -84 -74 -64 -54 -44 -34 -24 -14 -4 6
83 73 63 53 43 33 23 13 3 -7   |   -83 -73 -63 -53 -43 -33 -23 -13 -3 7
82 72 62 52 42 32 22 12 2 -8   |   -82 -72 -62 -52 -42 -32 -22 -12 -2 8
81 71 61 51 41 31 21 11 1 -9   |   -81 -71 -61 -51 -41 -31 -21 -11 -1 9


Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda/derecha.

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5∘ to 5∘ with a step size of 1∘ to catch the 0∘ position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5∘ to +5∘ with a step size of 1∘ around the z-axis. For each rotation it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position $P_{ij}$. In order to rule out the outliers due to z-axis noise, only pixel values less than a threshold $T$ are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

$P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases}$ (2)

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1∘ to +1∘ with a step size of 0.1∘ using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
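A minimal sketch of the xy-plane score for one candidate roll angle, assuming the depth map is split at its centre column as a stand-in for the nose-tip column and applying the thresholding of equation (2) before the L2 norm:

```python
import numpy as np

def xy_alignment_score(depth, threshold):
    """Roll-angle score for the xy plane: split the depth map into left
    and right halves, flip the right half, zero out pixels above the
    noise threshold T (equation (2)), and return the L2 norm between
    the two halves. The roll angle with the minimum score gives the
    best left/right symmetry match."""
    h, w = depth.shape
    left = depth[:, : w // 2]
    right_flipped = np.fliplr(depth[:, w - w // 2 :])
    left = np.where(left > threshold, 0.0, left)
    right_flipped = np.where(right_flipped > threshold, 0.0, right_flipped)
    return np.linalg.norm(left - right_flipped)
```

A perfectly symmetric depth map scores 0; a spike above the threshold is suppressed rather than dominating the norm.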

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0∘, ±10∘, ±20∘, and ±30∘ to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘ around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values at the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) The facial feature information carried by a half face image is similar to that of the flipped other half face image, owing to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded by rotating a whole face image at -10∘, -20∘, and -30∘; similarly, the LHF region is occluded by rotating the whole face image at 10∘, 20∘, and 30∘. The occluded face regions contribute poorly to the face recognition process, while processing whole face images doubles the computational complexity of the system. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
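The averaging-with-retention step can be sketched as follows, assuming zeros mark missing depth values in each half image (the paper does not specify its missing-data encoding, so that convention is an assumption of the sketch):

```python
import numpy as np

def averaged_half_face(lhf, rhf):
    """Fuse a left-half depth image with the flipped right half:
    grid positions covered by both halves are averaged, and positions
    covered by only one half are retained as-is, preserving the
    complementary (nonoverlapping) facial information. Zeros mark
    missing data in this sketch."""
    rhf_flipped = np.fliplr(rhf)
    both = (lhf > 0) & (rhf_flipped > 0)
    return np.where(both, (lhf + rhf_flipped) / 2.0, lhf + rhf_flipped)
```

Where only one half has data, adding the two arrays simply passes the valid value through, since the other contributes zero.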

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size $h \times w$ is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropouts for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by the normalization process. The output of layer k is a set $A_k = \{a_{k1}, a_{k2}, a_{k3}, \ldots, a_{kn}\}$ of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the size of the probe and gallery sets in the respective order. The matrix S has a negative polarity, reflecting that lower values of matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices Sj, one for each of the normalized d-MVAHF feature vectors corresponding to AHF images oriented at 0∘, 10∘, 20∘, and 30∘.

(3) Each of the matching-score matrices Sj was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution onto the interval [0, 1]. If the maximum and minimum row specific values of the raw matching scores are $\max(S_{j,row})$ and $\min(S_{j,row})$, respectively, then the normalized scores are computed as given in equation (3):

$S_{j,row} = \frac{S_{j,row} - \min(S_{j,row})}{\max(S_{j,row}) - \min(S_{j,row})}$ (3)
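Equation (3) is ordinary row-wise min-max normalization, e.g.:

```python
import numpy as np

def min_max_rows(S):
    """Row-wise min-max normalization of a matching-score matrix
    (equation (3)): each row is mapped onto [0, 1]."""
    mn = S.min(axis=1, keepdims=True)
    mx = S.max(axis=1, keepdims=True)
    return (S - mn) / (mx - mn)
```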

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix $S_{row}$, as given in equation (4):

$S_{row} = \sum_{j=1}^{4} w_j S_{j,row}$ (4)



Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images; (b) LHF images.

where $w_j$ represents the weight assigned to the jth MVAHF image using the recognition accuracies obtained from the MVAHF images, as given in equation (5):

$w_j = \frac{r_j}{\sum_{j=1}^{4} r_j}$ (5)

where $r_j$ represents the recognition accuracy of the jth MVAHF image against the gallery. We can use the recognition accuracies in the test phase, as a given PFI is first converted into MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. Then each of the mentioned MVAHF images is classified against the gallery, leading to four recognition accuracies, which are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0∘ is maximum, then the corresponding matching score matrix is assigned the maximum weight. The matching score matrix $S_{row}$ was again normalized as $S'_{row}$ using the min-max rule, as given in equation (3).
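Equations (4) and (5) amount to accuracy-weighted score averaging. A small numpy sketch, written generically over the number of views (the paper uses four):

```python
import numpy as np

def fuse_scores(score_mats, accuracies):
    """Weighted score-level fusion (equations (4) and (5)): each
    view-specific matching-score matrix is weighted by its recognition
    accuracy relative to the sum of all accuracies."""
    r = np.asarray(accuracies, dtype=float)
    w = r / r.sum()                      # equation (5)
    return sum(wj * Sj for wj, Sj in zip(w, score_mats))  # equation (4)
```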

(5) The normalized matching scores obtained from $S'_{row}$ were utilized in the Softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, SVM aims to employ a hyperplane $wx + b = 0$ having maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes $(x_1, y_1), \ldots, (x_i, y_i)$, where $x_i \in R^n$ and $y_i \in \{1, -1\}$, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, with constraints $y_i[(w \cdot x_i) + b] \ge 1 - \xi_i$, $\xi_i \ge 0$ for $i = 1, \ldots, k$:

$\phi(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{k} \xi_i$ (6)

where $\xi_i$ are slack variables used to penalize errors if the data are not linearly separable and $C$ is the regularization constant. Now the sign of the following OSH surface function can be used to classify a test point:

$f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b$ (7)

where $a_i \ge 0$ are the corresponding support vector Lagrangian multipliers and $b$ is determined by the above-mentioned optimization problem. In equation (7), $K$ is the kernel trick used to transform nonseparable data onto a higher dimensional space where they become linearly separable by a hyperplane, $x_i$ is the ith training sample, and $x$ is the test sample. It is experimentally observed in this study that the radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the kernel is of the form given in equation (8), where $\sigma^2$ is the spread of the RBF:

$K(x, x_i) = \exp\left[-\frac{\|x - x_i\|^2}{2\sigma^2}\right]$ (8)
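Equations (7) and (8) can be sketched directly. In practice a library SVM would solve for the multipliers and bias; here they are supplied as arguments, so the function names and signatures are illustrative only:

```python
import numpy as np

def rbf_kernel(x, xi, sigma):
    """RBF kernel of equation (8): exp(-||x - xi||^2 / (2 sigma^2))."""
    return np.exp(-np.linalg.norm(x - xi) ** 2 / (2.0 * sigma ** 2))

def osh_decision(x, support_vectors, labels, alphas, b, sigma):
    """OSH surface function of equation (7); its sign classifies x."""
    return sum(a * y * rbf_kernel(x, s, sigma)
               for s, y, a in zip(support_vectors, labels, alphas)) + b
```

The kernel equals 1 when the two points coincide and decays with squared distance; the decision value is positive on the side of the positive-class support vectors.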

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the Cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

$d_{MahCos}(s, t) = -\frac{m \cdot n}{|m||n|} = -\frac{\sum_{i=1}^{N} (m_i n_i)}{\sqrt{\sum_{i=1}^{N} (m_i)^2} \sqrt{\sum_{i=1}^{N} (n_i)^2}} = -\frac{\sum_{i=1}^{N} ((s_i/\sigma_i)(t_i/\sigma_i))}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2} \sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}}$ (9)

where $m_i = s_i/\sigma_i$, $n_i = t_i/\sigma_i$, and $\sigma_i$ is the standard deviation of the ith dimension. In this case higher similarity yields a higher score. Thus the actual MahCos score is computed as given in equation (10):

$D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t)$ (10)
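Equations (9) and (10) translate directly to numpy. Note that, with the negated cosine in (9), identical vectors score $D = 2$ and orthogonal ones $D = 1$, consistent with higher similarity yielding a higher score:

```python
import numpy as np

def mahcos_score(s, t, sigma):
    """MahCos score of equations (9) and (10): cosine similarity in the
    Mahalanobis space obtained by dividing each dimension by its
    standard deviation, negated (equation (9)) and turned into the
    final score D = 1 - d (equation (10))."""
    m, n = s / sigma, t / sigma
    d = -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))
    return 1.0 - d
```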

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second


neutral image of the whole gallery G. The scores were computed by using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) to populate rows 1 to 4 of a training score matrix T. Each element tij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements tij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores tij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t11) and the imposter scores (e.g., t1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) were used to populate rows 1 to 4 of the probe score matrix P, with 4 × 1 dimensional one genuine and G − 1 imposter probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
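The genuine/imposter bookkeeping above can be sketched as follows. The (4, G, G) array layout (one slice per orientation pair) is an assumption made for the sketch; the diagonal/off-diagonal split and the resulting counts match the text:

```python
import numpy as np

def build_training_vectors(T):
    """Split a stack of G x G MahCos score matrices into genuine and
    imposter training vectors. `T` has shape (4, G, G): one slice per
    orientation pair (0,0), (10,10), (20,20), (30,30). Diagonal entries
    t_ii are genuine; off-diagonal entries t_ij (i != j) are imposters.
    Returns (genuine, imposter) arrays of 4-dimensional vectors."""
    _, G, _ = T.shape
    genuine = T[:, np.arange(G), np.arange(G)].T      # shape (G, 4)
    mask = ~np.eye(G, dtype=bool)
    imposter = T[:, mask].T                            # shape (G*G - G, 4)
    return genuine, imposter
```

For the example gallery of 20 subjects this yields 20 genuine and 380 imposter 4-dimensional training vectors, as in the text.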

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as rank-1 identification rate and verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section, along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "cara_i_frontal1" and "cara_i_frontal2", are captured under frontal view. Another two are taken where a subject is looking up or down at angles +35∘ or -35∘, named "cara_i_arriba" and "cara_i_abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90∘ or -90∘ and are named "cara_i_derecha" and "cara_i_izquierda", respectively. The three nonneutral images, "cara_i_gesto", "cara_i_risa", and "cara_i_sonrisa", present a random gesture chosen by the subjects, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expressions, occlusions, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. The face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].
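Since each FRGC scan is distributed as four 480 × 640 matrices (x, y, z, and a binary validity mask), a natural first step is to unpack them into a point cloud. The following is a minimal sketch of that step; the function name and the toy 2 × 2 matrices are illustrative assumptions, not part of the paper or the FRGC distribution.

```python
import numpy as np

def to_point_cloud(x, y, z, valid):
    """Stack the x, y, z coordinate matrices into an N x 3 point cloud,
    keeping only the points flagged as valid by the binary mask."""
    mask = valid.astype(bool)
    return np.stack([x[mask], y[mask], z[mask]], axis=1)

# Toy 2 x 2 "scan" standing in for the 480 x 640 FRGC matrices.
x = np.array([[0.0, 1.0], [0.0, 1.0]])
y = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[5.0, 5.2], [5.1, 0.0]])  # z: distance of the face from the scanner
valid = np.array([[1, 1], [1, 0]])      # bottom-right point is invalid

cloud = to_point_cloud(x, y, z, valid)
print(cloud.shape)  # (3, 3): three valid points, three coordinates each
```

The boolean mask drops invalid pixels before any alignment or feature extraction is attempted, which is why the fourth matrix matters.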

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing criterion to evaluate the alignment accuracy of face images. One method that can be employed is human judgment, but human judgment is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this

14 Mathematical Problems in Engineering

Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (normalized L2 norm versus subject index, plotted for unaligned GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 images and for the aligned images).

Figure 12: Example 3D face images, panels (a)-(r): original (rows 1, 3) and aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and that the mentioned method is a promising automatic criterion to check alignment accuracy.
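To make the L2 norm minimization criterion concrete, the toy sketch below rotates a point cloud about its centroid and searches, coarse to fine, for the angle that minimizes the L2 norm of the point closest to the scanner (a stand-in for the detected nose tip). The step sizes, the single-plane search, and the ring-plus-tip cloud are assumptions for illustration only, not the actual PCF parameters.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the y-axis (a rotation within the xz plane)."""
    t = np.radians(deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def nose_tip_norm(cloud, deg):
    """Rotate the cloud about its centroid and return the L2 norm of the
    point closest to the scanner (smallest z), standing in for the nose tip."""
    c = cloud.mean(axis=0)
    rotated = (cloud - c) @ rot_y(deg).T + c
    tip = rotated[np.argmin(rotated[:, 2])]
    return float(np.linalg.norm(tip))

def coarse_to_fine(cloud, span=30.0, steps=(10.0, 1.0, 0.1)):
    """1-D coarse-to-fine search for the angle minimizing the nose-tip norm."""
    best = 0.0
    for step in steps:
        angles = np.arange(best - span, best + span + step / 2, step)
        norms = [nose_tip_norm(cloud, a) for a in angles]
        best = float(angles[int(np.argmin(norms))])
        span = step  # shrink the search window around the current best angle
    return best

# Toy "face": a nose tip in front of a flat ring, posed 12 degrees off frontal.
base = np.array([[0.0, 0.0, -1.0],
                 [2.0, 0.0, 0.0], [-2.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0], [0.0, -2.0, 0.0]])
posed = (base - base.mean(axis=0)) @ rot_y(12.0).T + base.mean(axis=0)
angle = coarse_to_fine(posed)
```

Each pass re-centers the grid on the previous best angle, so the minimized norm can only improve as the step shrinks toward the fine 0.1° resolution mentioned later in Section 5.1.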

The minimized and normalized L2 norms for five unaligned images of subjects (GavabDB: cara1_gesto to cara2_abajo; Bosphorus: bs000_E_DISGUST_0 to bs000_E_SURPRISE_0; UMB-DB: 000006_0190_F_BO_F to 000012_0024_M_AN_F; and FRGC v2.0: 04203d436 to 04203d444) are shown in Figure 11. Figure 12 depicts example original as well as aligned face images from GavabDB: (a) cara1_abajo, (b) cara1_arriba, (c) cara1_frontal1, (d) cara1_frontal2, (e) cara1_derecha, (f) cara1_izquierda, (g) cara1_gesto, (h) cara1_risa, (i) cara1_sonrisa; Bosphorus: (j) bs017_E_DISGUST_0, (k) bs001_E_ANGER_0, (l) bs000_YR_R20_0; UMB-DB: (m) 001409_0002_M_NE_F, (n) 001433_0010_M_BO_F, (o) 001355_0001_M_AN_F; and FRGC v2.0: (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.

Mathematical Problems in Engineering 15

Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database. Columns FF through RPF are rank-1 identification rates.

| Proposed methodology | FF (U / W) | Rotated looking up (U / W) | Rotated looking down (U / W) | LPF (U / W) | RPF (U / W) | Verification rates |
| d-MVWF  | 96.7 / 100  | 96.7 / 100  | 95.1 / 98.4 | -           | -           | 100  |
| d-MVLHF | 95.1 / 98.4 | 93.4 / 96.7 | 93.4 / 96.7 | 91.8 / 95.1 | -           | 96.7 |
| d-MVRHF | 93.4 / 96.7 | 95.1 / 98.4 | 91.8 / 95.1 | -           | 80.3 / 83.6 | 98.4 |
| d-MVAHF | 96.7 / 100  | 96.7 / 100  | 95.1 / 98.4 | -           | -           | 100  |

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

| Proposed methodology | Bosphorus FF (U / W) | YR¹ < 90° (525 images) (U / W) | YR = 90° (210 images) (U / W) | Overall (1365 images) (U / W) | UMB-DB frontal face (U / W) |
| d-MVWF  | 97.1 / 100  | 92.2 / 95.4 | -           | 93.1 / 96   | 96.5 / 99.3 |
| d-MVLHF | 95.2 / 98.1 | 91.4 / 94.5 | 84.3 / 87.1 | 91.8 / 94.9 | 93.7 / 97.2 |
| d-MVRHF | 96.2 / 99   | 91.0 / 94.1 | -           | 91.3 / 94.4 | 94.4 / 97.9 |
| d-MVAHF | 97.1 / 100  | 92.2 / 95.4 | -           | 93.1 / 96   | 96.5 / 99.3 |

¹ YR is yaw rotation (about the y-axis, in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery to follow the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation for the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for N vs. N experiments is given in Table 2.
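The pairwise-score construction used to train the verification SVM can be illustrated as follows. The per-view cosine similarity and the random placeholder features are assumptions made for the sketch, not the paper's actual dCNN features or matching function.

```python
import numpy as np

def pair_scores(views_a, views_b):
    """One cosine-similarity match score per view between two d-MVAHF
    feature sets; the resulting score vector is what the SVM would consume."""
    sims = [float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            for a, b in zip(views_a, views_b)]
    return np.array(sims)

# Two neutral images per subject give genuine training pairs; cross-subject
# pairs give impostor pairs. Features are random placeholders for dCNN output.
rng = np.random.default_rng(0)
subject_a = [rng.normal(size=(4, 8)) for _ in range(2)]  # 4 views, 8-dim features
subject_b = [rng.normal(size=(4, 8)) for _ in range(2)]

genuine = pair_scores(subject_a[0], subject_a[1])    # same subject
impostor = pair_scores(subject_a[0], subject_b[0])   # different subjects
print(genuine.shape)  # (4,)
```

Labeling the genuine vectors +1 and the impostor vectors -1 yields a standard two-class training set for an SVM verifier.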

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, respectively (curves for d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF; identification rate versus rank, and verification rate versus false accept rate).

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(∑_{j=1}^{n} y_{j−1} · x_j² · y_j · z_j²). Here n represents the number of convolutional layers, y_{j−1} is the number of input channels of the j-th layer, y_j is the number of filters of the j-th layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log(n)). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification, whereas the SVM only takes into account the global matching scores, resulting in lower computation time.
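The convolutional-layer summation in point (2) above can be turned into a small calculator. The layer shapes below are the standard AlexNet conv1/conv2 sizes, used here only as an illustration of the formula, not as a claim about the paper's exact configuration.

```python
def conv_cost(layers):
    """Multiply-accumulate count for a stack of convolutional layers,
    following sum_j y_{j-1} * x_j^2 * y_j * z_j^2, where each layer is
    (input channels y_{j-1}, kernel size x_j, filters y_j, output size z_j)."""
    return sum(y_prev * x * x * y * z * z for y_prev, x, y, z in layers)

# Standard AlexNet conv1 and conv2 shapes, used only as an illustration.
layers = [(3, 11, 96, 55),    # conv1: 3 -> 96 channels, 11x11 kernels, 55x55 output
          (96, 5, 256, 27)]   # conv2: 96 -> 256 channels, 5x5 kernels, 27x27 output
print(conv_cost(layers))  # 553312800
```

Even two early layers already account for over half a billion multiply-accumulates, which is why feature extraction dominates the timings in Table 4.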

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in that study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

| Preprocessing | MVAHF synthesis | Feature extraction | Classification (recognition / verification) | Total (recognition / verification) |
| 0.451 | 0.089 | 1.024 | 0.029 / 0.021 | 1.593 / 1.585 |

Table 5: Recognition accuracy comparison (%) for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases. GavabDB columns give rank-1 identification rates (FF, rotated looking up, rotated looking down, LPF, RPF) and verification rates; Bosphorus columns give rank-1 identification rates (FF, YR¹ < 90°, YR = 90°, overall); the UMB-DB column gives the FF rank-1 identification rate.

| Algorithm | FF | Up | Down | LPF | RPF | Verification | FF | YR¹ < 90° | YR = 90° | Overall | UMB-DB FF |
| Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27] |
| Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39] |
| Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | - |
| Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2 |
| Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 | 99 | 94.1 | - | 94.4 | 97.9 |
| Proposed d-MVWF / d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 | 100 | 95.4 | - | 96 | 99.3 |

¹ YR is yaw rotation (about the y-axis, in the xz plane). ² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition, with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracy comparison (%) for the proposed and existing approaches using the FRGC v2.0 database. The first seven columns are existing algorithms; the last three are the proposed algorithm.

|  | [17] | [41] | [42] | [43] | [47] | [62] | [63] | d-MVLHF | d-MVRHF | d-MVWF / d-MVAHF |
| Face identification | 98.7 | 96.1 | 93.8 | 98 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8 |
| Face verification   | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6 |

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.
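The saving described in point (3) is easy to verify with a back-of-the-envelope count of point rotations per plane; the function below simply encodes the two strategies compared in the text.

```python
def rotation_ops(m_points, trial_rotations=35):
    """Point rotations needed per plane for alignment.

    naive: rotate the whole cloud at every one of the trial angles.
    pcf:   rotate only the nose tip during the 35-rotation search,
           then the whole cloud once at the learned angle."""
    naive = m_points * trial_rotations
    pcf = trial_rotations + m_points
    return naive, pcf

naive, pcf = rotation_ops(300_000)  # 0.3 million depth points, as in the text
print(naive, pcf)  # 10500000 300035
```

The nose-tip-first strategy reduces the per-plane cost from 10.5 million point rotations to roughly the size of the cloud itself.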

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a computational cost reduced by 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) A comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16%, and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to the better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that the integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. Conversely, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it is expected to perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines and dots), whereas the later layers tend to learn high level features, such as shapes and objects, based on the low level features.
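The weight assignment idea in point (4) amounts to a weighted fusion of per-view scores before picking the rank-1 identity. Since equation (5) is not reproduced in this section, the sketch below uses generic normalized weights purely to show the effect; the weight values and score matrices are illustrative assumptions.

```python
import numpy as np

def fuse(view_scores, weights):
    """Weighted sum-rule fusion of per-view classification scores.
    view_scores is (n_views, n_classes); better-performing views receive
    larger weights (the actual weights come from the paper's equation (5),
    not reproduced here)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalize the weights to sum to 1
    return w @ np.asarray(view_scores, dtype=float)

# Three views scoring two gallery identities; view 1 is the most reliable.
scores = [[0.6, 0.4],
          [0.1, 0.9],
          [0.2, 0.8]]
uniform = int(np.argmax(fuse(scores, [1, 1, 1])))   # unweighted decision
weighted = int(np.argmax(fuse(scores, [8, 1, 1])))  # trust view 1 more
print(uniform, weighted)  # 1 0
```

When the most reliable view is weighted up, the fused decision can flip relative to the unweighted average, which is the mechanism behind the 3-4% rank-1 gains reported above.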

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse to fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that: (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to the alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) the experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images, such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/#face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M Sajid N Iqbal Ratyal N Ali et al ldquoThe impact of asym-metric left and asymmetric right face images on accurate ageestimationrdquo Mathematical Problems in Engineering vol 2019Article ID 8041413 10 pages 2019

[2] M Bessaoudi M Belahcene A Ouamane A Chouchaneand S Bourennane ldquoMultilinear Enhanced FisherDiscriminant

20 Mathematical Problems in Engineering

Analysis for robust multimodal 2D and 3D face verificationrdquoApplied Intelligence vol 49 no 4 pp 1339ndash1354 2019

[3] E Basaran M Gokmen and M E Kamasak ldquoAn efficientmultiscale scheme using local Zernike moments for face recog-nitionrdquo Applied Sciences (Switzerland) vol 8 no 5 article no827 2018

[4] S Z Gilani and A Mian ldquoLearning from millions of 3Dscans for large-scale 3D face recognitionrdquo in Proceedings of the2018 IEEECVF Conference on Computer Vision and PatternRecognition (CVPR) pp 1896ndash1905 Salt Lake City UT USAJune 2018

[5] A Irtaza S M Adnan K T Ahmed et al ldquoAn ensemble basedevolutionary approach to the class imbalance problem withapplications in CBIRrdquo Applied Sciences (Switzerland) vol 8 no4 artilce no 495 2018

[6] N Dagnes E Vezzetti F Marcolin and S Tornincasa ldquoOcclu-sion detection and restoration techniques for 3D face recogni-tion a literature reviewrdquoMachine Vision and Applications vol29 no 5 pp 789ndash813 2018

[7] S Ramalingam ldquoFuzzy interval-valued multi criteria baseddecision making for ranking features in multi-modal 3D facerecognitionrdquo Fuzzy Sets and Systems vol 337 pp 25ndash51 2018

[8] M Sajid N Ali S H Dar et al ldquoData augmentation-assistedmakeup-invariant face recognitionrdquo Mathematical Problems inEngineering vol 2018 Article ID 2850632 10 pages 2018

[9] J Kittler P Koppen P Kopp P Huber and M RatschldquoConformal mapping of a 3d face representation onto a 2Dimage for CNN based face recognitionrdquo in Proceedings of the11th IAPR International Conference on Biometrics ICB 2018 pp124ndash131 Australia February 2018

[10] M Bessaoudi M Belahcene A Ouamane and S BourennaneldquoA novel approach based on high order tensor and multi-scalelocals features for 3D face recognitionrdquo in Proceedings of the 4thInternational Conference on Advanced Technologies for Signaland Image Processing ATSIP 2018 pp 1ndash5 Tunisia March 2018

[11] F Liu R Zhu D Zeng Q Zhao and X Liu ldquoDisentanglingFeatures in 3D Face Shapes for Joint Face Reconstruction andRecognitionrdquo in Proceedings of the 2018 IEEECVF Conferenceon Computer Vision and Pattern Recognition (CVPR) pp 5216ndash5225 Salt Lake City UT USA June 2018

[12] A T Tran T Hassner IMasi E Paz Y Nirkin andGMedionildquoExtreme 3D face reconstruction seeing through occlusionsrdquoin Proceedings of the 2018 IEEECVF Conference on ComputerVision and Pattern Recognition (CVPR) pp 3935ndash3944 SaltLake City UT USA June 2018

[13] N Pears Y Liu and P Bunting 3D Imaging Analysis andApplications vol 3 Springer Berlin Germany 2012

[14] NWerghi C Tortorici S Berretti andADel Bimbo ldquoBoosting3D LBP-Based face recognition by fusing shape and texturedescriptors on the meshrdquo IEEE Transactions on InformationForensics and Security vol 11 no 5 pp 964ndash979 2016

[15] L Spreeuwers ldquoFast and accurate 3D face recognition Usingregistration to an intrinsic coordinate system and fusion ofmultiple region classifiersrdquo International Journal of ComputerVision vol 93 no 3 pp 389ndash414 2011

[16] K W Bowyer K Chang and P Flynn ldquoA survey of approachesand challenges in 3D and multi-modal 3D + 2D face recogni-tionrdquo Computer Vision and Image Understanding vol 101 no 1pp 1ndash15 2006

[17] X Wang Q Ruan Y Jin and G An ldquoThree-dimensional facerecognition under expression variationrdquo Eurasip Journal onImage and Video Processing vol 2014 no 51 2014

[18] S Elaiwat M Bennamoun F Boussaid and A El-Sallam ldquo3-D face recognition using curvelet local featuresrdquo IEEE SignalProcessing Letters vol 21 no 2 pp 172ndash175 2014

[19] L Zhang Z Ding H Li Y Shen and J Lu ldquo3D facerecognition based on multiple keypoint descriptors and sparserepresentationrdquo PLoS ONE vol 9 no 6 Article ID e100120 pp1ndash9 2014

[20] S Soltanpour B Boufama and Q M J Wu ldquoA survey of localfeature methods for 3D face recognitionrdquo Pattern Recognitionvol 72 pp 391ndash406 2017

[21] A Ouamane A Chouchane E Boutellaa M Belahcene SBourennane and A Hadid ldquoEfficient tensor-based 2D+3D faceverificationrdquo IEEE Transactions on Information Forensics andSecurity vol 12 no 11 pp 2751ndash2762 2017

[22] K I Chang K W Bowyer and P J Flynn ldquoAn evaluationof multimodal 2D+3D face biometricsrdquo IEEE Transactions onPattern Analysis andMachine Intelligence vol 27 no 4 pp 619ndash624 2005

[23] C BenAbdelkader and P A Griffin ldquoComparing and combin-ing depth and texture cues for face recognitionrdquo Image andVision Computing vol 23 no 3 pp 339ndash352 2005

[24] C Hesher A Srivastava and G Erlebacher ldquoA novel techniquefor face recognition using range imagingrdquo in Proceedings ofthe 7th International Symposium on Signal Processing and ItsApplications ISSPA 2003 vol 2 pp 201ndash204 France July 2003

[25] D Smeets J Keustermans D Vandermeulen and P SuetensldquoMeshSIFT local surface features for 3D face recognition underexpression variations and partial datardquo Computer Vision andImage Understanding vol 117 no 2 pp 158ndash169 2013

[26] H Drira B Ben Amor A Srivastava M Daoudi and R Slamaldquo3D Face recognition under expressions occlusions and posevariationsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 35 no 9 pp 2270ndash2283 2013

[27] N Alyuz B Gokberk and L Akarun ldquo3-D face recognitionunder occlusion using masked projectionrdquo IEEE Transactionson Information Forensics and Security vol 8 no 5 pp 789ndash8022013

[28] D Huang M Ardabilian Y Wang and L Chen ldquo3-D facerecognition using eLBP-based facial description and localfeature hybrid matchingrdquo IEEE Transactions on InformationForensics and Security vol 7 no 5 pp 1551ndash1565 2012

[29] N Alyuz B Gokberk and L Akarun ldquoRegional registration forexpression resistant 3-D face recognitionrdquo IEEETransactions onInformation Forensics and Security vol 5 no 3 pp 425ndash4402010

[30] P J Besl and N D McKay ldquoA method for registration of 3-D shapesrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 14 no 2 pp 239ndash256 1992

[31] T Papatheodorou and D Rueckert 3D Face Recognition I-TechEducation and Publishing Vienna Austria 2007

[32] C C Queirolo L Silva O R P Bellon and M PamplonaSegundo ldquo3D face recognition using simulated annealing andthe surface interpenetration measurerdquo IEEE Transactions onPatternAnalysis andMachine Intelligence vol 32 no 2 pp 206ndash219 2010

[33] C C Queirolo L Silva O R P Bellon andM P Segundo ldquo3Dface recognition using the surface interpenetration measure acomparative evaluation on the FRGC databaserdquo in Proceedingsof the 2008 19th International Conference on Pattern RecognitionICPR 2008 USA December 2008

Mathematical Problems in Engineering 21

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


Figure 3: Examples of incorrectly detected nose tips on (a, b) ears, (c) lips area, (d) z-axis noise, (e) forehead hairs. Nose templates: (f) frontal, (g) left, (h) right.

3.1.1. Nose Tip Detection Technique. Nose tip detection is a specific facial feature detection problem in depth images. The study [55] proposed a nose tip detection technique for FF images based on histogram initialization and triangle fitting and obtained a detection rate of 99.43% on the FRGC v2.0 database. In contrast to [55], the proposed study marks the nose tip as the captured point nearest to the 3D scanner on the face and uses it to localize, align, and crop the PFI. Several problems were faced in detecting the nose tip, as follows.

One of the problems was incorrect nose tip detection in LPF or RPF images, where the nose tip was detected on the ears or other facial parts, as shown on the ear of the RPF of subject GavabDB: cara26 derecha and on the ear of the LPF of subject GavabDB: cara26 izquierda in Figures 3(a) and 3(b), respectively. In order to handle this problem, the PFI was first classified as FF, LPF, or RPF using a convolutional neural network (CNN), and the nose tip was then detected employing a different strategy for each of FF, LPF, and RPF. The CNN was trained for a three-class problem for the FF, LPF, or RPF classification task. The PFI was used as input to the CNN, which produced an N-dimensional vector as the output, where N is the number of classes. The CNN architecture comprised two convolutional layers, each followed by batch normalization and max pooling stages, and two fully connected layers at the end. The first fully connected layer contained 1024 units, while the second, with three units, acted as the output layer with the softmax function. The architecture of the CNN for a PFI is shown in Figure 4. The CNN classifies the PFI of size h × w as FF, LPF, or RPF using the final feature vector S^p = \{S^p_1, S^p_2, \ldots, S^p_{h_p w_p}\} computed for layer p. Based on the classification of the PFI, the nose tip is determined as follows.

(1) For FF images, the facial point at the minimum distance from the 3D scanner along the z-axis is marked as the nose tip.

(2) For LPF, the facial point having the minimum coordinate value along the x-axis (xmin) is defined as the nose tip.

(3) For RPF, the facial point having the maximum coordinate value along the x-axis (xmax) is marked as the nose tip.
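The three class-dependent rules can be sketched directly on a point cloud. This is a minimal illustration, not the paper's implementation: the function and array names are ours, and the convention that a smaller z value means a point closer to the scanner is an assumption of this sketch.

```python
import numpy as np

def detect_nose_tip(points, face_class):
    """Pick the nose tip from an N x 3 point cloud (x, y, z) following the
    three class-dependent rules. Assumes the scanner looks down the z-axis,
    so a smaller z means a closer point (illustrative convention)."""
    if face_class == "FF":      # frontal face: nearest point to the scanner
        idx = np.argmin(points[:, 2])
    elif face_class == "LPF":   # left profile: minimum x coordinate
        idx = np.argmin(points[:, 0])
    elif face_class == "RPF":   # right profile: maximum x coordinate
        idx = np.argmax(points[:, 0])
    else:
        raise ValueError("face_class must be FF, LPF, or RPF")
    return points[idx]
```

On a toy cloud, the FF rule returns the point with the smallest z, while the LPF and RPF rules return the extreme-x points.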

Another problem in the nose tip detection process was incorrect detection of the nose tip in subjects captured with faces leaning forward or backward. In leaning forward faces the nose tip was detected on the forehead, whereas in leaning backward faces it was detected on the chin or lips area (see Figure 3(c) for subject FRGC v2.0: 04233d510). Similarly, noise scenarios played an adverse role in detecting the nose tip. For example, in some of the face images the z-axis noise occurring in the face acquisition process was marked as the nose tip, as shown in Figure 3(d) for subject FRGC v2.0: 04217d461. Another such scenario concerned female subjects, where hair on the forehead or spread around the neck or ears was marked as the nose tip, as shown in Figure 3(e) for subject FRGC v2.0: 04470d297.

Such problems were handled by searching for the nose tip in an approximate Region of Interest (ROI). The ROI on the already classified FF, LPF, or RPF images was determined by measuring two features: (i) the maximum value of the depth map histogram and (ii) the maximum value of the correlation coefficient of Normalized Cross Correlation (NCC). The former feature was measured using the z, -x, and x depth map histograms for FF, LPF, and RPF, in the respective order, whereas the latter was measured by correlating the corresponding frontal, left, or right oriented nose templates (please see Figures 3(f), 3(g), and 3(h) for subjects GavabDB: cara26 frontal2, izquierda, and derecha, respectively) with the FF, LPF, or RPF images. The nose templates were selected from ten randomly chosen subjects of the GavabDB database, five male and five female, on the basis of satisfactory experimental results. For measuring the depth map histograms and correlation coefficient values, the PFI was rotated between 40° and -40° with a step size of -40° around the x-axis, while adjusting the y-axis orientation from 40° to -40° with the same step size, resulting in nine facial orientations. The intuition behind this strategy is to search for an upright position of the face, because in such a position the maximum number of depth values accumulates in a single bin of the depth map histogram and the correlation coefficient of the NCC attains its maximum value among all nine facial positions. Consequently, the nose tip was correctly detected as the captured point nearest to the 3D scanner within the approximate ROI.
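The depth-map-histogram part of this ROI search can be sketched as follows: each candidate orientation is scored by the peak bin count of its depth histogram, and the orientation with the largest peak is taken as the most upright. The NCC scoring and the actual 3D rotations are omitted; the function names, the bin count, and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def upright_score(depth_values, n_bins=64):
    """Score one candidate orientation by the peak bin count of the depth-map
    histogram: an upright face concentrates depth values into one bin."""
    hist, _ = np.histogram(depth_values, bins=n_bins)
    return hist.max()

def best_orientation(depth_maps_by_angle):
    """depth_maps_by_angle maps (x_angle, y_angle) -> 1-D array of depth
    values for the face rotated to that pose (the paper scans the nine
    combinations of {40, 0, -40} degrees about the x- and y-axes)."""
    return max(depth_maps_by_angle,
               key=lambda k: upright_score(depth_maps_by_angle[k]))
```

A constant depth map (all values in one bin) scores highest, so the corresponding orientation would be selected over one whose depths are spread out.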

The proposed algorithm correctly detected the nose tips of face images from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, including all those cases where the nose


Figure 4: Illustration of the CNN for the FF, LPF, and RPF classification task.

Figure 5: Number of subjects with incorrectly detected nose tips, by facial region (forehead, lips, chin, LPF, RPF), without employing the proposed nose tip detection technique, for the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases.

tip was incorrectly detected at the forehead, lips, chin, LPF, or RPF, as detailed in Figure 5.

3.1.2. Face Alignment Algorithm. As mentioned at the start of this section, the PCF alignment algorithm aligns the PFI in the xz, yz, and xy planes separately. The alignment in the xz and yz planes employs L2 norm minimization calculated between the nose tip and the 3D scanner. The alignment in the xy plane employs a different strategy, based on L2 norm minimization calculated between the LHF image and the flipped RHF image.

In order to explain the PCF alignment algorithm in the xz and yz planes, the PFI is shown in Figure 6 with three nose tip positions, 1, 2, and 3, in each plane separately. Intuitively, it can be observed in Figure 6 that the face image is aligned when the nose tip is set in line with the optic axis of the 3D scanner at position 1. Conversely, when it is not in line with the optic axis, at position 2 or 3, the face image is not aligned. It can also be observed in Figure 6 that the L2 norm at nose tip position 1 is a perpendicular from the nose tip to the 3D scanner, which is not the case at positions 2 and 3. The perpendicular distance from a point to a line is always the shortest, which leads to the conclusion that when the PFI is aligned at position 1, the L2 norm is at its minimum and shorter than the corresponding L2 norms at positions 2 and 3. Therefore, alignment of the PFI causes an essential reduction in the L2 norm computed between the nose tip and the 3D scanner. The L2 norm between nose tip position 1, N(m_1, n_1), and the 3D scanner point S(m_0, n_0) is calculated as given in equation (1):

d_2 = \sqrt{(m_1 - m_0)^2 + (n_1 - n_0)^2}   (1)

3.1.3. Alignment in xz Plane

(1) Pose Learning. First of all, the capture pose of the probe face image is learned to determine whether to rotate it clockwise or anticlockwise to align it at the minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated clockwise by -1°, and the corresponding L2 norm between the nose tip and the 3D scanner is measured. For example, a nose tip oriented at -1° or 30° is rotated clockwise to -2° or 29°, respectively, to measure the L2 norm. It is notable that a negative angle of rotation (e.g., -2°) turns a probe face image (Figure 7(a)) clockwise in the xz and yz planes and anticlockwise in the xy plane, as shown in Figures 7(b)-7(d).

As a result of the clockwise rotation, if the L2 norm decreases (Figure 8(a)), the probe face image is classified as a left oriented face image (LOFI) (Figure 8(c)). Similarly, if the L2 norm increases (Figure 8(b)), the probe face image is classified as a right oriented face image (ROFI), as shown in Figure 8(d). Please note that if the nose tip is instead rotated by 1° rather than -1°, a decrease in the L2 norm classifies the probe face image as a ROFI, whereas an increase classifies it as a LOFI. In this study we set this parameter to -1°.
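The pose-learning rule can be illustrated with a simplified 2D geometry in the xz plane: the scanner sits on the optic (z) axis and the nose tip moves on a circle about the face center. The scanner position, the unit radius, and the function names are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

SCANNER = np.array([0.0, 10.0])  # scanner on the optic axis (illustrative)

def nose_tip_at(theta_deg, radius=1.0):
    """Nose tip position in the xz plane; theta is measured from the optic
    axis, so theta = 0 means the nose points straight at the scanner."""
    t = np.radians(theta_deg)
    return np.array([radius * np.sin(t), radius * np.cos(t)])

def l2_norm(theta_deg):
    """L2 norm between the nose tip and the 3D scanner, as in equation (1)."""
    return float(np.linalg.norm(nose_tip_at(theta_deg) - SCANNER))

def learn_pose(theta_deg, step=-1.0):
    """Rotate only the nose tip by `step` (default -1 deg, clockwise) and
    compare L2 norms: a decrease marks a LOFI, an increase a ROFI."""
    return "LOFI" if l2_norm(theta_deg + step) < l2_norm(theta_deg) else "ROFI"
```

In this geometry the norm is minimal at 0°, so a face captured at +30° is classified LOFI (the clockwise probe rotation shortens the norm) and one at -30° is classified ROFI.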


Figure 6: PCF alignment algorithm showing an aligned image at the minimum L2 norm in the (a) xz and (b) yz planes.

Figure 7: (a) 3D scan of subject FRGC v2.0: 04233d396, rotated in the (b) xz, (c) yz, and (d) xy planes at -2°.

Figure 8: (a, b) Pose learning in the xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0: 04221d553; (e, f) subject GavabDB: cara1 izquierda/derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0° to -30° (clockwise) with a step size of -10°, and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30°, the nose tip is rotated between (30° + 0° = 30°) and (30° + (-30°) = 0°). Similarly, the nose tip of a LOFI captured at an orientation of 1° is rotated between (1° + 0° = 1°) and (1° + (-30°) = -29°). In both cases the nose tip is aligned at 0°, corresponding to the minimum L2 norm. However, the nose tips of LOFIs captured at 29°, 28°, 27°, 26°, 25°, 24°, 23°, 22°, and 21° do not pass through the 0° position; therefore, they are aligned at -1°, -2°, -3°, -4°, -5°, +5°, +4°, +3°, +2°, and +1°, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0° to +30° (anticlockwise) with a step size of 10°, and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30° or -1°, the nose tip is rotated between (-30° + 0° = -30°) and (-30° + 30° = 0°), or between (-1° + 0° = -1°) and (-1° + 30° = 29°), respectively. The nose tip is aligned at 0°, corresponding to the minimum L2 norm, in both cases. However, the nose tips of ROFIs captured at -29°, -28°, -27°, -26°, -25°, -24°, -23°, -22°, and -21° are aligned at 1°, 2°, 3°, 4°, 5°, -5°, -4°, -3°, -2°, and -1°, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0° to +90° (anticlockwise) with a step size of 10°, and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90°, the nose tip is rotated between (-90° + 0° = -90°) and (-90° + 90° = 0°) and is aligned at 0°, corresponding to the minimum L2 norm. However, the nose tips of LPFs captured at -89°, -88°, -87°, -86°, -85°, -84°, -83°, -82°, and -81° are aligned at 1°, 2°, 3°, 4°, 5°, -5°, -4°, -3°, -2°, and -1°, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0° to -90° (clockwise) with a step size of -10°, and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90°, the nose tip is rotated between (90° + 0° = 90°) and (90° + (-90°) = 0°) and is aligned at 0°, corresponding to the minimum L2 norm. However, the nose tips of RPFs captured at 89°, 88°, 87°, 86°, 85°, 84°, 83°, 82°, and 81° are aligned at -1°, -2°, -3°, -4°, -5°, +5°, +4°, +3°, +2°, and +1°, respectively (please see Table 1), and are aligned in step 3 at fine level.

Please note that for a ROFI captured at -25°, a LOFI captured at 25°, an LPF captured at -85°, or a RPF captured at 85°, the nose tip can get aligned at either 5° or -5°, because the minimum L2 norm is equal at both orientations; in this study we have aligned the nose tip at 5°. The face images captured at ±75°, ±65°, ..., ±5° are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, or RPF is rotated in the range of -5° to 5° with a step size of 1°. This means that a nose tip aligned at -5° is rotated between ((-5°) + (-5°) = -10°) and ((-5°) + (5°) = 0°) to catch the 0° position, while a nose tip aligned at 5° is rotated between ((5°) + (-5°) = 0°) and ((5°) + (5°) = 10°). After aligning the nose tip at 0°, it is rotated in the range of -1° to 1° with a step size of 0.1° to achieve an accurate final alignment at the minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; that is, if the nose tip is aligned at 13°, then the whole face image is rotated by 13° and is finally aligned in the xz plane.
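Treating the L2 norm as a black-box function of the rotation angle, the coarse-then-fine schedule above can be sketched as three successive grid searches over progressively finer steps. The function names and the generic grid-search formulation are illustrative; the paper's search is pose-class dependent.

```python
import numpy as np

def stage_search(l2_at, center, half_range, step):
    """Scan angles center +/- half_range at the given step and return the
    angle with the minimum L2 norm."""
    angles = np.arange(center - half_range, center + half_range + step / 2, step)
    return float(angles[np.argmin([l2_at(a) for a in angles])])

def align_xz(l2_at, initial_deg):
    """Coarse-to-fine alignment sketch mirroring the paper's schedule:
    a 10-degree coarse scan, then 1-degree and 0.1-degree refinements
    around the running best angle (`l2_at` maps angle -> L2 norm)."""
    a = stage_search(l2_at, initial_deg, 30, 10)   # coarse: +/-30 at 10
    a = stage_search(l2_at, a, 5, 1)               # fine:   +/-5  at 1
    a = stage_search(l2_at, a, 1, 0.1)             # finest: +/-1  at 0.1
    return a
```

With a toy norm whose minimum lies at 13°, the three stages land within the 0.1° step of that angle, matching the example in the text where the face is finally rotated by 13°.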

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image already aligned in the xz plane is learned first, to align it at the minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) by -1°, and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that if the nose tip is instead rotated by 1° rather than -1°, a decrease in the L2 norm classifies the probe face image as a LUFI, whereas an increase classifies it as a LDFI. In this study we set this parameter to -1°.

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0° to +30° downwards (anticlockwise) with a step size of 10°, and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30°, the nose tip is rotated between -30° and 0°; if it is captured at an orientation of -1°, the nose tip is rotated between -1° and 29°. In both cases the nose tip is aligned at 0°, corresponding to the minimum L2 norm. However, the nose tips of LUFIs captured at -29°, -28°, -27°, -26°, -25°, -24°, -23°, -22°, and -21° do not pass through the 0° position; they are aligned at 1°, 2°, 3°, 4°, 5°, -5°, -4°, -3°, -2°, and -1°, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0° to -30° upwards (clockwise) with a step size of -10°, and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30° or 1°, the nose tip is rotated between 30° and 0°, or between 1° and -29°, respectively. The nose tip is aligned at 0°, corresponding to the minimum L2 norm, in both cases. However, the nose tips of LDFIs captured at 29°, 28°, 27°, 26°, 25°, 24°, 23°, 22°, and 21° are aligned at -1°, -2°, -3°, -4°, -5°, +5°, +4°, +3°, +2°, and +1°, respectively (please see Table 1), and are aligned in step 3 at fine level. It is worth mentioning that the face images captured at ±25°, ±15°, and ±5° are handled


Table 1: Acquisition pose of the face and respective alignment positions, given in bold in the original (all values in degrees). Each half lists acquisition poses followed by the resulting alignment position in the last column.

LPF / LOFI / LDFI | RPF / ROFI / LUFI
90 80 70 60 50 40 30 20 10 0 | -90 -80 -70 -60 -50 -40 -30 -20 -10 0
89 79 69 59 49 39 29 19 9 -1 | -89 -79 -69 -59 -49 -39 -29 -19 -9 1
88 78 68 58 48 38 28 18 8 -2 | -88 -78 -68 -58 -48 -38 -28 -18 -8 2
87 77 67 57 47 37 27 17 7 -3 | -87 -77 -67 -57 -47 -37 -27 -17 -7 3
86 76 66 56 46 36 26 16 6 -4 | -86 -76 -66 -56 -46 -36 -26 -16 -6 4
85 75 65 55 45 35 25 15 5 -5 | -85 -75 -65 -55 -45 -35 -25 -15 -5 5
84 74 64 54 44 34 24 14 4 -6 | -84 -74 -64 -54 -44 -34 -24 -14 -4 6
83 73 63 53 43 33 23 13 3 -7 | -83 -73 -63 -53 -43 -33 -23 -13 -3 7
82 72 62 52 42 32 22 12 2 -8 | -82 -72 -62 -52 -42 -32 -22 -12 -2 8
81 71 61 51 41 31 21 11 1 -9 | -81 -71 -61 -51 -41 -31 -21 -11 -1 9

Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0: 04221d553; (d, f) subject GavabDB: cara1 izquierda/derecha.

using the alignment procedure described in the coarse alignment phase for the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5° to 5° with a step size of 1° to catch the 0° position, as discussed in the fine alignment phase for the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1° to 1° with a step size of 0.1° to achieve an accurate final alignment at the minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5° to +5° with a step size of 1° around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane, and the corresponding L2 norm is computed for each rotation over pixel values at the same grid positions P_{ij}. In order to rule out outliers due to z-axis noise, only pixel values less than a threshold T are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases}   (2)

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1° to +1° with a step size of 0.1°, using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
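The symmetry-based matching score in the xy plane, with the thresholding of equation (2) applied before the norm, can be sketched as follows; the array and function names are illustrative, and the sketch omits the shifting step.

```python
import numpy as np

def symmetry_l2(lhf, rhf, threshold):
    """L2 norm between a left-half-face depth map and the horizontally
    flipped right half, zeroing depth values above `threshold` to suppress
    z-axis noise, as in equation (2)."""
    flipped = np.fliplr(rhf)
    a = np.where(lhf > threshold, 0.0, lhf)
    b = np.where(flipped > threshold, 0.0, flipped)
    return float(np.linalg.norm(a - b))
```

A perfectly mirror-symmetric pair scores 0, and a noise spike above the threshold is zeroed out rather than dominating the norm.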

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0°, ±10°, ±20°, and ±30° to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0°, -10°, -20°, and -30° and at 0°, 10°, 20°, and 30° around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values at the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained, to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0°, 10°, 20°, and 30°. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) The facial feature information carried by a half face image is similar to that of the flipped other half face image, due to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded by rotating a whole face image at -10°, -20°, and -30°; similarly, the LHF region is occluded by rotating it at 10°, 20°, and 30°. The occluded face regions contribute poorly to the face recognition process, while doubling the computational complexity of the system. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions that are less visible in frontal view images. Figure 10 readily shows, through example synthesized MVAHF images, the complementary face information employed to improve the face recognition accuracy.
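The averaging step that merges a left half face with the flipped right half, while retaining the nonoverlapping regions, can be sketched as below. Encoding the regions where only one half has data as NaN gaps is an illustrative convention of this sketch, not the paper's implementation.

```python
import numpy as np

def synthesize_ahf(lhf, rhf):
    """Average a left-half-face depth map with the horizontally flipped
    right half on the same grid; where only one half has data (NaN in the
    other), the available value is retained, preserving the complementary
    nonoverlapping regions."""
    flipped = np.fliplr(rhf)
    stacked = np.stack([lhf, flipped])
    return np.nanmean(stacked, axis=0)  # mean over available values only
```

Where both halves overlap the output is their mean; where only one half carries a value, that value survives unchanged.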

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, and C5, followed by three pooling layers, denoted by P1, P2, and P3, and three fully connected layers, indicated by f6, f7, and f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set A^k = \{a^k_1, a^k_2, a^k_3, \ldots, a^k_n\} of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, in the respective order. The matrix S has a negative polarity, reflecting that lower matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices S_j, one for each of the normalized d-MVAHF feature vectors corresponding to the AHF images oriented at 0°, 10°, 20°, and 30°.
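Steps (1)-(2) reduce to building a distance matrix between L2-normalized feature vectors. Using Euclidean distance between the normalized vectors as the comparison is an assumption of this sketch; the paper only specifies that lower scores mean higher similarity.

```python
import numpy as np

def matching_scores(probe_feats, gallery_feats):
    """Build the m x n matching-score matrix S: L2-normalize each feature
    vector, then take the Euclidean distance between every probe/gallery
    pair, so lower scores mean higher similarity (negative polarity)."""
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return np.linalg.norm(p[:, None, :] - g[None, :, :], axis=2)
```

For a probe and gallery vector pointing in the same direction the score is 0, regardless of their magnitudes before normalization.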

(3) Each of the matching-score matrices S_j was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are max(S_{j,row}) and min(S_{j,row}), respectively, then the normalized scores are computed as given in equation (3):

S_{j,row} = \frac{S_{j,row} - \min(S_{j,row})}{\max(S_{j,row}) - \min(S_{j,row})}   (3)

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix S_{row}, as given in equation (4):

S_{row} = \sum_{j=1}^{4} w_j S_{j,row}   (4)


Figure 10: 3D scan of subject FRGC v2.0: 04221d553: (a) RHF images at 0°, 10°, 20°, and 30°; (b) LHF images at 0°, -10°, -20°, and -30°.

where w_j represents the weight assigned to the jth MVAHF image, computed from the recognition accuracies obtained with the MVAHF images as given in equation (5):

w_j = \frac{r_j}{\sum_{j=1}^{4} r_j}   (5)

where r_j represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can also be used in the test phase: a given PFI is first converted into MVAHF images oriented at 0°, 10°, 20°, and 30°; each of these MVAHF images is then classified against the gallery, which leads to four recognition accuracies that are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained with the MVAHF images oriented at 0° is maximum, the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix S_{row} was again normalized as S'_{row} using the min-max rule given in equation (3).
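Equations (3)-(5) can be sketched together: row-wise min-max normalization, accuracy-derived weights, and weighted summation followed by renormalization. The function names are illustrative, and rows with constant scores (which would divide by zero) are not handled in this sketch.

```python
import numpy as np

def min_max_rows(S):
    """Row-wise min-max normalization of a matching-score matrix (eq. (3))."""
    mn = S.min(axis=1, keepdims=True)
    mx = S.max(axis=1, keepdims=True)
    return (S - mn) / (mx - mn)

def fuse_scores(matrices, accuracies):
    """Weighted score-level fusion (eqs. (4)-(5)): each view's normalized
    matrix is weighted by its recognition accuracy relative to the total,
    and the fused matrix is min-max normalized again."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                                   # eq. (5)
    fused = sum(wj * min_max_rows(Sj) for wj, Sj in zip(w, matrices))
    return min_max_rows(fused)                        # renormalize as S'_row
```

A view with a higher recognition accuracy contributes proportionally more to the fused score, as intended by equation (5).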

(5) The normalized matching scores obtained from S'_{row} were utilized in the softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, SVM aims to employ a hyperplane wx + b = 0 having maximum margins, termed the optimal separating hyperplane (OSH), that separates the training vectors of two classes (x_1, y_1), ..., (x_i, y_i), where x_i ∈ R^n and y_i ∈ {1, −1}, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, subject to the constraints y_i[(w · x_i) + b] ≥ 1 − ξ_i, ξ_i ≥ 0 for i = 1, ..., k.

\[ \Phi(w, \xi) = \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{k} \xi_i \qquad (6) \]

where ξ_i are slack variables used to penalize errors if the data are not linearly separable and C is the regularization constant. Now the sign of the following OSH surface function can be used to classify a test point:

\[ f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b \qquad (7) \]

where a_i ≥ 0 are the corresponding support vector Lagrangian multipliers and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel trick used to transform nonseparable data onto a higher dimensional space where it becomes linearly separable by a hyperplane, x_i is the ith training sample, and x is the test sample. It is experimentally observed in this study that the radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the RBF kernel is of the form given in equation (8), where σ² is the spread of the RBF.

\[ K(x, x_i) = \exp\left[ -\frac{\|x - x_i\|^2}{2\sigma^2} \right] \qquad (8) \]
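The RBF kernel of equation (8) translates directly into a small helper; the function name is hypothetical and NumPy is assumed.

```python
import numpy as np

def rbf_kernel(x, xi, sigma):
    """RBF kernel of equation (8): K(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2))."""
    x, xi = np.asarray(x, dtype=float), np.asarray(xi, dtype=float)
    return float(np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2)))
```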

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57].

\[ d_{MahCos}(s, t) = -\frac{m \cdot n}{|m|\,|n|} = -\frac{\sum_{i=1}^{N} m_i n_i}{\sqrt{\sum_{i=1}^{N} m_i^2}\,\sqrt{\sum_{i=1}^{N} n_i^2}} = -\frac{\sum_{i=1}^{N} (s_i/\sigma_i)(t_i/\sigma_i)}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2}\,\sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}} \qquad (9) \]

where m_i = s_i/σ_i, n_i = t_i/σ_i, and σ_i is the standard deviation of the ith dimension. In this case, higher similarity yields a higher score. Thus, the actual MahCos score is computed as given in equation (10).

\[ D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t) \qquad (10) \]
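A minimal sketch of the MahCos computation of equations (9) and (10), assuming NumPy and per-dimension standard deviations σ_i estimated elsewhere; names are illustrative, not from the paper.

```python
import numpy as np

def mahcos_score(s, t, sigma):
    """MahCos score of equations (9)-(10): cosine computed in Mahalanobis
    space, where each dimension is scaled by its standard deviation sigma_i,
    returned as D = 1 - d so that higher similarity yields a higher score."""
    s, t, sigma = (np.asarray(a, dtype=float) for a in (s, t, sigma))
    m, n = s / sigma, t / sigma            # m_i = s_i/sigma_i, n_i = t_i/sigma_i
    d = -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))  # equation (9)
    return 1.0 - d                         # equation (10)
```

With this convention, identical vectors score 2, orthogonal vectors score 1, and opposite vectors score 0.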

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second


neutral image of the whole gallery G. The scores were computed by using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores, referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, with 4 × 1 dimensional one genuine and G − 1 imposter probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
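The construction of genuine and imposter training vectors from the four orientation-wise score matrices described above can be sketched as follows, assuming the scores are stacked into a 4 × G × G array; function and variable names are illustrative, and NumPy is assumed. For a gallery of G = 20 subjects this yields 400 vectors in total, 20 genuine and 380 imposter, matching the example in the text.

```python
import numpy as np

def build_training_vectors(score_tensor):
    """Build SVM training vectors from a 4 x G x G stack of training score
    matrices, where score_tensor[r, i, j] is the MahCos score between images
    i and j at the r-th orientation pair (0,0), (10,10), (20,20), (30,30)
    degrees. Diagonal columns (i == j) are genuine vectors; off-diagonal
    columns are imposter vectors."""
    T = np.asarray(score_tensor, dtype=float)
    _, G, _ = T.shape
    genuine = [T[:, i, i] for i in range(G)]                               # G vectors
    imposter = [T[:, i, j] for i in range(G) for j in range(G) if i != j]  # G^2 - G
    X = np.vstack(genuine + imposter)            # (G^2, 4) training matrix
    y = np.array([1] * len(genuine) + [-1] * len(imposter))  # class labels
    return X, y
```

An RBF-kernel SVM (e.g., any standard SVM library) would then be trained on `(X, y)` and applied to the 4-dimensional probe score vectors.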

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as rank-1 identification rate and verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40] are reviewed in the following section along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai frontal1" and "carai frontal2", are captured under frontal view. Another two are taken where the subject is looking up or down at angles +35° or −35°, named "carai arriba" and "carai abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90° or −90°, respectively, and are named "carai derecha" and "carai izquierda". The three nonneutral

images, "carai gesto", "carai risa", and "carai sonrisa", present a random gesture chosen by the subjects, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expression, occlusion, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D sensor and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 subjects [27], including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations under uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing evaluation criterion for the alignment accuracy of face images. One method that can be employed is human judgment, but human judgment is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this


Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (L2 norm vs. subjects 1–5; curves: unaligned GavabDB, unaligned Bosphorus, unaligned UMB-DB, unaligned FRGC v2.0, and aligned).

Figure 12: Example 3D face images, panels (a)–(r): original (rows 1, 3), aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and that the mentioned method is a promising automatic criterion to check alignment accuracy.
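One plausible reading of this criterion, assuming the scanner lies on the z-axis so that a correctly aligned frontal face places the nose tip at zero lateral offset, can be sketched as follows. This is an interpretation for illustration, not the paper's implementation, and the function names are hypothetical.

```python
import numpy as np

def nose_tip_l2_norm(nose_tip):
    """L2 norm of the nose tip's lateral (x, y) offset from the scanner
    axis (z is depth toward the scanner); it approaches 0 for a face
    rotated into frontal alignment."""
    x, y, _ = nose_tip
    return float(np.hypot(x, y))

def best_aligned(candidates):
    """Pick the rotation candidate whose nose tip minimizes the L2 norm."""
    return min(candidates, key=nose_tip_l2_norm)
```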

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images from GavabDB: cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus: (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; UMB-DB: (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and FRGC v2.0: (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using GavabDB database.

| Proposed methodology | FF (U / W) | Rotated looking up (U / W) | Rotated looking down (U / W) | LPF (U / W) | RPF (U / W) | Verification rates |
|---|---|---|---|---|---|---|
| d-MVWF | 96.7 / 100 | 96.7 / 100 | 95.1 / 98.4 | - | - | 100 |
| d-MVLHF | 95.1 / 98.4 | 93.4 / 96.7 | 93.4 / 96.7 | 91.8 / 95.1 | - | 96.7 |
| d-MVRHF | 93.4 / 96.7 | 95.1 / 98.4 | 91.8 / 95.1 | - | 80.3 / 83.6 | 98.4 |
| d-MVAHF | 96.7 / 100 | 96.7 / 100 | 95.1 / 98.4 | - | - | 100 |

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using Bosphorus and UMB-DB databases.

| Proposed methodology | Bosphorus FF (U / W) | Bosphorus YR¹ < 90° (525 images) (U / W) | Bosphorus YR = 90° (210 images) (U / W) | Bosphorus overall (1365 images) (U / W) | UMB-DB frontal face (U / W) |
|---|---|---|---|---|---|
| d-MVWF | 97.1 / 100 | 92.2 / 95.4 | - | 93.1 / 96 | 96.5 / 99.3 |
| d-MVLHF | 95.2 / 98.1 | 91.4 / 94.5 | 84.3 / 87.1 | 91.8 / 94.9 | 93.7 / 97.2 |
| d-MVRHF | 96.2 / 99 | 91 / 94.1 | - | 91.3 / 94.4 | 94.4 / 97.9 |
| d-MVAHF | 97.1 / 100 | 92.2 / 95.4 | - | 93.1 / 96 | 96.5 / 99.3 |

¹ YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, following the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image, along with "frontal1", in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation for N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set comprises one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. For subjects that have more than two neutral images, the first two of the stored neutral images are included in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, respectively (curves: d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF, each weighted and unweighted).

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(∑_{j=1}^{n} y_{j−1} x_j² y_j z_j²). Here n represents the number of convolutional layers, y_{j−1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above, along with the complexity of the SVM classifier, which is of the order of O(log(n)). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, the SVM only takes into account the global matching scores, resulting in lower computation time.
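The convolutional-layer complexity sum of item (2) can be evaluated numerically as a sketch. The helper below is illustrative, and the layer tuples use the standard AlexNet convolutional configuration as an assumed example, not figures from the paper.

```python
def alexnet_conv_complexity(layers):
    """Total multiply count over convolutional layers, following
    sum_j y_{j-1} * x_j^2 * y_j * z_j^2, where each tuple holds
    (input channels y_prev, filter size x, filter count y, output map size z)."""
    return sum(y_prev * x * x * y * z * z for (y_prev, x, y, z) in layers)

# Assumed standard AlexNet conv stack: (in_channels, filter, filters, out_size).
layers = [(3, 11, 96, 55), (96, 5, 256, 27), (256, 3, 384, 13),
          (384, 3, 384, 13), (384, 3, 256, 13)]
total = alexnet_conv_complexity(layers)
```

This makes concrete why feature extraction dominates: the first layer alone already contributes over a hundred million multiplies per image.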

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in that study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

| Preprocessing | MVAHF synthesis | Feature extraction | Classification (face recognition) | Classification (face verification) | Total (face recognition) | Total (face verification) |
|---|---|---|---|---|---|---|
| 0.451 | 0.089 | 1.024 | 0.029 | 0.021 | 1.593 | 1.585 |

Table 5: Recognition accuracies comparison (%) for the proposed and existing approaches using GavabDB, Bosphorus, and UMB-DB databases.

| Algorithm | GavabDB FF | GavabDB rotated looking up | GavabDB rotated looking down | GavabDB LPF | GavabDB RPF | GavabDB verification | Bosphorus FF | Bosphorus YR¹ < 90° | Bosphorus YR = 90° | Bosphorus overall | UMB-DB FF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27] |
| Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39] |
| Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | - |
| Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2 |
| Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 | 99 | 94.1 | - | 94.4 | 97.9 |
| Proposed d-MVWF / d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 | 100 | 95.4 | - | 96 | 99.3 |

¹ YR is yaw rotation (along the y-axis in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies comparison (%) for the proposed and existing approaches using FRGC v2.0 database.

| | [17] | [41] | [42] | [43] | [47] | [62] | [63] | Proposed d-MVLHF | Proposed d-MVRHF | Proposed d-MVWF / d-MVAHF |
|---|---|---|---|---|---|---|---|---|---|---|
| Face identification | 98.7 | 96.1 | 93.8 | 98 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8 |
| Face verification | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6 |

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3+11+21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of only the nose tip, at the cost of 35 rotations, is computationally very expensive. For example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) Comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments on the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to the better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with the increasing resolution of PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features, like shapes and objects, based on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face, (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment, and (iii) a transformation step to align the whole face image incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images, (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment, (iii) it is computationally very efficient due to alignment of the nose tip first, (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition, (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies, (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images, (vii) the weight assignment strategy significantly enhanced the recognition rates, (viii) deeply learned facial features possess more discriminative power compared to handcrafted features, (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies, (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition, and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/#face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.


[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the FGR 2006: 7th International Conference on Automatic Face and Gesture Recognition, pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


Figure 4: Illustration of the CNN for the FF, LPF, and RPF classification task (dropout layers feeding the network output).

Figure 5: Incorrectly detected nose tips without employing the proposed nose tip detection technique (y-axis: number of subjects with incorrectly detected nose tip; x-axis: facial regions (forehead, lips, chin, LPF, RPF); series: GavabDB, Bosphorus, UMB-DB, FRGC v2.0).

tip was incorrectly detected at the forehead, lips, chin, LPF, or RPF, as detailed in Figure 5.

3.1.2. Face Alignment Algorithm. It was mentioned at the start of this section that the PCF alignment algorithm aligns the PFI in the xz, yz, and xy planes separately. The alignment in the xz and yz planes employs L2 norm minimization calculated between the nose tip and the 3D scanner. The alignment in the xy plane employs a different strategy, based on L2 norm minimization calculated between the LHF image and the flipped RHF image.

In order to explain the PCF alignment algorithm in the xz and yz planes, the PFI is shown in Figure 6 with three nose tip positions, 1, 2, and 3, in both planes separately. Intuitively, it can be observed in Figure 6 that the face image is aligned when the nose tip is set in line with the optic axis of the 3D scanner at position 1. Conversely, when the nose tip is not in line with the optic axis of the 3D scanner, at position 2 or 3, the face image is not aligned. It can also be observed in Figure 6 that the L2 norm at nose tip position 1 is a perpendicular from the nose tip to the 3D scanner, which is not the case at nose tip positions 2 and 3. The perpendicular distance from a point to a line is always the shortest; this leads to the conclusion that when the PFI is aligned at position 1, the L2 norm attains its minimum and is shorter than the corresponding L2 norms at positions 2 and 3. Therefore, alignment of the PFI causes an essential reduction in the L2 norm computed between the nose tip and the 3D scanner. The L2 norm

between nose tip position 1, $N(m_1, n_1)$, and the 3D scanner point $S(m_0, n_0)$ is calculated as given in equation (1):

$d_2 = \sqrt{(m_1 - m_0)^2 + (n_1 - n_0)^2}$ (1)
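As a minimal sketch, equation (1) is the ordinary Euclidean distance; the helper below (function and variable names are illustrative, not from the paper) computes it for a nose tip N(m1, n1) and scanner point S(m0, n0):

```python
import math

def l2_norm(nose_tip, scanner_point):
    """Equation (1): Euclidean (L2) distance between the nose tip
    N(m1, n1) and the 3D scanner point S(m0, n0)."""
    m1, n1 = nose_tip
    m0, n0 = scanner_point
    return math.sqrt((m1 - m0) ** 2 + (n1 - n0) ** 2)

# The distance is shortest when the nose tip lies on the perpendicular
# from the scanner (aligned, position 1) and grows at positions 2 and 3.
print(l2_norm((0.0, 3.0), (0.0, 0.0)))  # aligned: 3.0
print(l2_norm((1.0, 3.0), (0.0, 0.0)))  # rotated: larger than 3.0
```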

3.1.3. Alignment in xz Plane

(1) Pose Learning. First of all, the capture pose of the probe face image is learned to determine whether to rotate it clockwise or anticlockwise to align it at minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated clockwise by -1∘ and the corresponding L2 norm is measured between the nose tip and the 3D scanner. For example, a nose tip oriented at -1∘ or 30∘ is rotated clockwise to -2∘ or 29∘, respectively, to measure the L2 norm. It is notable that a negative angle of rotation (e.g., -2∘) turns a probe face image (Figure 7(a)) clockwise in the xz and yz planes and anticlockwise in the xy plane, as shown in Figures 7(b)-7(d).

As a result of the clockwise rotation, if the L2 norm decreases (Figure 8(a)), the probe face image is classified as a left oriented face image (LOFI) (Figure 8(c)). Similarly, if the L2 norm increases (Figure 8(b)), the probe face image is classified as a right oriented face image (ROFI), as shown in Figure 8(d). Please note that if the nose tip is rotated by 1∘ instead of -1∘, a decrease in L2 norm classifies the probe face image as a ROFI, whereas an increase in L2 norm classifies it as a LOFI. In this study, we set this parameter to -1∘.
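The pose-learning rule above can be sketched as follows; `l2_at` is a hypothetical callable (an assumption for illustration, not the paper's code) that returns the L2 norm of the nose tip at a given orientation:

```python
def learn_pose(l2_at, pose_angle, probe_step=-1.0):
    """Rotate only the nose tip by the probe step (-1 degree) and compare
    L2 norms: a decrease classifies the probe face image as left oriented
    (LOFI), an increase as right oriented (ROFI)."""
    if l2_at(pose_angle + probe_step) < l2_at(pose_angle):
        return "LOFI"
    return "ROFI"

# Toy geometry in which the L2 norm grows with the magnitude of the
# orientation angle, so 0 degrees is the aligned position.
l2 = lambda angle: 100.0 + abs(angle)
print(learn_pose(l2, 30.0))   # LOFI
print(learn_pose(l2, -30.0))  # ROFI
```

With `probe_step=1.0` the two labels swap, matching the remark in the text about rotating by 1∘ instead of -1∘.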


Figure 6: PCF alignment algorithm, showing an aligned image at minimum L2 norm in the (a) xz and (b) yz planes (nose tip positions 1, 2, and 3 shown relative to the scanner optic axis).

Figure 7: (a) 3D scan of subject FRGC v2.0 04233d396, rotated at -2∘ in the (b) xz, (c) yz, and (d) xy planes.

Figure 8: (a, b) Pose learning in xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0 04221d553; (e, f) subject GavabDB cara1 izquierda/derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases, the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LOFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; therefore, they are aligned at -1∘, -2∘, -3∘, -4∘, -5∘ (or +5∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) and (-30∘ + 30∘ = 0∘) or between (-1∘ + 0∘ = -1∘) and (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of ROFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘ (or -5∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LPF captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘ (or -5∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of RPF captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘ (or +5∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

Please note that for a ROFI captured at -25∘, a LOFI captured at 25∘, an LPF captured at -85∘, or a RPF captured at 85∘, the nose tip can get aligned at 5∘ or -5∘ because the minimum L2 norm is equal at both orientations; however, we have aligned the nose tip at 5∘ in this study. The face images captured at ±75∘, ±65∘, ..., ±5∘ are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, or RPF is rotated in the range of -5∘ to 5∘ with a step size of 1∘. This means that a nose tip aligned at -5∘ is rotated between ((-5∘) + (-5∘) = -10∘) and ((-5∘) + (5∘) = 0∘) to catch the 0∘ position. On the other hand, a nose tip aligned at 5∘ is rotated between ((5∘) + (-5∘) = 0∘) and ((5∘) + (5∘) = 10∘) to catch the 0∘ position. After aligning the nose tip at 0∘, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 13∘, then the whole face image is rotated by 13∘ and is finally aligned in the xz plane.
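The coarse-then-fine procedure in steps (2) and (3) amounts to a nested minimum-L2 sweep over candidate rotation offsets. A sketch under the assumption that `l2_at` is a callable returning the L2 norm after rotating the nose tip by a candidate offset (all names are illustrative):

```python
import numpy as np

def best_offset(l2_at, offsets):
    # Keep the candidate rotation offset with the minimum L2 norm.
    return min(offsets, key=l2_at)

def align_nose_tip(l2_at, coarse_stop=-30.0, coarse_step=-10.0):
    """Coarse sweep (e.g., 0 to -30 degrees in steps of -10 for a LOFI),
    then fine sweeps of -5..5 degrees (step 1) and -1..1 degrees
    (step 0.1) around the running best offset."""
    coarse = np.arange(0.0, coarse_stop + coarse_step / 2, coarse_step)
    best = best_offset(l2_at, coarse)
    best = best_offset(l2_at, best + np.arange(-5.0, 5.5, 1.0))
    best = best_offset(l2_at, best + np.arange(-1.0, 1.05, 0.1))
    return float(best)

# Toy case: a LOFI captured at 24 degrees; the search finds the -24
# degree rotation that brings the nose tip to the 0 degree position.
l2 = lambda offset: 100.0 + abs(24.0 + offset)
print(align_nose_tip(l2))  # approximately -24.0
```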

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image aligned in the xz plane is learned first, to align it at a minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) by -1∘ and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that if the nose tip is rotated by 1∘ instead of -1∘, a decrease in L2 norm classifies a probe face image as a LUFI, whereas an increase in L2 norm classifies it as a LDFI. In this study, we set this parameter to -1∘.

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0∘ to +30∘ downwards (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30∘, the nose tip is rotated between -30∘ and 0∘. If a LUFI is captured at an orientation of -1∘, the nose tip is rotated between -1∘ and 29∘. In both cases, the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LUFI captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ do not pass through the 0∘ position. They are aligned at 1∘, 2∘, 3∘, 4∘, 5∘ (or -5∘), -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0∘ to -30∘ upwards (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30∘ or 1∘, the nose tip is rotated between 30∘ and 0∘ or between 1∘ and -29∘, respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of LDFI captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘ (or +5∘), +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level. It is worth mentioning that the face images captured at ±25∘, ±15∘, and ±5∘ are handled


Table 1: Acquisition pose of the face and the respective alignment positions, given in bold in the original (all values in degrees).

LPF, LOFI/LDFI                      RPF, ROFI/LUFI
90 80 70 60 50 40 30 20 10  0       -90 -80 -70 -60 -50 -40 -30 -20 -10  0
89 79 69 59 49 39 29 19  9 -1       -89 -79 -69 -59 -49 -39 -29 -19  -9  1
88 78 68 58 48 38 28 18  8 -2       -88 -78 -68 -58 -48 -38 -28 -18  -8  2
87 77 67 57 47 37 27 17  7 -3       -87 -77 -67 -57 -47 -37 -27 -17  -7  3
86 76 66 56 46 36 26 16  6 -4       -86 -76 -66 -56 -46 -36 -26 -16  -6  4
85 75 65 55 45 35 25 15  5 -5       -85 -75 -65 -55 -45 -35 -25 -15  -5  5
84 74 64 54 44 34 24 14  4 -6       -84 -74 -64 -54 -44 -34 -24 -14  -4  6
83 73 63 53 43 33 23 13  3 -7       -83 -73 -63 -53 -43 -33 -23 -13  -3  7
82 72 62 52 42 32 22 12  2 -8       -82 -72 -62 -52 -42 -32 -22 -12  -2  8
81 71 61 51 41 31 21 11  1 -9       -81 -71 -61 -51 -41 -31 -21 -11  -1  9

Figure 9: (a, b) Pose learning in yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda/derecha.

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5∘ to 5∘ with a step size of 1∘ to catch the 0∘ position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5∘ to +5∘ with a step size of 1∘ around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position $P_{ij}$. In order to rule out the outliers due to z-axis noise, only pixel values less than a threshold $T$ are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

$P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases}$ (2)

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1∘ to +1∘ with a step size of 0.1∘, using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
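The xy-plane comparison can be sketched as follows, with the outlier rule of equation (2) applied to both half images; the array shapes and the `threshold` parameter are illustrative assumptions:

```python
import numpy as np

def xy_alignment_score(lhf, rhf, threshold):
    """L2 norm between the LHF depth image and the flipped RHF depth
    image on the same grid; pixels above the threshold T are zeroed to
    rule out z-axis outliers (equation (2))."""
    flipped = np.fliplr(rhf)
    a = np.where(lhf > threshold, 0.0, lhf)
    b = np.where(flipped > threshold, 0.0, flipped)
    return float(np.linalg.norm(a - b))

# A perfectly symmetric face gives a zero score, i.e., a perfect match.
lhf = np.array([[1.0, 2.0], [3.0, 4.0]])
rhf = np.fliplr(lhf)
print(xy_alignment_score(lhf, rhf, threshold=10.0))  # 0.0
```

The face is then coarsely aligned at the candidate rotation whose score is minimal.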

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0∘, ±10∘, ±20∘, and ±30∘ to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘ around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can also be shifted along MVLHF images equally). Subsequently, facial depth values on the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. The motivation behind using MVAHF images instead of MVWF images is as follows: (i) facial feature information carried by a half face image is similar to that of the flipped other half face image, due to the intrinsic facial symmetry of the LHF and RHF; (ii) the RHF region is gradually occluded by rotating a whole face image at -10∘, -20∘, and -30∘, and similarly the LHF region is occluded by rotating it at 10∘, 20∘, and 30∘; the occluded face regions contribute poorly to the face recognition process, while, on the other hand, the computational complexity of the system is two-fold; (iii) the multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images; and (iv) the synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions that are less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
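The averaging step can be sketched as below, treating zero depth as "undefined"; this convention and the array shapes are assumptions for illustration:

```python
import numpy as np

def synthesize_ahf(lhf, rhf):
    """Flip the RHF depth image onto the LHF grid, average depth values
    where both halves overlap, and keep the complementary nonoverlapping
    regions as they are."""
    flipped = np.fliplr(rhf)
    overlap = (lhf > 0) & (flipped > 0)
    return np.where(overlap, (lhf + flipped) / 2.0, lhf + flipped)

lhf = np.array([[2.0, 4.0, 0.0]])  # rightmost grid cell undefined
rhf = np.array([[0.0, 8.0, 6.0]])  # flips to [[6.0, 8.0, 0.0]]
out = synthesize_ahf(lhf, rhf)
print(out)  # averaged where both are defined: values 4, 6, 0
```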

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size $h \times w$ is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, and C5, followed by three pooling layers, denoted by P1, P2, and P3, and three fully connected layers, indicated by f6, f7, and f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set $A^k = \{a_1^k, a_2^k, a_3^k, \ldots, a_n^k\}$ of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, in the respective order. The matrix S has a negative polarity, reflecting that lower matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices $S_j$, one for each of the normalized d-MVAHF feature vectors corresponding to AHF images oriented at 0∘, 10∘, 20∘, and 30∘.

(3) Each of the matching-score matrices S_j was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are max(S_j^row) and min(S_j^row), respectively, then the normalized scores are computed as given in equation (3):

S_j^row = (S_j^row − min(S_j^row)) / (max(S_j^row) − min(S_j^row))    (3)

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score-based fusion to produce a combined matching-score matrix S^row, as given in equation (4):

S^row = Σ_{j=1}^{4} w_j S_j^row    (4)

12 Mathematical Problems in Engineering


Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images (0°, 10°, 20°, 30°); (b) LHF images (0°, −10°, −20°, −30°).

where w_j represents the weight assigned to the jth MVAHF image, computed from the recognition accuracies obtained with the MVAHF images as given in equation (5):

w_j = r_j / Σ_{j=1}^{4} r_j    (5)

where r_j represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can also be used in the test phase: a given PFI is first converted into MVAHF images oriented at 0°, 10°, 20°, and 30°. Each of the mentioned MVAHF images is then classified against the gallery, leading to four recognition accuracies, which are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0° is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix S^row was again normalized as S'^row using the min-max rule given in equation (3).

(5) The normalized matching scores obtained from S'^row were utilized in the softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.
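Steps (2)-(4) above, i.e., row-wise min-max normalization of the four matching-score matrices followed by accuracy-weighted score fusion, can be sketched as follows. This is a hedged illustration: the matrix sizes and the per-view accuracies r_j are made up for the example.

```python
import numpy as np

# Hedged sketch of steps (2)-(4): min-max normalize each row of the four
# matching-score matrices, then fuse them with accuracy-derived weights.

def minmax_rows(S):
    """Equation (3): map each row of S onto [0, 1]."""
    lo = S.min(axis=1, keepdims=True)
    hi = S.max(axis=1, keepdims=True)
    return (S - lo) / (hi - lo)

rng = np.random.default_rng(1)
m, n = 5, 8                                 # probe set size x gallery size
S = [rng.random((m, n)) for _ in range(4)]  # scores for 0, 10, 20, 30 deg

r = np.array([0.97, 0.95, 0.93, 0.90])      # illustrative view accuracies
w = r / r.sum()                             # equation (5)

S_fused = sum(wj * minmax_rows(Sj) for wj, Sj in zip(w, S))  # equation (4)
S_fused = minmax_rows(S_fused)              # renormalize, as in step (4)

# Negative polarity: a smaller score is a better match, so argmin per probe.
best_match = S_fused.argmin(axis=1)
print(best_match.shape)  # (5,)
```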

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, SVM aims to employ a hyperplane w·x + b = 0 having maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes (x_1, y_1), ..., (x_k, y_k), where x_i ∈ R^n and y_i ∈ {1, −1}, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, with constraints y_i[(w·x_i) + b] ≥ 1 − ξ_i, ξ_i ≥ 0 for i = 1, ..., k:

Φ(w, ξ) = (1/2)‖w‖² + C Σ_{i=1}^{k} ξ_i    (6)

where ξ_i are slack variables used to penalize errors if the data are not linearly separable and C is the regularization constant. Now the sign of the following OSH surface function can be used to classify a test point:

f(x) = Σ_{i=1}^{k} y_i a_i K(x, x_i) + b    (7)

where a_i ≥ 0 are the Lagrangian multipliers of the corresponding support vectors and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel trick used to transform nonseparable data into a higher dimensional space where it becomes linearly separable by a hyperplane, x_i is the ith training sample, and x is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; it is of the form given in equation (8), where σ² is the spread of the RBF:

K(x, x_i) = exp[−‖x − x_i‖² / (2σ²)]    (8)
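Equations (7) and (8) can be illustrated with a toy decision function. This is hedged: the support vectors, multipliers a_i, and bias b below are invented for the example, not trained on face data.

```python
import numpy as np

# Hedged sketch of equations (7)-(8): the RBF kernel and the OSH decision
# function evaluated over a toy set of support vectors.

def rbf_kernel(x, xi, sigma=1.0):
    """Equation (8)."""
    return np.exp(-np.linalg.norm(x - xi) ** 2 / (2.0 * sigma ** 2))

def decision(x, svs, y, a, b, sigma=1.0):
    """Equation (7): sum over support vectors, then add the bias."""
    return sum(yi * ai * rbf_kernel(x, xi, sigma)
               for xi, yi, ai in zip(svs, y, a)) + b

svs = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]  # toy support vectors
y = [+1, -1]      # class labels of the support vectors
a = [0.5, 0.5]    # Lagrange multipliers (a_i >= 0), invented here
b = 0.0

x_test = np.array([0.1, 0.0])
label = 1 if decision(x_test, svs, y, a, b) >= 0 else -1
print(label)  # closer to the +1 support vector -> 1
```

In practice the multipliers and bias come from solving the optimization of equation (6); this sketch only shows how the trained quantities enter the decision function.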

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

d_MahCos(s, t) = −(m · n) / (|m||n|)
               = −Σ_{i=1}^{N} (m_i n_i) / (√(Σ_{i=1}^{N} m_i²) √(Σ_{i=1}^{N} n_i²))
               = −Σ_{i=1}^{N} ((s_i/σ_i)(t_i/σ_i)) / (√(Σ_{i=1}^{N} (s_i/σ_i)²) √(Σ_{i=1}^{N} (t_i/σ_i)²))    (9)

where m_i = s_i/σ_i, n_i = t_i/σ_i, and σ_i is the standard deviation of the ith dimension. In this case, higher similarity yields a higher score. Thus, the actual MahCos score is computed as given in equation (10):

D_MahCos(s, t) = 1 − d_MahCos(s, t)    (10)
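Equations (9) and (10) translate directly into code. This is a hedged sketch: the gallery used here to estimate the per-dimension standard deviations σ_i is synthetic.

```python
import numpy as np

# Hedged sketch of equations (9)-(10): the MahCos score is the cosine score
# in Mahalanobis space, where each dimension is divided by its standard
# deviation sigma_i (estimated here from a toy gallery of feature vectors).

def mahcos(s, t, sigma):
    m, n = s / sigma, t / sigma
    d = -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))  # eq (9)
    return 1.0 - d                                               # eq (10)

rng = np.random.default_rng(2)
gallery = rng.random((20, 8))          # 20 toy feature vectors, dim 8
sigma = gallery.std(axis=0) + 1e-12    # per-dimension sigma_i

s = gallery[0]
print(round(mahcos(s, s, sigma), 6))   # identical vectors -> 2.0
```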

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second neutral image of the whole gallery G. The scores were computed by using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vector of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, with 4 × 1 dimensional one genuine and G − 1 imposter probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
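The construction of the 4 × 1 genuine and imposter training vectors from the per-orientation score matrices can be sketched as follows. This is hedged: the scores are random stand-ins, but the counts match the G = 20 example above.

```python
import numpy as np

# Hedged sketch: assemble the 4x1 genuine and imposter training score
# vectors from four per-orientation G x G training score matrices, as
# described for the d-MVAHF-SVM verification setup (toy scores).

rng = np.random.default_rng(3)
G = 20
T = rng.random((4, G, G))   # rows 1-4: orientations (0,0) ... (30,30)

genuine = [T[:, i, i] for i in range(G)]                       # t_ii
imposter = [T[:, i, j] for i in range(G) for j in range(G) if i != j]

print(len(genuine), len(imposter))  # 20 380  (G and G^2 - G for G = 20)
```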

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section along with a description of the experiments and results.
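The two reported metrics can be sketched as follows. This is a hedged illustration on synthetic scores; the thresholding assumes the negative-polarity convention described in Section 3.2.1, i.e., a lower score means a better match.

```python
import numpy as np

# Hedged sketch of the two reported metrics: rank-1 identification rate
# from a (probe x gallery) distance matrix with known ground truth, and
# verification rate at a fixed false accept rate from genuine/imposter
# score lists. All inputs are synthetic.

def rank1_rate(dist, true_ids):
    """Fraction of probes whose closest gallery entry is the true match."""
    return float(np.mean(dist.argmin(axis=1) == true_ids))

def vr_at_far(genuine, imposter, far=0.001):
    """Verification rate at the threshold accepting `far` of imposters
    (lower score = more similar)."""
    thr = np.quantile(imposter, far)
    return float(np.mean(np.asarray(genuine) <= thr))

rng = np.random.default_rng(4)
dist = rng.random((50, 50)) + np.eye(50) * -1.0  # force correct matches
gen = rng.random(100) * 0.3                      # genuine: smaller scores
imp = 0.3 + rng.random(1000) * 0.7               # imposter: larger scores
vr = vr_at_far(gen, imp, far=0.001)
print(rank1_rate(dist, np.arange(50)), 0.0 <= vr <= 1.0)  # 1.0 True
```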

4.1. 3D Face Databases

GavabDB Database. The GavabDB [36] database contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai_frontal1" and "carai_frontal2", are captured under frontal view. Another two are taken where the subject is looking up or down at angles +35° or −35°, named "carai_arriba" and "carai_abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90° or −90°, named "carai_derecha" and "carai_izquierda", respectively. The three nonneutral images, "carai_gesto", "carai_risa", and "carai_sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expressions, occlusions, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing the valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].
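The four-matrix layout of an FRGC-style scan described above can be illustrated as follows. This is hedged: the data here are synthetic stand-ins; real scans come from the database.

```python
import numpy as np

# Hedged sketch: applying the binary validity matrix to the x, y, z
# coordinate matrices of an FRGC-style 480 x 640 range scan to obtain
# the valid 3D point cloud (synthetic data).

h, w = 480, 640
rng = np.random.default_rng(5)
x, y, z = (rng.standard_normal((h, w)) for _ in range(3))
valid = rng.random((h, w)) > 0.3   # binary map of valid points

# Keep only valid pixels and stack them into an (n_valid, 3) point cloud.
points = np.stack([x[valid], y[valid], z[valid]], axis=1)
print(points.shape[1])  # 3
```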

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing evaluation criterion to assess the alignment accuracy of face images. One method that can be employed is human judgment, but the human judgment method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this



Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (unaligned GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 versus aligned).


Figure 12: Example 3D face images: original (rows 1, 3); aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar and that the mentioned method is a promising automatic criterion to check alignment accuracy.
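The L2 norm minimization idea behind this criterion can be sketched on a toy geometry. This is hedged: the head-centroid placement, the offsets, and the 1° sweep are invented for the example; the paper's fine alignment uses a 0.1° step.

```python
import numpy as np

# Hedged sketch of the L2-norm minimization criterion: the face (reduced
# here to its nose tip, offset from the head centroid) is swept through
# candidate yaw corrections, and the pose with the minimum nose-tip-to-
# scanner L2 norm is kept. Geometry and step size are illustrative.

def roty(deg):
    """Rotation matrix about the y-axis."""
    t = np.deg2rad(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

centroid = np.array([0.0, 0.0, -300.0])             # head center; scanner at origin
offset = roty(20.0) @ np.array([0.0, 0.0, 100.0])   # nose offset, yawed 20 deg

angles = np.arange(-30.0, 30.5, 1.0)
norms = [np.linalg.norm(centroid + roty(a) @ offset) for a in angles]
best = angles[int(np.argmin(norms))]
print(best)  # -20.0, undoing the acquisition yaw
```

The sweep recovers the correction angle because the nose tip is closest to the scanner exactly when the face is frontal; the actual algorithm refines this search coarse-to-fine rather than scanning a fixed grid.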

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1_gesto to cara2_abajo, Bosphorus bs000_E_DISGUST_0 to bs000_E_SURPRISE_0, UMB-DB 000006_0190_F_BO_F to 000012_0024_M_AN_F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images from GavabDB cara1: (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus: (j) bs017_E_DISGUST_0, (k) bs001_E_ANGER_0, (l) bs000_YR_R20_0; UMB-DB: (m) 001409_0002_M_NE_F, (n) 001433_0010_M_BO_F, (o) 001355_0001_M_AN_F; and FRGC v2.0: (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments using the four databases are given as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using GavabDB database

Proposed methodology | FF (U/W)  | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W) | RPF (U/W) | Verification rate
d-MVWF               | 96.7/100  | 96.7/100                 | 95.1/98.4                  | -         | -         | 100
d-MVLHF              | 95.1/98.4 | 93.4/96.7                | 93.4/96.7                  | 91.8/95.1 | -         | 96.7
d-MVRHF              | 93.4/96.7 | 95.1/98.4                | 91.8/95.1                  | -         | 80.3/83.6 | 98.4
d-MVAHF              | 96.7/100  | 96.7/100                 | 95.1/98.4                  | -         | -         | 100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using Bosphorus and UMB-DB databases

Proposed methodology | Bosphorus FF (U/W) | YR¹ < 90°, 525 images (U/W) | YR = 90°, 210 images (U/W) | Overall, 1365 images (U/W) | UMB-DB frontal face (U/W)
d-MVWF               | 97.1/100           | 92.2/95.4                   | -                          | 93.1/96.0                  | 96.5/99.3
d-MVLHF              | 95.2/98.1          | 91.4/94.5                   | 84.3/87.1                  | 91.8/94.9                  | 93.7/97.2
d-MVRHF              | 96.2/99.0          | 91.0/94.1                   | -                          | 91.3/94.4                  | 94.4/97.9
d-MVAHF              | 97.1/100           | 92.2/95.4                   | -                          | 93.1/96.0                  | 96.5/99.3
¹ YR is yaw rotation (along y-axis in xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, in keeping with the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves



Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, respectively.

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows:

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(Σ_{j=1}^{n} y_{j−1} x_j² y_j z_j²). Here n represents the number of convolutional layers, y_{j−1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log(n)). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, the SVM only takes into account the global matching scores, resulting in lower computation time.
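The convolutional cost term from item (2) can be evaluated numerically. This is hedged: the layer shapes below are the commonly cited AlexNet values, used here only to show how the sum O(Σ y_{j−1} x_j² y_j z_j²) is formed, not figures taken from the paper.

```python
# Hedged sketch: evaluating the convolutional-cost term
# O(sum_j y_{j-1} x_j^2 y_j z_j^2) with approximate AlexNet layer shapes.

# (input channels y_{j-1}, filter size x_j, filters y_j, output size z_j)
conv_layers = [
    (3, 11, 96, 55),
    (96, 5, 256, 27),
    (256, 3, 384, 13),
    (384, 3, 384, 13),
    (384, 3, 256, 13),
]

# Sum the per-layer cost y_{j-1} * x_j^2 * y_j * z_j^2 over all layers.
total = sum(y_in * x * x * y_out * z * z for y_in, x, y_out, z in conv_layers)
print(f"{total:.2e} multiply-accumulates")
```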

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with the existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds

Preprocessing | MVAHF synthesis | Feature extraction | Classification (recognition / verification) | Total (recognition / verification)
0.451         | 0.089           | 1.024              | 0.029 / 0.021                               | 1.593 / 1.585

Table 5: Recognition accuracies (%) comparison for the proposed and existing approaches using GavabDB, Bosphorus, and UMB-DB databases

Column order: GavabDB rank-1 identification rates (FF | Rotated looking up | Rotated looking down | LPF | RPF), GavabDB verification rates, Bosphorus rank-1 identification rates (FF | YR¹ < 90° | YR = 90° | Overall), UMB-DB rank-1 identification rate (FF).

Existing:
100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27]
100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39]
100 [47] | 100 [47]  | 98.4 [47] | 93.4 [28] | 78.7 [28] | -         | -        | 94.8 [63] | 57.1 [47] | 92.8 [47] | -

Proposed:
d-MVLHF          | 98.4 | 96.7 | 96.7 | 95.1² | -     | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2
d-MVRHF          | 96.7 | 98.4 | 95.1 | -     | 83.6² | 98.4 | 99.0 | 94.1 | -     | 94.4 | 97.9
d-MVWF / d-MVAHF | 100  | 100  | 98.4 | 95.1  | 83.6  | 100  | 100  | 95.4 | -     | 96.0 | 99.3

¹ YR is yaw rotation (along y-axis in xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problems of 3D face alignment and face recognition, with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies (%) comparison for the proposed and existing approaches using FRGC v2.0 database

                    | Existing algorithms                           | Proposed algorithm
                    | [17]  [41]  [42]  [43]  [47]  [62]  [63]      | d-MVLHF  d-MVRHF  d-MVWF / d-MVAHF
Face identification | 98.7  96.1  93.8  98    99.6  98.7  99.8      | 97.9     96.8     99.8
Face verification   | 99.5  97.7  95.4  98.3  -     -     -         | 97.6     96.4     99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database.
The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) A comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with the increasing resolution of PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features, like shapes and objects, based on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal as well as profile face images, (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment, (iii) it is computationally very efficient due to the alignment of the nose tip first, (iv) LHF and RHF based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition, (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies, (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images, (vii) the weight assignment strategy significantly enhanced the recognition rates, (viii) deeply learned facial features possess more discriminative power compared to handcrafted features, (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies, (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition, and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W, (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx, (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html, and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/#face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

20 Mathematical Problems in Engineering

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A multi-modal approach for face modeling and recognition, PhD dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.



Figure 6: PCF alignment algorithm showing an aligned image at minimum L2 norm in the (a) xz and (b) yz planes.

Figure 7: (a) 3D scan of subject FRGC v2.0 04233d396 rotated in the (b) xz, (c) yz, and (d) xy planes at -2∘.

Figure 8: (a, b) Pose learning in the xz plane; (c) LOFI; (d) ROFI; (e) LPF; (f) RPF. (a, b, c, d) Subject FRGC v2.0 04221d553; (e, f) subject GavabDB cara1 izquierda derecha.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LOFIs captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; therefore, they are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1) and are aligned in step 3 at the fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) to (-30∘ + 30∘ = 0∘) and (-1∘ + 0∘ = -1∘) to (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of ROFIs captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1) and are aligned in step 3 at the fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LPFs captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1) and are aligned in step 3 at the fine level.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of RPFs captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1) and are aligned in step 3 at the fine level.

Please note that for a ROFI captured at -25∘, a LOFI captured at 25∘, an LPF captured at -85∘, or a RPF captured at 85∘, the nose tip can get aligned at 5∘ or -5∘ because the minimum L2 norm is equal at both orientations; however, we have aligned the nose tip at 5∘ in this study. The face images captured at ±75∘, ±65∘, . . ., ±5∘ are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, or RPF is rotated in the range of -5∘ to 5∘ with a step size of 1∘. This means that a nose tip aligned at -5∘ is rotated between ((-5∘) + (-5∘) = -10∘) and ((-5∘) + (5∘) = 0∘) to catch the 0∘ position. On the other hand, a nose tip aligned at 5∘ is rotated between ((5∘) + (-5∘) = 0∘) and ((5∘) + (5∘) = 10∘) to catch the 0∘ position. After aligning the nose tip at 0∘, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at the minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 1.3∘, then the whole face image is rotated by 1.3∘ and is finally aligned in the xz plane.
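The coarse-to-fine search described in steps (2) and (3) can be sketched generically as follows; `l2_norm_at` stands in for the paper's nose tip L2 norm measure, and the narrowing of the search window around the current best angle is an assumption about how the 10∘, 1∘, and 0.1∘ stages chain together.

```python
import numpy as np

def coarse_to_fine_align(l2_norm_at, lo, hi, steps=(10.0, 1.0, 0.1)):
    """Scan [lo, hi] with a coarse step, then repeatedly narrow the
    window around the best angle (minimum L2 norm) with finer steps,
    mirroring the 10-degree / 1-degree / 0.1-degree stages."""
    best = lo
    for step in steps:
        angles = np.arange(lo, hi + 1e-9, step)
        best = float(min(angles, key=l2_norm_at))
        lo, hi = best - step, best + step  # refine around the current best
    return round(best, 1)
```

With a unimodal norm profile, each stage only has to evaluate a handful of candidate rotations, which is why aligning the small nose tip region first keeps the procedure cheap.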

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image aligned in the xz plane is learned first in order to align it at the minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) at -1∘, and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that when the nose tip is rotated at 1∘ instead of -1∘, a decrease in the L2 norm classifies a probe face image as LUFI, whereas an increase classifies it as LDFI. In this study we adjust this parameter at -1∘.

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0∘ to +30∘ downwards (anticlockwise) with a step size of 10∘, and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30∘, the nose tip is rotated between -30∘ and 0∘; if a LUFI is captured at an orientation of -1∘, the nose tip is rotated between -1∘ and 29∘. In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LUFIs captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ do not pass through the 0∘ position. They are aligned at 1∘, 2∘, 3∘, 4∘, 5∘, -5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1) and are aligned in step 3 at the fine level.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0∘ to -30∘ upwards (clockwise) with a step size of -10∘, and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30∘ or 1∘, the nose tip is rotated between 30∘ to 0∘ and 1∘ to -29∘, respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both of the cases. However, the nose tips of LDFIs captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ are aligned at -1∘, -2∘, -3∘, -4∘, -5∘, +5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1) and are aligned in step 3 at the fine level. It is worth mentioning that the face images captured at ±25∘, ±15∘, . . ., ±5∘ are handled using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

Table 1: Acquisition pose of the face and respective alignment positions (all values in degrees). Each row lists the sequence of orientations visited during coarse alignment; the starred entry is the position at which the nose tip is aligned.

LPF / LOFI / LDFI:
90 80 70 60 50 40 30 20 10 0*
89 79 69 59 49 39 29 19 9 -1*
88 78 68 58 48 38 28 18 8 -2*
87 77 67 57 47 37 27 17 7 -3*
86 76 66 56 46 36 26 16 6 -4*
85 75 65 55 45 35 25 15 5* -5
84 74 64 54 44 34 24 14 4* -6
83 73 63 53 43 33 23 13 3* -7
82 72 62 52 42 32 22 12 2* -8
81 71 61 51 41 31 21 11 1* -9

RPF / ROFI / LUFI:
-90 -80 -70 -60 -50 -40 -30 -20 -10 0*
-89 -79 -69 -59 -49 -39 -29 -19 -9 1*
-88 -78 -68 -58 -48 -38 -28 -18 -8 2*
-87 -77 -67 -57 -47 -37 -27 -17 -7 3*
-86 -76 -66 -56 -46 -36 -26 -16 -6 4*
-85 -75 -65 -55 -45 -35 -25 -15 -5 5*
-84 -74 -64 -54 -44 -34 -24 -14 -4* 6
-83 -73 -63 -53 -43 -33 -23 -13 -3* 7
-82 -72 -62 -52 -42 -32 -22 -12 -2* 8
-81 -71 -61 -51 -41 -31 -21 -11 -1* 9

Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda derecha.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5∘ to 5∘ with a step size of 1∘ to catch the 0∘ position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at the fine level, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at the minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5∘ to +5∘ with a step size of 1∘ around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane, and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position P_ij. In order to rule out the outliers due to z-axis noise, only pixel values less than a threshold T are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

\[ P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases} \tag{2} \]

(2) Fine Alignment. The face image is aligned at the fine level by rotating it in the range of -1∘ to +1∘ with a step size of 0.1∘ using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
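The thresholded L2 norm of equation (2) can be sketched as follows, assuming the half face depth maps are stored as NumPy arrays of equal shape; the rotation and shifting search wrapped around this measure is omitted here.

```python
import numpy as np

def half_face_l2(lhf: np.ndarray, rhf: np.ndarray, T: float) -> float:
    """L2 norm between the LHF and the horizontally flipped RHF,
    zeroing out z-axis outliers above threshold T (equation (2))."""
    flipped = np.fliplr(rhf)
    a = np.where(lhf > T, 0.0, lhf)          # suppress outlier depths
    b = np.where(flipped > T, 0.0, flipped)
    return float(np.linalg.norm(a - b))
```

A perfectly symmetric, noise-free face yields a norm of zero, which is why the minimum of this measure over candidate rotations indicates the best in-plane alignment.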

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in the depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0∘, ±10∘, ±20∘, and ±30∘ to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘ around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values on the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) Facial feature information carried by a half face image is similar to that of the flipped other half face image due to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded by rotating a whole face image at -10∘, -20∘, and -30∘; similarly, the LHF region is occluded by rotating the whole face image at 10∘, 20∘, and 30∘. The occluded face regions contribute poorly to the face recognition process, while, on the other hand, the computational complexity of the system is two-fold. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
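The averaging step can be sketched as follows under the assumption that zero-valued pixels mark grid positions where only one half face contributes; the registration and shifting of the halves are omitted for brevity.

```python
import numpy as np

def synthesize_ahf(lhf: np.ndarray, rhf: np.ndarray) -> np.ndarray:
    """Average the LHF depth map with the flipped RHF depth map on
    overlapping grid positions; where only one half contributes
    (marked here by zeros), keep that half's depth values."""
    flipped = np.fliplr(rhf)
    both = (lhf > 0) & (flipped > 0)          # positions seen in both halves
    return np.where(both, (lhf + flipped) / 2.0, np.maximum(lhf, flipped))
```

Averaging the overlap suppresses acquisition noise, while keeping the nonoverlapping regions preserves the complementary information described above.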

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, and C5, followed by three pooling layers, denoted by P1, P2, and P3, and three fully connected layers, indicated by f6, f7, and f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set A^k = {a_1^k, a_2^k, a_3^k, . . ., a_n^k} of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows.

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets in the respective order. The matrix S has a negative polarity, reflecting that lower values of matching scores represent a higher level of similarity between the probe and gallery images and vice versa. This step produced four matching-score matrices S^j, one for each of the normalized d-MVAHF feature vectors corresponding to the AHF images oriented at 0∘, 10∘, 20∘, and 30∘.

(3) Each of the matching-score matrices S^j was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row specific values of the raw matching scores are max(S_row^j) and min(S_row^j), respectively, then the normalized scores are computed as given in equation (3):

\[ S_{row}^{j} = \frac{S_{row}^{j} - \min(S_{row}^{j})}{\max(S_{row}^{j}) - \min(S_{row}^{j})} \tag{3} \]

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score-based fusion to produce a combined matching-score matrix S_row, as given in equation (4):

S_row = Σ_{j=1}^{4} w_j S_j,row  (4)

12 Mathematical Problems in Engineering

Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images at 0°, 10°, 20°, and 30°; (b) LHF images at 0°, −10°, −20°, and −30°.

where w_j represents the weight assigned to the jth MVAHF image, computed using the recognition accuracies obtained from the MVAHF images as given in equation (5):

w_j = r_j / Σ_{j=1}^{4} r_j  (5)

where r_j represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can also be used in the test phase: a given PFI is first converted into MVAHF images oriented at 0°, 10°, 20°, and 30°; each of these MVAHF images is then classified against the gallery, yielding four recognition accuracies that are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0° is the maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix S_row was again normalized as S'_row using the min-max rule given in equation (3).
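Equations (4) and (5) together define an accuracy-weighted score fusion; a minimal sketch (the per-view accuracies passed in are placeholder values, not figures from the paper):

```python
def fusion_weights(accuracies):
    # Equation (5): w_j = r_j / sum_j r_j, so better-performing views
    # receive proportionally larger weights.
    total = float(sum(accuracies))
    return [r / total for r in accuracies]

def fuse_score_matrices(matrices, weights):
    # Equation (4): element-wise weighted sum of the four normalized
    # matching-score matrices S_j.
    rows, cols = len(matrices[0]), len(matrices[0][0])
    return [[sum(w * M[i][j] for w, M in zip(weights, matrices))
             for j in range(cols)]
            for i in range(rows)]
```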

(5) The normalized matching scores obtained from S'_row were utilized in the Softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, an SVM aims to find a hyperplane w·x + b = 0 with maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes (x_1, y_1), ..., (x_i, y_i), where x_i ∈ R^n and y_i ∈ {1, −1}, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, subject to the constraints y_i[(w·x_i) + b] ≥ 1 − ξ_i, ξ_i ≥ 0 for i = 1, ..., k:

Φ(w, ξ) = (1/2)‖w‖² + C Σ_{i=1}^{k} ξ_i  (6)

where ξ_i are slack variables used to penalize errors if the data are not linearly separable and C is the regularization constant. Now the sign of the following OSH surface function can be used to classify a test point:

f(x) = Σ_{i=1}^{k} y_i a_i K(x, x_i) + b  (7)

where a_i ≥ 0 are the Lagrangian multipliers corresponding to the support vectors and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel trick used to transform nonseparable data into a higher dimensional space where it becomes linearly separable by a hyperplane, x_i is the ith training sample, and x is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the RBF kernel is of the form given in equation (8), where σ² is the spread of the RBF:

K(x, x_i) = exp[−‖x − x_i‖² / 2σ²]  (8)
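Equations (7) and (8) can be combined into a tiny decision-function sketch (the support vectors, multipliers a_i, bias b, and σ below are illustrative placeholders, not values from the paper):

```python
import math

def rbf_kernel(x, xi, sigma=1.0):
    # Equation (8): K(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2)).
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def osh_decision(x, support_vectors, labels, alphas, b, sigma=1.0):
    # Equation (7): the sign of f(x) classifies the test point.
    f = sum(a * y * rbf_kernel(x, xi, sigma)
            for a, y, xi in zip(alphas, labels, support_vectors)) + b
    return 1 if f >= 0 else -1
```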

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

d_MahCos(s, t) = −(m · n)/(|m||n|) = −Σ_{i=1}^{N} m_i n_i / (√(Σ_{i=1}^{N} m_i²) √(Σ_{i=1}^{N} n_i²)) = −Σ_{i=1}^{N} (s_i/σ_i)(t_i/σ_i) / (√(Σ_{i=1}^{N} (s_i/σ_i)²) √(Σ_{i=1}^{N} (t_i/σ_i)²))  (9)

where m_i = s_i/σ_i, n_i = t_i/σ_i, and σ_i is the standard deviation of the ith dimension. In this case, higher similarity yields a higher score. Thus, the actual MahCos score is computed as given in equation (10):

D_MahCos(s, t) = 1 − d_MahCos(s, t)  (10)
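Equations (9) and (10) can be sketched directly (σ is the vector of per-dimension standard deviations; names are illustrative):

```python
import math

def mahcos_score(s, t, sigma):
    # Equations (9)-(10): cosine score computed in the Mahalanobis
    # space; sigma[i] is the standard deviation of the i-th dimension.
    m = [si / sd for si, sd in zip(s, sigma)]
    n = [ti / sd for ti, sd in zip(t, sigma)]
    dot = sum(a * b for a, b in zip(m, n))
    norm_m = math.sqrt(sum(a * a for a in m))
    norm_n = math.sqrt(sum(b * b for b in n))
    d = -dot / (norm_m * norm_n)  # equation (9)
    return 1.0 - d                # equation (10): higher = more similar
```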

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second neutral image of the whole gallery G. The scores were computed using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there are G × G (400) total vectors: G (20) genuine and G² − G (380) imposter training score vectors.
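The construction of genuine and imposter training vectors from the four per-orientation score matrices can be sketched as follows (the matrices are assumed to hold MahCos scores between the first neutral image of subject i and the second of subject j; all names are illustrative):

```python
def build_training_vectors(scores_by_orientation):
    # scores_by_orientation: four G x G matrices, one per view pair
    # (0, 0), (10, 10), (20, 20), (30, 30).
    # Stacking the four scores of pair (i, j) gives a 4 x 1 training
    # vector; i == j -> genuine, i != j -> imposter.
    G = len(scores_by_orientation[0])
    genuine, imposter = [], []
    for i in range(G):
        for j in range(G):
            vec = [scores_by_orientation[k][i][j] for k in range(4)]
            (genuine if i == j else imposter).append(vec)
    return genuine, imposter
```

For G = 20 this yields 20 genuine and 380 imposter vectors, matching the counts in the text.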

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, with one genuine and G − 1 imposter 4 × 1 dimensional probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), respectively. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "cara_i frontal1" and "cara_i frontal2", are captured under frontal view. Another two are taken where the subject is looking up or down at angles +35° or −35°, named "cara_i arriba" and "cara_i abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90° or −90° and are named "cara_i derecha" and "cara_i izquierda", respectively. The three nonneutral images, "cara_i gesto", "cara_i risa", and "cara_i sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expressions, occlusions, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable the testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, each with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing the valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing evaluation criterion for the alignment accuracy of face images. One method that can be employed is human judgment, but the human judgment method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this


Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (unaligned GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 curves vs. the aligned curve).

Figure 12: Example 3D face images, panels (a)-(r): original (rows 1, 3) and aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and that the mentioned method is a promising automatic criterion to check alignment accuracy.
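The evaluation criterion above can be sketched roughly as follows (the candidate-pose representation is an assumption for illustration; the full PCF rotation procedure is defined earlier in the paper):

```python
import math

def l2_norm(point):
    # L2 norm of the nose tip coordinates, i.e., its distance
    # from the 3D scanner origin.
    return math.sqrt(sum(c * c for c in point))

def best_aligned_pose(nose_tip_candidates):
    # Among candidate poses, pick the one whose nose tip
    # minimizes the L2 norm, as the evaluation criterion requires.
    return min(nose_tip_candidates, key=l2_norm)
```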

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images from GavabDB cara1: (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus: (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; UMB-DB: (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and FRGC v2.0: (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

Proposed methodology | FF (U/W) | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W) | RPF (U/W) | Verification rate
d-MVWF  | 96.7/100  | 96.7/100  | 95.1/98.4 | -         | -         | 100
d-MVLHF | 95.1/98.4 | 93.4/96.7 | 93.4/96.7 | 91.8/95.1 | -         | 96.7
d-MVRHF | 93.4/96.7 | 95.1/98.4 | 91.8/95.1 | -         | 80.3/83.6 | 98.4
d-MVAHF | 96.7/100  | 96.7/100  | 95.1/98.4 | -         | -         | 100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

Proposed methodology | Bosphorus FF (U/W) | YR¹ < 90° (525 images) (U/W) | YR = 90° (210 images) (U/W) | Overall (1365 images) (U/W) | UMB-DB frontal face (U/W)
d-MVWF  | 97.1/100  | 92.2/95.4 | -         | 93.1/96.0 | 96.5/99.3
d-MVLHF | 95.2/98.1 | 91.4/94.5 | 84.3/87.1 | 91.8/94.9 | 93.7/97.2
d-MVRHF | 96.2/99.0 | 91.0/94.1 | -         | 91.3/94.4 | 94.4/97.9
d-MVAHF | 97.1/100  | 92.2/95.4 | -         | 93.1/96.0 | 96.5/99.3

¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For the identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For the evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, in keeping with the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For the evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For the evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. For subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, respectively.

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(Σ_{j=1}^{n} y_{j−1} x_j² y_j z_j²). Here, n represents the number of convolutional layers, y_{j−1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log n). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, the SVM only takes into account the global matching scores, resulting in lower computation time.
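The convolutional-layer complexity expression in item (2) can be turned into a small multiply-accumulate counter; the layer shape used in the example is an illustrative AlexNet-style value, not a figure taken from the paper:

```python
def conv_mac_count(layers):
    # Sum over layers of y_{j-1} * x_j^2 * y_j * z_j^2, where each layer
    # is a tuple (in_channels, filter_size, out_channels, output_map_size).
    return sum(c_in * k * k * c_out * z * z
               for (c_in, k, c_out, z) in layers)

# Example: an AlexNet-style first convolutional layer (3 input channels,
# 11 x 11 filters, 96 filters, 55 x 55 output map).
first_layer_macs = conv_mac_count([(3, 11, 96, 55)])
```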

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing | MVAHF synthesis | Feature extraction | Classification (face recognition / face verification) | Total (face recognition / face verification)
0.451 | 0.089 | 1.024 | 0.029 / 0.021 | 1.593 / 1.585

Table 5: Recognition accuracies comparison (%) for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

Algorithm | GavabDB FF | GavabDB rotated looking up | GavabDB rotated looking down | GavabDB LPF | GavabDB RPF | GavabDB verification | Bosphorus FF | Bosphorus YR¹ < 90° | Bosphorus YR = 90° | Bosphorus overall | UMB-DB FF
Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27]
Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39]
Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | -
Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2
Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 | 99.0 | 94.1 | - | 94.4 | 97.9
Proposed d-MVWF/d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 | 100 | 95.4 | - | 96.0 | 99.3

¹YR is yaw rotation (along the y-axis in the xz plane).
²LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problems of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of each of the Bosphorus and UMB-DB databases. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction to save computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies comparison (%) for the proposed and existing approaches using the FRGC v2.0 database.

Experiment | [17] | [41] | [42] | [43] | [47] | [62] | [63] | d-MVLHF | d-MVRHF | d-MVWF/d-MVAHF
Face identification | 98.7 | 96.1 | 93.8 | 98 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8
Face verification | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a 71% reduced computational cost. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) A comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to the better performing MVAHF images (please see equation (5)).

(5) The experimental results suggest that the integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) The experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution of the PFIs is improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features, such as shapes and objects, based on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm was employed using a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to the alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) the experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/#face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Heidelberg, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the FGR 2006: 7th International Conference on Automatic Face and Gesture Recognition, pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev., USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, PhD dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


(2) Coarse Alignment

(i) LOFI: based on the outcome of the above step, the nose tip of a LOFI is rotated in the range of 0∘ to -30∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For example, if a LOFI is captured at an orientation of 30∘, the nose tip is rotated between (30∘ + 0∘ = 30∘) and (30∘ + (-30∘) = 0∘). Similarly, the nose tip of a LOFI captured at an orientation of 1∘ is rotated between (1∘ + 0∘ = 1∘) and (1∘ + (-30∘) = -29∘). In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LOFIs captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ do not pass through the 0∘ position; therefore, they are aligned at -1∘, -2∘, -3∘, -4∘, ±5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) ROFI: the nose tip of a ROFI is rotated in the range of 0∘ to +30∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For a ROFI captured at an orientation of -30∘ or -1∘, the nose tip is rotated between (-30∘ + 0∘ = -30∘) and (-30∘ + 30∘ = 0∘) or between (-1∘ + 0∘ = -1∘) and (-1∘ + 30∘ = 29∘), respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both cases. However, the nose tips of ROFIs captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ are aligned at 1∘, 2∘, 3∘, 4∘, ±5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iii) LPF: the nose tip of an LPF (Figure 8(e)) is rotated in the range of 0∘ to +90∘ (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. For an LPF captured at an orientation of -90∘, the nose tip is rotated between (-90∘ + 0∘ = -90∘) and (-90∘ + 90∘ = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LPFs captured at -89∘, -88∘, -87∘, -86∘, -85∘, -84∘, -83∘, -82∘, and -81∘ are aligned at 1∘, 2∘, 3∘, 4∘, ±5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(iv) RPF: the nose tip of a RPF (Figure 8(f)) is rotated in the range of 0∘ to -90∘ (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. If a RPF is captured at an orientation of 90∘, the nose tip is rotated between (90∘ + 0∘ = 90∘) and (90∘ + (-90∘) = 0∘) and is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of RPFs captured at 89∘, 88∘, 87∘, 86∘, 85∘, 84∘, 83∘, 82∘, and 81∘ are aligned at -1∘, -2∘, -3∘, -4∘, ±5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

Please note that for a ROFI captured at -25∘, a LOFI captured at 25∘, an LPF captured at -85∘, or a RPF captured at 85∘, the nose tip can get aligned at 5∘ or -5∘ because the minimum L2 norm is equal at both orientations. However, we have aligned the nose tip at 5∘ in this study. The face images captured at ±75∘, ±65∘, . . ., ±5∘ are aligned using the same alignment procedure.

(3) Fine Alignment. The nose tip of the LOFI, ROFI, LPF, and RPF is rotated in the range of -5∘ to 5∘ with a step size of 1∘. This means that a nose tip aligned at -5∘ is rotated between ((-5∘) + (-5∘) = -10∘) and ((-5∘) + (5∘) = 0∘) to catch the 0∘ position, whereas a nose tip aligned at 5∘ is rotated between ((5∘) + (-5∘) = 0∘) and ((5∘) + (5∘) = 10∘) to catch the 0∘ position. After aligning the nose tip at 0∘, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. Finally, the whole probe face image is rotated and aligned at the angle corresponding to the alignment of the nose tip; i.e., if the nose tip is aligned at 1.3∘, then the whole face image is rotated by 1.3∘ and is finally aligned in the xz plane.
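The coarse-to-fine search of steps (2) and (3) can be sketched as a generic one-dimensional minimization. Here the `cost` callable stands in for the L2 norm measured after rotating the nose tip by a candidate angle; the toy cost below, minimal when the accumulated rotation cancels the capture pose, is an assumption of this sketch rather than the authors' implementation:

```python
import numpy as np

def coarse_to_fine_align(cost, coarse_start, coarse_stop, coarse_step):
    """Coarse-to-fine search for the nose tip rotation minimising an
    alignment cost: a coarse pass over the pose-specific range, then
    +/-5 deg in 1 deg steps, then +/-1 deg in 0.1 deg steps."""
    angles = np.arange(coarse_start, coarse_stop + np.sign(coarse_step) * 1e-9, coarse_step)
    best = min(angles, key=cost)                                      # coarse pass
    best = min(best + np.arange(-5.0, 5.0 + 1e-9, 1.0), key=cost)     # fine pass
    return min(best + np.arange(-1.0, 1.0 + 1e-9, 0.1), key=cost)     # final pass

# Toy cost for a LOFI captured at 30 degrees: the norm is minimal when the
# accumulated rotation cancels the capture pose (30 + angle = 0).
aligned = coarse_to_fine_align(lambda a: abs(30.0 + a), 0.0, -30.0, -10.0)
```

For this toy LOFI the search settles at -30∘, i.e., the rotation that brings the nose tip to the 0∘ position.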

3.1.4. Alignment in yz Plane

(1) Pose Learning. In the yz plane, the capture pose of the probe face image aligned in the xz plane is learned first in order to align it at a minimum L2 norm. For this purpose, only the nose tip of the probe face image is rotated upwards (clockwise) by -1∘ and the corresponding L2 norm is measured. If the L2 norm decreases (Figure 9(a)), the probe face image is classified as a looking down face image (LDFI) (Figures 9(c) and 9(d)). On the other hand, if the L2 norm increases (Figure 9(b)), it is classified as a looking up face image (LUFI), as shown in Figures 9(e) and 9(f). Please note that, rotating the nose tip by 1∘ instead of -1∘, a decrease in L2 norm classifies a probe face image as LUFI, whereas an increase in L2 norm classifies it as LDFI. In this study we adjust this parameter at -1∘.
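The pose learning step amounts to a single probe rotation. The `l2_norm_at` callable, mapping a trial nose tip rotation to the resulting L2 norm, is an assumed stand-in for the norm computation described above:

```python
def learn_pose_direction(l2_norm_at, probe_angle=-1.0):
    """Classify the capture pose direction in the yz plane: rotate only the
    nose tip by a small probe angle (-1 degree here, as in the text) and
    check whether the L2 norm drops (LDFI) or rises (LUFI)."""
    return "LDFI" if l2_norm_at(probe_angle) < l2_norm_at(0.0) else "LUFI"

# Toy norms: a face looking down by 20 degrees has its minimum norm at a
# -20 degree correction, so a -1 degree probe decreases the norm.
looking_down = learn_pose_direction(lambda a: abs(20.0 + a))
looking_up = learn_pose_direction(lambda a: abs(-20.0 + a))
```

The returned label then selects the direction (clockwise or anticlockwise) of the subsequent coarse rotation.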

(2) Coarse Alignment

(i) LUFI: in the coarse alignment phase, the nose tip of a LUFI is rotated in the range of 0∘ to +30∘ downwards (anticlockwise) with a step size of 10∘ and the corresponding L2 norms are recorded. If a LUFI is captured at an orientation of -30∘, the nose tip is rotated between -30∘ and 0∘; if it is captured at an orientation of -1∘, the nose tip is rotated between -1∘ and 29∘. In both cases the nose tip is aligned at 0∘, corresponding to the minimum L2 norm. However, the nose tips of LUFIs captured at -29∘, -28∘, -27∘, -26∘, -25∘, -24∘, -23∘, -22∘, and -21∘ do not pass through the 0∘ position. They are aligned at 1∘, 2∘, 3∘, 4∘, ±5∘, -4∘, -3∘, -2∘, and -1∘, respectively (please see Table 1), and are aligned in step 3 at fine level.

(ii) LDFI: the nose tip of a LDFI is rotated in the range of 0∘ to -30∘ upwards (clockwise) with a step size of -10∘ and the corresponding L2 norms are recorded. For a LDFI captured at an orientation of 30∘ or 1∘, the nose tip is rotated between 30∘ and 0∘ or between 1∘ and -29∘, respectively. The nose tip is aligned at 0∘, corresponding to the minimum L2 norm, in both cases. However, the nose tips of LDFIs captured at 29∘, 28∘, 27∘, 26∘, 25∘, 24∘, 23∘, 22∘, and 21∘ are aligned at -1∘, -2∘, -3∘, -4∘, ±5∘, +4∘, +3∘, +2∘, and +1∘, respectively (please see Table 1), and are aligned in step 3 at fine level. It is worth mentioning that the face images captured at ±25∘, ±15∘, and ±5∘ are handled


Table 1: Acquisition poses of the face and the respective alignment positions (given in bold in the original), all values in degrees. A face captured at any pose in a row is coarsely aligned at the row entry lying between -5 and 5.

RPF / LOFI / LDFI poses                   LPF / ROFI / LUFI poses
90 80 70 60 50 40 30 20 10   0     -90 -80 -70 -60 -50 -40 -30 -20 -10   0
89 79 69 59 49 39 29 19  9  -1     -89 -79 -69 -59 -49 -39 -29 -19  -9   1
88 78 68 58 48 38 28 18  8  -2     -88 -78 -68 -58 -48 -38 -28 -18  -8   2
87 77 67 57 47 37 27 17  7  -3     -87 -77 -67 -57 -47 -37 -27 -17  -7   3
86 76 66 56 46 36 26 16  6  -4     -86 -76 -66 -56 -46 -36 -26 -16  -6   4
85 75 65 55 45 35 25 15  5  -5     -85 -75 -65 -55 -45 -35 -25 -15  -5   5
84 74 64 54 44 34 24 14  4  -6     -84 -74 -64 -54 -44 -34 -24 -14  -4   6
83 73 63 53 43 33 23 13  3  -7     -83 -73 -63 -53 -43 -33 -23 -13  -3   7
82 72 62 52 42 32 22 12  2  -8     -82 -72 -62 -52 -42 -32 -22 -12  -2   8
81 71 61 51 41 31 21 11  1  -9     -81 -71 -61 -51 -41 -31 -21 -11  -1   9

Figure 9: (a, b) Pose learning in the yz plane. (c, d) LDFI. (e, f) LUFI. (a, b, c, e) Subject FRGC v2.0 04221d553; (d, f) subject GavabDB cara1 izquierda/derecha.

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of a LUFI or LDFI is rotated in the range of -5∘ to 5∘ with a step size of 1∘ to catch the 0∘ position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at fine level, it is rotated in the range of -1∘ to 1∘ with a step size of 0.1∘ to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated at the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5∘ to +5∘ with a step size of 1∘ around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane and the corresponding L2 norm is computed


for each rotation at pixel values of the same grid position P_ij. In order to rule out outliers due to z-axis noise, only pixel values less than a threshold T are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

\[ P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases} \tag{2} \]

(2) Fine Alignment. The face image is aligned at fine level by rotating it in the range of -1∘ to +1∘ with a step size of 0.1∘ using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
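The outlier-masked L2 norm of equation (2) and the in-plane rotation search can be sketched as follows; the `rotate` callable, the candidate angle list, and the threshold value are assumptions of this illustration, not the authors' implementation:

```python
import numpy as np

def masked_l2(lhf, flipped_rhf, threshold):
    """L2 norm between an LHF depth image and the flipped RHF image on the
    same grid positions; depth values above the threshold T are zeroed as
    z-axis outliers before the norm is taken (equation (2))."""
    l = np.where(lhf > threshold, 0.0, lhf)
    r = np.where(flipped_rhf > threshold, 0.0, flipped_rhf)
    return float(np.linalg.norm(l - r))

def best_roll(face_lhf, face_rhf, rotate, angles, threshold):
    """Pick the in-plane (z-axis) rotation giving the minimum masked L2 norm
    between the LHF image and the flipped, overlaid RHF image."""
    return min(angles,
               key=lambda a: masked_l2(rotate(face_lhf, a),
                                       np.fliplr(rotate(face_rhf, a)),
                                       threshold))
```

Any image-rotation routine can be plugged in for `rotate`; the coarse pass would use 1∘ steps over -5∘ to +5∘ and the fine pass 0.1∘ steps over -1∘ to +1∘.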

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0∘, ±10∘, ±20∘, and ±30∘ to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0∘, -10∘, -20∘, and -30∘ and at 0∘, 10∘, 20∘, and 30∘ around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values on the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘. The motivation behind using MVAHF images instead of MVWF images is as follows: (i) facial feature information carried by a half face image is similar to that of the flipped other half face image, due to the intrinsic facial symmetry of the LHF and RHF; (ii) the RHF region is gradually occluded by rotating a whole face image at -10∘, -20∘, and -30∘, and similarly the LHF region is occluded by rotating the whole face image at 10∘, 20∘, and 30∘; the occluded face regions contribute poorly to the face recognition process, while the computational complexity of the system is two-fold; (iii) the multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images; and (iv) the synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
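The averaged-half-face synthesis described above can be sketched for a single view. Using zero as the "no data" marker for nonoverlapping grid positions is an assumption of this sketch:

```python
import numpy as np

def synthesize_ahf(lhf, flipped_rhf):
    """Synthesize an averaged-half-face (AHF) image: average depth values
    where the LHF and flipped RHF overlap, and keep the complementary
    values where only one half has data (zero marks 'no data' here)."""
    both = (lhf > 0) & (flipped_rhf > 0)
    return np.where(both, (lhf + flipped_rhf) / 2.0, lhf + flipped_rhf)

# Tiny hypothetical depth grids for one view.
lhf = np.array([[2., 0.],
                [4., 4.]])
rhf = np.array([[0., 6.],
                [2., 0.]])
ahf = synthesize_ahf(lhf, np.fliplr(rhf))
```

Repeating this for the views at 0∘, 10∘, 20∘, and 30∘ yields the set of four MVAHF images.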

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set A^k = {a^k_1, a^k_2, a^k_3, ..., a^k_n} of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows:

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, respectively. The matrix S has a negative polarity, reflecting that lower matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices S_j, one for each of the normalized d-MVAHF feature vectors corresponding to the AHF images oriented at 0∘, 10∘, 20∘, and 30∘.

(3) Each of the matching-score matrices S_j was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are max(S_j^row) and min(S_j^row), respectively, then the normalized scores are computed as given in equation (3):

\[ S_{j}^{row} = \frac{S_{j}^{row} - \min(S_{j}^{row})}{\max(S_{j}^{row}) - \min(S_{j}^{row})} \tag{3} \]

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix S^row, as given in equation (4):

\[ S^{row} = \sum_{j=1}^{4} w_{j} S_{j}^{row} \tag{4} \]


Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images at 0∘, 10∘, 20∘, and 30∘; (b) LHF images at 0∘, -10∘, -20∘, and -30∘.

where w_j represents the weight assigned to the jth MVAHF image, computed using the recognition accuracies obtained from the MVAHF images as given in equation (5):

\[ w_{j} = \frac{r_{j}}{\sum_{j=1}^{4} r_{j}} \tag{5} \]

where r_j represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can be used in the test phase as follows: a given PFI is first converted into MVAHF images oriented at 0∘, 10∘, 20∘, and 30∘; each of these MVAHF images is then classified against the gallery, leading to four recognition accuracies, which are subsequently used to compute the weights in equation (5). This procedure is the same as employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF image oriented at 0∘ is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix S^row was again normalized as S'^row using the min-max rule given in equation (3).

(5) The normalized matching scores obtained from S′^row were utilized in the Softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.
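Steps (3) to (5) above reduce to a few lines of array code. The following is a minimal NumPy sketch; the matrix shapes and the per-view accuracy values r_j are illustrative placeholders, not values from the experiments:

```python
import numpy as np

def min_max_normalize_rows(S):
    """Map each row of a matching-score matrix to [0, 1] (equation (3))."""
    S = np.asarray(S, dtype=float)
    mins = S.min(axis=1, keepdims=True)
    maxs = S.max(axis=1, keepdims=True)
    return (S - mins) / (maxs - mins)

def fuse_scores(score_matrices, accuracies):
    """Weighted score-level fusion of the four MVAHF score matrices
    (equations (4) and (5)): w_j = r_j / sum_j r_j."""
    r = np.asarray(accuracies, dtype=float)
    w = r / r.sum()
    fused = sum(wj * min_max_normalize_rows(Sj)
                for wj, Sj in zip(w, score_matrices))
    # the fused matrix is normalized again before the Softmax layer
    return min_max_normalize_rows(fused)

# illustrative: 4 views, 3 probes against a 5-subject gallery
rng = np.random.default_rng(0)
mats = [rng.random((3, 5)) for _ in range(4)]
accs = [0.97, 0.95, 0.93, 0.90]   # hypothetical per-view accuracies r_j
S = fuse_scores(mats, accs)
```

Each row of the fused matrix again spans [0, 1], so the Softmax layer receives scores on a common scale regardless of which view dominated.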

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, SVM aims to find a hyperplane wx + b = 0 with maximum margins, termed the optimal separating hyperplane (OSH), that separates the training vectors of two classes (x_1, y_1), ..., (x_k, y_k), where x_i ∈ R^n and y_i ∈ {1, −1}, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, subject to the constraints y_i[(w·x_i) + b] ≥ 1 − ξ_i, ξ_i ≥ 0 for i = 1, ..., k:

φ(w, ξ) = (1/2)‖w‖² + C Σ_{i=1}^{k} ξ_i    (6)

where ξ_i are slack variables used to penalize errors when the data are not linearly separable, and C is the regularization constant. The sign of the following OSH surface function can then be used to classify a test point:

f(x) = Σ_{i=1}^{k} y_i a_i K(x, x_i) + b    (7)

where a_i ≥ 0 are the corresponding support vector Lagrangian multipliers and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel trick used to transform nonseparable data onto a higher dimensional space where it becomes linearly separable by a hyperplane, x_i is the ith training sample, and x is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the kernel is of the form given in equation (8), where σ² is the spread of the RBF:

K(x, x_i) = exp(−‖x − x_i‖² / 2σ²)    (8)
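Equations (7) and (8) can be transcribed directly. The sketch below classifies a test point via the sign of the OSH surface function; the support vectors, multipliers, bias, and σ are illustrative placeholders, not values from the trained verifier:

```python
import numpy as np

def rbf_kernel(x, xi, sigma=1.0):
    """RBF kernel, equation (8): K(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2))."""
    x, xi = np.asarray(x, float), np.asarray(xi, float)
    return float(np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2)))

def osh_sign(x, support, labels, alphas, b=0.0, sigma=1.0):
    """Sign of the OSH surface function f(x), equation (7)."""
    f = sum(a * y * rbf_kernel(x, s, sigma)
            for s, y, a in zip(support, labels, alphas)) + b
    return 1 if f >= 0 else -1

# two illustrative support vectors, one per class
support = [np.array([0.0]), np.array([2.0])]
labels = [1, -1]
alphas = [1.0, 1.0]
```

A point near the positive support vector receives the label +1, one near the negative support vector receives −1, mirroring the genuine/imposter decision of the verifier.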

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

d_MahCos(s, t) = −(m · n) / (|m||n|) = −Σ_{i=1}^{N} m_i n_i / (√(Σ_{i=1}^{N} m_i²) √(Σ_{i=1}^{N} n_i²)) = −Σ_{i=1}^{N} (s_i/σ_i)(t_i/σ_i) / (√(Σ_{i=1}^{N} (s_i/σ_i)²) √(Σ_{i=1}^{N} (t_i/σ_i)²))    (9)

where m_i = s_i/σ_i, n_i = t_i/σ_i, and σ_i is the standard deviation of the ith dimension. In this case, higher similarity yields a higher score. Thus, the actual MahCos score is computed as given in equation (10):

D_MahCos(s, t) = 1 − d_MahCos(s, t)    (10)
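Equations (9) and (10) translate directly into code. In this sketch the feature vectors and the per-dimension standard deviations σ_i are illustrative placeholders:

```python
import numpy as np

def d_mahcos(s, t, sigma):
    """Equation (9): negative cosine score in Mahalanobis space,
    with m_i = s_i / sigma_i and n_i = t_i / sigma_i."""
    m = np.asarray(s, float) / sigma
    n = np.asarray(t, float) / sigma
    return float(-np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n)))

def D_mahcos(s, t, sigma):
    """Equation (10): the actual MahCos score. Identical vectors give the
    maximum value 2, orthogonal vectors 1, and opposite vectors 0."""
    return 1.0 - d_mahcos(s, t, sigma)

sigma = np.ones(3)  # illustrative unit standard deviations
```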

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second


neutral image of the whole gallery G. The scores were computed using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ 1, 2, ..., G. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores, referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.
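The bookkeeping for the training score matrix T can be sketched as follows. The gallery size and the scores themselves are synthetic here; in the paper the entries are MahCos scores between d-MVAHF feature vectors:

```python
import numpy as np

G = 20                                   # illustrative gallery size
rng = np.random.default_rng(0)

# T[k, i, j]: score between image i and image j at orientation pair k,
# with k indexing (0, 0), (10, 10), (20, 20), (30, 30); synthetic values here
T = rng.random((4, G, G))

mask = np.eye(G, dtype=bool)             # entries t_ij with i == j are genuine
genuine = T[:, mask].T                   # G genuine 4 x 1 training vectors
imposter = T[:, ~mask].T                 # G^2 - G imposter 4 x 1 training vectors
```

For G = 20 this reproduces the counts worked out in the text: 20 genuine and 380 imposter training score vectors out of 400 total.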

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, containing one genuine and G − 1 imposter 4 × 1 dimensional probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as rank-1 identification rate and verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "cara_i frontal1" and "cara_i frontal2", are captured under frontal view. Another two are taken while the subject is looking up or down at angles +35° or −35°, named "cara_i arriba" and "cara_i abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90° or −90°, named "cara_i derecha" and "cara_i izquierda", respectively. The three nonneutral

images, "cara_i gesto", "cara_i risa", and "cara_i sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expression, occlusion, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of the faces and a binary representation showing the valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing criterion to evaluate the alignment accuracy of face images. One method that can be employed is human judgment, but that method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this


Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment, comparing the unaligned GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 norms with their aligned counterparts.

Figure 12: Example 3D face images, panels (a)-(r): original (rows 1 and 3), aligned (rows 2 and 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and that the mentioned method is a promising automatic criterion to check alignment accuracy.
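The evaluation criterion can be illustrated with a small sketch: place the scanner at the origin, yaw-rotate the face, and search (at a fine step of 0.1°, as used in the proposed algorithm) for the angle that minimizes the L2 norm between the nose tip and the scanner. All geometry below (face-center distance, nose offset, initial 20° yaw) is illustrative:

```python
import numpy as np

def rotate_y(v, deg):
    """Rotate a 3-vector about the y-axis (xz plane) by deg degrees."""
    t = np.radians(deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return R @ np.asarray(v, float)

def min_l2_angle(center, nose_offset, angles):
    """Return the yaw angle (and the norm) at which the nose tip
    comes closest to the scanner, placed at the origin."""
    norms = [np.linalg.norm(center + rotate_y(nose_offset, a)) for a in angles]
    k = int(np.argmin(norms))
    return angles[k], norms[k]

center = np.array([0.0, 0.0, 80.0])          # face center 80 units from scanner
nose0 = rotate_y([0.0, 0.0, -10.0], 20.0)    # nose offset of a face yawed by 20 deg
angles = np.arange(-30.0, 30.0 + 0.05, 0.1)  # fine search at 0.1 deg step
best, norm = min_l2_angle(center, nose0, angles)
```

The minimum is found at −20°, i.e., the rotation that undoes the acquisition yaw and turns the nose toward the scanner, which is exactly the frontal pose the criterion is meant to detect.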

The minimized and normalized L2 norms for five unaligned images of subjects, GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444, are shown in Figure 11. Figure 12 depicts example original as well as aligned face images: from GavabDB, cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; from Bosphorus, (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; from UMB-DB, (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and from FRGC v2.0, (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2

norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

Proposed methodology | FF (U/W)  | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W) | RPF (U/W) | Verification rate
d-MVWF               | 96.7/100  | 96.7/100                 | 95.1/98.4                  | -         | -         | 100
d-MVLHF              | 95.1/98.4 | 93.4/96.7                | 93.4/96.7                  | 91.8/95.1 | -         | 96.7
d-MVRHF              | 93.4/96.7 | 95.1/98.4                | 91.8/95.1                  | -         | 80.3/83.6 | 98.4
d-MVAHF              | 96.7/100  | 96.7/100                 | 95.1/98.4                  | -         | -         | 100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

Proposed methodology | Bosphorus FF (U/W) | YR¹ < 90°, 525 images (U/W) | YR = 90°, 210 images (U/W) | Overall, 1365 images (U/W) | UMB-DB frontal face (U/W)
d-MVWF               | 97.1/100           | 92.2/95.4                   | -                          | 93.1/96.0                  | 96.5/99.3
d-MVLHF              | 95.2/98.1          | 91.4/94.5                   | 84.3/87.1                  | 91.8/94.9                  | 93.7/97.2
d-MVRHF              | 96.2/99.0          | 91.0/94.1                   | -                          | 91.3/94.4                  | 94.4/97.9
d-MVAHF              | 97.1/100           | 92.2/95.4                   | -                          | 93.1/96.0                  | 96.5/99.3
¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For the identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For the evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, in accordance with the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For the evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For the evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. For the subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates of the d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF approaches for the FRGC v2.0 database, respectively.

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(Σ_{j=1}^{n} y_{j−1} x_j² y_j z_j²). Here n represents the number of convolutional layers, y_{j−1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above, along with the complexity of the SVM classifier, which is of the order of O(log(n)). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, the SVM only takes into account the global matching scores, resulting in a lower computation time.

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing | MVAHF synthesis | Feature extraction | Classification (recognition / verification) | Total (recognition / verification)
0.451         | 0.089           | 1.024              | 0.029 / 0.021                               | 1.593 / 1.585

Table 5: Recognition accuracy comparison (%) for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases. Columns: GavabDB rank-1 identification rates (FF / rotated looking up / rotated looking down / LPF / RPF) and verification rate; Bosphorus rank-1 identification rates (FF / YR¹ < 90° / YR = 90° / overall); UMB-DB rank-1 identification rate (FF).

Existing:
100 [44] / 98.4 [44] / 96.7 [44] / 93.4 [44] / 81.9 [44]; 82.3 [59]; 100 [27] / 81.6 [61] / 45.7 [61] / 88.6 [61]; 98.7 [27]
100 [46] / 98.4 [46] / 99.2 [46] / 86.9 [26] / 70.5 [26]; 95.1 [60]; 100 [62] / 84.1 [62] / 47.1 [62] / 91.1 [62]; 98 [39]
100 [47] / 100 [47] / 98.4 [47] / 93.4 [28] / 78.7 [28]; -; - / 94.8 [63] / 57.1 [47] / 92.8 [47]; -

Proposed:
d-MVLHF: 98.4 / 96.7 / 96.7 / 95.1² / -; 96.7; 98.1 / 94.5 / 87.1² / 94.9; 97.2
d-MVRHF: 96.7 / 98.4 / 95.1 / - / 83.6²; 98.4; 99.0 / 94.1 / - / 94.4; 97.9
d-MVWF, d-MVAHF: 100 / 100 / 98.4 / 95.1 / 83.6; 100; 100 / 95.4 / - / 96.0; 99.3

¹YR is yaw rotation (along the y-axis in the xz plane). ²LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition, employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition, with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracy comparison (%) for the proposed and existing approaches using the FRGC v2.0 database.

Approach            | Face identification | Face verification
Existing [17]       | 98.7                | 99.5
Existing [41]       | 96.1                | 97.7
Existing [42]       | 93.8                | 95.4
Existing [43]       | 98.0                | 98.3
Existing [47]       | 99.6                | -
Existing [62]       | 98.7                | -
Existing [63]       | 99.8                | -
Proposed d-MVLHF    | 97.9                | 97.6
Proposed d-MVRHF    | 96.8                | 96.4
Proposed d-MVWF, d-MVAHF | 99.8           | 99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Note that aligning the whole face instead of only the nose tip, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a 71% reduction in computational cost. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) A comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments on the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16%, and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement arises because more weight is assigned to the better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integrating the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) The experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms the existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features like shapes and objects based on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face, (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment, and (iii) a transformation step to align the whole face image incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images, such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339-1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896-1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789-813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25-51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124-131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1-5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216-5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935-3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964-979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389-414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1-15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172-175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1-9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391-406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751-2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619-624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339-352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201-204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "meshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158-169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270-2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789-802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551-1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425-440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206-219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858-1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187-5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77-82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47-56, Springer, Berlin, Heidelberg, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113-2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947-954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162-2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030-1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009-1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374-389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445-451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1-7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211-233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885-1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063-1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391-1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585-590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the Photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121-135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87-108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1-8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544-558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097-1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927-1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575-2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509-525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128-142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238-250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019-2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761-1772, 2015.


Table 1: Acquisition pose of the face and respective alignment positions, given in bold in the original typesetting (all values in degrees). Left half: LPF (LOFI/LDFI); right half: RPF (ROFI/LUFI).

90 80 70 60 50 40 30 20 10  0   |  -90 -80 -70 -60 -50 -40 -30 -20 -10  0
89 79 69 59 49 39 29 19  9 -1   |  -89 -79 -69 -59 -49 -39 -29 -19  -9  1
88 78 68 58 48 38 28 18  8 -2   |  -88 -78 -68 -58 -48 -38 -28 -18  -8  2
87 77 67 57 47 37 27 17  7 -3   |  -87 -77 -67 -57 -47 -37 -27 -17  -7  3
86 76 66 56 46 36 26 16  6 -4   |  -86 -76 -66 -56 -46 -36 -26 -16  -6  4
85 75 65 55 45 35 25 15  5 -5   |  -85 -75 -65 -55 -45 -35 -25 -15  -5  5
84 74 64 54 44 34 24 14  4 -6   |  -84 -74 -64 -54 -44 -34 -24 -14  -4  6
83 73 63 53 43 33 23 13  3 -7   |  -83 -73 -63 -53 -43 -33 -23 -13  -3  7
82 72 62 52 42 32 22 12  2 -8   |  -82 -72 -62 -52 -42 -32 -22 -12  -2  8
81 71 61 51 41 31 21 11  1 -9   |  -81 -71 -61 -51 -41 -31 -21 -11  -1  9

Figure 9: (a, b) Pose learning in the yz plane; (c, d) LDFI; (e, f) LUFI. Panels (a, b, c, e) show subject FRGC v2.0 04221d553; panels (d, f) show the GavabDB subject "cara1 izquierda/derecha".

using the alignment procedure mentioned in the coarse alignment phase of the xz plane.

(3) Fine Alignment. The nose tip of the LUFI or LDFI is rotated in the range of -5° to 5° with a step size of 1° to catch the 0° position, as discussed in the fine alignment phase of the xz plane. Similarly, in order to align the nose tip at a fine level, it is rotated in the range of -1° to 1° with a step size of 0.1° to achieve an accurate final alignment at a minimum L2 norm. In the end, the whole probe face image is rotated by the angle corresponding to the alignment of the nose tip and is finally aligned in the yz plane.
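The coarse-to-fine rotation search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `l2_at_angle` callback, which would return the L2 norm of the nose-tip region after rotating the probe by a given angle, is a hypothetical interface.

```python
import numpy as np

def coarse_to_fine_alignment(l2_at_angle, coarse_range=(-5.0, 5.0),
                             coarse_step=1.0, fine_step=0.1):
    """Two-stage search for the rotation angle (degrees) minimizing the L2 norm.

    First a coarse pass over [-5, 5] in 1-degree steps, then a fine pass
    over +/- 1 degree around the coarse minimum in 0.1-degree steps.
    """
    # Coarse pass: -5 .. 5 degrees in 1-degree steps.
    coarse = np.arange(coarse_range[0], coarse_range[1] + 1e-9, coarse_step)
    best = min(coarse, key=l2_at_angle)
    # Fine pass: +/- 1 degree around the coarse minimum, 0.1-degree steps.
    fine = np.arange(best - 1.0, best + 1.0 + 1e-9, fine_step)
    return min(fine, key=l2_at_angle)
```

With a step size of 0.1°, the returned angle lies within half a step of the true minimizer of a smooth objective.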

3.1.5. Alignment in xy Plane

(1) Coarse Alignment. The PFI is rotated in the range of -5° to +5° with a step size of 1° around the z-axis. For each rotation, it is cropped into LHF and RHF images using the nose tip heuristic. The flipped RHF image is shifted along the LHF image in the xy plane, and the corresponding L2 norm is computed for each rotation at pixel values of the same grid position P_ij. In order to rule out outliers due to z-axis noise, only pixel values less than a threshold T are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

$$P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases} \quad (2)$$

(2) Fine Alignment. The face image is aligned at a fine level by rotating it in the range of -1° to +1° with a step size of 0.1°, using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
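The outlier-suppressed L2 norm of equation (2) can be sketched as below; the array names and the use of zero for suppressed pixels are illustrative assumptions of this sketch.

```python
import numpy as np

def masked_l2(lhf, rhf_flipped, threshold):
    """L2 norm between two half-face depth maps, ignoring z-axis outliers.

    Following equation (2), pixels whose depth exceeds `threshold` are
    zeroed in both maps before the norm over same-grid positions is taken.
    """
    l = np.where(lhf > threshold, 0.0, lhf)
    r = np.where(rhf_flipped > threshold, 0.0, rhf_flipped)
    return np.linalg.norm(l - r)
```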

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering; the facial holes were then filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0°, ±10°, ±20°, and ±30° to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0°, -10°, -20°, and -30° and at 0°, 10°, 20°, and 30° around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values at the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained to obtain more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0°, 10°, 20°, and 30°. The motivation behind using MVAHF images instead of MVWF images is as follows: (i) facial feature information carried by a half face image is similar to that of the flipped other half face image, due to the intrinsic facial symmetry of the LHF and RHF; (ii) the RHF region is gradually occluded by rotating a whole face image at -10°, -20°, and -30°, and similarly the LHF region is occluded by rotating the whole face image at 10°, 20°, and 30°; the occluded face regions contribute poorly to the face recognition process, while doubling the computational complexity of the system; (iii) the multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images; (iv) the synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
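The averaging-with-retention step that fuses two half faces into an AHF image can be sketched as follows. Treating zero depth as the "no data" marker is an assumption of this sketch, not a detail stated in the text.

```python
import numpy as np

def averaged_half_face(lhf, rhf):
    """Fuse a left half face with the horizontally flipped right half face.

    Depth values present in both halves are averaged; regions visible in
    only one half are retained as-is, preserving complementary information
    from the nonoverlapping facial regions.
    """
    rhf_f = np.fliplr(rhf)
    both = (lhf != 0) & (rhf_f != 0)
    out = lhf + rhf_f          # keeps values where only one half has data
    out[both] = 0.5 * (lhf[both] + rhf_f[both])
    return out
```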

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by a normalization process. The output of layer k is a set $A_k = \{a_{k1}, a_{k2}, a_{k3}, \ldots, a_{kn}\}$ of MVAHF-based facial features.

The procedure for implementing the proposed approach is outlined as follows:

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, in the respective order. The matrix S has negative polarity, reflecting that lower matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices S_j, one for each of the normalized d-MVAHF feature vectors corresponding to AHF images oriented at 0°, 10°, 20°, and 30°.

(3) Each of the matching-score matrices S_j was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are max(S_j,row) and min(S_j,row), respectively, then the normalized scores are computed as given in equation (3):

$$\bar{S}_{j,row} = \frac{S_{j,row} - \min(S_{j,row})}{\max(S_{j,row}) - \min(S_{j,row})} \quad (3)$$

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix S_row, as given in equation (4):

$$S_{row} = \sum_{j=1}^{4} w_j \bar{S}_{j,row} \quad (4)$$

12 Mathematical Problems in Engineering

0∘10∘20∘30∘

(a)0∘ -10∘ -20∘ -30∘

(b)

Figure 10 3D scan of subject FRGC v20 04221d553 (a) RHF images (b) LHF images

where w_j represents the weight assigned to the jth MVAHF image using the recognition accuracies obtained from the MVAHF images, as given in equation (5):

$$w_j = \frac{r_j}{\sum_{j=1}^{4} r_j} \quad (5)$$

where r_j represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can also be used in the test phase: a given PFI is first converted into MVAHF images oriented at 0°, 10°, 20°, and 30°; each of these MVAHF images is then classified against the gallery, leading to four recognition accuracies, which are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0° is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix S_row was again normalized as S'_row using the min-max rule given in equation (3).

(5) The normalized matching scores obtained from S'_row were utilized in the Softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.
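Steps (3) and (4) above, row-wise min-max normalization followed by weighted score-level fusion, can be sketched as below. The sketch assumes each score row contains at least two distinct values, so the min-max denominator is nonzero; it accepts any number of per-view matrices, of which the paper uses four.

```python
import numpy as np

def minmax_rows(S):
    """Row-wise min-max normalization of a matching-score matrix (eq. (3))."""
    mn = S.min(axis=1, keepdims=True)
    mx = S.max(axis=1, keepdims=True)
    return (S - mn) / (mx - mn)

def fuse_scores(score_mats, accuracies):
    """Weighted score-level fusion of per-view score matrices (eqs. (4)-(5)).

    `accuracies` holds the per-view recognition rates r_j from which the
    weights w_j = r_j / sum(r_j) are derived.
    """
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()
    fused = sum(wj * minmax_rows(S) for wj, S in zip(w, score_mats))
    return minmax_rows(fused)   # re-normalize, as done before the Softmax step
```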

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, an SVM aims to employ a hyperplane $w \cdot x + b = 0$ with maximum margins, termed the optimal separating hyperplane (OSH), that separates the training vectors of two classes, $(x_1, y_1), \ldots, (x_i, y_i)$, where $x_i \in R^n$ and $y_i \in \{1, -1\}$, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, with constraints $y_i[(w \cdot x_i) + b] \ge 1 - \xi_i$, $\xi_i \ge 0$ for $i = 1, \ldots, k$:

$$\Phi(w, \xi) = \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{k} \xi_i \quad (6)$$

where $\xi_i$ are slack variables used to penalize errors if the data are not linearly separable, and C is the regularization constant. The sign of the following OSH surface function can then be used to classify a test point:

$$f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b \quad (7)$$

where $a_i \ge 0$ are the corresponding support vector Lagrangian multipliers and $b$ is determined by the above-mentioned optimization problem. In equation (7), $K$ is the kernel trick used to transform nonseparable data onto a higher dimensional space where it becomes linearly separable by a hyperplane, $x_i$ is the ith training sample, and $x$ is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than a linear SVM; the RBF kernel is of the form given in equation (8), where $\sigma^2$ is the spread of the RBF:

$$K(x, x_i) = \exp\left[-\frac{\|x - x_i\|^2}{2\sigma^2}\right] \quad (8)$$
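Equation (8) transcribes directly into a few lines of numpy; the variable names follow the symbols in the equation.

```python
import numpy as np

def rbf_kernel(x, xi, sigma):
    """RBF kernel of equation (8): exp(-||x - xi||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2))
```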

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

$$d_{MahCos}(s, t) = -\frac{m \cdot n}{|m|\,|n|} = -\frac{\sum_{i=1}^{N} m_i n_i}{\sqrt{\sum_{i=1}^{N} m_i^2} \sqrt{\sum_{i=1}^{N} n_i^2}} = -\frac{\sum_{i=1}^{N} (s_i/\sigma_i)(t_i/\sigma_i)}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2} \sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}} \quad (9)$$

where $m_i = s_i/\sigma_i$, $n_i = t_i/\sigma_i$, and $\sigma_i$ is the standard deviation of the ith dimension. Since higher similarity should yield a higher score, the actual MahCos score is computed as given in equation (10):

$$D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t) \quad (10)$$
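Equations (9) and (10) translate directly into code; `sigma` holds the per-dimension standard deviations. Note that with the leading minus sign in equation (9), identical vectors score D = 2 and opposite vectors score D = 0.

```python
import numpy as np

def mahcos_score(s, t, sigma):
    """MahCos score of equations (9)-(10): cosine score in Mahalanobis space.

    Higher D means more similar; D ranges over [0, 2].
    """
    m, n = s / sigma, t / sigma
    d = -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))  # eq. (9)
    return 1.0 - d                                               # eq. (10)
```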

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second neutral image of the whole gallery G. The scores were computed using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element t_ij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements t_ij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores t_ij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t_11) and the imposter scores (e.g., t_1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores, referred to as training vectors. For an example gallery of 20 subjects, there are G × G (400) total, G (20) genuine, and G² - G (380) imposter training score vectors.
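The construction of genuine and imposter training vectors from the stack of per-orientation score matrices can be sketched as follows; the shape conventions (a stack of shape (views, G, G), diagonal entries genuine) are assumptions of this illustration.

```python
import numpy as np

def genuine_imposter_vectors(T):
    """Split per-view score matrices into genuine and imposter vectors.

    `T` has shape (V, G, G): V orientation-specific G x G score matrices
    (V = 4 in the paper). Diagonal entries (i = j) yield G genuine V-dim
    vectors; off-diagonal entries yield G^2 - G imposter vectors.
    """
    V, G, _ = T.shape
    vecs = T.reshape(V, G * G).T              # one V-dim vector per (i, j) pair
    genuine_mask = np.eye(G, dtype=bool).reshape(-1)
    return vecs[genuine_mask], vecs[~genuine_mask]
```

For the example gallery of 20 subjects this yields 20 genuine and 380 imposter training vectors, matching the counts given in the text.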

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, consisting of 4 × 1 dimensional vectors: one genuine and G - 1 imposter probe score vectors (see Figure 2(c)). Based on the training on genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithm. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section, along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired with a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai frontal1" and "carai frontal2", are captured under frontal view. Another two are taken where the subject is looking up or down at angles of +35° or -35°, named "carai arriba" and "carai abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles of +90° or -90° and are named "carai derecha" and "carai izquierda", respectively. The three nonneutral images, "carai gesto", "carai risa", and "carai sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expressions, occlusions, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1,473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, each with a size of 480×640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing criterion to evaluate the alignment accuracy of face images. One method that can be employed is human judgment, but the human judgment method is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this

14 Mathematical Problems in Engineering

Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (y-axis: normalized L2 norm, 0 to 1; x-axis: subjects 1 to 5; curves: unaligned GavabDB, unaligned Bosphorus, unaligned UMB-DB, unaligned FRGC v2.0, and aligned).

Figure 12: Example 3D face images, panels (a)-(r): original (rows 1 and 3), aligned (rows 2 and 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar and that the mentioned method is a promising automatic criterion to check alignment accuracy.

The minimized and normalized L2 norms for five unaligned images of subjects, GavabDB cara1_gesto to cara2_abajo, Bosphorus bs000_E_DISGUST_0 to bs000_E_SURPRISE_0, UMB-DB 000006_0190_F_BO_F to 000012_0024_M_AN_F, and FRGC v2.0 04203d436 to 04203d444, are shown in Figure 11. Figure 12 depicts example original as well as aligned face images: from GavabDB, cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; from Bosphorus, (j) bs017_E_DISGUST_0, (k) bs001_E_ANGER_0, (l) bs000_YR_R20_0; from UMB-DB, (m) 001409_0002_M_NE_F, (n) 001433_0010_M_BO_F, (o) 001355_0001_M_AN_F; and from FRGC v2.0, (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.
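The L2 norm minimization criterion used above can be illustrated with a small, self-contained sketch. This is illustrative only, not the paper's exact PCF procedure: the toy point-cloud layout, the step sizes, and the min-depth nose-tip heuristic are all simplifying assumptions. The sketch rotates a cloud about its centroid and keeps the angle that minimizes the L2 norm between the nose tip and the scanner at the origin.

```python
import math

def rotate_yz(points, angle_deg):
    """Rotate a 3D point cloud in the yz plane (about the x-axis), around its centroid."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    n = len(points)
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(x, cy + c * (y - cy) - s * (z - cz), cz + s * (y - cy) + c * (z - cz))
            for x, y, z in points]

def nose_tip_l2(points):
    """L2 norm between the nose tip and the scanner (assumed at the origin,
    looking along +z); the tip is taken as the depth point closest to the scanner."""
    tip = min(points, key=lambda p: p[2])
    return math.hypot(tip[0], tip[1], tip[2])

def align_yz(points, coarse=5.0, fine=0.1, span=15.0):
    """Coarse-to-fine search for the in-plane rotation minimizing the nose-tip L2 norm."""
    coarse_angles = [k * coarse for k in range(-int(span / coarse), int(span / coarse) + 1)]
    best = min(coarse_angles, key=lambda a: nose_tip_l2(rotate_yz(points, a)))
    steps = int(round(coarse / fine))
    fine_angles = [best + k * fine for k in range(-steps, steps + 1)]
    return min(fine_angles, key=lambda a: nose_tip_l2(rotate_yz(points, a)))
```

On a toy cloud (a flat plate with a single protruding nose point) tilted by 12°, `align_yz` recovers approximately -12°, mirroring how the norm criterion drives the face back to frontal pose.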

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

Proposed methodology | FF (U/W)  | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W)  | RPF (U/W)  | Verification rate
d-MVWF               | 96.7/100  | 96.7/100                 | 95.1/98.4                  | -          | -          | 100
d-MVLHF              | 95.1/98.4 | 93.4/96.7                | 93.4/96.7                  | 91.8/95.1  | -          | 96.7
d-MVRHF              | 93.4/96.7 | 95.1/98.4                | 91.8/95.1                  | -          | 80.3/83.6  | 98.4
d-MVAHF              | 96.7/100  | 96.7/100                 | 95.1/98.4                  | -          | -          | 100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

Proposed methodology | Bosphorus: FF (U/W) | YR¹ < 90°, 525 images (U/W) | YR = 90°, 210 images (U/W) | Overall, 1365 images (U/W) | UMB-DB: Frontal Face (U/W)
d-MVWF               | 97.1/100            | 92.2/95.4                   | -                          | 93.1/96.0                  | 96.5/99.3
d-MVLHF              | 95.2/98.1           | 91.4/94.5                   | 84.3/87.1                  | 91.8/94.9                  | 93.7/97.2
d-MVRHF              | 96.2/99.0           | 91.0/94.1                   | -                          | 91.3/94.4                  | 94.4/97.9
d-MVAHF              | 97.1/100            | 92.2/95.4                   | -                          | 93.1/96.0                  | 96.5/99.3
¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, in keeping with the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.
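The pairwise score construction behind such an SVM-based verification setup can be sketched minimally. The helper below is hypothetical (the function name and the absolute-difference pairwise score are assumptions, not the paper's exact formulation): it labels each gallery-probe score pair as genuine or impostor, which is the form of supervision a verification classifier needs.

```python
from itertools import product

def pairwise_labels(gallery, probes):
    """gallery, probes: lists of (subject_id, score_vector) tuples.
    Returns pairwise score vectors and genuine (1) / impostor (0) labels."""
    pairs, labels = [], []
    for (gid, gvec), (pid, pvec) in product(gallery, probes):
        # a simple pairwise score: elementwise absolute difference of score vectors
        pairs.append([abs(a - b) for a, b in zip(gvec, pvec)])
        labels.append(1 if gid == pid else 0)
    return pairs, labels
```

The resulting (pairs, labels) arrays are exactly what a binary classifier such as an SVM would be trained on in the N vs. N setting.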

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of the proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates, respectively, for the FRGC v2.0 database (curves: d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF, each weighted and unweighted).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(∑_{j=1}^{n} y_{j-1} x_j² y_j z_j²). Here, n represents the number of convolutional layers, y_{j-1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.
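Plugging concrete layer shapes into this sum gives a rough operation count. The sketch below is a back-of-the-envelope calculation using the commonly cited AlexNet convolutional layer shapes (these shapes are an assumption from the standard architecture, not taken from the paper):

```python
# Each tuple: (input channels y_{j-1}, filter size x_j, filters y_j, output size z_j)
# for AlexNet's five convolutional layers in their standard configuration.
layers = [
    (3,   11, 96,  55),  # conv1
    (96,   5, 256, 27),  # conv2
    (256,  3, 384, 13),  # conv3
    (384,  3, 384, 13),  # conv4
    (384,  3, 256, 13),  # conv5
]

# Evaluate the sum  y_{j-1} * x_j^2 * y_j * z_j^2  over all layers
total_ops = sum(y_in * x * x * y_out * z * z for (y_in, x, y_out, z) in layers)
print(f"approx. multiply operations: {total_ops:,}")
```

The count comes out at roughly 1.08 billion multiplies, which makes clear why the convolutional feature extraction dominates the runtime budget discussed below.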

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log n). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time taken after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, the SVM only takes into account the global matching scores, resulting in lower computation time.

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing | MVAHF synthesis | Feature extraction | Classification (face recognition / face verification) | Total (face recognition / face verification)
0.451         | 0.089           | 1.024              | 0.029 / 0.021                                          | 1.593 / 1.585
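As a quick consistency check on the Table 4 timings, the identification and verification totals should equal the shared stage times plus the respective classification times. The small sketch below verifies this arithmetic:

```python
# Stage timings in seconds, read off Table 4
preprocessing = 0.451
mvahf_synthesis = 0.089
feature_extraction = 1.024
classification_identification = 0.029   # AlexNet's own classifier
classification_verification = 0.021     # SVM on global matching scores

total_identification = (preprocessing + mvahf_synthesis
                        + feature_extraction + classification_identification)
total_verification = (preprocessing + mvahf_synthesis
                      + feature_extraction + classification_verification)
```

Both sums reproduce the published totals (1.593 s and 1.585 s), confirming that the feature extraction stage accounts for the bulk of the runtime in either setup.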

Table 5: Recognition accuracy comparison (%) for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

Algorithms | GavabDB rank-1 identification: FF | Rotated looking up | Rotated looking down | LPF | RPF | Verification rates | Bosphorus rank-1 identification: FF | YR¹ < 90° | YR = 90° | Overall | UMB-DB: FF

Existing:
100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27]
100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39]
100 [47] | 100 [47]  | 98.4 [47] | 93.4 [28] | 78.7 [28] | -         | -        | 94.8 [63] | 57.1 [47] | 92.8 [47] | -

Proposed:
d-MVLHF          | 98.4 | 96.7 | 96.7 | 95.1² | -     | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2
d-MVRHF          | 96.7 | 98.4 | 95.1 | -     | 83.6² | 98.4 | 99.0 | 94.1 | -     | 94.4 | 97.9
d-MVWF / d-MVAHF | 100  | 100  | 98.4 | 95.1  | 83.6  | 100  | 100  | 95.4 | -     | 96.0 | 99.3

¹YR is yaw rotation (along the y-axis in the xz plane).
²LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracy comparison (%) for the proposed and existing approaches using the FRGC v2.0 database.

                    | Existing algorithms                        | Proposed algorithm
                    | [17]  [41]  [42]  [43]  [47]  [62]  [63]   | d-MVLHF  d-MVRHF  d-MVWF/d-MVAHF
Face identification | 98.7  96.1  93.8  98    99.6  98.7  99.8   | 97.9     96.8     99.8
Face verification   | 99.5  97.7  95.4  98.3  -     -     -      | 97.6     96.4     99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations), using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of only the nose tip, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.
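The arithmetic behind this saving can be spelled out directly. In the illustrative count below, one "rotation" means transforming one depth point once:

```python
# m depth points in a typical 3D face scan; 35 candidate angles per plane
# (3 coarse + 11 medium + 21 fine), as described above
m = 300_000
candidate_angles = 3 + 11 + 21

# naive: rotate every depth point at every candidate angle
naive_rotations = candidate_angles * m

# nose-tip-first: rotate a single point through all candidate angles,
# then rotate the whole face once with the learned angle
nose_tip_first_rotations = candidate_angles + m

print(naive_rotations, nose_tip_first_rotations)
```

The naive search costs 10.5 million point rotations, while the nose-tip-first strategy costs about 0.3 million, a factor of roughly 35.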

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a 71% reduced computational cost. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.
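The 71% figure follows from simple arithmetic: four half faces hold the data of two whole faces, versus seven whole faces in the d-MVWF pipeline:

```python
whole_face_views = 7   # d-MVWF views at 0, ±10, ±20, ±30 degrees
half_face_views = 4    # d-MVAHF views at 0, 10, 20, 30 degrees

# each averaged half-face image carries half the data of a whole face image
equivalent_whole_faces = half_face_views * 0.5

reduction = 1.0 - equivalent_whole_faces / whole_face_views
print(f"data reduction: {reduction:.1%}")
```

Two equivalent whole faces out of seven means 1 - 2/7 ≈ 71.4% less face data to process.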

(3) Comparative evaluation was also performed employing the d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to better performing MVAHF images (please see equation (5)).
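The general shape of such a weighted fusion can be sketched as follows. Equation (5) is not reproduced in this section, so the function below is only an illustrative weighted-sum form, and the per-view scores and weights are made-up values, not the paper's:

```python
def fuse(scores, weights):
    """Weighted sum of per-view match scores, normalized by the weight total."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# hypothetical match scores from the four MVAHF views at 0, 10, 20, 30 degrees;
# better-performing views (e.g., the frontal view) receive larger weights
view_scores = [0.92, 0.88, 0.81, 0.74]
view_weights = [0.4, 0.3, 0.2, 0.1]

fused = fuse(view_scores, view_weights)
```

Because the frontal views carry both the larger scores and the larger weights here, the fused score (0.868) sits well above the plain average of the four views (0.8375), which is the intended effect of the weighting.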

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution of the PFIs is improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would also perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features, like shapes and objects, based on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to aligning the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear Enhanced Fisher Discriminant Analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339-1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896-1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789-813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25-51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124-131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1-5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216-5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935-3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964-979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389-414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1-15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172-175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1-9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391-406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751-2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619-624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339-352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201-204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158-169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270-2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789-802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551-1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425-440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206-219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.

Mathematical Problems in Engineering 21

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A multi-modal approach for face modeling and recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


for each rotation at pixel values of the same grid position $P_{ij}$. In order to rule out outliers due to z-axis noise, only pixel values below a threshold $T$ are considered in the L2 norm computation, as given in equation (2). The face image is coarsely aligned at the angle corresponding to the minimum value of the L2 norm, which represents a good match.

$P_{ij} = \begin{cases} 0, & P_{ij} > T \\ P_{ij}, & \text{otherwise} \end{cases}$  (2)

(2) Fine Alignment. The face image is aligned at a fine level by rotating it in the range of -1° to +1° with a step size of 0.1°, using the procedure described above. The LPF and RPF, which come up as LHF and RHF images after alignment in the xz and yz planes (see Figures 9(d) and 9(f)), are aligned in the xy plane in a similar fashion.
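The coarse-to-fine rotation search described above can be sketched as follows. This is a simplified illustration on a synthetic point cloud; the helper names (`rotate_x`, `pcf_align`), the coarse angular range, and the use of a raw point cloud rather than a resampled depth grid are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def rotate_x(points, angle_deg):
    """Rotate an N x 3 point cloud about the x-axis (alignment in the yz plane)."""
    a = np.deg2rad(angle_deg)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(a), -np.sin(a)],
                  [0.0, np.sin(a),  np.cos(a)]])
    return points @ R.T

def l2_depth_norm(points, T):
    """L2 norm of depth (z) values; values above threshold T are zeroed
    out to rule out z-axis noise, in the spirit of equation (2)."""
    z = points[:, 2]
    return np.linalg.norm(np.where(z > T, 0.0, z))

def pcf_align(points, T, coarse_range=30.0, coarse_step=1.0):
    """Coarse pass over a wide angular range, then a fine pass from
    -1 to +1 degree around the coarse optimum with a 0.1 degree step."""
    coarse = np.arange(-coarse_range, coarse_range + coarse_step, coarse_step)
    best = min(coarse, key=lambda a: l2_depth_norm(rotate_x(points, a), T))
    fine = np.arange(best - 1.0, best + 1.05, 0.1)
    return min(fine, key=lambda a: l2_depth_norm(rotate_x(points, a), T))
```

For a planar cloud tilted by 12°, `pcf_align` recovers approximately -12°, the counter-rotation that minimizes the depth norm.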

3.2. d-MVAHF-Based 3D Face Recognition. For face recognition, the depth images were preprocessed to deal with noise and gap based artifacts. The sharp spikes present in depth face images due to the face capture process were removed using median filtering. Finally, the facial holes were filled employing interpolation, and facial irregularities were smoothed through low pass filtering at the end. The aligned whole face images were then rotated at 0°, ±10°, ±20°, and ±30° to synthesize MVWF images. Similarly, LHF and RHF images were rotated at 0°, -10°, -20°, and -30° and at 0°, 10°, 20°, and 30° around the y-axis to synthesize MVLHF and MVRHF images, respectively. MVLHF images were flipped and shifted along the respective MVRHF images such that they were completely overlapped (flipped MVRHF images can equally be shifted along MVLHF images). Subsequently, facial depth values on the same grid positions were averaged, and the complementary facial feature information provided by the nonoverlapping facial regions was retained, obtaining more complete global information for each view separately. The outcome of the whole process was a set of four MVAHF images oriented at 0°, 10°, 20°, and 30°. The motivation behind using MVAHF images instead of MVWF images is as follows. (i) Facial feature information carried by a half face image is similar to that of the flipped other half face image, owing to the intrinsic facial symmetry of the LHF and RHF. (ii) The RHF region is gradually occluded when a whole face image is rotated at -10°, -20°, and -30°; similarly, the LHF region is occluded at 10°, 20°, and 30°. The occluded face regions contribute poorly to the face recognition process, while processing whole face images doubles the computational complexity of the system. (iii) The multiview 3D information corresponding to MVWF images remains available by combining the facial information obtained from MVLHF and MVRHF images into MVAHF images. (iv) The synthesized MVAHF images provide stable features to evaluate the local variations and also include feature information from occluded facial regions that are less visible in frontal view images. Figure 10 readily shows the complementary face information through example synthesized MVAHF images employed for improving the face recognition accuracy.
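The flip-overlap-average step for synthesizing an AHF image can be sketched as follows. This is an illustrative assumption of how the merge might be realized on depth maps where zero marks invalid (nonoverlapping) pixels; the function name and the zero-invalid convention are not from the paper.

```python
import numpy as np

def synthesize_ahf(lhf, rhf):
    """Average an LHF depth map with the horizontally flipped RHF map on
    overlapping (valid, i.e., nonzero) grid positions; where only one map
    is valid, keep its value so complementary regions are retained."""
    rhf_f = np.fliplr(rhf)                      # flip RHF onto the LHF grid
    valid_l, valid_r = lhf > 0, rhf_f > 0
    ahf = np.zeros_like(lhf, dtype=float)
    both = valid_l & valid_r
    ahf[both] = (lhf[both] + rhf_f[both]) / 2.0  # average where both valid
    ahf[valid_l & ~valid_r] = lhf[valid_l & ~valid_r]
    ahf[~valid_l & valid_r] = rhf_f[~valid_l & valid_r]
    return ahf
```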

3.2.1. d-MVAHF-Based Face Identification Algorithm. An overview of the proposed d-MVAHF-based 3D face recognition algorithm is given in Figure 2(b). To extract d-MVAHF features using a dCNN, an MVAHF image of size h × w is processed through a deep network architecture known as AlexNet [56]. A pretrained AlexNet based deep network architecture was selected because of its better performance. AlexNet consists of five convolutional layers, represented as C1, C2, C3, C4, C5, followed by three pooling layers, denoted by P1, P2, P3, and three fully connected layers, indicated by f6, f7, f8. The fully connected layers employ dropout for regularization, and each convolutional layer is followed by a rectified linear unit (ReLU). The AlexNet architecture is graphically represented in Figure 2(b). The MVAHF-based facial features are extracted using the second to last fully connected layer, followed by the normalization process. The output of layer k is a set $A_k = \{a_{k1}, a_{k2}, a_{k3}, \ldots, a_{kn}\}$ of MVAHF-based facial features.

The procedure for implementing the proposed approachis outlined as follows

(1) For each MVAHF image, a 2048-dimensional d-MVAHF feature vector was extracted from the f7 layer of AlexNet.

(2) Matching scores between probe and gallery MVAHF images were calculated by comparing the respective L2 normalized d-MVAHF feature vectors. The matching scores were arranged as a matching-score matrix S of size m × n, where m and n denote the sizes of the probe and gallery sets, in the respective order. The matrix S has a negative polarity, reflecting that lower values of matching scores represent a higher level of similarity between the probe and gallery images, and vice versa. This step produced four matching-score matrices $S_j$, one for each of the normalized d-MVAHF feature vectors corresponding to AHF images oriented at 0°, 10°, 20°, and 30°.

(3) Each of the matching-score matrices $S_j$ was normalized before fusion in the f8 layer of AlexNet. For score normalization, the min-max normalization rule was utilized to normalize each row, mapping the original score distribution to the interval [0, 1]. If the maximum and minimum row-specific values of the raw matching scores are $\max(S_j^{row})$ and $\min(S_j^{row})$, respectively, then the normalized scores are computed as given in equation (3).

$S_j^{row} = \dfrac{S_j^{row} - \min(S_j^{row})}{\max(S_j^{row}) - \min(S_j^{row})}$  (3)
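The row-wise min-max rule of equation (3) amounts to the following one-liner (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def minmax_rows(S):
    """Row-wise min-max normalization of a matching-score matrix (equation (3)),
    mapping each row of raw scores onto the interval [0, 1]."""
    mn = S.min(axis=1, keepdims=True)
    mx = S.max(axis=1, keepdims=True)
    return (S - mn) / (mx - mn)
```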

(4) The four normalized matching-score matrices corresponding to the four MVAHF images were then fused using score based fusion to produce a combined matching-score matrix $S^{row}$, as given in equation (4).

$S^{row} = \sum_{j=1}^{4} w_j S_j^{row}$  (4)


Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images at 0°, 10°, 20°, and 30°; (b) LHF images at 0°, -10°, -20°, and -30°.

where $w_j$ represents the weight assigned to the jth MVAHF image, computed from the recognition accuracies obtained with the MVAHF images as given in equation (5),

$w_j = \dfrac{r_j}{\sum_{j=1}^{4} r_j}$  (5)

where $r_j$ represents the recognition accuracy of the jth MVAHF image against the gallery. The recognition accuracies can likewise be used in the test phase: a given PFI is first converted into MVAHF images oriented at 0°, 10°, 20°, and 30°; each of these MVAHF images is then classified against the gallery, yielding four recognition accuracies which are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF images oriented at 0° is maximum, then the corresponding matching-score matrix is assigned the maximum weight. The matching-score matrix $S^{row}$ was again normalized as $S'^{row}$ using the min-max rule given in equation (3).
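Equations (4) and (5) together describe an accuracy-weighted score-level fusion, which can be sketched as follows (the function name is an illustrative assumption):

```python
import numpy as np

def fuse_scores(score_mats, accuracies):
    """Weighted score-level fusion (equations (4)-(5)): each normalized
    matrix S_j is weighted by w_j = r_j / sum(r), the jth view's share
    of the total recognition accuracy."""
    r = np.asarray(accuracies, dtype=float)
    w = r / r.sum()                                   # equation (5)
    fused = sum(wj * Sj for wj, Sj in zip(w, score_mats))  # equation (4)
    return fused, w
```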

(5) The normalized matching scores obtained from $S'^{row}$ were utilized in the Softmax layer of AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, SVM aims to employ a hyperplane $w \cdot x + b = 0$ with maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes $(x_1, y_1), \ldots, (x_i, y_i)$, where $x_i \in R^n$ and $y_i \in \{1, -1\}$, in a higher dimensional space. The objective function of the form given in equation (6) is minimized to obtain the OSH, subject to the constraints $y_i[(w \cdot x_i) + b] \geq 1 - \xi_i$, $\xi_i \geq 0$ for $i = 1, \ldots, k$.

$\Phi(w, \xi) = \dfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{k} \xi_i$  (6)

where $\xi_i$ are slack variables used to penalize errors if the data are not linearly separable and C is the regularization constant. The sign of the following OSH surface function can then be used to classify a test point:

$f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b$  (7)

where $a_i \geq 0$ are the Lagrangian multipliers corresponding to the support vectors and b is determined by the above-mentioned optimization problem. In equation (7), K is the kernel trick used to transform nonseparable data onto a higher dimensional space, where it becomes linearly separable by a hyperplane; $x_i$ is the ith training sample and x is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the RBF kernel is of the form given in equation (8), where $\sigma^2$ is the spread of the RBF.

$K(x, x_i) = \exp\left[-\dfrac{\|x - x_i\|^2}{2\sigma^2}\right]$  (8)
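An RBF-SVM of this form can be trained on 4-dimensional genuine/imposter score vectors with scikit-learn, assuming that library is available. The toy clusters below are synthetic stand-ins for the (0°, 10°, 20°, 30°) MahCos training vectors, not data from the paper; note that scikit-learn parameterizes the RBF kernel as $\exp(-\gamma\|x - x_i\|^2)$, so $\gamma$ plays the role of $1/(2\sigma^2)$ in equation (8).

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic 4-dimensional score vectors: genuine pairs cluster at low
# distances across the four views, imposter pairs at high distances.
rng = np.random.default_rng(1)
genuine = rng.normal(0.2, 0.05, size=(20, 4))
imposter = rng.normal(0.9, 0.05, size=(20, 4))
X = np.vstack([genuine, imposter])
y = np.array([1] * 20 + [-1] * 20)

# gamma = 0.5 corresponds to sigma = 1 in equation (8).
clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
pred = clf.predict(np.array([[0.20, 0.25, 0.15, 0.20],
                             [0.90, 0.85, 0.95, 0.90]]))
```

With this clear separation, the first probe vector is accepted as genuine and the second rejected as imposter.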

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors s and t of the image space is defined as the cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

$d_{MahCos}(s, t) = -\dfrac{m \cdot n}{|m||n|} = -\dfrac{\sum_{i=1}^{N} m_i n_i}{\sqrt{\sum_{i=1}^{N} m_i^2}\sqrt{\sum_{i=1}^{N} n_i^2}} = -\dfrac{\sum_{i=1}^{N} (s_i/\sigma_i)(t_i/\sigma_i)}{\sqrt{\sum_{i=1}^{N} (s_i/\sigma_i)^2}\sqrt{\sum_{i=1}^{N} (t_i/\sigma_i)^2}}$  (9)

where $m_i = s_i/\sigma_i$, $n_i = t_i/\sigma_i$, and $\sigma_i$ is the standard deviation of the ith dimension. In this case higher similarity yields a higher score; thus the actual MahCos score is computed as given in equation (10).

$D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t)$  (10)
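Equations (9) and (10) translate directly into a few lines of numpy (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def mahcos(s, t, sigma):
    """MahCos score (equations (9)-(10)): cosine computed after scaling
    each dimension by its standard deviation (Mahalanobis space)."""
    m, n = s / sigma, t / sigma
    d = -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))  # equation (9)
    return 1.0 - d                                               # equation (10)
```

For identical vectors the score is 2 (cosine 1), and for orthogonal vectors it is 1 (cosine 0).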

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second neutral image of the whole gallery G. The scores were computed using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°), populating rows 1 to 4 of a training score matrix T. Each element $t_{ij}$ represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements $t_{ij}$ (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores $t_{ij}$ (for i ≠ j) represent imposter scores. The genuine scores (e.g., $t_{11}$) and the imposter scores (e.g., $t_{1G}$) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there will be G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vector of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, containing one genuine and G − 1 imposter 4 × 1 dimensional probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
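The split of the 4 × G × G training scores into genuine and imposter vectors can be sketched as follows; the tensor layout (one G × G MahCos matrix per orientation, stacked along the first axis) and the function name are assumptions of this sketch.

```python
import numpy as np

def split_training_vectors(T):
    """Split a 4 x G x G training score tensor (one G x G MahCos matrix
    per orientation) into genuine (i == j) and imposter (i != j)
    4-dimensional training vectors for the SVM."""
    _, G, _ = T.shape
    genuine = np.array([T[:, i, i] for i in range(G)])
    imposter = np.array([T[:, i, j] for i in range(G)
                         for j in range(G) if i != j])
    return genuine, imposter
```

For G = 20 this yields 20 genuine and 380 imposter vectors, matching the counts given in the text.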

4 Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithm. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section along with a description of the experiments and results.

41 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "carai frontal1" and "carai frontal2", are captured under frontal view. Another two are taken where a subject is looking up or down at angles +35° or -35°, named "carai arriba" and "carai abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90° or -90°, named "carai derecha" and "carai izquierda", respectively. The three nonneutral images, "carai gesto", "carai risa", and "carai sonrisa", present a random gesture chosen by the subjects, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expressions, occlusions, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying lengths from the scanner with variable resolution, frontal view, and minimal pose variations by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing criterion for evaluating the alignment accuracy of face images. One method that can be employed is human judgment, but human judgment is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this

Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (unaligned L2 norms for GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 versus the aligned norms).

Figure 12: Example 3D face images: original (rows 1, 3), aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar, and that the mentioned method is a promising automatic criterion to check alignment accuracy.

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1 gesto to cara2 abajo, Bosphorus bs000 E DISGUST 0 to bs000 E SURPRISE 0, UMB-DB 000006 0190 F BO F to 000012 0024 M AN F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images from GavabDB: cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus: (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; UMB-DB: (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and FRGC v2.0: (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments on the four databases are given as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.


Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

                Rank-1 identification rates (U/W)                                            Verification
Method     FF           Rotated looking up   Rotated looking down   LPF          RPF          rates
d-MVWF     96.7/100     96.7/100             95.1/98.4              -            -            100
d-MVLHF    95.1/98.4    93.4/96.7            93.4/96.7              91.8/95.1    -            96.7
d-MVRHF    93.4/96.7    95.1/98.4            91.8/95.1              -            80.3/83.6    98.4
d-MVAHF    96.7/100     96.7/100             95.1/98.4              -            -            100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

                                     Bosphorus (U/W)                                              UMB-DB (U/W)
Method     FF           YR¹ < 90° (525 images)   YR = 90° (210 images)   Overall (1365 images)   Frontal face
d-MVWF     97.1/100     92.2/95.4                -                       93.1/96.0               96.5/99.3
d-MVLHF    95.2/98.1    91.4/94.5                84.3/87.1               91.8/94.9               93.7/97.2
d-MVRHF    96.2/99.0    91.0/94.1                -                       91.3/94.4               94.4/97.9
d-MVAHF    97.1/100     92.2/95.4                -                       93.1/96.0               96.5/99.3
¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, in keeping with the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans, in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set comprises one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves

Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, respectively.

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. Computational complexity analysis of the proposed algorithm is given in terms of Big-$O$ notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of $O(m)$, where $m$ represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as $O(\sum_{j=1}^{n} y_{j-1} x_j^2 y_j z_j^2)$. Here, $n$ represents the number of convolutional layers, $y_{j-1}$ is the number of input channels of the $j$th layer, $y_j$ is the number of filters of the $j$th layer, $x_j$ is the spatial size of the filters, and $z_j$ denotes the size of the output feature map.
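As a concrete illustration of this sum, the sketch below evaluates the term for the standard AlexNet convolutional hyperparameters; these layer shapes are assumptions taken from the original AlexNet architecture (with channel grouping ignored), not values reported in this paper:

```python
# Evaluate sum_j y_{j-1} * x_j^2 * y_j * z_j^2 over the conv layers, i.e. the
# multiply-accumulate count that dominates AlexNet feature extraction.
ALEXNET_CONV = [
    # (input channels y_{j-1}, filter size x_j, filters y_j, output size z_j)
    (3,   11, 96,  55),
    (96,   5, 256, 27),
    (256,  3, 384, 13),
    (384,  3, 384, 13),
    (384,  3, 256, 13),
]

def conv_macs(layers):
    """Multiply-accumulate count summed over all convolutional layers."""
    return sum(y_in * x * x * y_out * z * z for y_in, x, y_out, z in layers)

total = conv_macs(ALEXNET_CONV)
print(f"~{total / 1e6:.0f}M multiply-accumulates in the conv layers")
```

Roughly a billion multiply-accumulates per forward pass under these assumed shapes, which is consistent with feature extraction being the most expensive stage, as noted below.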

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of $O(\log n)$. The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, SVM only takes into account the global matching scores, resulting in lower computation time.

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with the existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures. In this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing: 0.451
MVAHF synthesis: 0.089
Feature extraction: 1.024
Classification: 0.029 (face recognition) / 0.021 (face verification)
Total: 1.593 (face recognition) / 1.585 (face verification)

Table 5: Recognition accuracies comparison for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

Columns - GavabDB: rank-1 identification rates for FF, rotated looking up, rotated looking down, LPF, and RPF, plus verification rates; Bosphorus: rank-1 identification rates for FF, YR¹ < 90°, YR = 90°, and overall; UMB-DB: rank-1 identification rate for FF.

Existing:
100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] || 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] || 98.7 [27]
100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] || 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] || 98 [39]
100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - || - | 94.8 [63] | 57.1 [47] | 92.8 [47] || -

Proposed:
d-MVLHF: 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 || 98.1 | 94.5 | 87.1² | 94.9 || 97.2
d-MVRHF: 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 || 99 | 94.1 | - | 94.4 || 97.9
d-MVWF and d-MVAHF: 100 | 100 | 98.4 | 95.1 | 83.6 | 100 || 100 | 95.4 | - | 96 || 99.3

¹ YR is yaw rotation (along the y-axis in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction to save computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies comparison for the proposed and existing approaches using the FRGC v2.0 database.

                     Existing algorithms                          Proposed algorithm
                     [17]  [41]  [42]  [43]  [47]  [62]  [63]     d-MVLHF  d-MVRHF  d-MVWF/d-MVAHF
Face identification  98.7  96.1  93.8  98    99.6  98.7  99.8     97.9     96.8     99.8
Face verification    99.5  97.7  95.4  98.3  -     -     -        97.6     96.4     99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3+11+21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, is computationally very expensive. For example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations per plane. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.
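To make the saving concrete, the sketch below compares rough operation counts for the two strategies; the size of the nose-tip neighborhood is a hypothetical figure, not a value from the paper:

```python
# Cost comparison sketch for the PCF alignment strategy: searching over all 35
# candidate rotations per plane on the whole cloud versus searching only on the
# nose-tip region and then applying one final whole-face rotation per plane.
DEPTH_POINTS = 300_000      # whole-face point cloud (example size from the text)
NOSE_TIP_POINTS = 1_000     # assumed size of the nose-tip neighborhood
SEARCH_ROTATIONS = 35       # 3 + 11 + 21 coarse-to-fine steps per plane
PLANES = 2                  # xz and yz planes

# Naive: rotate every depth point for every candidate rotation in both planes
naive = DEPTH_POINTS * SEARCH_ROTATIONS * PLANES

# PCF-style: search over the nose-tip points only, then one whole-face rotation per plane
pcf = NOSE_TIP_POINTS * SEARCH_ROTATIONS * PLANES + DEPTH_POINTS * PLANES

print(f"naive: {naive:,} point rotations, PCF: {pcf:,} point rotations")
print(f"speedup: ~{naive / pcf:.0f}x")
```

Even under these rough assumptions the nose-tip-first strategy cuts the point-rotation count by more than an order of magnitude.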

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database.
The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26-28, 39, 41-44, 46, 47, 61-63] and [17, 41-43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) Comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with the increasing resolution of PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of gallery images is downsampled to the resolution of PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches at handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features like shapes and objects based on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed in the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.
[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339-1354, 2019.
[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.
[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896-1905, Salt Lake City, UT, USA, June 2018.
[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.
[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789-813, 2018.
[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25-51, 2018.
[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.
[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124-131, Australia, February 2018.
[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1-5, Tunisia, March 2018.
[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216-5225, Salt Lake City, UT, USA, June 2018.
[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935-3944, Salt Lake City, UT, USA, June 2018.
[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.
[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964-979, 2016.
[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389-414, 2011.
[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1-15, 2006.
[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.
[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172-175, 2014.
[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1-9, 2014.
[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391-406, 2017.
[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751-2762, 2017.
[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619-624, 2005.
[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339-352, 2005.
[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201-204, France, July 2003.
[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "meshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158-169, 2013.
[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270-2283, 2013.
[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789-802, 2013.
[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551-1565, 2012.
[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425-440, 2010.
[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, 1992.
[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.
[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206-219, 2010.
[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.
[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858-1870, 2010.
[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187-5196, Salt Lake City, UT, USA, June 2018.
[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77-82, 2004.
[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.
[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47-56, Springer, Berlin, Germany, 2008.
[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113-2119, Spain, November 2011.
[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947-954, USA, June 2005.
[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162-2177, 2010.
[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030-1040, 2008.
[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009-1019, 2011.
[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374-389, 2013.
[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445-451, 2009.
[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1-7, 2016.
[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211-233, 2017.
[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885-1906, 2007.
[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063-1074, 2003.
[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391-1398, USA, June 2006.
[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, FGR 2006, pp. 585-590, UK, April 2006.
[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121-135, 2013.
[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87-108, 2013.
[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1-8, 2014.
[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544-558, 2011.
[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097-1105, Lake Tahoe, Nev, USA, December 2012.
[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.
[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927-1943, 2007.
[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.
[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575-2582, 2009.
[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509-525, 2013.
[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128-142, 2015.
[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238-250, 2017.
[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019-2030, 2012.
[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761-1772, 2015.



12 Mathematical Problems in Engineering

Figure 10: 3D scan of subject FRGC v2.0 04221d553: (a) RHF images at 0°, 10°, 20°, and 30°; (b) LHF images at 0°, −10°, −20°, and −30°.

where $w_j$ represents the weight assigned to the $j$th MVAHF image, computed from the recognition accuracies obtained from the MVAHF images as given in equation (5):

$$w_j = \frac{r_j}{\sum_{j=1}^{4} r_j} \qquad (5)$$

where $r_j$ represents the recognition accuracy of the $j$th MVAHF image against the gallery. The recognition accuracies are used in the test phase as follows. A given PFI is first converted into MVAHF images oriented at 0°, 10°, 20°, and 30°. Each of these MVAHF images is then classified against the gallery, yielding four recognition accuracies, which are subsequently used to compute the weights in equation (5). This procedure is the same as that employed for each of the training images in the training phase. For example, if the recognition accuracy obtained from the MVAHF image oriented at 0° is maximum, the corresponding matching score matrix is assigned the maximum weight. The matching score matrix $S_{row}$ was again normalized as $S'_{row}$ using the min-max rule given in equation (3).

(5) The normalized matching scores obtained from $S'_{row}$ were utilized in the Softmax layer of the AlexNet to compute the final recognition accuracies.

(6) The whole process was repeated to classify MVWF, MVLHF, and MVRHF images.
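The weighting and fusion described in the steps above can be sketched as follows. The per-view accuracies and score matrices are hypothetical stand-ins, and `minmax_normalize` and `view_weights` are illustrative helper names, not functions from the paper.

```python
import numpy as np

def minmax_normalize(scores):
    """Min-max rule (cf. equation (3)): rescale a score matrix to [0, 1]."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def view_weights(accuracies):
    """Equation (5): w_j = r_j / sum_j r_j over the four MVAHF views."""
    r = np.asarray(accuracies, dtype=float)
    return r / r.sum()

# Hypothetical rank-1 accuracies for the views at 0, 10, 20, and 30 degrees.
w = view_weights([0.98, 0.96, 0.95, 0.93])

# Weighted fusion of four per-view matching score matrices (random
# stand-ins for a 5-probe x 5-gallery score matrix per view).
rng = np.random.default_rng(0)
S = [minmax_normalize(rng.random((5, 5))) for _ in range(4)]
S_fused = sum(wj * Sj for wj, Sj in zip(w, S))
```

The best performing view receives the largest weight, matching the description above.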

3.2.2. d-MVAHF-SVM-Based Face Verification Algorithm. For a binary classification problem such as face verification, the SVM aims to employ a hyperplane $w \cdot x + b = 0$ with maximum margins, termed the optimal separating hyperplane (OSH), that separates training vectors of two classes $(x_1, y_1), \ldots, (x_k, y_k)$, where $x_i \in R^n$ and $y_i \in \{1, -1\}$, in a higher dimensional space. The objective function given in equation (6) is minimized to obtain the OSH, subject to the constraints $y_i[(w \cdot x_i) + b] \ge 1 - \xi_i$, $\xi_i \ge 0$ for $i = 1, \ldots, k$:

$$\phi(w, \xi) = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{k}\xi_i \qquad (6)$$

where $\xi_i$ are slack variables used to penalize errors if the data are not linearly separable and $C$ is the regularization constant. The sign of the following OSH surface function can then be used to classify a test point:

$$f(x) = \sum_{i=1}^{k} y_i a_i K(x, x_i) + b \qquad (7)$$

where $a_i \ge 0$ are the Lagrangian multipliers corresponding to the support vectors and $b$ is determined by the above-mentioned optimization problem. In equation (7), $K$ is the kernel used to transform nonseparable data into a higher dimensional space where they become linearly separable by a hyperplane, $x_i$ is the $i$th training sample, and $x$ is the test sample. It is experimentally observed in this study that a radial basis function (RBF) kernel based SVM produces better recognition accuracies than the linear SVM; the RBF kernel is of the form given in equation (8), where $\sigma^2$ is the spread of the RBF:

$$K(x, x_i) = \exp\left[-\frac{\|x - x_i\|^2}{2\sigma^2}\right] \qquad (8)$$

The proposed face verification algorithm employs a d-MVAHF-SVM-based classification approach using two neutral face images of each subject. In order to train the SVM, MahCos scores were computed between the four d-MVAHF feature vectors of each image extracted using AlexNet, as shown in Figure 2(b). The MahCos score between two vectors $s$ and $t$ of the image space is defined as the Cosine score calculated in the Mahalanobis space, as given in equations (9) and (10) [57]:

$$d_{MahCos}(s, t) = -\frac{m \cdot n}{|m||n|} = -\frac{\sum_{i=1}^{N}(m_i n_i)}{\sqrt{\sum_{i=1}^{N}(m_i)^2}\sqrt{\sum_{i=1}^{N}(n_i)^2}} = -\frac{\sum_{i=1}^{N}((s_i/\sigma_i)(t_i/\sigma_i))}{\sqrt{\sum_{i=1}^{N}(s_i/\sigma_i)^2}\sqrt{\sum_{i=1}^{N}(t_i/\sigma_i)^2}} \qquad (9)$$

where $m_i = s_i/\sigma_i$, $n_i = t_i/\sigma_i$, and $\sigma_i$ is the standard deviation of the $i$th dimension. In this formulation higher similarity yields a higher score; thus the actual MahCos score is computed as given in equation (10):

$$D_{MahCos}(s, t) = 1 - d_{MahCos}(s, t) \qquad (10)$$
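A direct transcription of equations (9) and (10) might look as follows; the per-dimension standard deviations `sigma` would in practice be estimated from training data, and the sample vectors here are arbitrary.

```python
import numpy as np

def mahcos_distance(s, t, sigma):
    """Equation (9): negative cosine score computed in Mahalanobis space,
    where sigma holds the per-dimension standard deviations."""
    m = np.asarray(s, dtype=float) / sigma
    n = np.asarray(t, dtype=float) / sigma
    return -np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n))

def mahcos_score(s, t, sigma):
    """Equation (10): D_MahCos = 1 - d_MahCos, so more similar pairs
    receive higher scores."""
    return 1.0 - mahcos_distance(s, t, sigma)

# Identical vectors give d = -1 and hence the maximum score 1 - (-1) = 2,
# following the equations as stated.
sigma = np.array([1.0, 2.0, 0.5])
v = np.array([0.3, 1.1, -0.4])
```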

Referring to Figure 2(c), MahCos scores were computed between the first neutral image of each subject and the second neutral image of the whole gallery G. The scores were computed using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) to populate rows 1 to 4 of a training score matrix T. Each element $t_{ij}$ represents the score computed between d-MVAHF feature vectors of image $i$ and image $j$, where $i, j \in \{1, 2, \ldots, G\}$. The elements $t_{ij}$ (for $i = j$) represent genuine MahCos scores computed between an image and itself, whereas the scores $t_{ij}$ (for $i \neq j$) represent imposter scores. The genuine scores (e.g., $t_{11}$) and the imposter scores (e.g., $t_{1G}$) corresponding to all four orientations constitute $4 \times 1$ dimensional column vectors of genuine and imposter scores and are referred to as training vectors. For an example gallery of 20 subjects, there are $G \times G$ (400) training score vectors in total: $G$ (20) genuine and $G^2 - G$ (380) imposter vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vectors of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0°, 0°), (10°, 10°), (20°, 20°), and (30°, 30°) were used to populate rows 1 to 4 of the probe score matrix P, with one genuine and $G - 1$ imposter $4 \times 1$ dimensional probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
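A minimal sketch of the verification training described above, using scikit-learn's RBF-kernel SVC in place of the paper's SVM implementation. The gallery size and the synthetic genuine/imposter score distributions are assumptions for illustration, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
G = 20  # hypothetical gallery size

# 4-dimensional score vectors, one per (image i, image j) pair:
# i == j pairs are genuine (G of them), i != j pairs are imposters
# (G^2 - G of them). Values are synthetic stand-ins for MahCos scores.
genuine = rng.normal(1.8, 0.05, size=(G, 4))
imposter = rng.normal(0.8, 0.05, size=(G * G - G, 4))
X = np.vstack([genuine, imposter])
y = np.array([1] * G + [0] * (G * G - G))

clf = SVC(kernel="rbf", gamma="scale")  # RBF kernel as in equation (8)
clf.fit(X, y)

# A probe score vector near the genuine cluster should be accepted.
probe = np.full((1, 4), 1.8)
accepted = int(clf.predict(probe)[0]) == 1
```

Because the training vectors are only 4-dimensional score summaries rather than raw features, training and classification with the SVM stay cheap, consistent with the timing discussion in Section 4.4.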

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithms. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as the rank-1 identification rate and the verification rate at 0.1% false accept rate (FAR), respectively. The considered 3D face databases GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40] are reviewed in the following section along with a description of the experiments and results.

4.1. 3D Face Databases

GavabDB Database. The GavabDB database [36] contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "cara_i_frontal1" and "cara_i_frontal2", are captured under frontal view. Another two are taken while the subject is looking up or down at angles of +35° or −35°, named "cara_i_arriba" and "cara_i_abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles of +90° or −90° and are named "cara_i_derecha" and "cara_i_izquierda", respectively. The three nonneutral images, "cara_i_gesto", "cara_i_risa", and "cara_i_sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expression, occlusion, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D device and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, each of size 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license-based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing criterion to evaluate the alignment accuracy of face images. One method that can be employed is human judgment, but human judgment is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar and that the mentioned method is a promising automatic criterion to check alignment accuracy.

Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (normalized L2 norm plotted per subject for unaligned GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 images and for the aligned images).

Figure 12: Example 3D face images: original (rows 1 and 3) and aligned (rows 2 and 4).

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB cara1_gesto to cara2_abajo, Bosphorus bs000_E_DISGUST_0 to bs000_E_SURPRISE_0, UMB-DB 000006_0190_F_BO_F to 000012_0024_M_AN_F, and FRGC v2.0 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images: from GavabDB, cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; from Bosphorus, (j) bs017_E_DISGUST_0, (k) bs001_E_ANGER_0, (l) bs000_YR_R20_0; from UMB-DB, (m) 001409_0002_M_NE_F, (n) 001433_0010_M_BO_F, (o) 001355_0001_M_AN_F; and from FRGC v2.0, (p) 04217d399, (q) 04482d418, (r) 04387d322. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments on the four databases are given as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.

Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database. Each cell gives U/W values.

| Proposed methodology | FF | Rotated looking up | Rotated looking down | LPF | RPF | Verification rate |
| d-MVWF | 96.7/100 | 96.7/100 | 95.1/98.4 | - | - | 100 |
| d-MVLHF | 95.1/98.4 | 93.4/96.7 | 93.4/96.7 | 91.8/95.1 | - | 96.7 |
| d-MVRHF | 93.4/96.7 | 95.1/98.4 | 91.8/95.1 | - | 80.3/83.6 | 98.4 |
| d-MVAHF | 96.7/100 | 96.7/100 | 95.1/98.4 | - | - | 100 |

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases. Each cell gives U/W values.

| Proposed methodology | Bosphorus FF | Bosphorus YR¹ < 90° (525 images) | Bosphorus YR = 90° (210 images) | Bosphorus overall (1365 images) | UMB-DB frontal face |
| d-MVWF | 97.1/100 | 92.2/95.4 | - | 93.1/96.0 | 96.5/99.3 |
| d-MVLHF | 95.2/98.1 | 91.4/94.5 | 84.3/87.1 | 91.8/94.9 | 93.7/97.2 |
| d-MVRHF | 96.2/99.0 | 91.0/94.1 | - | 91.3/94.4 | 94.4/97.9 |
| d-MVAHF | 97.1/100 | 92.2/95.4 | - | 93.1/96.0 | 96.5/99.3 |

¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set is comprised of one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. For subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, for the d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF approaches.

4.4. Computational Complexity Analysis. The computational complexity of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of $O(m)$, where $m$ represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as $O(\sum_{j=1}^{n} y_{j-1} x_j^2 y_j z_j^2)$. Here $n$ represents the number of convolutional layers, $y_{j-1}$ is the number of input channels of the $j$th layer, $y_j$ is the number of filters of the $j$th layer, $x_j$ is the spatial size of the filters, and $z_j$ denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of $O(\log(n))$. The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification, whereas the SVM only takes into account the global matching scores, resulting in a lower computation time.
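As an illustration of the convolutional cost term above, the following sketch evaluates $\sum_j y_{j-1} x_j^2 y_j z_j^2$ for the standard (ungrouped) AlexNet layer shapes; these shapes come from the publicly known AlexNet architecture and are assumptions here, not measurements from the paper.

```python
# Per-layer shapes: (input channels y_{j-1}, filter size x_j,
#                    number of filters y_j, output map size z_j)
ALEXNET_CONV = [
    (3, 11, 96, 55),
    (96, 5, 256, 27),
    (256, 3, 384, 13),
    (384, 3, 384, 13),
    (384, 3, 256, 13),
]

def conv_cost(layers):
    """Multiply-accumulate count: sum of y_{j-1} * x_j^2 * y_j * z_j^2."""
    return sum(c_in * k * k * c_out * z * z for c_in, k, c_out, z in layers)

total = conv_cost(ALEXNET_CONV)
print(f"~{total / 1e9:.2f} billion multiply-accumulates per image")
```

The sum lands on the order of a billion multiply-accumulates per input image, which is consistent with feature extraction dominating the timings in Table 4.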

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in that study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Table 4: Time complexity of the proposed approach in seconds.

| Preprocessing | MVAHF synthesis | Feature extraction | Classification (identification) | Classification (verification) | Total (identification) | Total (verification) |
| 0.451 | 0.089 | 1.024 | 0.029 | 0.021 | 1.593 | 1.585 |

Table 5: Recognition accuracy (%) comparison of the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases. Per-cell citations indicate the source study of each existing result.

| Algorithm | GavabDB FF | Rotated looking up | Rotated looking down | LPF | RPF | Verification | Bosphorus FF | YR¹ < 90° | YR = 90° | Overall | UMB-DB FF |
| Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27] |
| Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39] |
| Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | - |
| Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2 |
| Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 | 99.0 | 94.1 | - | 94.4 | 97.9 |
| Proposed d-MVWF / d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 | 100 | 95.4 | - | 96.0 | 99.3 |

¹YR is yaw rotation (along the y-axis in the xz plane).
²LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial depth information and an ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates that the algorithm rotate the nose tip to the right side or downwards for alignment.

Table 6: Recognition accuracy (%) comparison of the proposed and existing approaches using the FRGC v2.0 database.

| | [17] | [41] | [42] | [43] | [47] | [62] | [63] | d-MVLHF | d-MVRHF | d-MVWF / d-MVAHF |
| Face identification | 98.7 | 96.1 | 93.8 | 98 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8 |
| Face verification | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6 |

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.
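The nose-tip-first, coarse-to-fine search described above can be sketched in one plane as follows. The geometry (face center 100 units from the scanner, 20-unit nose protrusion, 20° acquisition offset) and the three-stage search schedule are illustrative assumptions; only the final 0.1° fine step is taken from the paper.

```python
import numpy as np

def rotate_xz(p, deg):
    """Rotate a 3D point about the y-axis (i.e., within the xz plane)."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([c * p[0] + s * p[2], p[1], -s * p[0] + c * p[2]])

def align_nose_tip(center, offset, schedule):
    """Coarse-to-fine search for the rotation angle that minimizes the
    L2 norm between the nose tip and the scanner at the origin. Only the
    nose tip is rotated during the search; the learned angle can then be
    applied to the whole face in a single rotation."""
    best = 0.0
    for step, span in schedule:
        angles = np.arange(best - span, best + span + step / 2, step)
        norms = [np.linalg.norm(center + rotate_xz(offset, a)) for a in angles]
        best = float(angles[int(np.argmin(norms))])
    return best

# Hypothetical scan acquired with a 20-degree yaw offset.
center = np.array([0.0, 0.0, 100.0])           # face center vs. scanner
offset = rotate_xz(np.array([0.0, 0.0, -20.0]), 20.0)  # rotated nose tip
angle = align_nose_tip(center, offset, [(10.0, 30.0), (1.0, 10.0), (0.1, 1.0)])
```

The search recovers a correction of about −20°, i.e., the rotation that brings the nose tip closest to the scanner; rotating the full point cloud once by this angle is far cheaper than rotating every depth point at each search step.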

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images (2/7 ≈ 29% of the d-MVWF cost).

(3) Comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because more weight is assigned to better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would also perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines and dots), whereas the later layers tend to learn high level features, such as shapes and objects, built on the low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face, (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment, and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A multi-modal approach for face modeling and recognition, PhD dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


neutral image of the whole gallery G. The scores were computed using (training, gallery) pairs of d-MVAHF feature vectors for images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) to populate rows 1 to 4 of a training score matrix T. Each element tij represents the score computed between the d-MVAHF feature vectors of image i and image j, where i, j ∈ {1, 2, ..., G}. The elements tij (for i = j) represent genuine MahCos scores computed between an image and itself, whereas the scores tij (for i ≠ j) represent imposter scores. The genuine scores (e.g., t11) and the imposter scores (e.g., t1G) corresponding to all four orientations constitute 4 × 1 dimensional column vectors of genuine and imposter scores, referred to as training vectors. For an example gallery of 20 subjects, there are G × G (400) total, G (20) genuine, and G² − G (380) imposter training score vectors.

In the classification phase, MahCos probe scores were computed between the d-MVAHF feature vector of the PFI and the second neutral image of the whole gallery, as shown in Figure 2(c). The computed (probe, gallery) scores between d-MVAHF feature vector pairs of images oriented at (0∘, 0∘), (10∘, 10∘), (20∘, 20∘), and (30∘, 30∘) were used to populate rows 1 to 4 of the probe score matrix P, with 4 × 1 dimensional one genuine and G − 1 imposter probe score vectors (see Figure 2(c)). Based on the training of genuine and imposter d-MVAHF feature vectors, the SVM classifies the PFI against the gallery. A similar procedure was adopted to classify MVWF, MVLHF, and MVRHF images.
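The construction of the training score vectors described above can be sketched as follows. This is a simplified reading of the paper's procedure: plain cosine similarity stands in for the MahCos measure, and the function names and feature shapes are our assumptions:

```python
import numpy as np

def cosine_score(a, b):
    # stand-in for the MahCos measure: plain cosine similarity
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def training_score_vectors(features):
    """features: array of shape (4, G, d) -- d-MVAHF feature vectors of G
    gallery subjects at the four orientations (0,0), (10,10), (20,20), (30,30).
    Returns (genuine, imposter) lists of 4-dimensional score vectors, one
    score per orientation."""
    n_orient, G, _ = features.shape
    T = np.zeros((n_orient, G, G))
    for r in range(n_orient):
        for i in range(G):
            for j in range(G):
                T[r, i, j] = cosine_score(features[r, i], features[r, j])
    genuine = [T[:, i, i] for i in range(G)]                       # diagonal: image vs. itself
    imposter = [T[:, i, j] for i in range(G) for j in range(G) if i != j]
    return genuine, imposter

feats = np.random.rand(4, 20, 128)      # hypothetical 128-d features, G = 20
gen, imp = training_score_vectors(feats)
print(len(gen), len(imp))  # 20 380
```

For G = 20 this reproduces the counts in the text: 20 genuine and 380 imposter training vectors out of 400 in total.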

4. Results

The objective of this component of the study is to investigate the performance of the proposed face alignment and recognition algorithm. Four databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, are employed in the experiments. On each of these databases, face alignment, identification, and verification experiments are conducted to implement the proposed methodology. In the face identification and verification experiments, the performance is reported as rank-1 identification rate and verification rate at 0.1% false accept rate (FAR), in the respective order. The considered 3D face databases, GavabDB [36], Bosphorus [38], UMB-DB [39], and FRGC v2.0 [40], are reviewed in the following section along with a description of the experiments and results.
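The rank-1 identification rate used throughout the experiments can be computed directly from a probe-versus-gallery score matrix; a minimal sketch (function name and toy scores are ours):

```python
import numpy as np

def rank1_rate(scores, probe_labels, gallery_labels):
    """scores[p, g]: similarity of probe p to gallery entry g (higher is
    better). Rank-1 identification rate = fraction of probes whose
    best-matching gallery entry has the correct identity."""
    best = np.argmax(scores, axis=1)
    hits = sum(gallery_labels[b] == l for b, l in zip(best, probe_labels))
    return hits / len(probe_labels)

# toy example: 3 probes, 2 gallery subjects; probe 2 is misidentified
scores = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.3]])
print(rank1_rate(scores, [0, 1, 1], [0, 1]))  # 2/3 ~ 0.667
```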

4.1. 3D Face Databases

GavabDB Database. The GavabDB [36] database contains 549 3D facial images acquired using a Minolta VI-700 laser sensor from 45 male and 16 female Caucasian subjects. Each subject is acquired 9 times under various facial expressions and large pose variations. The database contains six neutral images for each subject, among which two, named "cara1_frontal1" and "cara1_frontal2", are captured under frontal view. Another two are taken where the subject is looking up or down at angles +35∘ or −35∘, named "cara1_arriba" and "cara1_abajo", respectively. The remaining two neutral images are scanned from the right or left side at angles +90∘ or −90∘ and are named "cara1_derecha" and "cara1_izquierda", respectively. The three nonneutral images, "cara1_gesto", "cara1_risa", and "cara1_sonrisa", present a random gesture chosen by the subject, an accentuated laugh, and a smile, respectively. The GavabDB database carries several types of facial variations, including variations in pose, expressions, occlusions, and resolution.

The Bosphorus Database. The Bosphorus database [38] is a multipose 3D face database constructed to enable testing of realistic and extreme pose variations, expression variations, and typical occlusions that may occur in real life. Each subject is captured with approximately 13 poses, 34 expressions (such as happiness, sadness, and surprise), and 4 occlusions. The database contains a total of 4666 scans collected from 61 male and 44 female subjects, including 29 professional actors/actresses. The 3D scans were acquired using an Inspeck Mega Capturor II 3D sensor and processed to remove holes and spikes and to crop the facial area.

UMB-DB Database. The UMB-DB database [39] is composed of 1473 3D depth images of 142 [27] subjects, including 98 male and 45 female subjects, mostly in the age range of 19 to 50 years. Almost all of the acquired subjects are Caucasian, with a few exceptions. Each subject is included with a minimum of three neutral, nonneutral (angry, smiling, and bored), and occluded acquisitions, with a size of 480 × 640. The Minolta Vivid 900 laser scanner is used to capture 2D and 3D images simultaneously. Face images have been captured in several indoor locations with uncontrolled lighting conditions. The database is released without any processing such as noise reduction or hole filling.

FRGC v2.0 Database. The FRGC v2.0 3D database [40] is a publicly available license based database. It supports 6 experiments, among which our study is focused on Experiment 3, designed for 3D shape and texture analysis. The face scans are acquired at varying distances from the scanner, with variable resolution, frontal view, and minimal pose variations, by a Minolta Vivid 900/910 series sensor. The scans are available in the form of four matrices of size 480 × 640. The matrices represent the x, y, z coordinates of faces and a binary representation showing valid points of the x, y, z matrices (where z is the facial distance from the scanner). The database contains male and female subjects aged 18 years and above. About sixty percent of the subjects carry neutral expressions, and the others carry expressions of happiness, sadness, surprise, disgust, and inflated cheeks. Some of the subjects carry occlusions (such as hair, spikes, and holes on the face), but none of them is wearing glasses [58].

4.2. Face Alignment Experiments. Using the proposed PCF algorithm, alignment experiments are performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases to align the faces at the minimum L2 norm between the nose tip and the 3D scanner. There is no existing evaluation criterion to assess the alignment accuracy of face images. One method that can be employed is human judgment, but human judgment is not automatic. Therefore, the L2 norm minimization evaluation method is employed in this

Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (unaligned curves for GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 versus the aligned result).

Figure 12: Example 3D face images: original (rows 1, 3), aligned (rows 2, 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar and that the mentioned method is a promising automatic criterion to check alignment accuracy.
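This evaluation criterion can be sketched as follows, under our simplified reading that the nose tip is the point of the cloud closest to the scanner (smallest z) and that its L2 norm to the scanner origin is the quantity the PCF alignment minimizes:

```python
import numpy as np

def nose_tip_l2(point_cloud):
    """Return the L2 norm of the nose tip, taken here as the point with the
    smallest z (closest to the scanner) -- a simplified stand-in for the
    paper's alignment-quality measure (smaller after alignment is better)."""
    nose_tip = point_cloud[np.argmin(point_cloud[:, 2])]
    return float(np.linalg.norm(nose_tip))

# a toy cloud whose nose tip sits at (1, 1, 3): the norm is sqrt(11) ~ 3.317
cloud = np.array([[0.0, 0.0, 5.0], [1.0, 1.0, 3.0], [0.0, 2.0, 9.0]])
print(nose_tip_l2(cloud))
```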

The minimized and normalized L2 norms for five unaligned images of subjects, GavabDB cara1_gesto to cara2_abajo, Bosphorus bs000_E_DISGUST_0 to bs000_E_SURPRISE_0, UMB-DB 000006_0190_F_BO_F to 000012_0024_M_AN_F, and FRGC v2.0 04203d436 to 04203d444, are shown in Figure 11. Figure 12 depicts example original as well as aligned face images: from GavabDB, cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; from Bosphorus, (j) bs017_E_DISGUST_0, (k) bs001_E_ANGER_0, (l) bs000_YR_R20_0; from UMB-DB, (m) 001409_0002_M_NE_F, (n) 001433_0010_M_BO_F, (o) 001355_0001_M_AN_F; and from FRGC v2.0, (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments on the four databases are given as follows.

4.3.1. Experiments on GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.

Mathematical Problems in Engineering 15

Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

Proposed methodology | FF (U / W) | Rotated looking up (U / W) | Rotated looking down (U / W) | LPF (U / W) | RPF (U / W) | Verification rate
d-MVWF  | 96.7 / 100  | 96.7 / 100  | 95.1 / 98.4 | -           | -           | 100
d-MVLHF | 95.1 / 98.4 | 93.4 / 96.7 | 93.4 / 96.7 | 91.8 / 95.1 | -           | 96.7
d-MVRHF | 93.4 / 96.7 | 95.1 / 98.4 | 91.8 / 95.1 | -           | 80.3 / 83.6 | 98.4
d-MVAHF | 96.7 / 100  | 96.7 / 100  | 95.1 / 98.4 | -           | -           | 100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

Proposed methodology | Bosphorus FF (U / W) | YR¹ < 90∘, 525 images (U / W) | YR = 90∘, 210 images (U / W) | Overall, 1365 images (U / W) | UMB-DB frontal face (U / W)
d-MVWF  | 97.1 / 100  | 92.2 / 95.4 | -           | 93.1 / 96.0 | 96.5 / 99.3
d-MVLHF | 95.2 / 98.1 | 91.4 / 94.5 | 84.3 / 87.1 | 91.8 / 94.9 | 93.7 / 97.2
d-MVRHF | 96.2 / 99.0 | 91.0 / 94.1 | -           | 91.3 / 94.4 | 94.4 / 97.9
d-MVAHF | 97.1 / 100  | 92.2 / 95.4 | -           | 93.1 / 96.0 | 96.5 / 99.3
¹YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery to follow the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set comprises one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. For subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves

Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, respectively (curves shown for d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF).

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated over all of its convolutional layers as O(∑_{j=1}^{n} y_{j−1} · x_j² · y_j · z_j²). Here n represents the number of convolutional layers, y_{j−1} is the number of input channels of the jth layer, y_j is the number of filters of the jth layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log n). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification, whereas the SVM only takes into account the global matching scores, resulting in lower computation time.
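The convolutional-layer complexity term in item (2) above can be evaluated numerically; the sketch below plugs in standard AlexNet layer shapes (illustrative numbers chosen by us, ignoring the original network's channel grouping):

```python
def conv_complexity(layers):
    """O(sum_j y_{j-1} * x_j^2 * y_j * z_j^2): total multiply count over the
    convolutional layers. Each entry is (y_prev, x, y, z) = (input channels,
    filter size, filter count, output feature-map size)."""
    return sum(y_prev * x * x * y * z * z for (y_prev, x, y, z) in layers)

# AlexNet-style convolutional stack (illustrative shapes)
alexnet = [(3, 11, 96, 55), (96, 5, 256, 27), (256, 3, 384, 13),
           (384, 3, 384, 13), (384, 3, 256, 13)]
print(f"{conv_complexity(alexnet):.3e}")  # on the order of 1e9 multiplies
```

The dominance of this term over the O(m) alignment cost and the O(log n) SVM cost is consistent with the observation that feature extraction is the most expensive stage.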

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial

Mathematical Problems in Engineering 17

Table 4: Time complexity of the proposed approach in seconds.

Preprocessing | MVAHF synthesis | Feature extraction | Classification (recognition / verification) | Total (recognition / verification)
0.451         | 0.089           | 1.024              | 0.029 / 0.021                               | 1.593 / 1.585
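The Total column of Table 4 is consistent with summing the stage times, assuming the preprocessing, MVAHF synthesis, and feature extraction times are shared by both setups; a quick check:

```python
# Stage times (seconds) read from Table 4.
preprocessing = 0.451
mvahf_synthesis = 0.089
feature_extraction = 1.024
classification_identification = 0.029  # AlexNet classifier (face identification)
classification_verification = 0.021    # SVM classifier (face verification)

shared = preprocessing + mvahf_synthesis + feature_extraction
total_identification = shared + classification_identification  # 1.593 s
total_verification = shared + classification_verification      # 1.585 s
```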

Table 5: Recognition accuracies comparison (%) for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases. GavabDB columns: rank-1 identification rates (FF, rotated looking up, rotated looking down, LPF, RPF) and verification rates; Bosphorus columns: rank-1 identification rates (FF, YR¹ < 90°, YR = 90°, overall); UMB-DB column: rank-1 identification rate (FF).

Algorithm                 | FF       | Rot. up   | Rot. down | LPF       | RPF       | Verif.    | FF (Bosph.) | YR¹ < 90° | YR = 90°  | Overall   | FF (UMB-DB)
Existing                  | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27]    | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27]
Existing                  | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62]    | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39]
Existing                  | 100 [47] | 100 [47]  | 98.4 [47] | 93.4 [28] | 78.7 [28] | -         | -           | 94.8 [63] | 57.1 [47] | 92.8 [47] | -
Proposed d-MVLHF          | 98.4     | 96.7      | 96.7      | 95.1²     | -         | 96.7      | 98.1        | 94.5      | 87.1²     | 94.9      | 97.2
Proposed d-MVRHF          | 96.7     | 98.4      | 95.1      | -         | 83.6²     | 98.4      | 99          | 94.1      | -         | 94.4      | 97.9
Proposed d-MVWF / d-MVAHF | 100      | 100       | 98.4      | 95.1      | 83.6      | 100       | 100         | 95.4      | -         | 96        | 99.3

¹ YR is yaw rotation (along the y-axis in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, which saves computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the algorithm to rotate the nose tip to the right side or downwards for alignment.

Table 6: Recognition accuracies comparison (%) for the proposed and existing approaches using the FRGC v2.0 database.

                    | Existing algorithms                      | Proposed algorithm
                    | [17]  [41]  [42]  [43]  [47]  [62]  [63] | d-MVLHF  d-MVRHF  d-MVWF / d-MVAHF
Face identification | 98.7  96.1  93.8  98    99.6  98.7  99.8 | 97.9     96.8     99.8
Face verification   | 99.5  97.7  95.4  98.3  -     -     -    | 97.6     96.4     99.6

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to the alignment of the nose tip prior to the whole face image.
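The arithmetic in item (3) can be reproduced. Counting one point rotation as the unit of work and treating the nose tip as a single point, the figures below are illustrative accounting rather than the paper's exact operation count:

```python
depth_points = 300_000          # depth points in the example 3D face image
candidate_steps = 3 + 11 + 21   # the 35 coarse-to-fine rotation steps per plane

# Naive search: rotate every depth point at every candidate step.
naive_rotations = depth_points * candidate_steps  # 10,500,000 point rotations

# PCF: search over the nose tip only, then one whole-face rotation per plane.
pcf_rotations = candidate_steps + depth_points
```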

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a 71% reduced computational cost. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) Comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies were decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies were decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies were decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database were decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features like shapes and objects based on the low level features.
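The 71% saving quoted in item (2) above follows from simple view counting; the "four half faces ≈ two whole faces" equivalence is taken from the text:

```python
whole_face_views = 7        # d-MVWF: views at 0°, ±10°, ±20°, ±30°
half_face_views = 4         # d-MVAHF: averaged half faces at 0°, 10°, 20°, 30°
whole_face_equivalent = half_face_views / 2  # two half faces ~ one whole face

reduction = (whole_face_views - whole_face_equivalent) / whole_face_views
# reduction is about 0.714, i.e. the ~71% computational saving for d-MVAHF
```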

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse to fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/#face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed in the experimentation process and writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.
[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.
[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.
[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.
[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.
[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.
[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.
[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124–131, Australia, February 2018.
[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1–5, Tunisia, March 2018.
[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.
[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.
[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.
[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.
[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.
[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.
[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.
[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.
[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.
[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.
[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.
[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.
[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.
[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201–204, France, July 2003.
[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.
[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.
[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.
[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.
[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.
[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.
[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.
[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.


[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.
[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.
[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.
[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.
[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.
[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113–2119, Spain, November 2011.
[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947–954, USA, June 2005.
[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.
[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.
[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.
[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.
[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.
[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.
[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.
[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.
[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.
[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391–1398, USA, June 2006.
[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the FGR 2006: 7th International Conference on Automatic Face and Gesture Recognition, pp. 585–590, UK, April 2006.
[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the Photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.
[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.
[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.
[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.
[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.
[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.
[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.
[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, PhD dissertation, 2008.
[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.
[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.
[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.
[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.
[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.
[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


Figure 11: PCF algorithm based minimized L2 norms shown for five subjects after alignment (y-axis: normalized L2 norm, 0–1; x-axis: subjects 1–5; curves: unaligned GavabDB, unaligned Bosphorus, unaligned UMB-DB, unaligned FRGC v2.0, and aligned).

Figure 12: Example 3D face images, panels (a)–(r): original (rows 1 and 3), aligned (rows 2 and 4).

study. It is observed in the experiments that the results of the L2 norm minimization evaluation method and manual judgment are quite similar and that the mentioned method is a promising automatic criterion to check alignment accuracy.

The minimized and normalized L2 norms for five unaligned images of subjects GavabDB: cara1 gesto to cara2 abajo; Bosphorus: bs000 E DISGUST 0 to bs000 E SURPRISE 0; UMB-DB: 000006 0190 F BO F to 000012 0024 M AN F; and FRGC v2.0: 04203d436 to 04203d444 are shown in Figure 11. Figure 12 depicts example original as well as aligned face images from GavabDB: cara1 (a) abajo, (b) arriba, (c) frontal1, (d) frontal2, (e) derecha, (f) izquierda, (g) gesto, (h) risa, (i) sonrisa; Bosphorus: (j) bs017 E DISGUST 0, (k) bs001 E ANGER 0, (l) bs000 YR R20 0; UMB-DB: (m) 001409 0002 M NE F, (n) 001433 0010 M BO F, (o) 001355 0001 M AN F; and FRGC v2.0: (p) 04217d399, (q) 04482d418, (r) 04387d322, respectively. The proposed PCF alignment algorithm accurately aligned and minimized the L2 norms of 99.82%, 100% (nonoccluded), 100%, and 99.95% of the subjects from the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively.
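The normalized L2-norm criterion plotted in Figure 11 can be sketched as follows. This is a minimal reading that computes the L2 norm over corresponding depth values of a registered probe and scales it to [0, 1]; it is an assumed formulation for illustration, not necessarily the paper's exact one:

```python
from math import sqrt

def normalized_l2(depth_a, depth_b):
    """L2 norm between two registered depth sequences, scaled to [0, 1]."""
    diff = sqrt(sum((a - b) ** 2 for a, b in zip(depth_a, depth_b)))
    # The triangle inequality gives an upper bound on diff, used for normalization.
    bound = sqrt(sum(a * a for a in depth_a)) + sqrt(sum(b * b for b in depth_b))
    return diff / bound if bound else 0.0

# Perfectly aligned (identical) depth values yield a norm of 0, matching the
# near-zero "aligned" curve in Figure 11.
```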

4.3. Face Recognition Experiments. The protocols and results of the face recognition experiments are given for the four databases as follows.

4.3.1. Experiments on the GavabDB Database

(1) For the identification setup, the experimental protocol of [46] is considered to perform N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. According to the mentioned protocol, the image "frontal1" belonging to each of the 61 subjects is enrolled in the gallery, whereas the images "frontal2", rotated looking down, and rotated looking up are used as probe sets.

Mathematical Problems in Engineering 15

Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

Proposed methodology | FF (U/W) | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W) | RPF (U/W) | Verification rates
d-MVWF | 96.7/100 | 96.7/100 | 95.1/98.4 | - | - | 100
d-MVLHF | 95.1/98.4 | 93.4/96.7 | 93.4/96.7 | 91.8/95.1 | - | 96.7
d-MVRHF | 93.4/96.7 | 95.1/98.4 | 91.8/95.1 | - | 80.3/83.6 | 98.4
d-MVAHF | 96.7/100 | 96.7/100 | 95.1/98.4 | - | - | 100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

Proposed methodology | Bosphorus FF (U/W) | YR¹ < 90°, 525 images (U/W) | YR = 90°, 210 images (U/W) | Overall, 1365 images (U/W) | UMB-DB frontal face (U/W)
d-MVWF | 97.1/100 | 92.2/95.4 | - | 93.1/96 | 96.5/99.3
d-MVLHF | 95.2/98.1 | 91.4/94.5 | 84.3/87.1 | 91.8/94.9 | 93.7/97.2
d-MVRHF | 96.2/99 | 91/94.1 | - | 91.3/94.4 | 94.4/97.9
d-MVAHF | 97.1/100 | 92.2/95.4 | - | 93.1/96 | 96.5/99.3
¹ YR is yaw rotation (about the y-axis, in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery to follow the experimental protocol mentioned for this database, and the image "frontal2" is used as probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs N verification experiments. The face identification and verification performance of the proposed methodology for the N vs N experiments is given in Table 2.
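As a rough sketch of the verification training described above: pairwise match scores are computed between the two gallery images per subject, genuine (same-subject) and impostor (cross-subject) pairs are labeled, and a classifier is trained on the scores. The sketch below substitutes a learned score threshold for the SVM and assumes cosine similarity between feature vectors; both are illustrative assumptions, not the paper's exact score definition.

```python
import numpy as np

def cosine_scores(a, b):
    """Cosine similarity between each row of a and each row of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def pairwise_training_scores(gallery1, gallery2):
    """Genuine scores from same-subject pairs (diagonal) and impostor
    scores from cross-subject pairs (off-diagonal); one row per subject."""
    s = cosine_scores(gallery1, gallery2)
    genuine = np.diag(s)
    impostor = s[~np.eye(len(s), dtype=bool)]
    return genuine, impostor

def fit_threshold(genuine, impostor):
    """Stand-in for the SVM stage: choose the score threshold that best
    separates genuine from impostor training pairs."""
    scores = np.concatenate([genuine, impostor])
    labels = np.concatenate([np.ones(len(genuine)), np.zeros(len(impostor))])
    best_t, best_acc = scores.min(), 0.0
    for t in np.sort(scores):
        acc = np.mean((scores >= t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

At probe time, a pair is accepted as genuine if its score exceeds the learned threshold.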

4.3.2. Experiments on the Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.

4.3.3. Experiments on the UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set comprises one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of our proposed methodology is given in Table 3.

4.3.4. Experiments on the FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves


Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates, respectively, for the FRGC v2.0 database. (Curves are plotted for d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF in both weighted and unweighted variants; the ROC x-axis is the false accept rate from 10⁻³ to 10⁰.)

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b), respectively.
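The rank-1 identification protocol used in these N vs N experiments (one gallery entry per subject; the closest gallery match wins) reduces to nearest-neighbor search in the feature space. A minimal sketch, assuming cosine similarity between dCNN feature vectors as the matching score (an illustrative choice, not necessarily the score the authors use):

```python
import numpy as np

def rank1_identification_rate(gallery, gallery_ids, probes, probe_ids):
    """Fraction of probes whose most similar gallery entry (cosine
    similarity between feature vectors) carries the correct subject ID."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    nearest = np.argmax(p @ g.T, axis=1)  # index of best gallery match per probe
    return float(np.mean(np.asarray(gallery_ids)[nearest] == np.asarray(probe_ids)))
```

The same per-probe similarity rankings, truncated at increasing ranks, yield the CMC curve of Figure 13(a).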

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(∑_{j=1}^{n} y_{j−1} x_j² y_j z_j²). Here n represents the number of convolutional layers, y_{j−1} is the number of input channels of the j-th layer, y_j is the number of filters of the j-th layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log n). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a P4 computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, the SVM only takes into account the global matching scores, resulting in lower computation time.
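The convolutional cost expression from point (2) above can be evaluated directly for the standard AlexNet configuration [56]. The layer dimensions below are the published ones, with the original two-GPU channel grouping ignored, so the count is an upper bound:

```python
# Per layer: (input channels y_{j-1}, filter size x_j, filters y_j, output size z_j)
ALEXNET_CONV = [
    (3, 11, 96, 55),    # conv1
    (96, 5, 256, 27),   # conv2
    (256, 3, 384, 13),  # conv3
    (384, 3, 384, 13),  # conv4
    (384, 3, 256, 13),  # conv5
]

def conv_cost(layers):
    """Multiply-accumulate count: sum over layers of y_{j-1} * x_j^2 * y_j * z_j^2."""
    return sum(y_in * x * x * y_out * z * z for y_in, x, y_out, z in layers)

total = conv_cost(ALEXNET_CONV)
print(f"{total / 1e6:.0f} M multiply-accumulates")
```

The roughly one billion multiply-accumulates per forward pass illustrate why feature extraction dominates the runtime in Table 4.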

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in that study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing | MVAHF synthesis | Feature extraction | Classification (face recognition / face verification) | Total (face recognition / face verification)
0.451 | 0.089 | 1.024 | 0.029 / 0.021 | 1.593 / 1.585

Table 5: Recognition accuracy comparison (%) for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases.

Algorithms | GavabDB FF | Rotated looking up | Rotated looking down | LPF | RPF | Verification rates | Bosphorus FF | YR¹ < 90° | YR = 90° | Overall | UMB-DB FF
Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27]
Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39]
Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | -
Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2
Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 | 99 | 94.1 | - | 94.4 | 97.9
Proposed d-MVWF / d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 | 100 | 95.4 | - | 96 | 99.3

¹ YR is yaw rotation (about the y-axis, in the xz plane). ² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracy comparison (%) for the proposed and existing approaches using the FRGC v2.0 database.

 | [17] | [41] | [42] | [43] | [47] | [62] | [63] | d-MVLHF | d-MVRHF | d-MVWF / d-MVAHF
Face identification | 98.7 | 96.1 | 93.8 | 98 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8
Face verification | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Please note that aligning the whole face instead of only the nose tip at the cost of 35 rotations is computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to aligning the nose tip prior to the whole face image.
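The cost argument in point (3) can be made concrete with the figures from the text (a 0.3-million-point cloud, 35 candidate rotations per plane). Treating the nose tip as a single landmark point is an illustrative simplification; in practice a small neighborhood of points may be rotated instead.

```python
def alignment_rotation_counts(num_points, candidate_rotations=35):
    """Point-rotation counts for one plane of the coarse-to-fine search.

    naive: rotate the whole cloud at every one of the 35 candidate poses.
    pcf:   rotate only the nose tip (a single landmark point here) through
           the 35 candidates, then rotate the whole cloud once with the
           learned angle."""
    naive = num_points * candidate_rotations
    pcf = 1 * candidate_rotations + num_points
    return naive, pcf

naive, pcf = alignment_rotation_counts(300_000)
# naive = 10,500,000 point rotations, matching the 0.3 x 35 = 10.5 million
# figure in the text; pcf = 300,035, roughly a 35x reduction for this plane.
```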

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a computational cost reduced by 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) A comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is due to assigning more weight to the better performing MVAHF images (please see equation (5)).

(5) The experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) The experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would also perform better than the existing approaches on low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.), whereas the later layers tend to learn high level features, such as shapes and objects, based on the low level features.
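The weight assignment strategy of point (4) amounts to score-level fusion over the four MVAHF views. A minimal sketch, assuming per-view match scores are combined by a normalized weighted sum (the actual weights follow equation (5), which is not reproduced here):

```python
import numpy as np

def fuse_view_scores(view_scores, view_weights):
    """Weighted score-level fusion across MVAHF views.
    view_scores: (n_views, n_gallery) match scores for one probe;
    view_weights: one weight per view, larger for better performing views."""
    w = np.asarray(view_weights, dtype=float)
    w = w / w.sum()  # normalize so the weights form a convex combination
    return w @ np.asarray(view_scores)

def identify(view_scores, view_weights):
    """Return the gallery index with the highest fused score."""
    return int(np.argmax(fuse_view_scores(view_scores, view_weights)))
```

For example, with two gallery subjects and four views (0°, 10°, 20°, 30°), up-weighting a reliable view can flip the decision relative to plain averaging, which is exactly how weighting lifts the unweighted rates in Tables 2 and 3.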

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to aligning the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) the experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As future work, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear Enhanced Fisher Discriminant Analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences, vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences, vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the Photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H Li D Huang J-MMorvan YWang and L Chen ldquoTowards3D face recognition in the real a registration-free approachusing fine-grainedmatching of 3D Keypoint descriptorsrdquo Inter-national Journal of Computer Vision vol 113 no 2 pp 128ndash1422015

[63] S Z Gilani A Mian and P Eastwood ldquoDeep dense andaccurate 3D face correspondence for generating populationspecific deformable modelsrdquo Pattern Recognition vol 69 pp238ndash250 2017

[64] S Biswas KW Bowyer andP J Flynn ldquoMultidimensional scal-ing formatching low-resolution face imagesrdquo IEEETransactionson Pattern Analysis and Machine Intelligence vol 34 no 10 pp2019ndash2030 2012

[65] M Jian and K-M Lam ldquoSimultaneous hallucination andrecognition of low-resolution faces based on singular valuedecompositionrdquo IEEE Transactions on Circuits and Systems forVideo Technology vol 25 no 11 pp 1761ndash1772 2015


Mathematical Problems in Engineering 15

Table 2: Unweighted (U) and weighted (W) recognition rates (%) using the GavabDB database.

Proposed methodology | FF (U/W) | Rotated looking up (U/W) | Rotated looking down (U/W) | LPF (U/W) | RPF (U/W) | Verification rate
d-MVWF  | 96.7 / 100  | 96.7 / 100  | 95.1 / 98.4 | -           | -           | 100
d-MVLHF | 95.1 / 98.4 | 93.4 / 96.7 | 93.4 / 96.7 | 91.8 / 95.1 | -           | 96.7
d-MVRHF | 93.4 / 96.7 | 95.1 / 98.4 | 91.8 / 95.1 | -           | 80.3 / 83.6 | 98.4
d-MVAHF | 96.7 / 100  | 96.7 / 100  | 95.1 / 98.4 | -           | -           | 100

Table 3: Unweighted (U) and weighted (W) rank-1 identification rates (%) using the Bosphorus and UMB-DB databases.

Proposed methodology | Bosphorus FF (U/W) | YR¹ < 90°, 525 images (U/W) | YR = 90°, 210 images (U/W) | Overall, 1365 images (U/W) | UMB-DB frontal face (U/W)
d-MVWF  | 97.1 / 100  | 92.2 / 95.4 | -           | 93.1 / 96   | 96.5 / 99.3
d-MVLHF | 95.2 / 98.1 | 91.4 / 94.5 | 84.3 / 87.1 | 91.8 / 94.9 | 93.7 / 97.2
d-MVRHF | 96.2 / 99   | 91 / 94.1   | -           | 91.3 / 94.4 | 94.4 / 97.9
d-MVAHF | 97.1 / 100  | 92.2 / 95.4 | -           | 93.1 / 96   | 96.5 / 99.3
¹ YR is yaw rotation (along the y-axis in the xz plane).

(2) For identification of profile face images, this study employs d-MVLPF and d-MVRPF images for each of the 61 subjects.

(3) For evaluation of the face verification algorithm, the protocol used in the study [44] is followed, where the "frontal1" image of each subject is enrolled in the gallery, as per the experimental protocol mentioned for this database, and the image "frontal2" is used as the probe. Referring to Section 3.2.2, two neutral images per subject are used to calculate d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF-based training scores for the SVM classifier in the training phase. Therefore, the neutral image "abajo" is included as the second image along with "frontal1" in the gallery for computing pairwise training scores, whereas "frontal2" and "frontal1" are used for pairwise probe score calculation in the N vs. N verification experiments. The face identification and verification performance of the proposed methodology for the N vs. N experiments is given in Table 2.

4.3.2. Experiments on Bosphorus Database. Using the Bosphorus database, the proposed d-MVAHF identification algorithm is evaluated by performing N vs. N experiments on d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images using the experimental protocol of the study [27]. In the mentioned protocol, the gallery set consists of the first neutral scan of each subject (105 scans), whereas the probe set is created using the remaining 194 neutral scans and the challenging pose scans in separate experiments. The performance of the proposed identification approach is given in Table 3.
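As a concrete illustration of this closed-set gallery/probe protocol, the sketch below scores rank-1 identification by nearest-neighbour matching over feature vectors. The toy features and the Euclidean matcher are illustrative assumptions; in the paper the matching itself is performed by the dCNN.

```python
import numpy as np

def rank1_identification_rate(gallery, gallery_ids, probes, probe_ids):
    """Closed-set identification: each probe is assigned the identity of
    its nearest gallery template; the rank-1 rate is the fraction of
    probes whose nearest neighbour carries the correct identity."""
    correct = 0
    for feat, true_id in zip(probes, probe_ids):
        # Euclidean distance from this probe to every gallery template
        dists = np.linalg.norm(gallery - feat, axis=1)
        if gallery_ids[np.argmin(dists)] == true_id:
            correct += 1
    return correct / len(probes)

# Toy example: 3 gallery identities, 4 probes (features are made up)
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
gallery_ids = ["s1", "s2", "s3"]
probes = np.array([[0.1, 0.0], [1.1, 0.9], [1.9, 2.1], [0.0, 2.0]])
probe_ids = ["s1", "s2", "s3", "s1"]  # the last probe is deliberately hard
print(rank1_identification_rate(gallery, gallery_ids, probes, probe_ids))  # 0.75
```

Three of the four probes fall nearest to their own gallery template; the mislocated fourth probe lowers the rate to 0.75.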

4.3.3. Experiments on UMB-DB Database. For evaluation of the proposed d-MVAHF identification algorithm, we employ the experimental protocol of the study [27] to create the N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images, where the gallery set comprises one neutral scan per subject (142 scans) and the probe set contains all remaining neutral scans (299 scans). The performance of the proposed methodology is given in Table 3.

4.3.4. Experiments on FRGC v2.0 Database

(1) For evaluation of the face identification algorithm, the experimental protocol of the study [41] is employed for N vs. N experiments using d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images from the FRGC v2.0 database, which contains 2469 neutral images [41]. In these experiments, the probe set is created using 2003 neutral images, whereas the first neutral image of each of the 466 subjects is enrolled in the gallery.

(2) The face verification algorithm was investigated by creating N vs. N experiments using the d-MVWF, d-MVLHF, d-MVRHF, and d-MVAHF images. The FRGC v2.0 database comprises 370 subjects that have at least two neutral images [45]. Therefore, two images per subject (740 images) are included in the gallery to calculate SVM training scores. In the case of subjects that have more than two neutral images, the first two of the stored neutral images are contained in the gallery. All the remaining neutral face images are used as the probe set. The performance of the proposed identification and verification algorithms is given by cumulative match characteristic (CMC) curves



Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates, respectively, for the FRGC v2.0 database.

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).
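A CMC curve of the kind plotted in Figure 13(a) can be computed from a probe-versus-gallery distance matrix. The sketch below is a generic illustration with made-up distances, not the paper's implementation:

```python
import numpy as np

def cmc_curve(dist, gallery_ids, probe_ids, max_rank=5):
    """dist[i, j] is the distance between probe i and gallery template j.
    cmc[r-1] gives the fraction of probes whose correct identity appears
    among the r closest gallery matches."""
    hits = np.zeros(max_rank)
    for i, true_id in enumerate(probe_ids):
        ranked = [gallery_ids[j] for j in np.argsort(dist[i])]
        rank = ranked.index(true_id) + 1   # 1-based rank of the true match
        if rank <= max_rank:
            hits[rank - 1:] += 1           # cumulative: counts at rank r and beyond
    return hits / len(probe_ids)

dist = np.array([[0.2, 0.5, 0.9],
                 [0.7, 0.1, 0.4]])
print(cmc_curve(dist, ["a", "b", "c"], ["a", "c"], max_rank=3))
# rank-1 rate 0.5; rank-2 and rank-3 rates 1.0
```

The first probe matches its identity at rank 1; the second probe's true identity is only the second-closest template, so it contributes from rank 2 onwards.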

4.4. Computational Complexity Analysis. The computational complexity analysis of the proposed algorithm is given in terms of Big-O notation as follows:

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(Σ_{j=1}^{n} y_{j−1} x_j² y_j z_j²). Here n represents the number of convolutional layers, y_{j−1} is the number of input channels of the j-th layer, y_j is the number of filters of the j-th layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity involves the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log(n)). The computational complexity analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification. On the other hand, the SVM only takes into account the global matching scores, resulting in lower computation time.
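The per-layer cost formula in point (2) can be evaluated numerically. The sketch below uses the standard single-stream AlexNet layer shapes (ignoring the original two-GPU channel grouping), so the resulting figure is indicative rather than the paper's measured cost:

```python
# AlexNet's five convolutional layers as tuples of
# (input channels y_{j-1}, filters y_j, filter size x_j, output map size z_j)
layers = [
    (3,   96,  11, 55),
    (96,  256, 5,  27),
    (256, 384, 3,  13),
    (384, 384, 3,  13),
    (384, 256, 3,  13),
]

# O(sum_{j=1}^{n} y_{j-1} * x_j^2 * y_j * z_j^2):
# multiply-accumulate operations for one forward pass over the conv layers
total = sum(y_prev * x * x * y * z * z for (y_prev, y, x, z) in layers)
print(f"{total:,} MACs (~{total / 1e9:.2f} G)")  # 1,076,634,144 MACs (~1.08 G)
```

This makes concrete why feature extraction dominates: the convolutional stack costs on the order of a billion multiply-accumulates per image, dwarfing the score-level SVM step.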

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with the existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; in this study, matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing | MVAHF synthesis | Feature extraction | Classification (recognition / verification) | Total (recognition / verification)
0.451 | 0.089 | 1.024 | 0.029 / 0.021 | 1.593 / 1.585

Table 5: Recognition accuracy (%) comparison of the proposed and existing approaches on the GavabDB, Bosphorus, and UMB-DB databases.

GavabDB (rank-1 identification and verification rates):

Algorithm | FF | Rotated looking up | Rotated looking down | LPF² | RPF² | Verification
Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59]
Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60]
Existing | 100 [47] | 100 [47]  | 98.4 [47] | 93.4 [28] | 78.7 [28] | -
Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1 | -    | 96.7
Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | -    | 83.6 | 98.4
Proposed d-MVWF / d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100

Bosphorus and UMB-DB (rank-1 identification rates):

Algorithm | FF | YR¹ < 90° | YR = 90°² | Overall | UMB-DB FF
Existing | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27]
Existing | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39]
Existing | -        | 94.8 [63] | 57.1 [47] | 92.8 [47] | -
Proposed d-MVLHF | 98.1 | 94.5 | 87.1 | 94.9 | 97.2
Proposed d-MVRHF | 99   | 94.1 | -    | 94.4 | 97.9
Proposed d-MVWF / d-MVAHF | 100 | 95.4 | - | 96 | 99.3

¹ YR is yaw rotation (along the y-axis in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction to save computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracy (%) comparison of the proposed and existing approaches on the FRGC v2.0 database.

Algorithm | Face identification | Face verification
Existing [17] | 98.7 | 99.5
Existing [41] | 96.1 | 97.7
Existing [42] | 93.8 | 95.4
Existing [43] | 98   | 98.3
Existing [47] | 99.6 | -
Existing [62] | 98.7 | -
Existing [63] | 99.8 | -
Proposed d-MVLHF | 97.9 | 97.6
Proposed d-MVRHF | 96.8 | 96.4
Proposed d-MVWF / d-MVAHF | 99.8 | 99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, would be computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to aligning the nose tip prior to the whole face image.
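The coarse-to-fine idea can be sketched as a staged one-dimensional angle search. The stage ranges, the xz-plane rotation, and the lateral-deviation L2 cost below are illustrative assumptions rather than the paper's exact criterion:

```python
import numpy as np

def rotate_xz(points, angle_deg):
    """Rotate an Nx3 point cloud about the y-axis, i.e., in the xz plane."""
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t),  0.0, np.sin(t)],
                  [0.0,        1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return points @ R.T

def coarse_to_fine_angle(region, cost):
    """Three-stage search (3 + 11 + 21 = 35 candidate rotations, mirroring
    the paper's budget) for the angle minimising an L2-norm cost; each
    stage re-centres a narrower range on the current best candidate."""
    best = 0.0
    for half_range, n in [(30.0, 3), (5.0, 11), (1.0, 21)]:
        candidates = np.linspace(best - half_range, best + half_range, n)
        best = min(candidates, key=lambda a: cost(rotate_xz(region, a)))
    return best

# Toy nose-ridge region lying on the z-axis, then tilted 4 degrees in xz;
# the cost penalises lateral (x) deviation, so the search recovers -4.
nose = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0], [0.0, 0.0, 3.0]])
observed = rotate_xz(nose, 4.0)
angle = coarse_to_fine_angle(observed, lambda p: np.linalg.norm(p[:, 0]))
print(round(angle, 1))  # -4.0
```

The point of searching only the small nose-tip region is exactly the cost argument above: each candidate rotation touches a handful of points instead of the full 0.3-million-point cloud.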

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database. The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.

(3) Comparative evaluation was also performed employing the d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, on the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement results from assigning more weight to the better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integrating the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms the existing approaches on high resolution PFIs, it would also perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.), whereas the later layers tend to learn high level features such as shapes and objects built on the low level features.
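The weight assignment strategy of point (4) above amounts to score-level fusion of the four MVAHF views. Since equation (5) is not reproduced in this excerpt, the weights and scores below are purely illustrative assumptions:

```python
import numpy as np

# Hypothetical per-view match scores for one probe against one gallery
# subject, from the four MVAHF views oriented at 0, 10, 20, and 30 degrees.
view_scores = np.array([0.91, 0.88, 0.80, 0.72])

# Illustrative weights favouring the better performing views; the actual
# values come from equation (5) of the paper, which is not shown here.
weights = np.array([0.4, 0.3, 0.2, 0.1])

fused = float(np.dot(weights, view_scores))  # weighted score-level fusion
print(round(fused, 3))  # 0.86
```

With these made-up numbers the fused score (0.86) sits closer to the strongest views than a plain average (0.8275) would, which is the intended effect of weighting better performing views more heavily.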

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal as well as profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M Sajid N Iqbal Ratyal N Ali et al ldquoThe impact of asym-metric left and asymmetric right face images on accurate ageestimationrdquo Mathematical Problems in Engineering vol 2019Article ID 8041413 10 pages 2019

[2] M Bessaoudi M Belahcene A Ouamane A Chouchaneand S Bourennane ldquoMultilinear Enhanced FisherDiscriminant

20 Mathematical Problems in Engineering

Analysis for robust multimodal 2D and 3D face verificationrdquoApplied Intelligence vol 49 no 4 pp 1339ndash1354 2019

[3] E Basaran M Gokmen and M E Kamasak ldquoAn efficientmultiscale scheme using local Zernike moments for face recog-nitionrdquo Applied Sciences (Switzerland) vol 8 no 5 article no827 2018

[4] S Z Gilani and A Mian ldquoLearning from millions of 3Dscans for large-scale 3D face recognitionrdquo in Proceedings of the2018 IEEECVF Conference on Computer Vision and PatternRecognition (CVPR) pp 1896ndash1905 Salt Lake City UT USAJune 2018

[5] A Irtaza S M Adnan K T Ahmed et al ldquoAn ensemble basedevolutionary approach to the class imbalance problem withapplications in CBIRrdquo Applied Sciences (Switzerland) vol 8 no4 artilce no 495 2018

[6] N Dagnes E Vezzetti F Marcolin and S Tornincasa ldquoOcclu-sion detection and restoration techniques for 3D face recogni-tion a literature reviewrdquoMachine Vision and Applications vol29 no 5 pp 789ndash813 2018

[7] S Ramalingam ldquoFuzzy interval-valued multi criteria baseddecision making for ranking features in multi-modal 3D facerecognitionrdquo Fuzzy Sets and Systems vol 337 pp 25ndash51 2018

[8] M Sajid N Ali S H Dar et al ldquoData augmentation-assistedmakeup-invariant face recognitionrdquo Mathematical Problems inEngineering vol 2018 Article ID 2850632 10 pages 2018

[9] J Kittler P Koppen P Kopp P Huber and M RatschldquoConformal mapping of a 3d face representation onto a 2Dimage for CNN based face recognitionrdquo in Proceedings of the11th IAPR International Conference on Biometrics ICB 2018 pp124ndash131 Australia February 2018

[10] M Bessaoudi M Belahcene A Ouamane and S BourennaneldquoA novel approach based on high order tensor and multi-scalelocals features for 3D face recognitionrdquo in Proceedings of the 4thInternational Conference on Advanced Technologies for Signaland Image Processing ATSIP 2018 pp 1ndash5 Tunisia March 2018

[11] F Liu R Zhu D Zeng Q Zhao and X Liu ldquoDisentanglingFeatures in 3D Face Shapes for Joint Face Reconstruction andRecognitionrdquo in Proceedings of the 2018 IEEECVF Conferenceon Computer Vision and Pattern Recognition (CVPR) pp 5216ndash5225 Salt Lake City UT USA June 2018

[12] A T Tran T Hassner IMasi E Paz Y Nirkin andGMedionildquoExtreme 3D face reconstruction seeing through occlusionsrdquoin Proceedings of the 2018 IEEECVF Conference on ComputerVision and Pattern Recognition (CVPR) pp 3935ndash3944 SaltLake City UT USA June 2018

[13] N Pears Y Liu and P Bunting 3D Imaging Analysis andApplications vol 3 Springer Berlin Germany 2012

[14] NWerghi C Tortorici S Berretti andADel Bimbo ldquoBoosting3D LBP-Based face recognition by fusing shape and texturedescriptors on the meshrdquo IEEE Transactions on InformationForensics and Security vol 11 no 5 pp 964ndash979 2016

[15] L Spreeuwers ldquoFast and accurate 3D face recognition Usingregistration to an intrinsic coordinate system and fusion ofmultiple region classifiersrdquo International Journal of ComputerVision vol 93 no 3 pp 389ndash414 2011

[16] K W Bowyer K Chang and P Flynn ldquoA survey of approachesand challenges in 3D and multi-modal 3D + 2D face recogni-tionrdquo Computer Vision and Image Understanding vol 101 no 1pp 1ndash15 2006

[17] X Wang Q Ruan Y Jin and G An ldquoThree-dimensional facerecognition under expression variationrdquo Eurasip Journal onImage and Video Processing vol 2014 no 51 2014

[18] S Elaiwat M Bennamoun F Boussaid and A El-Sallam ldquo3-D face recognition using curvelet local featuresrdquo IEEE SignalProcessing Letters vol 21 no 2 pp 172ndash175 2014

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.

Mathematical Problems in Engineering 21

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer Berlin Heidelberg, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the FGR 2006, 7th International Conference on Automatic Face and Gesture Recognition, pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nevada, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, PhD dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


[Figure 13: (a) CMC and (b) ROC curves showing weighted (w) and unweighted (u) face identification and verification rates for the FRGC v2.0 database, respectively. Panel (a) plots identification rate against rank; panel (b) plots verification rate against false accept rate. Curves are shown for d-MVAHF, d-MVWF, d-MVLHF, and d-MVRHF, each in weighted (w) and unweighted (u) form.]

in Figure 13(a) and receiver operating characteristic (ROC) curves in Figure 13(b).

4.4. Computational Complexity Analysis. The computational complexity of the proposed algorithm is given in terms of Big-O notation as follows.

(1) The computational complexity of the proposed PCF alignment algorithm is of the order of O(m), where m represents the total number of facial depth points in the point cloud.

(2) For d-MVAHF-based face identification, the total time complexity of AlexNet is calculated in terms of all of its convolutional layers as O(∑_{j=1..n} y_{j-1} x_j^2 y_j z_j^2). Here n represents the number of convolutional layers, y_{j-1} is the number of input channels of the j-th layer, y_j is the number of filters of the j-th layer, x_j is the spatial size of the filters, and z_j denotes the size of the output feature map.

(3) For the d-MVAHF-SVM-based face verification setup, the computational complexity comprises the complexity of AlexNet mentioned above along with the complexity of the SVM classifier, which is of the order of O(log n). This analysis shows that the feature extraction stage using AlexNet is computationally the most demanding and expensive stage of the proposed face identification and verification algorithms.

(4) The experiments were performed on a computer with an Intel Core i7 1.8 GHz CPU and 8 GB of RAM. The computational complexity in terms of computation time is shown in Table 4. The time computed after feature extraction by AlexNet with its own classifier in face identification is higher compared to using the SVM classifier in the classification phase for face verification. This is because the AlexNet classifier generates complex decision boundaries in the feature space for classification, whereas the SVM only takes into account the global matching scores, resulting in lower computation time.
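As a concrete illustration of the layer-wise complexity term O(∑_{j=1..n} y_{j-1} x_j^2 y_j z_j^2) in item (2), the sketch below evaluates it for a standard ungrouped AlexNet convolutional configuration; the layer shapes are textbook AlexNet values assumed for illustration, not figures taken from this paper.

```python
# Layer tuples: (y_{j-1} input channels, y_j filters, x_j filter size, z_j output size).
# Standard ungrouped AlexNet convolutional shapes, assumed for illustration.
ALEXNET_CONV = [
    (3,   96,  11, 55),   # conv1
    (96,  256, 5,  27),   # conv2
    (256, 384, 3,  13),   # conv3
    (384, 384, 3,  13),   # conv4
    (384, 256, 3,  13),   # conv5
]

def conv_complexity(layers):
    """Multiply-accumulate count: sum over layers of y_{j-1} * x_j^2 * y_j * z_j^2."""
    return sum(y_in * x * x * y_out * z * z for y_in, y_out, x, z in layers)

total = conv_complexity(ALEXNET_CONV)
print(f"{total:,} multiply-accumulates (~{total / 1e9:.2f} G)")
```

The sum is dominated by the early layers with large output maps, which is why feature extraction dwarfs the O(log n) SVM scoring step in the timings of Table 4.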

4.5. Comparison with Existing Algorithms. The performance of the proposed approach is compared with existing state-of-the-art studies in the following.

GavabDB. Referring to Table 5, the study [26] proposed a Riemannian framework based face recognition approach to analyze facial shapes using radial curves emanating from the nose tip. The study [28] reported face recognition results employing multiscale extended Local Binary Pattern descriptors and a hybrid matching method using local features. The study [44] proposed a face recognition approach using 3D keypoint extraction and sparse comparison based similarity evaluation. The algorithm proposed in the study [46] encoded different types of facial features and modalities into a compact representation using covariance based descriptors, where face recognition was performed using a geodesic distance based approach. The study [47] presented a 3D face keypoint detection and matching approach based on principal curvatures; matching was performed using local shape descriptors, a sparse representation based reconstruction method, and score level fusion. The approach proposed in Ref. [59] employed 3D binary ridge images along with principal maximum curvature and ICP based matching. The study [60] proposed a sparse representation based framework for face recognition using low level geometric features.

Bosphorus. The approach presented in the study [27] reported face recognition accuracies employing facial


Table 4: Time complexity of the proposed approach in seconds.

Preprocessing: 0.451
MVAHF synthesis: 0.089
Feature extraction: 1.024
Classification: 0.029 (face recognition), 0.021 (face verification)
Total: 1.593 (face recognition), 1.585 (face verification)

Table 5: Recognition accuracies comparison for the proposed and existing approaches using the GavabDB, Bosphorus, and UMB-DB databases. Columns 2-6 are GavabDB rank-1 identification rates (FF, rotated looking up, rotated looking down, LPF, RPF) and column 7 is the GavabDB verification rate; columns 8-11 are Bosphorus rank-1 identification rates (FF, YR¹ < 90°, YR = 90°, overall); column 12 is the UMB-DB rank-1 identification rate (FF).

| Algorithm | FF | Up | Down | LPF | RPF | Verif. | FF | YR¹ < 90° | YR = 90° | Overall | FF |
| Existing | 100 [44] | 98.4 [44] | 96.7 [44] | 93.4 [44] | 81.9 [44] | 82.3 [59] | 100 [27] | 81.6 [61] | 45.7 [61] | 88.6 [61] | 98.7 [27] |
| Existing | 100 [46] | 98.4 [46] | 99.2 [46] | 86.9 [26] | 70.5 [26] | 95.1 [60] | 100 [62] | 84.1 [62] | 47.1 [62] | 91.1 [62] | 98 [39] |
| Existing | 100 [47] | 100 [47] | 98.4 [47] | 93.4 [28] | 78.7 [28] | - | - | 94.8 [63] | 57.1 [47] | 92.8 [47] | - |
| Proposed d-MVLHF | 98.4 | 96.7 | 96.7 | 95.1² | - | 96.7 | 98.1 | 94.5 | 87.1² | 94.9 | 97.2 |
| Proposed d-MVRHF | 96.7 | 98.4 | 95.1 | - | 83.6² | 98.4 | 99 | 94.1 | - | 94.4 | 97.9 |
| Proposed d-MVWF/d-MVAHF | 100 | 100 | 98.4 | 95.1 | 83.6 | 100 | 100 | 95.4 | - | 96 | 99.3 |

¹ YR is yaw rotation (along the y-axis in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is mentioned in the above paragraph. The face recognition methodology given in the paper [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed to fit a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the above paragraph, whereas the recognition accuracies reported in the paper [39] are based on an approach employing PCA.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] proposed to employ isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology proposed in the study [42] integrated global and local geometric cues for face recognition employing a Euclidean distance based classifier. Finally, the study [43] proposed a local features based resolution invariant approach to classify scale space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed with the approaches presented in Table 5. The proposed d-MVAHF-based 3D face recognition approach has yielded better results than the existing state-of-the-art studies given in Tables 5 and 6.

5. Discussion

The proposed study covers the problem of 3D face alignment and face recognition with applications in identification and verification scenarios. The former employs the PCF approach while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of the Bosphorus and UMB-DB databases each. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. The excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the


Table 6: Recognition accuracies comparison for the proposed and existing approaches using the FRGC v2.0 database.

| Experiment | [17] | [41] | [42] | [43] | [47] | [62] | [63] | d-MVLHF | d-MVRHF | d-MVWF/d-MVAHF |
| Face identification | 98.7 | 96.1 | 93.8 | 98 | 99.6 | 98.7 | 99.8 | 97.9 | 96.8 | 99.8 |
| Face verification | 99.5 | 97.7 | 95.4 | 98.3 | - | - | - | 97.6 | 96.4 | 99.6 |

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. Then the whole face image is aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, would be computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to aligning the nose tip prior to the whole face image.
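The nose-tip-first strategy can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the 3 + 11 + 21 = 35 candidate rotations per plane mirror the rotation count above, but the stage search ranges and the scoring criterion (maximizing the depth value of the nose tip region) are assumptions made for the sketch.

```python
import numpy as np

def rot_y(deg):
    """3x3 rotation matrix about the y-axis (i.e., a rotation in the xz plane)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def align_nose_tip(nose_region):
    """Coarse-to-fine search (3 + 11 + 21 = 35 candidate rotations) for the yaw
    angle that maximizes the nose tip's depth -- a stand-in scoring criterion."""
    def score(deg):
        return (nose_region @ rot_y(deg).T)[:, 2].max()
    best = 0.0
    for half_range, n in ((45.0, 3), (15.0, 11), (5.0, 21)):
        candidates = np.linspace(best - half_range, best + half_range, n)
        best = max(candidates, key=score)
    return best

# Synthetic cloud: nose tip at (0, 0, 100), acquired 20 degrees off-frontal.
cloud = np.array([[0.0, 0.0, 100.0], [30.0, 40.0, 60.0], [-30.0, 40.0, 60.0]])
observed = cloud @ rot_y(20.0).T
nose_region = observed[:1]            # small neighborhood around the nose tip
angle = align_nose_tip(nose_region)   # 35 rotations on a few points only...
aligned = observed @ rot_y(angle).T   # ...then ONE rotation of the whole cloud
```

Because the 35 candidate rotations touch only the handful of points around the nose tip, the full point cloud is transformed exactly once, which is the source of the saving quantified above.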

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database.

The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a 71% reduced computational cost. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.
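The 71% figure follows from counting synthesis cost in whole-face-image equivalents, under the assumption that one half face costs half as much as one whole face:

```python
# Cost of multiview synthesis in whole-face-image equivalents
# (assumption: one half-face image costs half a whole-face image).
mvwf_cost = 7          # seven whole face images: 0, +/-10, +/-20, +/-30 degrees
mvahf_cost = 4 * 0.5   # four half face images: 0, 10, 20, 30 degrees
reduction = 1 - mvahf_cost / mvwf_cost
print(f"{reduction:.1%}")  # prints 71.4%
```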

(3) Comparative evaluation was also performed employing d-MVLHF and d-MVRHF based face identification and verification approaches. For the d-MVLHF based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, using the GavabDB database. For the d-MVRHF based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF and d-MVRHF based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF and d-MVRHF based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF and d-MVRHF based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to better performing MVAHF images (please see equation (5)).

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to that of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would also perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.), whereas the later layers tend to learn high level features like shapes and objects based on those low level features.

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to aligning the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images so that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/#face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear Enhanced Fisher Discriminant Analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L Zhang Z Ding H Li Y Shen and J Lu ldquo3D facerecognition based on multiple keypoint descriptors and sparserepresentationrdquo PLoS ONE vol 9 no 6 Article ID e100120 pp1ndash9 2014

[20] S Soltanpour B Boufama and Q M J Wu ldquoA survey of localfeature methods for 3D face recognitionrdquo Pattern Recognitionvol 72 pp 391ndash406 2017

[21] A Ouamane A Chouchane E Boutellaa M Belahcene SBourennane and A Hadid ldquoEfficient tensor-based 2D+3D faceverificationrdquo IEEE Transactions on Information Forensics andSecurity vol 12 no 11 pp 2751ndash2762 2017

[22] K I Chang K W Bowyer and P J Flynn ldquoAn evaluationof multimodal 2D+3D face biometricsrdquo IEEE Transactions onPattern Analysis andMachine Intelligence vol 27 no 4 pp 619ndash624 2005

[23] C BenAbdelkader and P A Griffin ldquoComparing and combin-ing depth and texture cues for face recognitionrdquo Image andVision Computing vol 23 no 3 pp 339ndash352 2005

[24] C Hesher A Srivastava and G Erlebacher ldquoA novel techniquefor face recognition using range imagingrdquo in Proceedings ofthe 7th International Symposium on Signal Processing and ItsApplications ISSPA 2003 vol 2 pp 201ndash204 France July 2003

[25] D Smeets J Keustermans D Vandermeulen and P SuetensldquoMeshSIFT local surface features for 3D face recognition underexpression variations and partial datardquo Computer Vision andImage Understanding vol 117 no 2 pp 158ndash169 2013

[26] H Drira B Ben Amor A Srivastava M Daoudi and R Slamaldquo3D Face recognition under expressions occlusions and posevariationsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 35 no 9 pp 2270ndash2283 2013

[27] N Alyuz B Gokberk and L Akarun ldquo3-D face recognitionunder occlusion using masked projectionrdquo IEEE Transactionson Information Forensics and Security vol 8 no 5 pp 789ndash8022013

[28] D Huang M Ardabilian Y Wang and L Chen ldquo3-D facerecognition using eLBP-based facial description and localfeature hybrid matchingrdquo IEEE Transactions on InformationForensics and Security vol 7 no 5 pp 1551ndash1565 2012

[29] N Alyuz B Gokberk and L Akarun ldquoRegional registration forexpression resistant 3-D face recognitionrdquo IEEETransactions onInformation Forensics and Security vol 5 no 3 pp 425ndash4402010

[30] P J Besl and N D McKay ldquoA method for registration of 3-D shapesrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 14 no 2 pp 239ndash256 1992

[31] T Papatheodorou and D Rueckert 3D Face Recognition I-TechEducation and Publishing Vienna Austria 2007

[32] C C Queirolo L Silva O R P Bellon and M PamplonaSegundo ldquo3D face recognition using simulated annealing andthe surface interpenetration measurerdquo IEEE Transactions onPatternAnalysis andMachine Intelligence vol 32 no 2 pp 206ndash219 2010

[33] C C Queirolo L Silva O R P Bellon andM P Segundo ldquo3Dface recognition using the surface interpenetration measure acomparative evaluation on the FRGC databaserdquo in Proceedingsof the 2008 19th International Conference on Pattern RecognitionICPR 2008 USA December 2008

Mathematical Problems in Engineering 21

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer Berlin Heidelberg, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the FGR 2006, 7th International Conference on Automatic Face and Gesture Recognition, pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the Photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A multi-modal approach for face modeling and recognition [Ph.D. dissertation], 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.


Table 4: Time complexity of the proposed approach in seconds.

Stage                 Face recognition   Face verification
Preprocessing         0.451              0.451
MVAHF synthesis       0.089              0.089
Feature extraction    1.024              1.024
Classification        0.029              0.021
Total                 1.593              1.585
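The Total column is simply the sum of the stage timings for each task, which can be verified directly (the variable and key names below are illustrative, not from the paper):

```python
# Stage timings (seconds) shared by both tasks, per Table 4.
preprocessing, synthesis, features = 0.451, 0.089, 1.024

# Only the classification stage differs between the two tasks.
classification = {"recognition": 0.029, "verification": 0.021}

# Total per task = sum of the three shared stages plus classification.
totals = {task: round(preprocessing + synthesis + features + t, 3)
          for task, t in classification.items()}
print(totals)  # {'recognition': 1.593, 'verification': 1.585}
```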

Table 5: Recognition accuracy comparison of the proposed and existing approaches on the GavabDB, Bosphorus, and UMB-DB databases (%).

GavabDB (rank-1 identification and verification rates):

Algorithm         FF         Rotated up  Rotated down  LPF        RPF        Verification
Existing          100 [44]   98.4 [44]   96.7 [44]     93.4 [44]  81.9 [44]  82.3 [59]
                  100 [46]   98.4 [46]   99.2 [46]     86.9 [26]  70.5 [26]  95.1 [60]
                  100 [47]   100 [47]    98.4 [47]     93.4 [28]  78.7 [28]  -
d-MVLHF           98.4       96.7        96.7          95.1²      -          96.7
d-MVRHF           96.7       98.4        95.1          -          83.6²      98.4
d-MVWF / d-MVAHF  100        100         98.4          95.1       83.6       100

Bosphorus (rank-1 identification rates):

Algorithm         FF         YR¹ < 90°   YR = 90°      Overall
Existing          100 [27]   81.6 [61]   45.7 [61]     88.6 [61]
                  100 [62]   84.1 [62]   47.1 [62]     91.1 [62]
                  -          94.8 [63]   57.1 [47]     92.8 [47]
d-MVLHF           98.1       94.5        87.1²         94.9
d-MVRHF           99         94.1                      94.4
d-MVWF / d-MVAHF  100        95.4        -             96

UMB-DB (rank-1 identification rate, FF):

Existing          98.7 [27]; 98 [39]
d-MVLHF           97.2
d-MVRHF           97.9
d-MVWF / d-MVAHF  99.3

¹ YR is yaw rotation (about the y-axis, i.e., in the xz plane).
² LPF, RPF, and face images at YR = 90° turn into LHF and RHF, respectively, after alignment.

depth information and the ICP algorithm, and the study [47] is discussed in the preceding paragraph. The face recognition methodology of [61] extracted local descriptors to perform matching according to differential surface measurements. The study [62] employed surface differential measurement based keypoint descriptors to perform face recognition using a multitask sparse representation based fine-grained matching algorithm. The study [63] proposed fitting a 3D deformable model to unseen PFIs for face recognition.

UMB-DB. The study [27] is discussed in the preceding paragraph, whereas the recognition accuracies reported in [39] are based on a PCA-based approach.

FRGC v2.0. Referring to Table 6, the study [17] is focused on a DT-CWT and LDA based face recognition approach. The study [41] employed isogeodesic stripes and 3D weighted walkthrough (3DWW) descriptors in the face recognition process. The methodology of [42] integrated global and local geometric cues for face recognition, employing a Euclidean distance based classifier. Finally, the study [43] proposed a local feature based, resolution invariant approach that classifies scale-space extrema using an SVM classifier, whereas the studies [47, 62, 63] are discussed along with the approaches in Table 5. The proposed d-MVAHF-based 3D face recognition approach yielded better results than the existing state-of-the-art studies reported in Tables 5 and 6.

5. Discussion

The proposed study covers the problems of 3D face alignment and face recognition, with applications in identification and verification scenarios. The former employs the PCF approach, while the latter is based on d-MVAHF images. The performance of these two algorithms is discussed separately.

5.1. PCF Alignment Algorithm

(1) The proposed PCF alignment algorithm achieved 99.82% and 99.95% alignment accuracy on the GavabDB and FRGC v2.0 databases, respectively. Similarly, an accuracy rate of 100% was obtained on the nonoccluded subsets of both the Bosphorus and UMB-DB databases. The nose tip was not detectable for one subject in the GavabDB database and two subjects in the FRGC v2.0 database; otherwise, the accuracy of the proposed alignment algorithm would have been 100% for each of these databases. This excellent level of accuracy is attributed to the fine alignment performed at a step size of 0.1°.
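The coarse-to-fine idea behind this step can be sketched as follows. This is a minimal illustration of an L2-norm-minimizing rotation search whose step size refines down to 0.1°, not the authors' exact PCF implementation; the patch, template, search span, and step schedule are assumed for the example:

```python
import numpy as np

def rotate_xz(points, deg):
    """Rotate 3D points about the y-axis (i.e., within the xz plane)."""
    t = np.deg2rad(deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0,       1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return points @ R.T

def coarse_to_fine_align(patch, template, steps=(10.0, 1.0, 0.1), span=30.0):
    """Find the xz-plane rotation minimizing the L2 norm between a
    nose-tip patch and a frontal template, shrinking the step size
    from coarse to fine around the previous best angle."""
    best = 0.0
    for step in steps:
        candidates = np.arange(best - span, best + span + 1e-9, step)
        errs = [np.linalg.norm(rotate_xz(patch, a) - template) for a in candidates]
        best = candidates[int(np.argmin(errs))]
        span = step  # next pass searches a window of one old step width
    return best

# Toy check: a patch rotated by 17.3° is recovered by a -17.3° rotation.
rng = np.random.default_rng(0)
template = rng.normal(size=(40, 3))
patch = rotate_xz(template, 17.3)
estimate = coarse_to_fine_align(patch, template)
```

Because rotations about a single axis compose additively, the L2 error is monotonic in the angular offset, so the coarse pass cannot lock onto a wrong basin in this toy setting.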

(2) The proposed alignment algorithm is very effective for face recognition applications because it rotates the nose tip in the correct direction, saving computational cost. This rotation in the correct direction is due to the pose learning aspect of the proposed approach; e.g., pose learning of a LOFI or LUFI correctly dictates the algorithm to rotate the nose tip to the right side or downwards for alignment.

Table 6: Recognition accuracy comparison of the proposed and existing approaches on the FRGC v2.0 database (%).

                 Existing algorithms                                     Proposed algorithm
                 [17]   [41]   [42]   [43]   [47]   [62]   [63]   d-MVLHF   d-MVRHF   d-MVWF / d-MVAHF
Identification   98.7   96.1   93.8   98     99.6   98.7   99.8   97.9      96.8      99.8
Verification     99.5   97.7   95.4   98.3   -      -      -      97.6      96.4      99.6

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns only the nose tip, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. The whole face image is then aligned with a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Note that aligning the whole face instead of only the nose tip at each of the 35 candidate rotations would be computationally very expensive; for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million point rotations. The computational efficiency is attributed to aligning the nose tip prior to the whole face image.
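The cost argument above can be checked with a short sketch; the nose-patch size (500 points) is an assumed illustrative value, not a figure from the paper:

```python
# Illustrative sizes (the nose-patch size is an assumption):
FACE_POINTS = 300_000      # ~0.3 million depth points in a full scan
NOSE_PATCH_POINTS = 500    # small neighborhood around the detected nose tip
SEARCH_ROTATIONS = 35      # 3 + 11 + 21 coarse-to-fine candidates per plane

# Searching over the whole face: every candidate rotates every point.
whole_face_cost = FACE_POINTS * SEARCH_ROTATIONS          # 10,500,000 point rotations

# Nose-tip-first: search on the small patch, then rotate the face once.
nose_first_cost = NOSE_PATCH_POINTS * SEARCH_ROTATIONS + FACE_POINTS

print(whole_face_cost, nose_first_cost)  # 10500000 317500
```

Even with a generous patch size, the nose-tip-first strategy does roughly 30× less rotation work per plane in this sketch.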

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database.

The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1% FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach; the mentioned studies neither used deep learning nor employed a multiview approach.

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a 71% reduction in computational cost. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject, oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of the seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.
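The 71% figure follows directly from the view counts, under the stated equivalence that a half face carries half the data of a whole face:

```python
# Seven synthesized whole-face views vs. four half-face views;
# four half faces cost about as much as two whole faces.
whole_face_views = 7               # 0°, ±10°, ±20°, ±30°
half_face_views = 4                # 0°, 10°, 20°, 30°
equivalent_whole_faces = half_face_views / 2

# Fractional reduction in computational cost.
saving = 1 - equivalent_whole_faces / whole_face_views
print(round(saving * 100))  # 71 (% reduction)
```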

(3) A comparative evaluation was also performed employing d-MVLHF- and d-MVRHF-based face identification and verification approaches. For the d-MVLHF-based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, on the GavabDB database. For the d-MVRHF-based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments on the Bosphorus database, the d-MVLHF- and d-MVRHF-based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF- and d-MVRHF-based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF- and d-MVRHF-based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is due to noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement comes from assigning larger weights to the better performing MVAHF images (please see equation (5)).
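Generic weighted score fusion of this kind can be sketched as follows. The weights and softmax scores below are hypothetical illustrations; the actual weights of equation (5) are not reproduced here:

```python
import numpy as np

def fuse_view_scores(scores, weights):
    """Weighted fusion of per-view classification scores: views that
    classify more accurately on their own get larger weights and thus
    contribute more to the final decision."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the weights sum to 1
    return np.tensordot(w, np.asarray(scores, dtype=float), axes=1)

# Hypothetical softmax scores from four MVAHF views (0°, 10°, 20°, 30°)
# over three gallery identities:
view_scores = [[0.70, 0.20, 0.10],
               [0.55, 0.35, 0.10],
               [0.40, 0.45, 0.15],
               [0.30, 0.50, 0.20]]
fused = fuse_view_scores(view_scores, weights=[4, 3, 2, 1])
print(int(np.argmax(fused)))  # index of the identity with the highest fused score
```

Here the frontal view dominates the fusion, so identity 0 wins even though two of the oblique views favor identity 1.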

(5) Experimental results suggest that integrating the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single-view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variation, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the aforementioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because the discriminating information present in high resolution face images is unavailable. Conversely, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of the gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are enhanced into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines and dots), whereas the later layers tend to learn high level features, such as shapes and objects, built on those low level features.
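The downsampling approach (1) can be illustrated with a minimal block-averaging sketch; the function name and toy image sizes are illustrative, not from the paper:

```python
import numpy as np

def downsample(depth_image, factor):
    """Downsample a depth image by block-averaging: a simple way to
    bring high-resolution gallery images down to the probe resolution."""
    h, w = depth_image.shape
    h2, w2 = h - h % factor, w - w % factor   # crop to a multiple of factor
    blocks = depth_image[:h2, :w2].reshape(h2 // factor, factor,
                                           w2 // factor, factor)
    return blocks.mean(axis=(1, 3))           # average each factor x factor block

gallery = np.arange(64, dtype=float).reshape(8, 8)   # toy high-res "scan"
low_res = downsample(gallery, 4)                     # match a 2x2 probe
print(low_res.shape)  # (2, 2)
```

Each output pixel is the mean of a 4×4 block, so the gallery image loses the same fine detail that the low-resolution probe lacks, making the two directly comparable.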

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face, (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment, and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from the nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN-based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to aligning the nose tip first; (iv) LHF- and RHF-based intrinsic facial symmetry is a promising measure for evaluating d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded somewhat lower recognition rates than MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power than handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images, so that the computational complexity of the system is further reduced and the overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.


[52] S Zafeiriou G A Atkinson M F Hansen et al ldquoFace recog-nition and verification using photometric stereoThe photofacedatabase and a comprehensive evaluationrdquo IEEE Transactionson Information Forensics and Security vol 8 no 1 pp 121ndash1352013

[53] S Jahanbin R Jahanbin and A C Bovik ldquoPassive threedimensional face recognition using Iso-geodesic contours andprocrustes analysisrdquo International Journal of Computer Visionvol 105 no 1 pp 87ndash108 2013

[54] P Kamencay R Hudec M Benco and M Zachariasova ldquo2D-3D face recognition method based on a modified CCA-PCAalgorithmrdquo International Journal of Advanced Robotic Systemsvol 11 no 36 pp 1ndash8 2014

[55] X Peng M Bennamoun and A S Mian ldquoA training-freenose tip detection method from face range imagesrdquo PatternRecognition vol 44 no 3 pp 544ndash558 2011

[56] A Krizhevsky I Sutskever andG EHinton ldquoImagenet classifi-cationwith deep convolutional neural networksrdquo in Proceedingsof the 26th Annual Conference on Neural Information ProcessingSystems (NIPS rsquo12) pp 1097ndash1105 Lake Tahoe Nev USADecember 2012

[57] U I Bajwa I A TajMWAnwar andXWang ldquoAmultifacetedindependent performance analysis of facial subspace recogni-tion algorithmsrdquo PLoS ONE vol 8 no 2 Article ID e565102013

[58] A S Mian M Bennamoun and R Owens ldquoAn efficient multi-modal 2D-3D hybrid approach to automatic face recognitionrdquoIEEE Transactions on Pattern Analysis andMachine Intelligencevol 29 no 11 pp 1927ndash1943 2007

[59] M H Mahoor A multi-modal approach for face modeling andrecognition [PhD dissertation] 2008 PhD dissertation

[60] X Li T Jia and H Zhang ldquoExpression-insensitive 3D facerecognition using sparse representationrdquo in Proceedings of theIEEE Computer Society Conference on Computer Vision andPattern Recognition pp 2575ndash2582 2009

[61] S Berretti N Werghi A Del Bimbo and P Pala ldquoMatching 3Dface scans using interest points and local histogramdescriptorsrdquoComputers and Graphics vol 37 no 5 pp 509ndash525 2013

[62] H Li D Huang J-MMorvan YWang and L Chen ldquoTowards3D face recognition in the real a registration-free approachusing fine-grainedmatching of 3D Keypoint descriptorsrdquo Inter-national Journal of Computer Vision vol 113 no 2 pp 128ndash1422015

[63] S Z Gilani A Mian and P Eastwood ldquoDeep dense andaccurate 3D face correspondence for generating populationspecific deformable modelsrdquo Pattern Recognition vol 69 pp238ndash250 2017

[64] S Biswas KW Bowyer andP J Flynn ldquoMultidimensional scal-ing formatching low-resolution face imagesrdquo IEEETransactionson Pattern Analysis and Machine Intelligence vol 34 no 10 pp2019ndash2030 2012

[65] M Jian and K-M Lam ldquoSimultaneous hallucination andrecognition of low-resolution faces based on singular valuedecompositionrdquo IEEE Transactions on Circuits and Systems forVideo Technology vol 25 no 11 pp 1761ndash1772 2015


18 Mathematical Problems in Engineering

Table 6: Recognition accuracies (%) comparison for the proposed and existing approaches using the FRGC v2.0 database.

Method              Face identification   Face verification
Existing [17]       98.7                  99.5
Existing [41]       96.1                  97.7
Existing [42]       93.8                  95.4
Existing [43]       98                    98.3
Existing [47]       99.6                  -
Existing [62]       98.7                  -
Existing [63]       99.8                  -
Proposed d-MVLHF    97.9                  97.6
Proposed d-MVRHF    96.8                  96.4
Proposed d-MVWF     99.8                  99.6
Proposed d-MVAHF    99.8                  99.6

algorithm to rotate the nose tip to the right side or downwards for alignment.

(3) The proposed PCF alignment algorithm is computationally very efficient. Referring to Section 3.1.3, it first aligns the nose tip only, employing 35 (3 + 11 + 21) rotations in each of the xz and yz planes. The whole face image is then aligned in a single 3D rotation in each plane (instead of 35 rotations) using the knowledge learned from the nose tip alignment. Note that aligning the whole face instead of the nose tip only, at the cost of 35 rotations, would be computationally very expensive: for example, a 3D face image composed of 0.3 million depth points would require 0.3 × 35 = 10.5 million rotations. The computational efficiency is attributed to alignment of the nose tip prior to the whole face image.
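The cost argument above can be made concrete with a small sketch. The exact search grid and the L2 criterion of Section 3.1.3 are not reproduced in this section, so the angle values and the on-axis error measure below are assumptions; only the 3 + 11 + 21 = 35 trial structure and the rotate-one-point-then-rotate-the-cloud-once idea come from the text:

```python
import numpy as np

def rot_y(theta):
    """Rotation about the y axis, i.e. a rotation in the xz plane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Hypothetical coarse-to-fine grid: 3 + 11 + 21 = 35 trial angles per plane.
TRIAL_ANGLES = np.concatenate([
    np.radians([-40.0, 0.0, 40.0]),            # 3 coarse trials
    np.radians(np.linspace(-25.0, 25.0, 11)),  # 11 medium trials
    np.radians(np.linspace(-5.0, 5.0, 21)),    # 21 fine trials
])

def align(points, nose_tip):
    """Search rotations on the nose tip alone, then rotate the cloud once.

    The criterion (an assumption, standing in for the paper's L2
    minimization) is that the aligned nose tip should lie on the +z axis,
    so the x-component of the rotated tip should vanish.
    """
    trial_count = 0
    best = None
    for theta in TRIAL_ANGLES:
        tip = rot_y(theta) @ nose_tip      # one point rotated, not 0.3M
        trial_count += 1
        err = abs(tip[0])                  # distance from the +z axis
        if best is None or err < best[0]:
            best = (err, theta)
    aligned = points @ rot_y(best[1]).T    # single whole-face rotation
    return aligned, trial_count

# A toy cloud whose nose tip is yawed 20 degrees away from frontal.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3))
tip = rot_y(np.radians(20.0)) @ np.array([0.0, 0.0, 1.0])
aligned, n = align(cloud, tip)
print(n)   # -> 35 single-point trials, followed by 1 full-cloud rotation
```

Rotating only the nose tip during the search keeps the per-trial cost constant in the cloud size, which is exactly why the 35-trial search stays cheap.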

5.2. d-MVAHF-Based 3D Face Recognition

(1) The proposed d-MVAHF-based 3D face recognition approach obtained rank-1 identification rates of 100%, 100%, 98.4%, 95.1%, and 83.6% for the FF, rotated looking up, rotated looking down, LPF, and RPF subsets of the GavabDB database, respectively. Using the Bosphorus database, rank-1 identification rates of 100%, 95.4%, 87.1%, and 96% were obtained for the FF, YR < 90°, YR = 90°, and overall experiments. Similarly, a rank-1 identification rate of 99.3% was obtained for the FF experiment on the UMB-DB database, whereas a rank-1 identification rate of 99.8% was achieved using the FRGC v2.0 database.
The proposed d-MVAHF-SVM-based face verification approach achieved verification rates of 100% and 99.57% at 0.1 FAR for the FF experiments using the GavabDB and FRGC v2.0 databases, respectively. The improved identification and verification rates of the proposed study compared to the studies [17, 26–28, 39, 41–44, 46, 47, 61–63] and [17, 41–43, 59, 60], respectively, are attributed to the d-MVAHF-based approach, whereas the mentioned studies neither used deep learning nor employed a multiview approach.
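The d-MVAHF-SVM verification setup can be sketched as follows. The stand-in feature extractor (a fixed random projection), the toy data, and the pair-difference formulation are all assumptions for illustration; the paper uses features from the trained deep network:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def deep_feature(faces, W):
    """Stand-in for the d-MVAHF dCNN embedding: a fixed random projection
    here, whereas the paper extracts features from the trained network."""
    return np.tanh(faces @ W)

# Toy data: 20 subjects, 5 noisy depth 'images' each, 64-dim for brevity.
W = rng.normal(size=(64, 32))
subjects = rng.normal(size=(20, 64))
samples = subjects[:, None, :] + 0.01 * rng.normal(size=(20, 5, 64))

# Genuine pairs (same subject) and impostor pairs (different subjects);
# the SVM classifies the absolute feature difference of each pair.
X, y = [], []
for i in range(20):
    f = deep_feature(samples[i], W)
    g = deep_feature(samples[(i + 1) % 20], W)
    X.append(np.abs(f[0] - f[1])); y.append(1)   # genuine pair
    X.append(np.abs(f[0] - g[0])); y.append(0)   # impostor pair
X, y = np.array(X), np.array(y)
clf = SVC(kernel="linear").fit(X, y)

# Verification: accept the claimed identity iff the SVM says 'genuine'.
probe = deep_feature(samples[3, 4:5], W)
enrolled = deep_feature(samples[3, 0:1], W)
accept = bool(clf.predict(np.abs(probe - enrolled))[0])
```

Feeding the classifier a pair difference rather than a single feature vector is one common way to turn a two-class SVM into a verifier; whether the paper uses exactly this pairing scheme is not stated in this excerpt.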

(2) Using d-MVAHF images, recognition accuracies equivalent to those of d-MVWF images were achieved at a reduced computational cost of 71%. This is because the d-MVWF-based approach employed seven synthesized whole face images of a subject oriented at 0°, ±10°, ±20°, and ±30°. On the other hand, the d-MVAHF-based approach integrated the 3D facial information of seven MVWF images into four MVAHF images oriented at 0°, 10°, 20°, and 30°, which is equivalent to using two whole face images.
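The 71% figure follows directly from the view counts above: four half-face images carry the data of two whole-face images, against seven whole-face images in the d-MVWF pipeline.

```python
# Seven synthesized whole-face views (0, ±10, ±20, ±30 degrees) versus
# four half-face views (0, 10, 20, 30 degrees); a half face carries half
# the depth points of a whole face.
mvwf_views = 7
mvahf_views = 4
half_face_cost = 0.5

mvwf_cost = mvwf_views * 1.0
mvahf_cost = mvahf_views * half_face_cost   # = 2 whole-face equivalents
saving = 1.0 - mvahf_cost / mvwf_cost
print(round(100 * saving))                  # -> 71 (percent)
```

So the reported saving is simply 1 − 2/7 ≈ 71.4%, rounded to 71%.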

(3) Comparative evaluation was also performed employing d-MVLHF- and d-MVRHF-based face identification and verification approaches. For the d-MVLHF-based approach, the identification accuracies of the FF, rotated looking up, and rotated looking down experiments and the verification accuracies decreased by 1.63%, 3.41%, 1.76%, and 3.41%, respectively, on the GavabDB database. For the d-MVRHF-based approach, the mentioned accuracies decreased by 3.41%, 1.63%, 3.47%, and 1.63%, respectively. For the FF, YR < 90°, and overall experiments of the Bosphorus database, the d-MVLHF- and d-MVRHF-based identification accuracies decreased by 1.94%, 0.95%, and 1.16% and by 1.01%, 1.38%, and 1.69%, respectively. Similarly, the d-MVLHF- and d-MVRHF-based identification accuracies on the UMB-DB database decreased by 2.16% and 1.43%, respectively, for the FF experiment. For the same experiment on the FRGC v2.0 database, the d-MVLHF- and d-MVRHF-based identification rates were reduced by 1.94% and 3.1%, whereas the verification rates were reduced by 2.05% and 3.32%, respectively. The reduction in recognition accuracies is because of noise or motion artifacts introduced at the time of face image acquisition.

(4) The weight assignment strategy enhanced the unweighted rank-1 identification rates by 3.56%, 3.24%, 3.45%, and 3.41% in the experiments performed on the GavabDB, Bosphorus, UMB-DB, and FRGC v2.0 databases, respectively. This enhancement is because of assigning more weight to better performing MVAHF images (please see equation (5)).
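A weighted multiview fusion of this kind can be sketched as below. Equation (5) is not reproduced in this section, so the accuracy-proportional weights and the toy per-view scores are assumptions; the sketch only illustrates why weighting better performing views changes the fused decision:

```python
import numpy as np

# Hypothetical per-view classifier scores for one probe over 5 gallery
# subjects, one row per MVAHF view (0, 10, 20, 30 degrees).
view_scores = np.array([
    [0.10, 0.55, 0.15, 0.10, 0.10],   # 0-degree view
    [0.20, 0.40, 0.20, 0.10, 0.10],   # 10-degree view
    [0.15, 0.35, 0.30, 0.10, 0.10],   # 20-degree view
    [0.25, 0.20, 0.35, 0.10, 0.10],   # 30-degree view
])

# Better performing views receive larger weights (assumed proportional
# to per-view validation accuracy; the paper's exact rule is eq. (5)).
view_accuracy = np.array([0.99, 0.97, 0.95, 0.90])
weights = view_accuracy / view_accuracy.sum()

fused = weights @ view_scores          # weighted sum of per-view scores
pred = int(np.argmax(fused))           # identity chosen after fusion
print(pred)                            # -> 1
```

Here the near-frontal views agree on subject 1 and carry the most weight, so the fused decision picks subject 1 even though the 30-degree view alone would not.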

(5) Experimental results suggest that integration of the knowledge learned from MVWF images into d-MVAHF images boosts the face recognition accuracies. This is attributed to the fact that multiview face images provide more facial feature information for classification than single view facial features.

(6) Experimental results of the PCF alignment and d-MVAHF-based 3D face recognition algorithms are comparable across all four employed databases. These databases contain several types of variations, such as gender, pose, age, noise, and resolution variations (Section 4.1). This indicates that the proposed methodology is capable of aligning and classifying subjects captured with the several mentioned variations.


(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with the increasing resolution of PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of gallery images is downsampled to the resolution of PFIs, and (2) the super resolution approach, where the low resolution PFIs are improved into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features like shapes and objects based on the low level features.
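The downsampling approach mentioned above can be sketched in a few lines. The image sizes, the average-pooling downsampler, and the correlation score are assumptions for illustration; the point is only that the gallery is brought down to the probe resolution before matching:

```python
import numpy as np

def downsample(depth, factor):
    """Average-pool a depth image by an integer factor: this is the
    'downsampling approach', bringing gallery images to probe resolution."""
    h, w = depth.shape
    d = depth[: h - h % factor, : w - w % factor]
    return d.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical 96x96 gallery depth image and a 24x24 low-resolution probe.
rng = np.random.default_rng(2)
gallery = rng.normal(size=(96, 96))
probe = downsample(gallery, 4) + 0.01 * rng.normal(size=(24, 24))

# Match at the common (probe) resolution; here a simple correlation score.
matched = downsample(gallery, 4)
score = float(np.corrcoef(matched.ravel(), probe.ravel())[0, 1])
print(score > 0.9)   # -> True: the resolution-matched comparison is meaningful
```

Comparing at a common resolution avoids penalizing the probe for detail it never contained; the alternative, super resolution, instead upsamples the probe toward the gallery resolution.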

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face, (ii) an L2 norm minimization based coarse-to-fine approach for nose tip alignment, and (iii) a transformation step to align the whole face image incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN-based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed a d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely, GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient due to alignment of the nose tip first; (iv) LHF- and RHF-based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power compared to handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than those of existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images such that the computational complexity of the system is further reduced and overall system performance can be enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W, (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx, (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html, and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/#face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute the time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear enhanced Fisher discriminant analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences, vol. 8, no. 5, article 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences, vol. 8, no. 4, article 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics (ICB 2018), pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP 2018), pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling features in 3D face shapes for joint face reconstruction and recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," EURASIP Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications (ISSPA 2003), vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), USA, December 2008.

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and Procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, PhD dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 19: Deeply Learned Pose Invariant Image Analysis with ...downloads.hindawi.com/journals/mpe/2019/3547416.pdf · MathematicalProblemsinEngineering xy ane xz ane yz ane Pr-rocessing Prob

Mathematical Problems in Engineering 19

(7) The performance of face recognition degrades significantly when the input images are of low resolution, such as images captured by surveillance cameras or from a large distance [64]. This is because of the unavailability of the discriminating information present in high resolution face images. On the other hand, face recognition accuracies improve with increasing resolution of the PFIs [65]. There are two standard approaches to handle this problem: (1) the downsampling approach, where the resolution of gallery images is downsampled to the resolution of the PFIs, and (2) the super resolution approach, where the low resolution PFIs are upscaled into higher resolution images [64]. The proposed d-MVAHF-based approach can be employed to recognize low resolution depth images. Referring to Tables 5 and 6, as the proposed approach outperforms existing approaches using high resolution PFIs, it would also perform better than the existing approaches in handling low resolution PFIs. This is because the initial layers of dCNNs can effectively learn the low level features encountered in low resolution images (for example, lines, dots, etc.). In contrast, the later layers tend to learn high level features, such as shapes and objects, built on the low level features.
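The downsampling approach mentioned above can be sketched as follows. This is a minimal illustration, not taken from [64]: the block-averaging scheme and the 4x reduction factor are illustrative assumptions.

```python
import numpy as np

def downsample_depth(image: np.ndarray, factor: int) -> np.ndarray:
    """Reduce a depth image's resolution by block-averaging.

    Illustrates the 'downsampling approach': gallery images are brought
    down to the (low) resolution of the probe face images before
    matching. Block-averaging is an illustrative choice.
    """
    h, w = image.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A 160x160 synthetic depth map reduced to the 40x40 resolution of a probe.
gallery = np.random.rand(160, 160)
low_res = downsample_depth(gallery, 4)
assert low_res.shape == (40, 40)
```

Block-averaging preserves the mean depth of each neighbourhood, so the coarse geometry that the early dCNN layers rely on survives the resolution reduction.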

6. Conclusions

In this paper, a novel approach based on deeply learned pose invariant image analysis with applications in 3D face recognition is presented. The PCF alignment algorithm employed the following: (i) a pose learning approach using a nose tip heuristic to estimate the acquisition pose of the face; (ii) an L2 norm minimization based coarse to fine approach for nose tip alignment; and (iii) a transformation step to align the whole face image, incorporating the knowledge learned from nose tip alignment. The face recognition algorithm was implemented in both identification and verification setups. The dCNN based face identification algorithm was implemented using d-MVAHF images, whereas the verification algorithm employed the d-MVAHF-SVM-based methodology. The experimental performance was evaluated using four benchmark 3D face databases, namely GavabDB, Bosphorus, UMB-DB, and FRGC v2.0.
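The coarse to fine L2 norm minimization used for nose tip alignment can be sketched as a one-dimensional rotation search. This is a minimal sketch of the idea only, not the paper's implementation: the search span, number of refinement stages, grid size, and the synthetic profile/error function below are all illustrative assumptions.

```python
import numpy as np

def rotate(points: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate 2-D points about the origin by angle_deg."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return points @ R.T

def coarse_to_fine_align(points, error_fn, span=45.0, stages=3, n=9):
    """Coarse-to-fine search for the rotation minimizing an L2 error.

    A coarse grid of candidate angles is evaluated first; the grid is
    then repeatedly shrunk around the current best angle, so the L2
    error is minimized without testing every fine angle globally.
    """
    best = 0.0
    for _ in range(stages):
        candidates = np.linspace(best - span, best + span, n)
        best = min(candidates, key=lambda a: error_fn(rotate(points, a)))
        span /= n  # shrink the search window around the best angle
    return best

# Recover a known 20-degree tilt of a synthetic profile curve.
profile = np.stack([np.linspace(-1, 1, 50),
                    np.abs(np.linspace(-1, 1, 50))], axis=1)
tilted = rotate(profile, 20.0)
err = lambda p: float(np.linalg.norm(p - profile))
angle = coarse_to_fine_align(tilted, err)
assert abs(angle + 20.0) < 1.0  # search recovers roughly -20 degrees
```

The coarse stage supplies the direction of rotation cheaply, which is why aligning the nose tip first keeps the overall procedure computationally efficient.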

In conclusion, it was observed that (i) the proposed PCF alignment algorithm is capable of correctly aligning frontal and profile face images; (ii) its pose learning aspect is very effective in finding the correct direction of rotation for facial alignment; (iii) it is computationally very efficient owing to the alignment of the nose tip first; (iv) LHF and RHF based intrinsic facial symmetry is a promising measure to evaluate d-MVAHF-based face recognition; (v) d-MVAHF images and d-MVWF images produced similar recognition accuracies; (vi) MVLHF images and MVRHF images yielded relatively decreased recognition rates compared to MVAHF images; (vii) the weight assignment strategy significantly enhanced the recognition rates; (viii) deeply learned facial features possess more discriminative power than handcrafted features; (ix) experimental results show that the real 3D facial feature information integrated in the d-MVAHF images significantly enhanced the face recognition accuracies; (x) the proposed PCF alignment and d-MVAHF-based face recognition is computationally efficient compared to d-MVWF image based face recognition; and (xi) the frontal and profile face recognition accuracies produced by the proposed methodology are better than existing state-of-the-art methods and are comparable across all databases for both identification and verification experiments.

As a future direction, we plan to (i) develop a 3D face alignment algorithm using a deep learning based approach and (ii) reduce the number of synthesized multiview face images so that the computational complexity of the system is further reduced and the overall system performance is enhanced.

Data Availability

Previously reported face image datasets, including GavabDB, Bosphorus, UMB-DB, and FRGC v2.0, have been used to support this study. The datasets are available upon request from the sponsors. The related datasets are publicly available at the following links: (1) GavabDB: http://archive.is/2K19W; (2) Bosphorus: http://bosphorus.ee.boun.edu.tr/Home.aspx; (3) UMB-DB: http://www.ivl.disco.unimib.it/minisites/umbdb/request.html; and (4) FRGC v2.0: https://cvrl.nd.edu/projects/data/face-recognition-grand-challenge-frgc-v20-data-collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Naeem Ratyal, Muhammad Sajid, Anzar Mahmood, and Sohail Razzaq conceived the idea and contributed to the experimentation process and the writing of the manuscript, including tables and figures. Imtiaz Ahmad Taj, Saadat Hanif Dar, Nouman Ali, Muhammad Usman, Mirza Jabbar Aziz Baig, and Usman Mussadiq took part in organizing the manuscript and conducting experiments to compute time complexity. All authors contributed to the final preparation of the manuscript.

Acknowledgments

The authors are thankful to the organizers of GavabDB, Bosphorus, UMB-DB, and FRGC for the provision of the databases for research purposes.

References

[1] M. Sajid, N. Iqbal Ratyal, N. Ali et al., "The impact of asymmetric left and asymmetric right face images on accurate age estimation," Mathematical Problems in Engineering, vol. 2019, Article ID 8041413, 10 pages, 2019.

[2] M. Bessaoudi, M. Belahcene, A. Ouamane, A. Chouchane, and S. Bourennane, "Multilinear Enhanced Fisher Discriminant Analysis for robust multimodal 2D and 3D face verification," Applied Intelligence, vol. 49, no. 4, pp. 1339–1354, 2019.

[3] E. Basaran, M. Gokmen, and M. E. Kamasak, "An efficient multiscale scheme using local Zernike moments for face recognition," Applied Sciences (Switzerland), vol. 8, no. 5, article no. 827, 2018.

[4] S. Z. Gilani and A. Mian, "Learning from millions of 3D scans for large-scale 3D face recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1896–1905, Salt Lake City, UT, USA, June 2018.

[5] A. Irtaza, S. M. Adnan, K. T. Ahmed et al., "An ensemble based evolutionary approach to the class imbalance problem with applications in CBIR," Applied Sciences (Switzerland), vol. 8, no. 4, article no. 495, 2018.

[6] N. Dagnes, E. Vezzetti, F. Marcolin, and S. Tornincasa, "Occlusion detection and restoration techniques for 3D face recognition: a literature review," Machine Vision and Applications, vol. 29, no. 5, pp. 789–813, 2018.

[7] S. Ramalingam, "Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition," Fuzzy Sets and Systems, vol. 337, pp. 25–51, 2018.

[8] M. Sajid, N. Ali, S. H. Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.

[9] J. Kittler, P. Koppen, P. Kopp, P. Huber, and M. Ratsch, "Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition," in Proceedings of the 11th IAPR International Conference on Biometrics, ICB 2018, pp. 124–131, Australia, February 2018.

[10] M. Bessaoudi, M. Belahcene, A. Ouamane, and S. Bourennane, "A novel approach based on high order tensor and multi-scale locals features for 3D face recognition," in Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, pp. 1–5, Tunisia, March 2018.

[11] F. Liu, R. Zhu, D. Zeng, Q. Zhao, and X. Liu, "Disentangling Features in 3D Face Shapes for Joint Face Reconstruction and Recognition," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5216–5225, Salt Lake City, UT, USA, June 2018.

[12] A. T. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, and G. Medioni, "Extreme 3D face reconstruction: seeing through occlusions," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3935–3944, Salt Lake City, UT, USA, June 2018.

[13] N. Pears, Y. Liu, and P. Bunting, 3D Imaging, Analysis and Applications, vol. 3, Springer, Berlin, Germany, 2012.

[14] N. Werghi, C. Tortorici, S. Berretti, and A. Del Bimbo, "Boosting 3D LBP-Based face recognition by fusing shape and texture descriptors on the mesh," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 964–979, 2016.

[15] L. Spreeuwers, "Fast and accurate 3D face recognition: Using registration to an intrinsic coordinate system and fusion of multiple region classifiers," International Journal of Computer Vision, vol. 93, no. 3, pp. 389–414, 2011.

[16] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1–15, 2006.

[17] X. Wang, Q. Ruan, Y. Jin, and G. An, "Three-dimensional face recognition under expression variation," Eurasip Journal on Image and Video Processing, vol. 2014, no. 51, 2014.

[18] S. Elaiwat, M. Bennamoun, F. Boussaid, and A. El-Sallam, "3-D face recognition using curvelet local features," IEEE Signal Processing Letters, vol. 21, no. 2, pp. 172–175, 2014.

[19] L. Zhang, Z. Ding, H. Li, Y. Shen, and J. Lu, "3D face recognition based on multiple keypoint descriptors and sparse representation," PLoS ONE, vol. 9, no. 6, Article ID e100120, pp. 1–9, 2014.

[20] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," Pattern Recognition, vol. 72, pp. 391–406, 2017.

[21] A. Ouamane, A. Chouchane, E. Boutellaa, M. Belahcene, S. Bourennane, and A. Hadid, "Efficient tensor-based 2D+3D face verification," IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2751–2762, 2017.

[22] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 4, pp. 619–624, 2005.

[23] C. BenAbdelkader and P. A. Griffin, "Comparing and combining depth and texture cues for face recognition," Image and Vision Computing, vol. 23, no. 3, pp. 339–352, 2005.

[24] C. Hesher, A. Srivastava, and G. Erlebacher, "A novel technique for face recognition using range imaging," in Proceedings of the 7th International Symposium on Signal Processing and Its Applications, ISSPA 2003, vol. 2, pp. 201–204, France, July 2003.

[25] D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, "MeshSIFT: local surface features for 3D face recognition under expression variations and partial data," Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.

[26] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D Face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[27] N. Alyuz, B. Gokberk, and L. Akarun, "3-D face recognition under occlusion using masked projection," IEEE Transactions on Information Forensics and Security, vol. 8, no. 5, pp. 789–802, 2013.

[28] D. Huang, M. Ardabilian, Y. Wang, and L. Chen, "3-D face recognition using eLBP-based facial description and local feature hybrid matching," IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.

[29] N. Alyuz, B. Gokberk, and L. Akarun, "Regional registration for expression resistant 3-D face recognition," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 425–440, 2010.

[30] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[31] T. Papatheodorou and D. Rueckert, 3D Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007.

[32] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. Pamplona Segundo, "3D face recognition using simulated annealing and the surface interpenetration measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 206–219, 2010.

[33] C. C. Queirolo, L. Silva, O. R. P. Bellon, and M. P. Segundo, "3D face recognition using the surface interpenetration measure: a comparative evaluation on the FRGC database," in Proceedings of the 2008 19th International Conference on Pattern Recognition, ICPR 2008, USA, December 2008.


[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer Berlin Heidelberg, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011, pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the FGR 2006: 7th International Conference on Automatic Face and Gesture Recognition, pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: The photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using Iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A multi-modal approach for face modeling and recognition [PhD dissertation], 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D Keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.

Hindawiwwwhindawicom Volume 2018

MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Mathematical Problems in Engineering

Applied MathematicsJournal of

Hindawiwwwhindawicom Volume 2018

Probability and StatisticsHindawiwwwhindawicom Volume 2018

Journal of

Hindawiwwwhindawicom Volume 2018

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawiwwwhindawicom Volume 2018

OptimizationJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Engineering Mathematics

International Journal of

Hindawiwwwhindawicom Volume 2018

Operations ResearchAdvances in

Journal of

Hindawiwwwhindawicom Volume 2018

Function SpacesAbstract and Applied AnalysisHindawiwwwhindawicom Volume 2018

International Journal of Mathematics and Mathematical Sciences

Hindawiwwwhindawicom Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Hindawiwwwhindawicom Volume 2018Volume 2018

Numerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisNumerical AnalysisAdvances inAdvances in Discrete Dynamics in

Nature and SocietyHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Dierential EquationsInternational Journal of

Volume 2018

Hindawiwwwhindawicom Volume 2018

Decision SciencesAdvances in

Hindawiwwwhindawicom Volume 2018

AnalysisInternational Journal of

Hindawiwwwhindawicom Volume 2018

Stochastic AnalysisInternational Journal of

Submit your manuscripts atwwwhindawicom

Page 20: Deeply Learned Pose Invariant Image Analysis with ...downloads.hindawi.com/journals/mpe/2019/3547416.pdf · MathematicalProblemsinEngineering xy ane xz ane yz ane Pr-rocessing Prob

20 Mathematical Problems in Engineering

Analysis for robust multimodal 2D and 3D face verificationrdquoApplied Intelligence vol 49 no 4 pp 1339ndash1354 2019

[3] E Basaran M Gokmen and M E Kamasak ldquoAn efficientmultiscale scheme using local Zernike moments for face recog-nitionrdquo Applied Sciences (Switzerland) vol 8 no 5 article no827 2018

[4] S Z Gilani and A Mian ldquoLearning from millions of 3Dscans for large-scale 3D face recognitionrdquo in Proceedings of the2018 IEEECVF Conference on Computer Vision and PatternRecognition (CVPR) pp 1896ndash1905 Salt Lake City UT USAJune 2018

[5] A Irtaza S M Adnan K T Ahmed et al ldquoAn ensemble basedevolutionary approach to the class imbalance problem withapplications in CBIRrdquo Applied Sciences (Switzerland) vol 8 no4 artilce no 495 2018

[6] N Dagnes E Vezzetti F Marcolin and S Tornincasa ldquoOcclu-sion detection and restoration techniques for 3D face recogni-tion a literature reviewrdquoMachine Vision and Applications vol29 no 5 pp 789ndash813 2018

[7] S Ramalingam ldquoFuzzy interval-valued multi criteria baseddecision making for ranking features in multi-modal 3D facerecognitionrdquo Fuzzy Sets and Systems vol 337 pp 25ndash51 2018

[8] M Sajid N Ali S H Dar et al ldquoData augmentation-assistedmakeup-invariant face recognitionrdquo Mathematical Problems inEngineering vol 2018 Article ID 2850632 10 pages 2018

[9] J Kittler P Koppen P Kopp P Huber and M RatschldquoConformal mapping of a 3d face representation onto a 2Dimage for CNN based face recognitionrdquo in Proceedings of the11th IAPR International Conference on Biometrics ICB 2018 pp124ndash131 Australia February 2018

[10] M Bessaoudi M Belahcene A Ouamane and S BourennaneldquoA novel approach based on high order tensor and multi-scalelocals features for 3D face recognitionrdquo in Proceedings of the 4thInternational Conference on Advanced Technologies for Signaland Image Processing ATSIP 2018 pp 1ndash5 Tunisia March 2018

[11] F Liu R Zhu D Zeng Q Zhao and X Liu ldquoDisentanglingFeatures in 3D Face Shapes for Joint Face Reconstruction andRecognitionrdquo in Proceedings of the 2018 IEEECVF Conferenceon Computer Vision and Pattern Recognition (CVPR) pp 5216ndash5225 Salt Lake City UT USA June 2018

[12] A T Tran T Hassner IMasi E Paz Y Nirkin andGMedionildquoExtreme 3D face reconstruction seeing through occlusionsrdquoin Proceedings of the 2018 IEEECVF Conference on ComputerVision and Pattern Recognition (CVPR) pp 3935ndash3944 SaltLake City UT USA June 2018

[13] N Pears Y Liu and P Bunting 3D Imaging Analysis andApplications vol 3 Springer Berlin Germany 2012

[14] NWerghi C Tortorici S Berretti andADel Bimbo ldquoBoosting3D LBP-Based face recognition by fusing shape and texturedescriptors on the meshrdquo IEEE Transactions on InformationForensics and Security vol 11 no 5 pp 964ndash979 2016

[15] L Spreeuwers ldquoFast and accurate 3D face recognition Usingregistration to an intrinsic coordinate system and fusion ofmultiple region classifiersrdquo International Journal of ComputerVision vol 93 no 3 pp 389ndash414 2011

[16] K W Bowyer K Chang and P Flynn ldquoA survey of approachesand challenges in 3D and multi-modal 3D + 2D face recogni-tionrdquo Computer Vision and Image Understanding vol 101 no 1pp 1ndash15 2006

[17] X Wang Q Ruan Y Jin and G An ldquoThree-dimensional facerecognition under expression variationrdquo Eurasip Journal onImage and Video Processing vol 2014 no 51 2014

[18] S Elaiwat M Bennamoun F Boussaid and A El-Sallam ldquo3-D face recognition using curvelet local featuresrdquo IEEE SignalProcessing Letters vol 21 no 2 pp 172ndash175 2014

[19] L Zhang Z Ding H Li Y Shen and J Lu ldquo3D facerecognition based on multiple keypoint descriptors and sparserepresentationrdquo PLoS ONE vol 9 no 6 Article ID e100120 pp1ndash9 2014

[20] S Soltanpour B Boufama and Q M J Wu ldquoA survey of localfeature methods for 3D face recognitionrdquo Pattern Recognitionvol 72 pp 391ndash406 2017

[21] A Ouamane A Chouchane E Boutellaa M Belahcene SBourennane and A Hadid ldquoEfficient tensor-based 2D+3D faceverificationrdquo IEEE Transactions on Information Forensics andSecurity vol 12 no 11 pp 2751ndash2762 2017

[22] K I Chang K W Bowyer and P J Flynn ldquoAn evaluationof multimodal 2D+3D face biometricsrdquo IEEE Transactions onPattern Analysis andMachine Intelligence vol 27 no 4 pp 619ndash624 2005

[23] C BenAbdelkader and P A Griffin ldquoComparing and combin-ing depth and texture cues for face recognitionrdquo Image andVision Computing vol 23 no 3 pp 339ndash352 2005

[24] C Hesher A Srivastava and G Erlebacher ldquoA novel techniquefor face recognition using range imagingrdquo in Proceedings ofthe 7th International Symposium on Signal Processing and ItsApplications ISSPA 2003 vol 2 pp 201ndash204 France July 2003

[25] D Smeets J Keustermans D Vandermeulen and P SuetensldquoMeshSIFT local surface features for 3D face recognition underexpression variations and partial datardquo Computer Vision andImage Understanding vol 117 no 2 pp 158ndash169 2013

[26] H Drira B Ben Amor A Srivastava M Daoudi and R Slamaldquo3D Face recognition under expressions occlusions and posevariationsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 35 no 9 pp 2270ndash2283 2013

[27] N Alyuz B Gokberk and L Akarun ldquo3-D face recognitionunder occlusion using masked projectionrdquo IEEE Transactionson Information Forensics and Security vol 8 no 5 pp 789ndash8022013

[28] D Huang M Ardabilian Y Wang and L Chen ldquo3-D facerecognition using eLBP-based facial description and localfeature hybrid matchingrdquo IEEE Transactions on InformationForensics and Security vol 7 no 5 pp 1551ndash1565 2012

[29] N Alyuz B Gokberk and L Akarun ldquoRegional registration forexpression resistant 3-D face recognitionrdquo IEEETransactions onInformation Forensics and Security vol 5 no 3 pp 425ndash4402010

[30] P J Besl and N D McKay ldquoA method for registration of 3-D shapesrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 14 no 2 pp 239ndash256 1992

[31] T Papatheodorou and D Rueckert 3D Face Recognition I-TechEducation and Publishing Vienna Austria 2007

[32] C C Queirolo L Silva O R P Bellon and M PamplonaSegundo ldquo3D face recognition using simulated annealing andthe surface interpenetration measurerdquo IEEE Transactions onPatternAnalysis andMachine Intelligence vol 32 no 2 pp 206ndash219 2010

[33] C C Queirolo L Silva O R P Bellon andM P Segundo ldquo3Dface recognition using the surface interpenetration measure acomparative evaluation on the FRGC databaserdquo in Proceedingsof the 2008 19th International Conference on Pattern RecognitionICPR 2008 USA December 2008

Mathematical Problems in Engineering 21

[34] Y Wang J Liu and X Tang ldquoRobust 3D face recognition bylocal shape difference boostingrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 32 no 10 pp 1858ndash18702010

[35] K Cao Y Rong C Li X Tang and C C Loy ldquoPose-robustface recognition via deep residual equivariant mappingrdquo inProceedings of the 2018 IEEECVF Conference on ComputerVision and Pattern Recognition (CVPR) pp 5187ndash5196 Salt LakeCity UT USA June 2018

[36] A BMoreno andA Sanchez ldquoGavabDB a 3D face databaserdquo inProceedings of the Second COSTWorkshop on Biometrics on theInternet Fundamentals Advances and Applications pp 77ndash822004

[37] M Lewis ldquoFactors affecting the perception of 3D facial symme-try from 2D projectionsrdquo Symmetry vol 9 no 10 p 243 2017

[38] A Savran N Alyuz H Dibeklioglu et al ldquoBosphorus databasefor 3D face analysisrdquo in Biometrics and Identity Managementvol 5372 of Lecture Notes in Computer Science pp 47ndash56Springer Berlin Heidelberg Berlin Germany 2008

[39] A Colombo C Cusano andR Schettini ldquoUMB-DB a databaseof partially occluded 3D facesrdquo in Proceedings of the 2011 IEEEInternational Conference on Computer Vision Workshops ICCVWorkshops 2011 pp 2113ndash2119 Spain November 2011

[40] P J Phillips P J Flynn T Scruggs et al ldquoOverview of the facerecognition grand challengerdquo in Proceedings of the 2005 IEEEComputer Society Conference on Computer Vision and PatternRecognition CVPR 2005 pp 947ndash954 USA June 2005

[41] S Berretti ADel Bimbo andP Pala ldquo3D face recognition usingisogeodesic stripesrdquo IEEE Transactions on Pattern Analysis andMachine Intelligence vol 32 no 12 pp 2162ndash2177 2010

[42] F R Al-Osaimi M Bennamoun and A Mian ldquoIntegrationof local and global geometrical cues for 3D face recognitionrdquoPattern Recognition vol 41 no 3 pp 1030ndash1040 2008

[43] G Zhang and Y Wang ldquoRobust 3D face recognition based onresolution invariant featuresrdquo Pattern Recognition Letters vol32 no 7 pp 1009ndash1019 2011

[44] S Berretti A Del Bimbo and P Pala ldquoSparse matching ofsalient facial curves for recognition of 3-D faces with missingpartsrdquo IEEE Transactions on Information Forensics and Securityvol 8 no 2 pp 374ndash389 2013

[45] M H Mahoor and M Abdel-Mottaleb ldquoFace recognitionbased on 3D ridge images obtained from range datardquo PatternRecognition vol 42 no 3 pp 445ndash451 2009

[46] W Hariri H Tabia N Farah A Benouareth and D Declercqldquo3D face recognition using covariance based descriptorsrdquo Pat-tern Recognition Letters vol 78 pp 1ndash7 2016

[47] Y Tang H Li X Sun J-M Morvan and L Chen ldquoPrincipalcurvature measures estimation and application to 3D facerecognitionrdquo Journal of Mathematical Imaging and Vision vol59 no 2 pp 211ndash233 2017


Mathematical Problems in Engineering 21

[34] Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1858–1870, 2010.

[35] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, "Pose-robust face recognition via deep residual equivariant mapping," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5187–5196, Salt Lake City, UT, USA, June 2018.

[36] A. B. Moreno and A. Sanchez, "GavabDB: a 3D face database," in Proceedings of the Second COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, pp. 77–82, 2004.

[37] M. Lewis, "Factors affecting the perception of 3D facial symmetry from 2D projections," Symmetry, vol. 9, no. 10, p. 243, 2017.

[38] A. Savran, N. Alyuz, H. Dibeklioglu et al., "Bosphorus database for 3D face analysis," in Biometrics and Identity Management, vol. 5372 of Lecture Notes in Computer Science, pp. 47–56, Springer, Berlin, Germany, 2008.

[39] A. Colombo, C. Cusano, and R. Schettini, "UMB-DB: a database of partially occluded 3D faces," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2011), pp. 2113–2119, Spain, November 2011.

[40] P. J. Phillips, P. J. Flynn, T. Scruggs et al., "Overview of the face recognition grand challenge," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 947–954, USA, June 2005.

[41] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using isogeodesic stripes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.

[42] F. R. Al-Osaimi, M. Bennamoun, and A. Mian, "Integration of local and global geometrical cues for 3D face recognition," Pattern Recognition, vol. 41, no. 3, pp. 1030–1040, 2008.

[43] G. Zhang and Y. Wang, "Robust 3D face recognition based on resolution invariant features," Pattern Recognition Letters, vol. 32, no. 7, pp. 1009–1019, 2011.

[44] S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 374–389, 2013.

[45] M. H. Mahoor and M. Abdel-Mottaleb, "Face recognition based on 3D ridge images obtained from range data," Pattern Recognition, vol. 42, no. 3, pp. 445–451, 2009.

[46] W. Hariri, H. Tabia, N. Farah, A. Benouareth, and D. Declercq, "3D face recognition using covariance based descriptors," Pattern Recognition Letters, vol. 78, pp. 1–7, 2016.

[47] Y. Tang, H. Li, X. Sun, J.-M. Morvan, and L. Chen, "Principal curvature measures estimation and application to 3D face recognition," Journal of Mathematical Imaging and Vision, vol. 59, no. 2, pp. 211–233, 2017.

[48] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: a survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.

[49] V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063–1074, 2003.

[50] T. Russ, C. Boehnen, and T. Peters, "3D face recognition using 3D alignment for PCA," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 1391–1398, USA, June 2006.

[51] X. Lu and A. K. Jain, "Automatic feature extraction for multiview 3D face recognition," in Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), pp. 585–590, UK, April 2006.

[52] S. Zafeiriou, G. A. Atkinson, M. F. Hansen et al., "Face recognition and verification using photometric stereo: the photoface database and a comprehensive evaluation," IEEE Transactions on Information Forensics and Security, vol. 8, no. 1, pp. 121–135, 2013.

[53] S. Jahanbin, R. Jahanbin, and A. C. Bovik, "Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis," International Journal of Computer Vision, vol. 105, no. 1, pp. 87–108, 2013.

[54] P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, "2D-3D face recognition method based on a modified CCA-PCA algorithm," International Journal of Advanced Robotic Systems, vol. 11, no. 36, pp. 1–8, 2014.

[55] X. Peng, M. Bennamoun, and A. S. Mian, "A training-free nose tip detection method from face range images," Pattern Recognition, vol. 44, no. 3, pp. 544–558, 2011.

[56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS '12), pp. 1097–1105, Lake Tahoe, Nev, USA, December 2012.

[57] U. I. Bajwa, I. A. Taj, M. W. Anwar, and X. Wang, "A multifaceted independent performance analysis of facial subspace recognition algorithms," PLoS ONE, vol. 8, no. 2, Article ID e56510, 2013.

[58] A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, pp. 1927–1943, 2007.

[59] M. H. Mahoor, A Multi-Modal Approach for Face Modeling and Recognition, Ph.D. dissertation, 2008.

[60] X. Li, T. Jia, and H. Zhang, "Expression-insensitive 3D face recognition using sparse representation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2575–2582, 2009.

[61] S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, "Matching 3D face scans using interest points and local histogram descriptors," Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.

[62] H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, "Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D keypoint descriptors," International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.

[63] S. Z. Gilani, A. Mian, and P. Eastwood, "Deep, dense and accurate 3D face correspondence for generating population specific deformable models," Pattern Recognition, vol. 69, pp. 238–250, 2017.

[64] S. Biswas, K. W. Bowyer, and P. J. Flynn, "Multidimensional scaling for matching low-resolution face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2019–2030, 2012.

[65] M. Jian and K.-M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.
