Face & Iris Recognition Research
Prof. Vijayakumar Bhagavatula, [email protected]
http://www.ece.cmu.edu/~kumar/
Acknowledgments
Dr. Marios Savvides, Dr. Chunyan Xie, Dr. Jason Thornton, Dr. Krithika Venkataramani, Dr. Pablo Hennings, Naresh Boddeti
Support: Technology Support Working Group (TSWG), CyLab
Outline
Use of the spatial frequency domain for biometric image recognition: correlation filters
Face recognition
- Face Recognition Grand Challenge (FRGC)
- Simultaneous Super-Resolution and Recognition (S2R2)
Iris recognition
- Iris Challenge Evaluation (ICE) 2005
- Extended depth-of-focus iris recognition
Summary
Terminology
Verification (1:1 matching): Am I who I say I am? Example applications: Trusted Traveler card, ATM access, grocery store access, benefits access.
Identification (1:N matching): Does this face match one of those in a database? Example applications: watch lists, identifying suspects in surveillance video.
Recognition = Verification + Identification
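The verification/identification distinction can be sketched in a few lines of Python (hypothetical scores and threshold, purely illustrative):

```python
def verify(score, threshold=0.8):
    # 1:1 matching: accept the claimed identity only if the
    # match score against that one enrolled template is high enough
    return score >= threshold

def identify(scores, threshold=0.8):
    # 1:N matching: scores maps enrolled identity -> match score;
    # return the best-matching identity, or None if even the best
    # score falls below the watch-list threshold
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

Recognition combines both: a system may first identify a candidate from the database and then verify that claim at a stricter threshold.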
Challenge: Pattern Variability
The challenge is to tolerate intra-class pattern variability (sometimes called distortions) while maintaining inter-class discrimination.
- Facial appearance changes due to illumination changes, expressions, pose variations, etc.
- Fingerprints are affected by rotations, elastic deformations, moisture, etc.
- Iris images are affected by eyelid occlusions, eyelashes, off-axis gaze, mis-focus, etc.
Biometric Recognition Approaches
Statistical pattern recognition (e.g., minimum error rate methods)
Nonparametric methods (e.g., nearest-neighbor methods)
Discriminant methods (e.g., linear discriminant functions, artificial neural networks, support vector machines)
Correlation filters based on 2D Fourier transforms of biometric images
Correlation Pattern Recognition
Determine the cross-correlation between the reference image r and the test image s for all possible shifts:

c(τx, τy) = ∫∫ r(x, y) s(x + τx, y + τy) dx dy

When the test image is authentic, the correlation output exhibits a peak at the corresponding shift. If the test image is of an impostor, the correlation output will be low everywhere.
Simple matched filters won't work well in practice, due to rotations, scale changes, and other differences between test and reference images. Advanced distortion-tolerant correlation filters, developed previously for automatic target recognition (ATR) applications, are now being adapted for biometric recognition.
Ref: B.V.K. Vijaya Kumar, A. Mahalanobis and R. Juday, Correlation Pattern Recognition, Cambridge University Press, UK, November 2005.
[Figure: SAIP ATR SDF correlation performance for extended operating conditions (courtesy: Northrop Grumman). Correlation plane surfaces and contour maps for an M1A1 in the open and an M1A1 near a tree line; adjacent trees cause some correlation noise.]
Controlled Response to Rotations
c(φ) = 1 for 0° ≤ φ ≤ 45°, 0 otherwise (an example of a specified correlation response as a function of in-plane rotation angle φ)
Ref: B.V.K. Vijaya Kumar, A. Mahalanobis and A. Takessian, "Optimal tradeoff circular harmonic function (OTCHF) correlation filter methods providing controlled in-plane rotation response," IEEE Trans. Image Processing, vol. 9, 1025-1034, 2000.
Correlation Filters
[Block diagram] Training: a correlation filter is designed from a set of training images. Recognition: the test image is transformed via FFT, multiplied by the correlation filter, and inverse-transformed (IFFT) to produce the correlation output, which is analyzed to declare a match or no match. Match quality is quantified by the Peak-to-Sidelobe Ratio (PSR).
Ref: B.V.K. Vijaya Kumar, M. Savvides, K. Venkataramani and C. Xie, Proc. ICIP, I.53-I.56, 2002.
Peak to Sidelobe Ratio (PSR)
PSR = (peak - mean) / σ
1. Locate the correlation peak.
2. Mask a small pixel region around the peak.
3. Compute the mean and standard deviation σ of the sidelobes in a bigger region centered at the peak (excluding the masked region).
The PSR is invariant to constant illumination changes. A match is declared when the PSR is large, i.e., the peak must not only be large, the sidelobes must also be small.
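The frequency-domain correlation and PSR computation can be sketched in NumPy (a minimal sketch; the mask and sidelobe-window sizes are illustrative assumptions, not values from the slides):

```python
import numpy as np

def correlate_fft(test, filt_freq):
    # Correlate the test image with a reference filter stored in the
    # frequency domain: multiply spectra and inverse-transform
    return np.real(np.fft.ifft2(np.fft.fft2(test) * np.conj(filt_freq)))

def psr(corr, mask_radius=2, region_radius=10):
    # Peak-to-Sidelobe Ratio: (peak - sidelobe mean) / sidelobe std,
    # with sidelobes taken from a square annulus around the peak
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    yy, xx = np.ogrid[:corr.shape[0], :corr.shape[1]]
    dist = np.maximum(np.abs(yy - py), np.abs(xx - px))
    sidelobes = corr[(dist > mask_radius) & (dist <= region_radius)]
    return (corr[py, px] - sidelobes.mean()) / sidelobes.std()
```

For an authentic (merely shifted) image the PSR is large; for an impostor image the peak barely rises above the sidelobes.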
CMU PIE Database
13 cameras, 21 flashes
Ref: T. Sim, S. Baker, and M. Bsat, The CMU pose, illumination, and expression (PIE) database, Proc. of the 5th IEEE Intl. Conf. on Automatic Face and Gesture Recognition, May 2002.
CMU PIE Database: one face under 21 illuminations; 65 subjects
Train on illumination conditions 3, 7, 16; test on illumination 10.
Match quality (PSR) = 40.95
Ref: Marios Savvides, Reduced-Complexity Face Recognition using Advanced Correlation Filters and Fourier Subspace Methods for Biometric Applications, Ph.D. Dissertation, CMU, April 2004.
Occlusion of Eyes
Using the same filter as before, match quality (PSR) = 30.60
Un-centered Images
Match quality (PSR) = 22.38
Impostor: using someone else's filter, PSR = 4.77
Face Recognition Grand Challenge (FRGC)
Ver 2.0: 625 subjects; 50,000 recordings; 70 GB
To facilitate the advancement of face recognition research, the FRGC has been organized by NIST.
Ref: P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the Face Recognition Grand Challenge," Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2005.
FRGC Dataset: Experiment 4
- Generic training set: 222 people, 12,776 images in total, used for feature extraction and feature-space generation.
- Gallery set: 466 people, 16,028 images in total, projected into the reduced-dimensionality feature space.
- Probe set: 466 people, 8,014 images in total, projected into the same reduced-dimensionality feature space.
Similarity matching is then performed between the reduced-dimensional representations of the gallery and probe sets.
FRGC Gallery Images
Controlled (Indoor)
16,028 gallery images of 466 people
FRGC Probe Images
Uncontrolled (Indoor)
FRGC Probe Images
Uncontrolled (Outdoor)
Outdoor illumination images are very challenging due to harsh cast shadows.
FRGC Baseline Results
The verification rate of PCA is about 12% at a false accept rate (FAR) of 0.1%.
ROC curve from P. Jonathan Phillips et al. (CVPR 2005).
Correlation Filters for Face Verification
[Block diagram] Enrollment: a correlation filter is built for each subject. Verification: the probe is correlated with the claimed subject's filter to produce a similarity score.
Limitations of this direct approach on FRGC Expt. 4: only 1 training image per filter (low performance), and the full similarity matrix requires about 256 million correlations (a long computation time).
Class-dependence Feature Analysis (CFA)
Motivation:
- Improve the recognition rate by using the generic training set.
- Reduce the processing time by extracting features using inner products.
Class Dependent Feature Analysis (CFA)
The figure below illustrates building the correlation filter for class 2 of the 222 generic-training classes. A MACE filter h_mace-2 is designed so that its inner product h_mace-2ᵀ y with a training image y satisfies the desired outputs u_1 = [0 0 ... 0]ᵀ for class-1 images, u_2 = [1 1 ... 1]ᵀ for class-2 images, ..., u_222 = [0 0 ... 0]ᵀ. The outputs of the 222 class filters form the feature vector for an input image.
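A MACE-style filter with such peak constraints can be sketched in NumPy (a minimal sketch of the closed form h = D⁻¹X(XᴴD⁻¹X)⁻¹u; the small regularization constant is an assumption added for numerical stability):

```python
import numpy as np

def mace_filter(train_images, u):
    # X: columns are the vectorized 2D FFTs of the training images
    X = np.stack([np.fft.fft2(im).ravel() for im in train_images], axis=1)
    # D: diagonal of the average power spectrum (regularized)
    d = np.mean(np.abs(X) ** 2, axis=1) + 1e-8
    Dinv_X = X / d[:, None]
    # h = D^-1 X (X^H D^-1 X)^-1 u satisfies the constraints X^H h = u
    h = Dinv_X @ np.linalg.solve(X.conj().T @ Dinv_X, u.astype(complex))
    return h.reshape(train_images[0].shape)
```

Setting u to ones over one class's training images and zeros elsewhere, as in the CFA construction, yields one such filter per class; a test image's inner products with all 222 filters form its feature vector.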
Performance on FRGC Expt. 4
[Bar chart: FRGC Expt. 4 verification rates (0 to 1) for PCA, GSLDA, CFA, KCFA-v1, KCFA-v3, and KCFA-v5.]
82.4% @ 0.1% FAR (latest performance)
PCA: Principal Components Analysis; GSLDA: Gram-Schmidt Linear Discriminant Analysis; CFA: Class-dependence Feature Analysis; KCFA: Kernel Class-dependence Feature Analysis
Ref: B.V.K. Vijaya Kumar, M. Savvides and C. Xie, Correlation Pattern Recognition for Face recognition, Proc. IEEE, vol. 94, 1963-1976, Nov. 2006.
Low-Resolution Face Recognition
In many surveillance scenarios, people may be far from the camera and their faces may be small. Looking for suspects involves parsing through hundreds of hours of video: a terrorist crime was solved in Italy in 2002 by analysis of 52,000 hours of video from surveillance cameras installed in rail stations.
Goal: match low-resolution probe images to higher-resolution training/gallery images.
[Figure: high-resolution training image vs. low-resolution probe image.]
Super-Resolution with Face Priors
The Image Formation Process
The observed low-resolution image is obtained from the high-resolution scene through a warp matrix, a blur matrix, and a decimation matrix. Super-resolution algorithms aim to invert this process, either directly or indirectly.
See, for example: A. Zomet and S. Peleg, Super-Resolution from Multiple Images having Arbitrary Mutual Motion, in S. Chaudhuri (Editor), Super-Resolution Imaging, Kluwer Academic, Sept. 2001, pp. 195-209.
Tikhonov Super-Resolution
Inverting the image formation process: given the low-resolution input y, estimate the high-resolution image x. By Tikhonov regularization this is

x̂ = arg min_x ||B x - y||² + λ² ||L x||²

B is the image-formation operator: downsampling using a model of the detector PSF, and blurring by modeling the lens PSF. L represents assumptions about the smoothness of the solution.
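A 1D version of this Tikhonov solve can be sketched as follows (illustrative operators: a block-averaging decimation stands in for the slides' PSF-based operator B, and L is a first-difference smoothness operator):

```python
import numpy as np

def tikhonov_sr(B, y, L, lam):
    # Solve min_x ||B x - y||^2 + lam^2 ||L x||^2 via the normal
    # equations: x = (B^T B + lam^2 L^T L)^{-1} B^T y
    A = B.T @ B + lam**2 * (L.T @ L)
    return np.linalg.solve(A, B.T @ y)

def decimation_matrix(n_high, factor):
    # Each low-res sample averages `factor` adjacent high-res samples
    n_low = n_high // factor
    B = np.zeros((n_low, n_high))
    for i in range(n_low):
        B[i, i * factor:(i + 1) * factor] = 1.0 / factor
    return B

def first_difference(n):
    # Penalizes differences between neighboring samples (smoothness)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D
```

The smoothness term makes the otherwise under-determined inversion well-posed: B alone has a nontrivial null space, but BᵀB + λ²LᵀL is invertible.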
Possible Solutions
1. We can apply a super-resolution algorithm and then classify the result.
2. We can downsample the gallery image and match at the resolution of the probe.
3. We propose an alternative approach that jointly uses super-resolution methods and includes face features for recognition (S2R2).
Still-Image S2R2 Block Diagram
S2R2 Simultaneous Fit
Given the low-resolution input y and the known image-formation matrix B, minimize a regularized functional with classification constraints (for a claimed kth class). The regularization parameters are trained to produce distortions that are discriminatory.
S2R2 Classification
Compute measures-of-fit norms and form a new feature vector r_k for the claimed class k. Classify with r_k using conventional classification methods.
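The slides do not spell out the exact norms, so here is one hypothetical choice of measures-of-fit features for a claimed class k: the data-fidelity residual of the simultaneous-fit estimate x_k, and its distance to the claimed gallery template g_k:

```python
import numpy as np

def fit_features(y, B, x_k, g_k):
    # Hypothetical measures-of-fit for claimed class k:
    # r1: how well the fitted high-res estimate explains the low-res input
    # r2: how close the estimate is to the claimed gallery template
    r1 = np.linalg.norm(B @ x_k - y)
    r2 = np.linalg.norm(x_k - g_k)
    return np.array([r1, r2])
```

The resulting vector r_k is then fed to a conventional classifier, e.g. a Fisher discriminant as used later in the experiments.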
Using Features at Multiple Resolutions
Since we know the image-formation process B, features can be defined at multiple resolutions: high-resolution features F applied to the gallery image from class k, and low-resolution features F_L applied to its image under B.
Numerical Experiments (I): the Multi-PIE database
- 337 subjects in total, compared to 68 in PIE.
- Subjects are captured in several recording sessions with different poses, illuminations, and expressions, as in PIE.
Data set for the experiments:
- Frontal view, neutral expressions, different flash illuminations.
- 73 subjects sequestered as the generic training set.
- 40 subjects sequestered to learn the regularization parameters.
- The remaining 224 subjects are used as gallery and probes.
- Gallery images are not under flash illumination; there are 2,912 probe images in total.
Sample from Multi-PIE
Original
24x24
12x12
6x6
Numerical Experiments (II): proposed framework settings
- Base super-resolution algorithm uses smoothness constraints as first-derivative approximations.
- Base features are 25 Fisherfaces.
- Final classifier is a Fisher discriminant.
- The image formation process is assumed known.
- Training resolution is 24x24 pixels.
- Simultaneous-fit and measures-of-fit features use face features at both the training resolution and the probe resolution.
Still-Image Results Using Multi-PIE
Results are shown for magnification factors of 2 and 4.
Baselines: (Bil) bilinear interpolation and then matching; (Bic) bicubic interpolation and then matching; (Tik) Tikhonov super-resolution and then matching; (LR) matching in the low-resolution domain. (MFS2R2) is the proposed algorithm. Also shown: (TrR) matching in the hypothetical (oracle) case of probes being available at the base training resolution.
Still-Image Results Using Multi-PIE (II)
Magnification factor of 2; S2R2e uses face priors and relative residuals as features.
Still-Image Results Using Multi-PIE (II)
Magnification factor of 4; S2R2 with face priors and relative residuals as features.
S2R2 Summary & ExtensionsThe proposed S2R2 gives super-resolution the objective of recognition, rather than just reconstruction.
We can extract new features by finding a template that fits simultaneously into the available models and features.
We have shown that with simple linear discriminants using these features, we can produce better recognition performance than standard approaches.
This formulation can be easily expanded or generalized to use video, multiple cameras, and even other image representations (such as wavelets) and non-linear features.
Iris Biometric
Pattern source: muscle ligaments (sphincter, dilator) and connective tissue. Inner boundary: pupil; outer boundary: sclera. [Figure: sphincter ring and dilator muscles.]
Biometric advantages: a highly distinctive pattern that remains stable over an individual's lifetime.
Daugman's Iris Recognition Method
A circular edge detector locates the iris boundaries; Gabor wavelet analysis of the unwrapped pattern quantizes each filter response into a 2-bit code, yielding a 2048-bit iris code.
Ref: J. G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Trans. Pattern Anal. Machine Intell., vol. 15, pp. 1148-1161, 1993.
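Iris-code matching in this style reduces to a masked, shift-minimized Hamming distance; a minimal sketch (array shapes and shift range are illustrative assumptions):

```python
import numpy as np

def iris_hamming_distance(code_a, code_b, mask_a, mask_b, max_shift=8):
    # Fractional Hamming distance between binary iris codes, minimized
    # over circular shifts along the angular axis (head-tilt tolerance).
    # codes/masks are 2D boolean arrays (radial x angular); a mask bit
    # marks the code bit as valid (not occluded by eyelids/eyelashes).
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s, axis=1)
        valid = mask_a & np.roll(mask_b, s, axis=1)
        if valid.sum() == 0:
            continue
        hd = np.count_nonzero((code_a ^ shifted) & valid) / valid.sum()
        best = min(best, hd)
    return best
```

Genuine comparisons yield distances near 0, while codes from different eyes behave like independent coin flips, clustering near 0.5.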
Iris Recognition: Correlation Filters
We use correlation filters for iris recognition, designing a filter for each iris class from a set of training images.
[Block diagram] Determining an iris match with a correlation filter: segmented iris pattern → FFT → multiply by the correlation filter → inverse FFT → correlation output → match / no match.
Ref.: J. Thornton, M. Savvides and B.V.K. Vijaya Kumar, A unified Bayesian approach to deformed pattern matching of iris images, IEEE Trans. Patt. Anal. Mach. Intell., vol. 29, 596-606, 2007.
Iris Pattern Deformation (video example)
Landmark points tracked across all images within one class show clear deformation, arising from tissue changes and/or deviations in the iris boundaries.
Eyelid Occlusion
Example: eyelid artifacts (upper and lower eyelid) in the segmented pattern.
Example: match comparison — for a significant portion of the area, similarity is lost.
Iris Matching Approach
Problem summary: accurate pattern matching when patterns experience relative nonlinear deformations and partial occlusions, in addition to blurring and observation noise.
Approach: a probabilistic model over deformation and occlusion states. The pattern sample and pattern template generate evidence, the hidden states are estimated, and a match score is produced.
Hidden Variables: Deformation
The iris plane is partitioned into a 2D field of regions, and the deformation is described by a vector field (one displacement vector per region).
Ref.: J. Thornton, M. Savvides and B.V.K. Vijaya Kumar, A unified Bayesian approach to deformed pattern matching of iris images, IEEE Trans. Patt. Anal. Mach. Intell., vol. 29, 596-606, 2007.
Hidden Variables: Occlusion
Occlusion is described by a binary field (one bit per region). The hidden variables are the deformation vector field and the occlusion field.
Iris Matching Process
Goal: infer the posterior distribution on the hidden states, given the template and the new pattern, combining similarity evidence and eyelid evidence.
Inference technique: loopy belief propagation.
ICE Phase I: Performance
Verification rate at FAR = 0.1%: Experiment 1: 99.63%; Experiment 2: 99.04%.
[Figure: Experiment 1 score distribution, showing well-separated match and non-match scores.]
Iris On the Move (IOM)
Out-of-Focus Iris Images
Pupil Phase Engineering
Courtesy: CDMOptics
Wavefront Coded Iris Images
Iris Matching Performance: Iris Code
Iris Matching Performance: Correlation Filters
Summary
Correlation filters:
- Achieved excellent performance in the Face Recognition Grand Challenge (FRGC).
- Performed very well in the Iris Challenge Evaluation (ICE).
- Have also been successful in fingerprint and palmprint recognition.
Correlation filters provide a single matching engine for a variety of image biometrics, making multi-biometric approaches feasible. S2R2 enables low-resolution face recognition.
Our Other Biometrics Research Topics
Fingerprint recognition; palmprint recognition; cancelable biometrics; importance of Fourier phase in biometrics; multi-biometrics and fusion of biometric information; pose-tolerant face recognition; iris-at-a-distance recognition; large-population biometric recognition; multi-camera face recognition.