Page 1: Face Recognition and Feature Subspaces

Face Recognition and Feature Subspaces

Computer VisionCS 543 / ECE 549

University of Illinois

Derek Hoiem

03/09/10

Some slides from Lana Lazebnik, Silvio Savarese, Fei-Fei Li

Page 2: Face Recognition and Feature Subspaces

Object recognition

Last class
• Object instance recognition: focus on localization of miscellaneous objects

This class
• Face recognition: focus on distinguishing one face from another
• Feature subspaces: PCA and FLD
• Look at results from a recent vendor test
• Look at interesting findings about human face recognition

Page 3: Face Recognition and Feature Subspaces

Face detection and recognition

Detection → Recognition: "Sally"

Page 4: Face Recognition and Feature Subspaces

Applications of Face Recognition

• Digital photography

Page 5: Face Recognition and Feature Subspaces

Applications of Face Recognition

• Digital photography
• Surveillance

Page 6: Face Recognition and Feature Subspaces

Applications of Face Recognition

• Digital photography
• Surveillance
• Album organization

Page 7: Face Recognition and Feature Subspaces

Consumer application: iPhoto 2009

http://www.apple.com/ilife/iphoto/

Page 8: Face Recognition and Feature Subspaces

Consumer application: iPhoto 2009

• Can be trained to recognize pets!

http://www.maclife.com/article/news/iphotos_faces_recognizes_cats

Page 9: Face Recognition and Feature Subspaces

Consumer application: iPhoto 2009

• Things iPhoto thinks are faces

Page 10: Face Recognition and Feature Subspaces

Starting idea of “eigenfaces”

1. Treat pixels as a vector x

2. Recognize a face by nearest neighbor over the training faces y1, …, yn:

   k* = argmin_k || yk – x ||
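A minimal sketch of this baseline, assuming the images have already been cropped to the same size and flattened to vectors (the function and variable names here are illustrative, not from the slides):

```python
import numpy as np

def nearest_neighbor_face(x, train_faces, labels):
    """x: flattened query image (d,); train_faces: (n, d) flattened training faces y_1..y_n;
    labels: n identity labels. Returns the label of the closest training face."""
    dists = np.linalg.norm(train_faces - x, axis=1)   # ||y_k - x|| for every k
    return labels[int(np.argmin(dists))]              # k* = argmin_k ||y_k - x||
```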

Page 11: Face Recognition and Feature Subspaces

The space of all face images

• When viewed as vectors of pixel values, face images are extremely high-dimensional
  – 100x100 image = 10,000 dimensions
  – Slow, and lots of storage

• But very few 10,000-dimensional vectors are valid face images

• We want to effectively model the subspace of face images

Page 12: Face Recognition and Feature Subspaces

The space of all face images

• Eigenface idea: construct a low-dimensional linear subspace that best explains the variation in the set of face images

Page 13: Face Recognition and Feature Subspaces

Principal Component Analysis (PCA)

• Given: N data points x1, …, xN in Rd

• We want to find a new set of features that are linear combinations of original ones:

u(xi) = uT(xi – µ)

(µ: mean of data points)

• Choose unit vector u in Rd that captures the most data variance

Forsyth & Ponce, Sec. 22.3.1, 22.3.2

Page 14: Face Recognition and Feature Subspaces

Principal Component Analysis

• Direction u that maximizes the variance of the projected data:

  var(u) = (1/N) Σ_i [uT(xi – µ)]² = uT Σ u

  where uT(xi – µ) is the projection of data point xi and Σ = (1/N) Σ_i (xi – µ)(xi – µ)T is the covariance matrix of the data

• Maximize uT Σ u subject to ||u|| = 1

• The direction that maximizes the variance is the eigenvector associated with the largest eigenvalue of Σ (show on board)
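As a concrete illustration of this step, here is a hedged NumPy sketch that finds the top principal component by eigendecomposition of the covariance matrix (the function name and the array X are placeholders, not part of the original slides):

```python
import numpy as np

def top_principal_component(X):
    """X: (N, d) array of N data points in R^d.
    Returns the unit vector u maximizing the variance of the projected data."""
    mu = X.mean(axis=0)                      # mean of the data points
    Xc = X - mu                              # center the data
    cov = Xc.T @ Xc / X.shape[0]             # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition, ascending eigenvalues
    return eigvecs[:, -1], eigvals[-1]       # direction of largest eigenvalue and variance along it
```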

Page 15: Face Recognition and Feature Subspaces

Implementation issue

• Covariance matrix is huge (N x N for N pixels)

• But typically # examples << N

• Simple trick
  – X is the matrix of normalized (mean-subtracted) training data, one example per row
  – Solve for eigenvectors u of XXT instead of XTX
  – Then XTu is an eigenvector of the covariance XTX
  – May need to normalize XTu to get a unit-length vector
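A hedged NumPy sketch of this trick (illustrative; the function name eigenfaces_small_trick and the row-per-example convention are assumptions made here, not part of the slides):

```python
import numpy as np

def eigenfaces_small_trick(X, k):
    """X: (M, N) matrix of mean-subtracted training images (M examples, N pixels, M << N).
    Returns the top-k eigenvectors of the N x N covariance X^T X without forming it."""
    small = X @ X.T                            # M x M matrix, cheap when M << N
    vals, vecs = np.linalg.eigh(small)         # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # eigenvectors u for the k largest eigenvalues
    U = X.T @ top                              # X^T u is an eigenvector of X^T X
    U /= np.linalg.norm(U, axis=0)             # normalize to unit length
    return U                                   # columns are the k eigenfaces
```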

Page 16: Face Recognition and Feature Subspaces

Eigenfaces (PCA on face images)

1. Compute covariance matrix of face images

2. Compute the principal components (“eigenfaces”)

– K eigenvectors with largest eigenvalues

3. Represent all face images in the dataset as linear combinations of eigenfaces

– Perform nearest neighbor on these coefficients

M. Turk and A. Pentland, Face Recognition using Eigenfaces, CVPR 1991

Page 17: Face Recognition and Feature Subspaces

Eigenfaces example

• Training images x1, …, xN

Page 18: Face Recognition and Feature Subspaces

Eigenfaces example

Top eigenvectors: u1, …, uk

Mean: μ

Page 19: Face Recognition and Feature Subspaces

Visualization of eigenfaces

Principal component (eigenvector) uk

μ + 3σkuk

μ – 3σkuk

Page 20: Face Recognition and Feature Subspaces

Representation and reconstruction

• Face x in "face space" coordinates:

  x → (w1, …, wk) = (u1T(x – µ), …, ukT(x – µ))

Page 21: Face Recognition and Feature Subspaces

Representation and reconstruction

• Face x in "face space" coordinates:

  x → (w1, …, wk) = (u1T(x – µ), …, ukT(x – µ))

• Reconstruction:

  x̂ = µ + w1u1 + w2u2 + w3u3 + w4u4 + …
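In code, each of the two operations above is a single matrix product. A minimal sketch, assuming U stacks the eigenfaces u1, …, uk as columns and mu is the mean face (the names are illustrative):

```python
import numpy as np

def project(x, U, mu):
    """Face-space coordinates: w_i = u_i^T (x - mu). U has one eigenface per column."""
    return U.T @ (x - mu)

def reconstruct(w, U, mu):
    """Reconstruction from the coefficients: x_hat = mu + w_1 u_1 + w_2 u_2 + ..."""
    return mu + U @ w
```

Keeping more eigenfaces gives a more faithful x̂, which is what the P = 4 / 200 / 400 reconstructions on the next slide illustrate.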

Page 22: Face Recognition and Feature Subspaces

Reconstruction

(Figure: reconstructions with P = 4, P = 200, and P = 400 components, after computing eigenfaces using 400 face images from the ORL face database.)

Page 23: Face Recognition and Feature Subspaces

Eigenvalues (variance along eigenvectors)

Page 24: Face Recognition and Feature Subspaces

Note

Preserving variance (minimizing MSE) does not necessarily lead to qualitatively good reconstruction.

P = 200

Page 25: Face Recognition and Feature Subspaces

Recognition with eigenfaces

Process labeled training images
• Find mean µ and covariance matrix Σ
• Find k principal components (eigenvectors of Σ) u1, …, uk
• Project each training image xi onto the subspace spanned by the principal components:
  (wi1, …, wik) = (u1T(xi – µ), …, ukT(xi – µ))

Given a novel image x
• Project onto the subspace:
  (w1, …, wk) = (u1T(x – µ), …, ukT(x – µ))
• Optional: check the reconstruction error ||x – x̂|| to determine whether the image is really a face
• Classify as the closest training face in the k-dimensional subspace (see the sketch below)

M. Turk and A. Pentland, Face Recognition using Eigenfaces, CVPR 1991
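Putting the steps together, a minimal end-to-end sketch (illustrative only; it reuses the hypothetical eigenfaces_small_trick helper from the earlier sketch and assumes the faces are already detected, aligned, and flattened to vectors):

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (M, N) array, one flattened training face per row.
    Returns the mean face mu, the top-k eigenfaces U (N x k), and the
    training coefficients W (M x k)."""
    mu = faces.mean(axis=0)
    X = faces - mu
    U = eigenfaces_small_trick(X, k)         # helper sketched after the implementation-issue slide
    W = X @ U                                # (w_i1, ..., w_ik) for each training image x_i
    return mu, U, W

def recognize(x, mu, U, W, labels, max_error=None):
    """Classify a novel face x by nearest neighbor in the k-dimensional subspace.
    Optionally reject images whose reconstruction error exceeds max_error."""
    w = U.T @ (x - mu)                       # project onto the subspace
    if max_error is not None:
        x_hat = mu + U @ w                   # reconstruction from k coefficients
        if np.linalg.norm(x - x_hat) > max_error:
            return None                      # reconstruction too poor: probably not a face
    nearest = int(np.argmin(np.linalg.norm(W - w, axis=1)))
    return labels[nearest]
```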

Page 26: Face Recognition and Feature Subspaces

PCA

• General dimensionality reduction technique

• Preserves most of the variance with a much more compact representation
  – Lower storage requirements (eigenvectors + a few numbers per face)
  – Faster matching

• What are the problems for face recognition?

Page 27: Face Recognition and Feature Subspaces

Limitations

Global appearance method: not robust to misalignment, background variation

Page 28: Face Recognition and Feature Subspaces

Limitations

• The direction of maximum variance is not always good for classification

Page 29: Face Recognition and Feature Subspaces

A more discriminative subspace: FLD

• Fisher Linear Discriminants, a.k.a. "Fisherfaces"

• PCA preserves maximum variance

• FLD preserves discrimination
  – Find a projection that maximizes scatter between classes and minimizes scatter within classes

Reference: Eigenfaces vs. Fisherfaces, Belhumeur et al., PAMI 1997

Page 30: Face Recognition and Feature Subspaces

Comparing with PCA

Page 31: Face Recognition and Feature Subspaces

Variables

• N sample images: x1, …, xN

• c classes: χ1, …, χc

• Average of each class i: µi = (1/Ni) Σ_{xk ∈ χi} xk

• Average of all data: µ = (1/N) Σ_{k=1..N} xk

Page 32: Face Recognition and Feature Subspaces

Scatter Matrices

• Scatter of class i:
  Si = Σ_{xk ∈ χi} (xk – µi)(xk – µi)T

• Within-class scatter:
  SW = Σ_{i=1..c} Si

• Between-class scatter:
  SB = Σ_{i=1..c} Ni (µi – µ)(µi – µ)T
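A hedged NumPy sketch of these definitions (the function name and the assumption of integer class labels are illustrative, not from the slides):

```python
import numpy as np

def scatter_matrices(X, y):
    """X: (N, d) samples, y: (N,) integer class labels.
    Returns the within-class scatter S_W and the between-class scatter S_B."""
    d = X.shape[1]
    mu = X.mean(axis=0)                      # average of all data
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]                       # samples of class c
        mu_c = Xc.mean(axis=0)               # class average mu_i
        D = Xc - mu_c
        S_W += D.T @ D                       # scatter of class c, summed into S_W
        m = (mu_c - mu).reshape(-1, 1)
        S_B += Xc.shape[0] * (m @ m.T)       # N_i (mu_i - mu)(mu_i - mu)^T
    return S_W, S_B
```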

Page 33: Face Recognition and Feature Subspaces

Illustration

(Figure: two classes in the (x1, x2) plane, with class scatters S1 and S2, within-class scatter SW = S1 + S2, and between-class scatter SB.)

Page 34: Face Recognition and Feature Subspaces

Mathematical Formulation

• After projection yk = WT xk
  – Between-class scatter becomes WT SB W
  – Within-class scatter becomes WT SW W

• Objective:
  Wopt = argmax_W |WT SB W| / |WT SW W|

• Solution: generalized eigenvectors
  SB wi = λi SW wi,  i = 1, …, m

• Rank of Wopt is limited
  – Rank(SB) <= |C| – 1
  – Rank(SW) <= N – C
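A short sketch of the solution step using SciPy's generalized symmetric eigensolver (the function name fisher_directions is an assumption made here):

```python
import numpy as np
from scipy.linalg import eigh

def fisher_directions(S_B, S_W, m):
    """Solve the generalized eigenproblem S_B w = lambda S_W w and return the m
    directions with the largest eigenvalues as the columns of W_opt."""
    eigvals, eigvecs = eigh(S_B, S_W)        # generalized symmetric problem, ascending eigenvalues
    return eigvecs[:, ::-1][:, :m]           # keep the top-m generalized eigenvectors
```

Note that for raw face images SW is typically singular (Rank(SW) <= N – C), so in practice the data are first reduced with PCA, as in the Belhumeur et al. paper, or a small ridge term is added to SW before calling the solver; the sketch assumes SW is positive definite.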

Page 35: Face Recognition and Feature Subspaces

Illustration

(Figure: the same two-class example in the (x1, x2) plane, showing S1, S2, SW = S1 + S2, and SB.)

Page 36: Face Recognition and Feature Subspaces

Recognition with FLD

• Similar to "eigenfaces"

• Compute the within-class and between-class scatter matrices:
  Si = Σ_{xk ∈ χi} (xk – µi)(xk – µi)T,   SW = Σ_{i=1..c} Si,   SB = Σ_{i=1..c} Ni (µi – µ)(µi – µ)T

• Solve the generalized eigenvector problem:
  Wopt = argmax_W |WT SB W| / |WT SW W|,   i.e.   SB wi = λi SW wi,  i = 1, …, m

• Project to the FLD subspace (x̂ = WoptT x) and classify by nearest neighbor (see the sketch below)
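A minimal sketch of this recipe (illustrative; it reuses the hypothetical scatter_matrices and fisher_directions helpers from the earlier sketches):

```python
import numpy as np

def train_fld(X, y, m):
    """Fit an FLD subspace on training data X (N, d) with labels y.
    Returns the projection W_opt and the projected training points Z."""
    S_W, S_B = scatter_matrices(X, y)
    W_opt = fisher_directions(S_B, S_W, m)
    Z = X @ W_opt                            # project training data into the FLD subspace
    return W_opt, Z

def classify_fld(x, W_opt, Z, y):
    """Nearest-neighbor classification of a novel face x in the FLD subspace."""
    z = W_opt.T @ x                          # x_hat = W_opt^T x
    return y[int(np.argmin(np.linalg.norm(Z - z, axis=1)))]
```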

Page 37: Face Recognition and Feature Subspaces

Results: Eigenface vs. Fisherface

• Variation in Facial Expression, Eyewear, and Lighting

• Input: 160 images of 16 people
• Train: 159 images
• Test: 1 image

  – With glasses
  – Without glasses
  – 3 lighting conditions
  – 5 expressions

Reference: Eigenfaces vs. Fisherfaces, Belhumeur et al., PAMI 1997

Page 38: Face Recognition and Feature Subspaces

Eigenfaces vs. Fisherfaces

Reference: Eigenfaces vs. Fisherfaces, Belhumeur et al., PAMI 1997

Page 39: Face Recognition and Feature Subspaces

Large-scale comparison of methods

• FRVT 2006 Report
• Not much (or any) information is available about the methods, but it gives an idea of what is doable

Page 40: Face Recognition and Feature Subspaces

FRVT Challenge

• Frontal faces
  – FRVT 2006 evaluation

False Rejection Rate at False Acceptance Rate = 0.001

Page 41: Face Recognition and Feature Subspaces

FRVT Challenge

• Frontal faces
  – FRVT 2006 evaluation: controlled illumination

Page 42: Face Recognition and Feature Subspaces

FRVT Challenge

• Frontal faces
  – FRVT 2006 evaluation: uncontrolled illumination

Page 43: Face Recognition and Feature Subspaces

FRVT Challenge

• Frontal faces
  – FRVT 2006 evaluation: computers win!

Page 44: Face Recognition and Feature Subspaces

Face recognition by humans

Face recognition by humans: 20 results (2005)

Slides by Jianchao Yang

Page 45: Face Recognition and Feature Subspaces
Page 46: Face Recognition and Feature Subspaces
Page 47: Face Recognition and Feature Subspaces
Page 48: Face Recognition and Feature Subspaces
Page 49: Face Recognition and Feature Subspaces
Page 50: Face Recognition and Feature Subspaces
Page 51: Face Recognition and Feature Subspaces
Page 52: Face Recognition and Feature Subspaces
Page 53: Face Recognition and Feature Subspaces
Page 54: Face Recognition and Feature Subspaces

Things to remember

• PCA is a generally useful dimensionality reduction technique
  – But not ideal for discrimination

• FLD is better for discrimination, though only ideal under Gaussian data assumptions

• Computer face recognition works very well in controlled environments; there is still room for improvement in general conditions

Page 55: Face Recognition and Feature Subspaces

Next class

• Image categorization: features and classifiers

