
Point Distribution Models

Jan Kybic

winter semester 2007

Point distribution models

(Cootes et al., 1992)

- Shape description technique
- A family of shapes = mean + eigenvectors (eigenshapes)
- Shapes described by points

Point distribution model procedure

Input:

- M training samples
- N points each:

  x_i = (x_{i1}, y_{i1}, x_{i2}, y_{i2}, \dots, x_{iN}, y_{iN})^T

Procedure:

- Rigidly align all shapes
- Calculate the mean and the covariance matrix
- PCA (eigen-analysis): find the principal modes

Rigid alignment

[Figure: shapes before alignment (left) and after alignment (right).]

Aligning two shapes

x^{(1)} = \left( x^{(1)}_1, y^{(1)}_1, x^{(1)}_2, y^{(1)}_2, \dots, x^{(1)}_N, y^{(1)}_N \right)^T

x^{(2)} = \left( x^{(2)}_1, y^{(2)}_1, x^{(2)}_2, y^{(2)}_2, \dots, x^{(2)}_N, y^{(2)}_N \right)^T

Find a transformation (rotation, translation, scaling) of x^{(2)},

T\begin{bmatrix} x^{(2)}_i \\ y^{(2)}_i \end{bmatrix}
= s R \begin{bmatrix} x^{(2)}_i \\ y^{(2)}_i \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}
= \begin{bmatrix} x^{(2)}_i s\cos\theta - y^{(2)}_i s\sin\theta \\ x^{(2)}_i s\sin\theta + y^{(2)}_i s\cos\theta \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}

such that the sum of squared distances

E = \sum_{i=1}^{N} w_i \left\| s \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x^{(2)}_i \\ y^{(2)}_i \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} - \begin{bmatrix} x^{(1)}_i \\ y^{(1)}_i \end{bmatrix} \right\|^2

is minimized (the sum runs over the N corresponding point pairs, with weights w_i).

Minimize E(\theta, s, t_x, t_y) as \min_\theta \min_{s, t_x, t_y} E_\theta(s, t_x, t_y).

- Inner minimization with respect to s, t_x, t_y: setting

  \frac{\partial E}{\partial t_x} = 0, \qquad \frac{\partial E}{\partial t_y} = 0, \qquad \frac{\partial E}{\partial s} = 0

  leads to the linear equations

  s \sum_{i=1}^{N} w_i\, q(y_i, -x_i, \theta) - t_x \sum_{i=1}^{N} w_i = -\sum_{i=1}^{N} w_i\, x'_i

  s \sum_{i=1}^{N} w_i\, q(-x_i, -y_i, \theta) - t_y \sum_{i=1}^{N} w_i = -\sum_{i=1}^{N} w_i\, y'_i

  s \sum_{i=1}^{N} w_i \left( q^2(y_i, -x_i, \theta) + q^2(x_i, y_i, \theta) \right)
  - t_x \sum_{i=1}^{N} w_i\, q(y_i, -x_i, \theta)
  - t_y \sum_{i=1}^{N} w_i\, q(-x_i, -y_i, \theta)
  = -\sum_{i=1}^{N} w_i\, x'_i\, q(y_i, -x_i, \theta) + \sum_{i=1}^{N} w_i\, y'_i\, q(x_i, y_i, \theta)

  where q(a, b, \theta) = a \sin\theta + b \cos\theta, (x_i, y_i) are the points of x^{(2)}, and (x'_i, y'_i) the points of x^{(1)}.

- Outer minimization with respect to \theta: a one-dimensional function minimization, e.g. Brent's routine or golden section search.

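The two-step minimization above maps directly to code: for a fixed θ the inner problem in (s, t_x, t_y) is a weighted linear least-squares solve, and the outer problem is a one-dimensional search over θ. Below is a minimal sketch in Python/NumPy; the function names are illustrative, shapes are assumed to be (N, 2) arrays, and SciPy's bounded Brent minimizer stands in for "Brent's routine".

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_pose_given_theta(x1, x2, w, theta):
    """Inner step: solve the 3x3 linear system for (s, tx, ty) at fixed theta."""
    c, sn = np.cos(theta), np.sin(theta)
    a = x2[:, 0] * c - x2[:, 1] * sn          # x-coordinates of the rotated points
    b = x2[:, 0] * sn + x2[:, 1] * c          # y-coordinates of the rotated points
    W = w.sum()
    # Normal equations of the weighted least-squares problem in (s, tx, ty).
    A = np.array([[np.sum(w * (a**2 + b**2)), np.sum(w * a), np.sum(w * b)],
                  [np.sum(w * a),             W,             0.0],
                  [np.sum(w * b),             0.0,           W]])
    rhs = np.array([np.sum(w * (x1[:, 0] * a + x1[:, 1] * b)),
                    np.sum(w * x1[:, 0]),
                    np.sum(w * x1[:, 1])])
    return np.linalg.solve(A, rhs)            # (s, tx, ty)

def alignment_error(x1, x2, w, theta):
    """E(theta) with the inner variables already minimized out."""
    s, tx, ty = fit_pose_given_theta(x1, x2, w, theta)
    c, sn = np.cos(theta), np.sin(theta)
    R = np.array([[c, -sn], [sn, c]])
    d = s * x2 @ R.T + np.array([tx, ty]) - x1
    return np.sum(w * np.sum(d**2, axis=1))

def align(x1, x2, w):
    """Outer step: 1-D minimization over theta (bounded Brent)."""
    res = minimize_scalar(lambda t: alignment_error(x1, x2, w, t),
                          bounds=(-np.pi, np.pi), method='bounded')
    theta = res.x
    s, tx, ty = fit_pose_given_theta(x1, x2, w, theta)
    return theta, s, tx, ty
```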

Aligning all training shapes

[Figure: training shapes before alignment (left) and after alignment (right).]

- Align each x_i with x_1, for i = 2, 3, \dots, M, obtaining \{x_1, x_2, x_3, \dots, x_M\}.
- Calculate the mean \bar{x} = [\bar{x}_1, \bar{y}_1, \bar{x}_2, \bar{y}_2, \dots, \bar{x}_N, \bar{y}_N]^T of the aligned shapes \{x_1, x_2, x_3, \dots, x_M\}:

  \bar{x}_j = \frac{1}{M} \sum_{i=1}^{M} x_{ij} \qquad \text{and} \qquad \bar{y}_j = \frac{1}{M} \sum_{i=1}^{M} y_{ij}

- Align the mean shape \bar{x} with x_1. (Necessary for convergence.)
- Align x_2, x_3, \dots, x_M to the adjusted mean.
- Repeat until convergence.


We have obtained M (mutually aligned) boundaries x_1, x_2, \dots, x_M and the mean \bar{x}.
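The iterative alignment can be sketched as follows, reusing align from the previous snippet together with a small apply_pose helper (both names are illustrative):

```python
import numpy as np

def apply_pose(x, theta, s, tx, ty):
    """Apply the similarity transform to an (N, 2) array of points."""
    c, sn = np.cos(theta), np.sin(theta)
    R = np.array([[c, -sn], [sn, c]])
    return s * x @ R.T + np.array([tx, ty])

def align_all(shapes, w, n_iter=100, tol=1e-8):
    """shapes: list of (N, 2) arrays. Returns the aligned shapes and their mean."""
    # Align every shape with the first one.
    aligned = [shapes[0]] + [apply_pose(x, *align(shapes[0], x, w))
                             for x in shapes[1:]]
    mean = np.mean(aligned, axis=0)
    for _ in range(n_iter):
        # Re-align the mean with x1 (necessary for convergence), then
        # align the remaining shapes to the adjusted mean.
        mean = apply_pose(mean, *align(shapes[0], mean, w))
        aligned[1:] = [apply_pose(x, *align(mean, x, w)) for x in aligned[1:]]
        new_mean = np.mean(aligned, axis=0)
        if np.linalg.norm(new_mean - mean) < tol:   # converged
            break
        mean = new_mean
    return aligned, mean
```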

Deriving the model

We have M boundaries x_1, x_2, \dots, x_M and the mean \bar{x}.

- Variation from the mean for each training shape:

  \delta x_i = x_i - \bar{x}

- Covariance matrix S (2N × 2N):

  S = \frac{1}{M} \sum_{i=1}^{M} \delta x_i\, (\delta x_i)^T

- Principal component analysis


Principal component analysis

- Eigen decomposition:

  S p_i = \lambda_i p_i, \qquad P = [p_1 \; p_2 \; p_3 \; \dots \; p_{2N}]

  The eigenvalues \lambda_i are real and non-negative because S is symmetric and positive semi-definite. The eigenvectors (principal components) p_i are orthogonal, so P is a basis and any shape vector x can be represented as

  x = \bar{x} + P b

- Order the eigenvectors p_i and eigenvalues \lambda_i such that \lambda_1 \geq \lambda_2 \geq \lambda_3 \geq \dots \geq \lambda_{2N}. Most of the variation is then described by the first few eigenvectors.
- Consider only the K largest eigenvalues and approximate

  x \approx \bar{x} + P_K b_K, \qquad P_K = [p_1 \; p_2 \; p_3 \; \dots \; p_K], \quad b_K = [b_1, b_2, \dots, b_K]^T

  Choose the smallest K such that \sum_{i=1}^{K} \lambda_i \geq \alpha \sum_{i=1}^{2N} \lambda_i.

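The covariance and PCA steps translate almost line for line into NumPy. The sketch below flattens each aligned shape into a length-2N vector, forms S, and keeps the smallest K that captures a fraction α of the total variance; the name build_pdm and the default α = 0.98 are illustrative choices.

```python
import numpy as np

def build_pdm(aligned, alpha=0.98):
    """aligned: list of (N, 2) arrays. Returns the mean, P_K, and eigenvalues."""
    X = np.stack([x.ravel() for x in aligned])       # (M, 2N) data matrix
    mean = X.mean(axis=0)
    dX = X - mean                                    # variations from the mean
    S = dX.T @ dX / len(aligned)                     # covariance matrix, (2N, 2N)
    lam, P = np.linalg.eigh(S)                       # eigh: S is symmetric
    lam, P = lam[::-1], P[:, ::-1]                   # sort eigenvalues descending
    lam = np.clip(lam, 0.0, None)                    # guard against round-off
    # Smallest K whose leading eigenvalues capture a fraction alpha of the total.
    K = int(np.searchsorted(np.cumsum(lam) / lam.sum(), alpha)) + 1
    return mean, P[:, :K], lam[:K]
```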

Point distribution model

- Input: M non-aligned boundaries x_1, x_2, \dots, x_M.
- Output: the mean \bar{x} and the reduced eigenvector matrix P_K.
- New shape generation:

  x = \bar{x} + P_K b_K

  For "well-behaved" shapes,

  -3\sqrt{\lambda_i} \leq b_i \leq 3\sqrt{\lambda_i}

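New shapes can then be generated by choosing each b_i within the plausible range ±3√λ_i. The uniform sampling below is an illustrative choice (the slides only state the range); build_pdm is the sketch from above.

```python
import numpy as np

def generate_shape(mean, P_K, lam, b=None, rng=None):
    """Synthesize a shape x = mean + P_K b_K; sample b if not given."""
    rng = np.random.default_rng() if rng is None else rng
    if b is None:
        # Each parameter stays within -3*sqrt(lambda_i) .. +3*sqrt(lambda_i).
        b = rng.uniform(-3.0, 3.0, size=lam.shape) * np.sqrt(lam)
    x = mean + P_K @ b
    return x.reshape(-1, 2)                          # back to (N, 2) points
```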

PDM example

[Figures: shapes before alignment (left); after alignment, with the mean shape (right).]

PDM example

[Figures: first mode (left), second mode (right).]

The mean shape is in red, the shape corresponding to -3\sqrt{\lambda} in blue, and the shape corresponding to +3\sqrt{\lambda} in green.

Active shape models

[Figures: the PDM (left); the image to fit (right).]

Fit a learned point distribution model (PDM) to a given image.

Pose and shape parameters

- The point distribution model (PDM) consists of:
  - the mean \bar{p}
  - the eigenvectors P
- A fitted model is given by:
  - pose parameters: \theta, s, t_x, t_y
  - shape parameters: b

p = P b + \bar{p}, \qquad p = [p_1 \; p_2 \; \dots \; p_N] = [x_1 \; y_1 \; x_2 \; y_2 \; \dots \; x_N \; y_N]^T

The pose parameters map each model point into the image:

\begin{bmatrix} \tilde{x}_i \\ \tilde{y}_i \end{bmatrix} = s \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}

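Instantiating the model in image coordinates from pose and shape parameters is one rotation, scaling, and translation per point; a minimal sketch (names are illustrative):

```python
import numpy as np

def model_to_image(mean, P_K, b, theta, s, tx, ty):
    """Shape parameters give model points p = P_K b + mean;
    the pose then maps each point into the image."""
    p = (mean + P_K @ b).reshape(-1, 2)              # model-frame points
    c, sn = np.cos(theta), np.sin(theta)
    R = np.array([[c, -sn], [sn, c]])
    return s * p @ R.T + np.array([tx, ty])          # image-frame points
```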

Fitting

The fitted shape in the image is

T_{s,\theta,t_x,t_y}(p) = s\, Q_\theta\, p + r_{t_x,t_y}, \qquad \text{where } p = P b + \bar{p}

(Q_\theta rotates each point and r_{t_x,t_y} translates it).

- Calculate an edge map of the image.
- For each landmark p_i, find the line normal to the shape contour.
- The new position p'_i is the maximum of the edge map on that line. (If the maximum is too weak, the point does not move.)
- Adjust the pose parameters \theta, s, t_x, t_y by the alignment algorithm.
- Adjust the shape parameters b: map the new points back to the model frame and project,

  p' \leftarrow T^{-1}(p'), \qquad b' = P^{-1}(p' - \bar{p}) = P^T (p' - \bar{p})

- Repeat until convergence.

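One way to implement a single fitting iteration is sketched below. It reuses the helpers from the earlier sketches (align, model_to_image), takes the edge map as a 2-D array, and adds a clamp of b to ±3√λ_i to keep the shape plausible; the normal estimation, search length, and edge-strength threshold are assumptions, not part of the lecture.

```python
import numpy as np

def contour_normals(pts):
    """Approximate unit normals from neighbouring points on a closed contour."""
    t = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)   # tangents
    n = np.stack([-t[:, 1], t[:, 0]], axis=1)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def best_edge_along_normal(edge, pt, n, half_len=10, min_strength=0.1):
    """Scan the edge map along the normal; keep pt if the best response is weak."""
    offsets = np.arange(-half_len, half_len + 1)
    cand = pt + offsets[:, None] * n                         # samples, (x, y) order
    ij = np.clip(np.round(cand).astype(int), 0,
                 np.array(edge.shape)[::-1] - 1)             # stay inside the image
    vals = edge[ij[:, 1], ij[:, 0]]                          # (row, col) = (y, x)
    return cand[np.argmax(vals)] if vals.max() > min_strength else pt

def asm_iteration(edge, mean, P_K, lam, b, pose, w):
    theta, s, tx, ty = pose
    pts = model_to_image(mean, P_K, b, theta, s, tx, ty)
    normals = contour_normals(pts)
    target = np.array([best_edge_along_normal(edge, p, n)
                       for p, n in zip(pts, normals)])
    # Pose update: re-align the model points with the target points.
    theta, s, tx, ty = align(target, (mean + P_K @ b).reshape(-1, 2), w)
    # Shape update: map targets back to the model frame, project onto P_K.
    c, sn = np.cos(theta), np.sin(theta)
    local = ((target - np.array([tx, ty])) / s) @ np.array([[c, -sn], [sn, c]])
    b = P_K.T @ (local.ravel() - mean)
    b = np.clip(b, -3 * np.sqrt(lam), 3 * np.sqrt(lam))      # keep shape plausible
    return b, (theta, s, tx, ty)
```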

Example

[Figures: hand image (left); edge map with the initial shape (right).]

The image is smoothed and a gradient magnitude image is calculated in each color channel. The edge map is the maximum over the three color channels, thresholded to obtain a clean background.
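A sketch of this edge-map computation using SciPy's ndimage filters; the smoothing σ and the threshold fraction are illustrative values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_map(img, sigma=2.0, thresh=0.1):
    """img: (H, W, 3) float RGB image. Returns a thresholded edge map."""
    channels = []
    for c in range(img.shape[2]):
        g = gaussian_filter(img[:, :, c], sigma)     # smooth each color channel
        gx, gy = sobel(g, axis=1), sobel(g, axis=0)  # horizontal/vertical gradients
        channels.append(np.hypot(gx, gy))            # gradient magnitude
    E = np.max(channels, axis=0)                     # maximum over the channels
    E[E < thresh * E.max()] = 0.0                    # threshold: clean background
    return E
```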

Example

[Figures: first iteration (left); final position (right).]