
Iterative Closest Point (ICP) Algorithm. L1 solution...

Yaroslav Halchenko

CS @ NJIT


Registration

[Figure: three intensity images (Min = 0, Max = 250/254, axes 50–250) and a 3-D point plot (axes −0.05 to 0.1) illustrating the registration example.]


Registration

[Figure: 3-D point plot of the registration example (axes −0.05 to 0.1).]


Iterative Closest Point

ICP is a straightforward method [Besl 1992] to align two free-form shapes (model X, object P):

Initial transformation

Iterative procedure to converge to a local minimum (a minimal sketch follows below):
  1. ∀p ∈ P find the closest point x ∈ X
  2. Transform P_{k+1} ← Q(P_k) to minimize the distances between each p and x
  3. Terminate when the change in the error falls below a preset threshold

Choose the best among the solutions found for different initial positions
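The loop above fits in a few lines; the following is a minimal point-to-point sketch in Python/NumPy (my illustration, not the implementation described later in this work). It uses a brute-force closest-point search and the standard SVD-based closed-form least-squares step for Q; the function name, defaults, and toy data are assumptions.

```python
import numpy as np

def icp(P, X, n_iter=50, tol=1e-8):
    """Minimal point-to-point ICP aligning object P (n x d) to model X (m x d)."""
    d = P.shape[1]
    R, t, Pk, prev_err = np.eye(d), np.zeros(d), P.copy(), np.inf
    for _ in range(n_iter):
        # 1. For every p in P find the closest point x in X (brute-force search).
        nearest = X[((Pk[:, None, :] - X[None, :, :]) ** 2).sum(-1).argmin(1)]
        # 2. Closed-form least-squares rigid transform via SVD of the cross-covariance.
        mu_p, mu_x = Pk.mean(0), nearest.mean(0)
        U, _, Vt = np.linalg.svd((Pk - mu_p).T @ (nearest - mu_x))
        D = np.eye(d)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # keep det(Rk) = +1
        Rk = Vt.T @ D @ U.T
        tk = mu_x - Rk @ mu_p
        Pk = Pk @ Rk.T + tk                              # apply Q to the object points
        R, t = Rk @ R, Rk @ t + tk                       # accumulate the total transform
        # 3. Terminate when the change in the mean squared error falls below tol.
        err = ((Pk - nearest) ** 2).sum(-1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t, Pk

# Toy usage: a slightly rotated/translated copy of the model should be recovered
# (subject to the local-minimum caveat above).
X = np.random.default_rng(1).normal(size=(100, 2))
c, s = np.cos(0.2), np.sin(0.2)
P = X @ np.array([[c, -s], [s, c]]).T + np.array([0.3, -0.1])
R_est, t_est, P_aligned = icp(P, X)
```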


Specifics of Original ICP

Converges to local minima

Based on minimizing squared-error

Suggests ‘Accelerated ICP’


ICP Refinements

Different methods/strategies:

to speed up closest point selection (see the k-d tree sketch below):
  k-d trees, dynamic caching
  sampling of model and object points

to avoid local minima:
  removal of outliers
  stochastic ICP, simulated annealing, weighting
  other metrics (point-to-surface vs. point-to-point)
  additional information besides geometry (color, curvature)


All closed-form solutions are for squared error on distances
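To illustrate the first speed-up strategy above, the brute-force closest-point search from the earlier sketch can be replaced by k-d tree queries. A minimal sketch assuming SciPy (scipy.spatial.cKDTree); the point sets are made up:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))   # model points
P = rng.normal(size=(2_000, 3))    # object points (current pose)

tree = cKDTree(X)                  # built once per model
dist, idx = tree.query(P, k=1)     # nearest model point for every object point
closest = X[idx]                   # replaces the O(|P|·|X|) brute-force search
```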


Found on the Web

Tons of papers/reviews/articles

No publicly available Matlab code

Registration Magic Toolkit (http://asad.ods.org/RegMagicTKDoc) - full-featured registration toolkit with modified ICP


Implemented in This Work

Original ICP Method [Besl 1992]

Optional caching of computed distances


Absolute Distances or L1 norm

Why bother?

More stable in the presence of outliers (a small numerical illustration follows below)

A better statistical estimator in the case of non-Gaussian noise (sparse, high-kurtosis)

Might help to avoid local minima


How?

use some parametric approximation for y = |x| and do non-linear optimization

present this as a convex linear programming problem
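As a small numerical illustration of the robustness claims above (my example, not from the original slides): the L2 location estimate is the mean, whereas the L1 estimate is the median, and only the latter is essentially unaffected by a strong outlier.

```python
import numpy as np

x = np.array([0.9, 1.0, 1.1, 1.0, 25.0])  # one strong outlier
print(x.mean())      # 5.8 : minimizes sum (x_i - c)^2, dragged away by the outlier
print(np.median(x))  # 1.0 : minimizes sum |x_i - c|, essentially unaffected
```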


LP: Formulation

Absolute Values y = |x|

x ≤ y and −x ≤ y while minimizing y

Euclidean Distance: ‖~v‖ = √(v_x² + v_y²)

In the LP, the norm of a vector ~v is handled through its projections onto fixed unit directions r_x, r_y:

|r_x · ~v| ≤ ‖~v‖,   |r_y · ~v| ≤ ‖~v‖

[Figure: a vector ~v and the magnitudes of its projections onto several directions (0.00, 1.34, 2.00, 3.54, 4.58, 4.82, 5.00), illustrating the approximation of ‖~v‖.]
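To make the y = |x| encoding above concrete, here is a hedged sketch of the same trick applied to a toy L1 fitting problem (not the registration LP itself), assuming SciPy's scipy.optimize.linprog: each residual gets an auxiliary variable y_i with residual_i ≤ y_i and −residual_i ≤ y_i, and Σ y_i is minimized. The variable names and toy data are my own.

```python
import numpy as np
from scipy.optimize import linprog

# Toy L1 fit: minimize sum_i |a_i . theta - b_i| using the encoding above.
rng = np.random.default_rng(0)
n, p = 50, 2
A = np.column_stack([rng.uniform(-1, 1, n), np.ones(n)])    # design matrix [x, 1]
b = A @ np.array([2.0, -0.5]) + 0.01 * rng.normal(size=n)   # noisy observations
b[:5] += 5.0                                                # a few strong outliers

c = np.concatenate([np.zeros(p), np.ones(n)])               # objective: sum of y_i
A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])        # A@theta - y <= b, -A@theta - y <= -b
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * p + [(0, None)] * n               # theta free, y_i >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("L1 estimate:", res.x[:p])   # should stay close to [2.0, -0.5] despite the outliers
```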


LP: Rigid Transformation

Arguments: rotation matrix R and translation vector ~t

Rigid Transformation:

~̇p = R~p + ~t


Problem: How to ensure that R is a rotation matrix?

“Solution”: Take a set of “support” vectors in object space and specify their lengths explicitly:

‖~̇p_j − ~̇p_k‖ − ‖~p_j − ~p_k‖ = 0   ~p_j, ~p_k ∈ P


LP

~̇p = R~p + ~t

‖~̇p_i − ~x_i‖ − d_i = 0   ∀i, s.t. ~p_i ∈ P, ~x_i ∈ X

‖~̇p_j − ~̇p_k‖ − ‖~p_j − ~p_k‖ = 0   ~p_j, ~p_k ∈ P

Objective: minimize C = ∑_i d_i


LP: Problems

Contraction (shrinking):

‖~̇p_j − ~̇p_k‖ − ‖~p_j − ~p_k‖ = 0

is actually

‖~̇p_j − ~̇p_k‖ − ‖~p_j − ~p_k‖ ≤ 0

The R matrix needs to be “normalized” to the nearest orthonormal matrix due to our ‖x‖ LP approximation, even if no contraction occurred (one standard way is sketched below).
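One standard way to perform this normalization is the SVD-based orthogonal Procrustes projection; the slides do not spell out the exact procedure used, so treat the sketch below (and its toy matrix) as an assumption:

```python
import numpy as np

def nearest_rotation(M):
    """Project a (possibly slightly non-orthonormal) matrix onto the nearest
    rotation in the Frobenius sense: M = U S Vt  ->  R = U Vt, det forced to +1."""
    U, _, Vt = np.linalg.svd(M)
    D = np.eye(M.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))   # flip the smallest direction if det < 0
    return U @ D @ Vt

M = np.array([[0.98, -0.21], [0.20, 0.97]])  # hypothetical, slightly distorted LP output for R
R = nearest_rotation(M)
print(R @ R.T)                                # ~ identity, det(R) = +1 after normalization
```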


LP: Results

[Figure: 3-D plot accompanying the LP results.]


LP: Results

[Figure: Rotation (R) and Translation (t) vs. # of outliers (0–600), comparing the 2nd-norm and 1st-norm solutions.]


LP: Conclusions

The presented formulation makes it possible to minimize the L1 error instead of the commonly used L2 error.

Using the L1 norm improved the solution in the presence of strong outliers.


