4/13/2015

Image Segmentation by Fitting a Model
What is image segmentation?
Technically speaking, image segmentation refers to the decomposition of a scene into its different components, so as to facilitate higher-level tasks such as object detection and recognition.
Scientifically speaking, segmentation is a hypothetical middle-level vision task performed by neurons between low-level and high-level cortical areas.
Fitting or Grouping (based on fitting a geometrical model): here we have a set of distinct data items, and we collect sets of data items that make sense together according to our model, e.g.
a) Collecting together tokens that, taken together, form a line or other geometry.
b) Collecting together tokens that seem to share a fundamental matrix.
The key issue here: to determine what representation is suitable for the problem in hand (supervised approach).
Hough transform
Image spatial space vs. Hough parameter space.
The Hough transform maps the image from the spatial plane to the Hough parameter plane, i.e. it converts the image from the spatial coordinate domain (x, y) to either:
- the (m, b) Hough parameter plane, where a line is represented as y = mx + b (slope-intercept representation), or
- the (r, θ) Hough parameter plane, where a line is represented as x cos θ + y sin θ = r (polar representation).
[Figure: a line in the (x, y) image plane and the corresponding (m, b) parameter axes.]
(m, b) Hough parameter space:
A line in the image corresponds to a point in Hough space. Where is the line that contains both (x0, y0) and (x1, y1)? It lies at the intersection of the Hough-space lines b = -x0·m + y0 and b = -x1·m + y1.
What does a point (x0, y0) in the image space map to in Hough space? There are many lines passing through the point (x0, y0). What they have in common is that they satisfy y0 = m·x0 + b for some pair of parameters (m, b), i.e. the solutions of b = -x0·m + y0, which is a line in Hough space.
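As a quick check of this duality, here is a small sketch (illustrative, not from the slides; the function name is my choice): intersecting the two Hough-space lines for a pair of image points recovers the slope and intercept of the image line through them.

```python
def hough_intersection(p0, p1):
    """Intersect the Hough-space lines b = -x0*m + y0 and b = -x1*m + y1.

    Setting -x0*m + y0 = -x1*m + y1 gives m = (y1 - y0) / (x1 - x0);
    back-substitution gives b. The result is exactly the (m, b) of the
    image-space line through both points.
    """
    (x0, y0), (x1, y1) = p0, p1
    m = (y1 - y0) / (x1 - x0)  # assumes x0 != x1 (non-vertical line)
    b = y0 - m * x0
    return m, b

# The line through (1, 3) and (3, 7) is y = 2x + 1.
m, b = hough_intersection((1, 3), (3, 7))
```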
(r, θ) Hough parameter space:
Problems with the line equation y = mx + b in (m, b) space:
- Unbounded parameter domain.
- Vertical lines require infinite m, which the accumulator array cannot represent.
The alternative: the polar (also called normal) representation of straight lines, x cos θ + y sin θ = r. Each point (xi, yi) in the xy-plane gives a sinusoid r = xi cos θ + yi sin θ in the (θ, r) parameter space (or plane).
Each curve in the figure represents the family of lines that pass through a particular point (xi, yi) in the xy-plane.
(r, θ) Hough parameter space:
N collinear points lying on a line give N curves that intersect at a single cell (θj, ri) in the parameter plane, i.e. the sinusoids corresponding to collinear points intersect at a unique point.
e.g.
Line: 0.6x + 0.8y = 2.4
Sinusoids intersect at: r = 2.4, θ = 0.9273 rad ≈ 53.13° (since cos θ = 0.6 and sin θ = 0.8)
Hough Transform Algorithm
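A minimal Python sketch of the voting algorithm (the function name and the 0.1 quantization of r are my choices, not from the slides): each point casts one vote per candidate angle, and collinear points pile up votes in the same cell.

```python
import numpy as np

def hough_accumulator(points, thetas_deg=(-45, 0, 45, 90)):
    """Sparse (theta, r) accumulator for line detection.

    For each point (x, y) and each candidate angle theta, compute
    r = x*cos(theta) + y*sin(theta), quantize r, and cast one vote.
    Collinear points vote for the same (theta, r) cell.
    """
    votes = {}
    for x, y in points:
        for t_deg in thetas_deg:
            t = np.deg2rad(t_deg)
            r = round(float(x * np.cos(t) + y * np.sin(t)), 1)
            votes[(t_deg, r)] = votes.get((t_deg, r), 0) + 1
    return votes

# Points from the worked example later in the lecture; the two strongest
# cells, (0 deg, r=2) and (90 deg, r=3), correspond to x = 2 and y = 3.
points = [(2, 0), (1, 1), (2, 1), (1, 3), (2, 3), (4, 3), (3, 4)]
votes = hough_accumulator(points)
```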
Given the following points and discrete values of θ, the calculated values of r = x·cos θ + y·sin θ are:

S.No.  (x, y)    θ=-45°   θ=0°   θ=45°   θ=90°
1      (2, 0)     1.4      2      1.4     0
2      (1, 1)     0        1      1.4     1
3      (2, 1)     0.7      2      2.1     1
4      (1, 3)    -1.4      1      2.8     3
5      (2, 3)    -0.7      2      3.5     3
6      (4, 3)     0.7      4      4.9     3
7      (3, 4)    -0.7      3      4.9     4

Accumulator matrix (number of votes for each r value at each θ; blank cells are zero):

S.No.    r      θ=-45°   θ=0°   θ=45°   θ=90°
1       -1.4      1
2       -0.7      2
3        0        1                       1
4        0.7      2
5        1                 2              2
6        1.4      1               2
7        2                 3
8        2.1                      1
9        2.8                      1
10       3                 1              3
11       3.5                      1
12       4                 1              1
13       4.9                      2
The two equal largest values occur at (r, θ) = (2, 0°) and (3, 90°). The corresponding lines are:
x cos 0° + y sin 0° = 2, i.e. x = 2
x cos 90° + y sin 90° = 3, i.e. y = 3
Data points
Data without outliers or noise
In the presence of outliers or noise
Random data points
Least Square method to fit a line
Fitting aim: to determine values for the slope m and the intercept b in the equation:
y = m x + b
Fitting requires some measure of the error between the data and the line. The overall measure of error is:
E(m, b) = Σi (yi - (m·xi + b))²
Least squares gives the best fit when the errors follow a Gaussian distribution.
Now find the values of m and b that minimize this error. To locate the minimum, set the derivatives of E with respect to m and b to zero.
Least Square method to fit a line
Derivative with respect to m:
∂E/∂m = Σi 2(yi - m·xi - b)(-xi) = 0
⇒ m Σi xi² + b Σi xi = Σi xi·yi    (Eq. 1)
Least Square method to fit a line
Derivative with respect to b:
∂E/∂b = Σi 2(yi - m·xi - b)(-1) = 0
⇒ m Σi xi + n·b = Σi yi    (Eq. 2)
Least Square method to fit a line
In standard notation, for n data points, these two equations form the linear system:
m Σ xi² + b Σ xi = Σ xi·yi
m Σ xi + n·b = Σ yi
Solving the system, the values of m and b are given as:
m = (n Σ xi·yi - Σ xi · Σ yi) / (n Σ xi² - (Σ xi)²)
b = (Σ yi - m Σ xi) / n
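The closed-form solution can be sketched in a few lines of Python (an illustrative implementation; the function name is my choice):

```python
def fit_line_least_squares(xs, ys):
    """Closed-form least-squares fit of y = m*x + b.

    Solves the two normal equations from the derivation:
        m*sum(x_i^2) + b*sum(x_i) = sum(x_i*y_i)   (Eq. 1)
        m*sum(x_i)   + b*n        = sum(y_i)       (Eq. 2)
    """
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Noise-free check: points on y = 2x + 1 should recover m = 2, b = 1.
m, b = fit_line_least_squares([0, 1, 2, 3], [1, 3, 5, 7])
```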
Least Square method to fit a line
[Figures: least-squares fits on the example data]
RANSAC (RANdom SAmple Consensus)
RANSAC is a robust method for fitting a line in the presence of many outliers.
View estimation as a two-stage process:
- Classify data points as outliers or inliers.
- Fit the model to the inliers.
RANSAC is a re-sampling technique that generates candidate solutions by using the minimum number of observations (data points) required to estimate the underlying model parameters. Developed by M. A. Fischler and R. C. Bolles.
Outline of RANSAC:
1. Randomly select a sample of s data points from S and instantiate the model from this subset.
2. Determine the set of data points Si that lie within a distance threshold t of the model. This set Si is the consensus set of the sample and defines the inliers of S.
3. If the size of Si (the number of inliers) is greater than some threshold T, re-estimate the model using all the points in Si and terminate.
4. If the size of Si is less than T, select a new subset and repeat the above.
5. After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in Si.
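The outline above can be sketched for line fitting as follows (a minimal illustration; the function name, default threshold t, trial count, and seed are assumptions, and for simplicity it keeps the best consensus set rather than terminating early at threshold T):

```python
import random

def ransac_line(points, s=2, t=0.5, n_trials=100, seed=0):
    """Minimal RANSAC sketch for fitting y = m*x + b.

    Repeatedly sample s = 2 points, fit the exact line through them, and
    keep the model whose consensus set (points within distance t of the
    line) is largest.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_trials):
        (x0, y0), (x1, y1) = rng.sample(points, s)
        if x0 == x1:
            continue  # skip vertical samples in this slope-intercept sketch
        m = (y1 - y0) / (x1 - x0)
        b = y0 - m * x0
        # perpendicular distance of (x, y) from the line m*x - y + b = 0
        denom = (m * m + 1) ** 0.5
        inliers = [(x, y) for x, y in points
                   if abs(m * x - y + b) / denom <= t]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Ten points on y = x plus two gross outliers: RANSAC recovers the line.
pts = [(i, i) for i in range(10)] + [(0, 8), (2, -5)]
model, inliers = ransac_line(pts)
```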
Example of RANSAC
[Figures: a sequence of random samples, each showing the fitted line and its consensus set]
Best consensus set obtained over all the samples drawn in the complete process.
Example of RANSAC
Again a low consensus set, due to a further random sample.
Comparison of Least square & RANSAC
[Figures: least-square-based fitting vs. RANSAC-based fitting]
How many samples are necessary (N)? Using all possible samples is often infeasible. Instead, pick N to ensure a probability p that at least one sample (containing s points) consists entirely of inliers.
Let
a) the probability that any selected data point is an inlier = u
b) the probability of observing an outlier = v = 1 - u.
Then the number of iterations N satisfies:
1 - p = (1 - u^s)^N
or
N = log(1 - p) / log(1 - u^s) = log(1 - p) / log(1 - (1 - v)^s)
Example: N for the line-fitting problem
n = 12 points (total number of points).
Minimal sample size s = 2.
Take the proportion of outliers as v = 20% (u = 0.8); note that 2 outliers out of 12 points would give v = 2/12 ≈ 17%, rounded up here to 20%.
For probability p = 0.99 (a 99% chance of drawing at least one all-inlier sample):
N = log(1 - 0.99) / log(1 - 0.8²) = log(0.01) / log(0.36) ≈ 4.5, so N = 5.
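The formula for N is easy to evaluate directly (an illustrative helper; the function name is my choice):

```python
from math import ceil, log

def ransac_trials(p, v, s):
    """N such that, with probability p, at least one of N random samples
    of s points is outlier-free, given outlier fraction v (so u = 1 - v).

    From 1 - p = (1 - u^s)^N:  N = log(1 - p) / log(1 - u^s).
    """
    u = 1.0 - v
    return ceil(log(1.0 - p) / log(1.0 - u ** s))

# The line-fitting example: s = 2, v = 20%, p = 0.99 gives N = 5.
N = ransac_trials(p=0.99, v=0.20, s=2)
```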
Analysis of RANSAC