Page 1: Segmentation Course web page: vision.cis.udel.edu/~cv May 7, 2003  Lecture 31.

Segmentation

Course web page:vision.cis.udel.edu/~cv

May 7, 2003 Lecture 31

Page 2

Announcements

• Read Chapter 22-22.1 in Forsyth & Ponce on classification for Friday

• HW 5 due Friday
• Class on Friday is at 4 pm due to Honors Day

Page 3

Outline

• Graph theory basics
• Eigenvector methods for segmentation
• Hough transform

Page 4

Graph Theory Terminology

• Graph G: Set of vertices V and edges E connecting pairs of vertices

• Each edge is represented by the vertices (a, b) it joins

• A weighted graph has a weight associated with each edge w(a, b)

• Connectivity
– Two vertices are connected if there is a sequence of edges joining them
– A graph is connected if all of its vertices are connected
– Any graph can be partitioned into connected components (CCs) such that each CC is a connected graph and there are no edges between vertices in different CCs

from Forsyth & Ponce

Page 5

Graphs for Clustering

• Tokens are vertices
• Weights on edges are proportional to token similarity
• Cut: “Weight” of the edges joining two sets of vertices A and B: cut(A, B) = Σ w(u, v) over all u ∈ A, v ∈ B
• Segmentation: Look for the minimum cut in the graph
– Recursively cut components until regions are uniform enough

[Figure: a graph cut separating vertex sets A and B]
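The cut weight just defined can be computed directly from a weight matrix. A minimal numpy sketch (the toy graph and values are illustrative, not from the slides):

```python
import numpy as np

def cut_weight(W, A, B):
    """Sum of edge weights joining vertex set A to vertex set B."""
    return sum(W[a, b] for a in A for b in B)

# Toy 4-vertex weighted graph: vertices {0, 1} are strongly linked,
# {2, 3} are strongly linked, and only weak edges cross between them.
W = np.array([[0, 5, 1, 0],
              [5, 0, 0, 1],
              [1, 0, 0, 5],
              [0, 1, 5, 0]], dtype=float)

print(cut_weight(W, {0, 1}, {2, 3}))  # only the two weak cross-edges count
```

Cutting between the two tightly linked pairs costs only 2.0, the minimum over all bipartitions, which is exactly why minimum cuts are used for segmentation.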

Page 7

Representing Graphs As Matrices

• Use an N × N matrix W for an N-vertex graph

• Entry W(i, j) is weight on edge between vertices i and j

• Undirected graphs have symmetric weight matrices

[Figure: example 9-vertex graph and its weight matrix, from Forsyth & Ponce]

Page 8

Affinity Measures

• Affinity A(i, j) between tokens i and j should be proportional to similarity

• Based on a metric over some visual feature(s):
– Position: e.g., A(i, j) = exp(−(i − j)ᵀ(i − j) / 2σd²)
– Intensity
– Color
– Texture

• These are weights in an affinity graph A over tokens
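The Gaussian position affinity can be computed for all token pairs at once; a minimal numpy sketch (function name and test points are illustrative assumptions):

```python
import numpy as np

def position_affinity(points, sigma_d=1.0):
    """Gaussian affinity A(i, j) = exp(-||p_i - p_j||^2 / (2 sigma_d^2))."""
    diff = points[:, None, :] - points[None, :, :]  # all pairwise differences
    sq_dist = (diff ** 2).sum(axis=-1)
    return np.exp(-sq_dist / (2 * sigma_d ** 2))

# Two nearby tokens and one far-away token.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
A = position_affinity(pts, sigma_d=1.0)
# Nearby pairs get affinity near 1; distant pairs get affinity near 0.
```

The scale parameter σd controls how quickly affinity falls off with distance; the same construction applies to intensity, color, or texture features by swapping in the relevant feature vectors.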

Page 9

Eigenvectors and Segmentation

• Given k tokens with affinities defined by A, want partition into c clusters

• For a particular cluster n, denote the membership weights of the tokens by the vector wn
– Require normalized weights so that wnᵀ wn = 1
• The “best” assignment of tokens to cluster n is the wn that maximizes the objective function wnᵀ A wn (highest intra-cluster affinity), subject to the weight vector normalization constraint
• Using the method of Lagrange multipliers, this yields the system of equations A wn = λ wn, which means that wn is an eigenvector of A, and a solution is obtained from the eigenvector with the largest eigenvalue
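This can be sketched concretely with a toy block-structured affinity matrix (the values below are illustrative, not from the slides):

```python
import numpy as np

# Toy block-structured affinity matrix: tokens {0, 1, 2} form one
# cluster and {3, 4} another.
A = np.array([[1.0, 0.9, 0.8, 0.0, 0.0],
              [0.9, 1.0, 0.9, 0.0, 0.0],
              [0.8, 0.9, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.9],
              [0.0, 0.0, 0.0, 0.9, 1.0]])

# The leading eigenvector maximizes w^T A w subject to ||w|| = 1.
vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
w = vecs[:, -1]                  # eigenvector of the largest eigenvalue
# Quantize the membership weights to definite 0/1 memberships.
members = np.abs(w) > 0.5 * np.abs(w).max()
```

Here the leading eigenvector has its support on the first three tokens, so quantizing its entries recovers the larger cluster, as the next slide describes.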

Page 10

Eigenvectors and Segmentation

• Note that an appropriate rearrangement of the affinity matrix leads to block structure indicating clusters
• The largest eigenvectors of A tend to correspond to eigenvectors of the blocks
• So interpret the biggest c eigenvectors as cluster membership weight vectors
– Quantize weights to 0 or 1 to make memberships definite

[Figure: the example graph’s affinity matrix rearranged to show block structure, from Forsyth & Ponce]

Page 11

Normalized Cuts

• The previous approach doesn’t work when the eigenvalues of the blocks are similar
– Using only within-cluster similarity doesn’t account for between-cluster differences
– No encouragement of larger cluster sizes
• Define the association between a vertex subset A and the full vertex set V as assoc(A, V) = Σ w(u, v) over all u ∈ A, v ∈ V
• Before, we just maximized assoc(A, A); now we also want to minimize assoc(A, V). Define the normalized cut as Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)

Page 12

Normalized Cut Algorithm

• Define the diagonal degree matrix D(i, i) = Σj A(i, j)
• Define an integer membership vector x over all vertices such that each element is 1 if the vertex belongs to cluster A and −1 if it belongs to B (i.e., just two clusters)
• Define a real-valued approximation y to x
• This yields the following objective function to minimize: yᵀ(D − A)y / yᵀDy, which sets up the system of equations (D − A)y = λDy
• The eigenvector with the second-smallest eigenvalue is the solution (the smallest is always 0)
• Continue partitioning clusters if the normalized cut value is over some threshold
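A minimal sketch of a two-way normalized-cut split, solving the generalized eigenproblem through the equivalent symmetric form D^(−1/2)(D − A)D^(−1/2) (the toy graph and function name are illustrative assumptions):

```python
import numpy as np

def normalized_cut_bipartition(A):
    """Two-way split from the second-smallest generalized eigenvector of
    (D - A) y = lambda D y, via the symmetric matrix
    M = D^(-1/2) (D - A) D^(-1/2) with y = D^(-1/2) z."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.diag(d) - A                  # graph Laplacian D - A
    M = D_inv_sqrt @ L @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    y = D_inv_sqrt @ vecs[:, 1]         # skip the trivial smallest (0)
    return y > 0                        # threshold into clusters A and B

# Toy graph: {0, 1} and {2, 3} strongly linked within, weakly across.
A = np.array([[0, 5, 1, 0],
              [5, 0, 0, 1],
              [1, 0, 0, 5],
              [0, 1, 5, 0]], dtype=float)
labels = normalized_cut_bipartition(A)
```

The sign pattern of the second eigenvector separates the two tightly connected pairs; recursing on each side with a threshold on the Ncut value gives the full algorithm.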

Page 13

Example: Normalized Cut Segmentations

Affinity measures are position, intensity, and texture (from Forsyth & Ponce)

Page 14

Shape Finding

• Problem: How to efficiently find instances of shapes (e.g., lines, curves, ellipses) in an image
• Segmentation in the sense of figure-ground separation

Finding lane lines for driving after edge detection (from B. Southall & C. Taylor)

Page 15

Hough Transform

• Exhaustive search is extremely inefficient
• Basic idea of the Hough transform (HT): Change the problem from complicated pattern detection to peak finding in the parameter space of the shape
– Each pixel can lie on a family of possible shapes (e.g., for lines, the pencil of lines through that point)
– Shapes with more pixels on them have more evidence that they are present in the image
– Thus every pixel “votes” for a set of shapes, and the one(s) with the most votes “win”, i.e., exist

courtesy of Massey U.

Page 16

HT for Line Finding

• Parametrize lines by distance from the origin and angle: (r, θ)
• Every point (r, θ) in “line space” is a unique line
• The set of image points {(x, y)} on a particular line satisfies x cos θ + y sin θ = r
• The problem with the slope-intercept representation (m, n) is that it can’t handle vertical lines

Page 17

HT for Line Finding

• Fixing an image pixel (xi, yi) yields a set of points in line space {(r, θ)} corresponding to a sinusoidal curve described by r = xi cos θ + yi sin θ
• Each point on the curve in line space is a member of the pencil of lines through the pixel
• Collinear points yield curves that intersect in a single point

courtesy of R. Bock

Page 18

HT for Line Finding: Algorithm

• Set up a T × R accumulator array A quantizing line space
– Range: r ∈ [0, max(W, H)], θ ∈ [0, π]
– Bin size: Reasonable intervals
– Initial value: 0 for all bins
• For every image pixel (x, y) that is a feature/edge/etc., iterate over h ∈ [1, T] (θ(h) is the line angle for row h of A)
– Let r = x cos θ(h) + y sin θ(h)
– Find the index k of the column of A closest to r
– Increment A(h, k) by one
• Find all local maxima of A
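The accumulator loop can be sketched in a few lines of numpy (the function name, bin counts, and test line are illustrative assumptions; r is offset by the image diagonal so negative values also get bins):

```python
import numpy as np

def hough_lines(points, W, H, n_theta=180, n_r=200):
    """Accumulate votes in (theta, r) line space for the given edge pixels,
    using r = x cos(theta) + y sin(theta) with theta in [0, pi)."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    r_max = np.hypot(W, H)                    # image diagonal bounds |r|
    acc = np.zeros((n_theta, n_r), dtype=int)
    for x, y in points:
        # Sinusoid in line space traced by this single pixel.
        r = x * np.cos(thetas) + y * np.sin(thetas)
        k = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        acc[np.arange(n_theta), k] += 1       # one vote per theta row
    return acc, thetas

# Ten collinear pixels on the horizontal line y = 3: all their sinusoids
# intersect at theta = pi/2, r = 3, so that bin collects all ten votes.
pts = [(x, 3) for x in range(10)]
acc, thetas = hough_lines(pts, W=10, H=10)
h, k = np.unravel_index(acc.argmax(), acc.shape)
```

Peak finding on `acc` (ideally with non-maximum suppression, as discussed below) then recovers the detected lines.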

Page 20

Example: HT for Line Finding

Edge-detected image

Accumulator array

“De-Hough” of lines ≥ 70% of max

courtesy of Massey U.

Page 21

Hough Transform: Issues

• Noise
– Points slightly off the curve result in multiple intersections
– Can use larger bins, smooth the accumulator array
• Non-maximum suppression is a good idea to get unique peaks
• Dimensionality
– Exponential increase in the size of the accumulator array as the number of shape parameters goes up
– The HT works best for shapes with 3 or fewer variables

