SEGMENT-BASED STEREO MATCHING USING GRAPH CUTS. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’04), 1063-6919/04 $20.00 © 2004 IEEE. Li Hong and George Chen, Advanced System Technology San Diego Lab, STMicroelectronics, Inc. Guan-Yu Liu.
Transcript
Page 1

SEGMENT-BASED STEREO MATCHING USING GRAPH CUTS

Proceedings of the 2004 IEEE Computer Society Conference

on Computer Vision and Pattern Recognition (CVPR’04)

1063-6919/04 $20.00 © 2004 IEEE

Li Hong and George Chen

Advanced System Technology San Diego Lab, STMicroelectronics, Inc.

Guan-Yu Liu

Page 2

Outline
Introduction
Overview
Method
  A: Color segmentation
  B: Disparity plane estimation
  C: Disparity plane labeling by graph cuts
Experimental Results
Conclusion
Q & A

Page 3

Introduction(1/3)

Stereo algorithms can be categorized into two major classes.

The first class is local (window-based) algorithms, where the disparity at a given pixel depends only on intensity values within a finite neighboring window.

The second class is global algorithms, which make explicit smoothness assumptions about the disparity map and solve for it through various minimization techniques.

Page 4

Introduction(2/3)

Local methods can easily capture accurate disparities in highly textured regions; however, they often produce noisy disparities in textureless regions, blur disparity discontinuity boundaries, and fail in occluded areas.

Global methods define a global image similarity energy; the stereo matching problem is solved by minimizing this energy.

Page 5

Introduction(3/3)

A color-segment representation is used to reduce the large solution space and to enforce disparity smoothness in homogeneous color regions.

A weighted graph is then constructed in which the nodes represent image pixels, and the label set and edge weights correspond to the defined energy terms.

Energy function: data term + smoothness term.
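The slide names the two terms without showing a formula; a generic sketch of such an energy, consistent with the terms defined later in the Method.C slides, is

    E(f) = E_data(f) + E_smooth(f)
         = Σ_p C(p, f(p)) + Σ_{(p, q) neighbors : f(p) ≠ f(q)} V(p, q),

where C is the matching (data) cost of assigning label f(p) to element p and V penalizes label changes between neighbors. The exact terms used by this paper are defined on the Method.C slides below.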

Page 6

Method

A: Color segmentation

B: Disparity plane estimation
  1. Local matching in the pixel domain
  2. Initial plane fitting from a single segment
  3. Refined plane fitting from grouped segments

C: Disparity plane labeling by graph cuts

Page 7

Method.A(1/1)

The approach is built upon the assumption that large disparity discontinuities only occur on the boundaries of homogeneous color segments.

Therefore any color segmentation algorithm that decomposes an image into homogeneous color regions will work.

In this paper, the mean-shift color segmentation algorithm [3] is used.

[3] D. Comaniciu and P. Meer, “Robust Analysis of Feature Spaces: Color Image Segmentation,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 750-755, 1997.
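The slides give no implementation details for this step. Below is a minimal Python sketch of the idea, using OpenCV's pyrMeanShiftFiltering as a stand-in for the segmenter of [3]; the radii and the color-quantization step are illustrative choices, not the paper's parameters.

    import cv2
    import numpy as np

    def mean_shift_segments(img_bgr, spatial_radius=10, color_radius=20):
        """Rough stand-in for the mean-shift color segmentation of [3].

        Returns an integer label image; pixels sharing a label form one
        homogeneous color segment.
        """
        # Mean-shift filtering pulls each pixel toward its local color mode,
        # flattening homogeneous regions.
        filtered = cv2.pyrMeanShiftFiltering(img_bgr, sp=spatial_radius, sr=color_radius)

        # Group pixels with (near-)identical filtered color into connected segments.
        quantized = (filtered // 8).astype(np.int32)
        key = (quantized[..., 0] << 16) | (quantized[..., 1] << 8) | quantized[..., 2]
        labels = np.full(key.shape, -1, dtype=np.int32)
        next_label = 0
        for value in np.unique(key):
            n, comp = cv2.connectedComponents((key == value).astype(np.uint8))
            for c in range(1, n):
                labels[comp == c] = next_label
                next_label += 1
        return labels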

Page 8

Method

A: Color segmentation

B: Disparity plane estimation
  1. Local matching in the pixel domain
  2. Initial plane fitting from a single segment
  3. Refined plane fitting from grouped segments

C: Disparity plane labeling by graph cuts

Page 9

Method.B1(1/1)

In a standard rectified stereo setup, the correspondence between a pixel (x, y) in the reference image I and a pixel (x’, y’) in the matching image J is given by: x’ = x + d(x, y), y’ = y, where the disparity d(x, y) can take any discrete value from the displacement interval [dmin, dmax].
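The slides do not show the local matching step itself; a minimal winner-take-all sketch under exactly these assumptions (with the absolute intensity difference as the per-pixel matching cost, which is an illustrative choice) could look like this:

    import numpy as np

    def local_matching(I, J, d_min, d_max):
        """Winner-take-all disparity from per-pixel absolute differences.

        I, J: rectified grayscale reference / matching images (2-D float arrays),
        using the slide's correspondence x' = x + d(x, y), y' = y.
        """
        h, w = I.shape
        disparities = np.arange(d_min, d_max + 1)
        cost = np.full((len(disparities), h, w), np.inf)
        for k, d in enumerate(disparities):
            # Shift J so that J[y, x + d] lines up with I[y, x].
            shifted = np.full((h, w), np.inf)
            if d >= 0:
                shifted[:, :w - d] = J[:, d:]
            else:
                shifted[:, -d:] = J[:, :w + d]
            cost[k] = np.abs(I - shifted)
        # Keep, at every pixel, the disparity with the minimum matching cost.
        return disparities[np.argmin(cost, axis=0)]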


Page 10

Method.B2(1/2)

A plane is used to model the continuous disparity of each segment, i.e., d = c1·x + c2·y + c3, where (c1, c2, c3) are the plane parameters and d is the corresponding disparity of the image pixel (x, y). (c1, c2, c3) is the least-squares solution of a linear system A·[c1, c2, c3]^T = B built from the segment's pixels.

After the initial plane fitting, an iterative process is adopted to update the plane. In each iteration, each pixel's disparity is allowed to vary within a given range around the fitted plane, and the plane parameters are updated from the modified disparities.

Page 11

Method.B2(2/2)

Outliers are detected through a simple cross-check: let pixel (x’, y’) be the correspondence of pixel (x, y); if d(x, y) ≠ d(x’, y’), pixel (x, y) is considered an outlier.

A weighted least-squares scheme is adopted in the iterative process.

Very small segments are skipped as they lack sufficient data to provide reliable plane estimations.
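The transcript does not show the weighting formula, so the Gaussian down-weighting of pixels far from the current plane in the sketch below is an assumption; it only illustrates the cross-check-plus-weighted-refit loop described above.

    import numpy as np

    def fit_plane_weighted(xs, ys, ds, outlier, iters=5, sigma=1.0):
        """Iteratively fit d = c1*x + c2*y + c3 to one segment's disparities.

        xs, ys, ds: 1-D arrays of pixel coordinates and initial disparities.
        outlier: boolean mask from the cross-check (True = unreliable pixel).
        """
        A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)
        w = (~outlier).astype(float)          # cross-check outliers get zero weight
        c = np.zeros(3)
        for _ in range(iters):
            sw = np.sqrt(w)
            # Weighted least squares: minimize sum_i w_i * (A_i . c - d_i)^2.
            c, *_ = np.linalg.lstsq(sw[:, None] * A, sw * ds, rcond=None)
            residual = np.abs(A @ c - ds)
            # Down-weight pixels whose disparity lies far from the fitted plane
            # (illustrative Gaussian weighting, not the paper's exact scheme).
            w = (~outlier) * np.exp(-(residual / sigma) ** 2)
        return c                              # plane parameters (c1, c2, c3)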

Page 12

Method.B3(1/4)

The purpose is not to find the best plane for each segment but rather to extract all possible planes in the image; what matters is that the extracted set of disparity planes accurately represents the scene structure.

The steps are as follows (a code sketch is given after this list):
1. Measure the segment matching cost for each plane in the disparity plane set.
2. Assign each segment the plane ID that gives the minimum matching cost.
3. Group neighboring segments with the same plane ID.
4. Apply the plane fitting process from Method.B2 to each grouped segment.
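A compact sketch of the four steps above. The cost matrix, the segment adjacency map, and the per-segment pixel arrays are assumed to have been computed already (hypothetical inputs, named here only for illustration); fit_plane can be any routine that fits d = c1*x + c2*y + c3 to (xs, ys, ds) arrays.

    import numpy as np

    def refine_planes(costs, neighbors, segment_pixels, fit_plane):
        """costs: (num_segments, num_planes) segment matching costs (step 1).
        neighbors: dict segment -> set of adjacent segment indices.
        segment_pixels: dict segment -> (xs, ys, ds) arrays.
        """
        # Step 2: each segment takes the plane ID with minimum matching cost.
        plane_id = np.argmin(costs, axis=1)

        # Step 3: group neighboring segments sharing the same plane ID
        # (connected components over the segment adjacency graph).
        group_of, groups = {}, []
        for s in range(costs.shape[0]):
            if s in group_of:
                continue
            group_of[s], stack, members = len(groups), [s], []
            while stack:
                cur = stack.pop()
                members.append(cur)
                for nb in neighbors.get(cur, ()):
                    if nb not in group_of and plane_id[nb] == plane_id[s]:
                        group_of[nb] = len(groups)
                        stack.append(nb)
            groups.append(members)

        # Step 4: refit one plane per grouped segment.
        refined = []
        for members in groups:
            xs, ys, ds = (np.concatenate([segment_pixels[m][k] for m in members])
                          for k in range(3))
            refined.append(fit_plane(xs, ys, ds))
        return plane_id, groups, refined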

Page 13

Method.B3(2/4)

It is natural to compute it as the sum of the matching costs of the individual pixels inside the segment, i.e.,

C(S, P) = Σ_{(x, y) ∈ S} c(x, y, d),

where S is a segment, P is a disparity plane, c(x, y, d) is the pixel matching cost, and d is the disparity that plane P assigns to pixel (x, y).

However, there are problems associated with this approach; in particular, occluded pixels can easily bias the segment matching cost.

Page 14

Method.B3(3/4)

They propose two remedies:
1. Exclude all possibly occluded pixels when computing the segment matching cost.
2. Augment the sum of pixel matching costs by the percentage of pixels that do not support the disparity plane.

They consider only textured outliers as possibly occluded pixels.

Page 15

Method.B3(4/4)

Let n be the number of non-occluded pixels in a segment S, and let s be the number of pixels in S that support a disparity plane P. The segment matching cost is then the sum of the pixel matching costs over the non-occluded portion of S (i.e., excluding the occluded part O), augmented by the fraction of non-supporting pixels (n - s)/n.
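The exact equation is not legible in the transcript. The sketch below therefore assumes the augmentation takes the form (1 + (n - s)/n) and that a pixel "supports" the plane when its initial disparity is within a small tolerance of the plane's prediction; both are assumptions, only the two remedies themselves come from the slides.

    import numpy as np

    def segment_cost(xs, ys, d0, pixel_cost, plane, occluded, support_tol=1.0):
        """xs, ys: pixel coordinates of segment S; d0: their initial disparities.
        pixel_cost(x, y, d): matching cost of pixel (x, y) at disparity d.
        plane: (c1, c2, c3) so that the plane's disparity is d = c1*x + c2*y + c3.
        occluded: boolean mask over the segment's pixels (True = occluded).
        """
        c1, c2, c3 = plane
        d_plane = c1 * xs + c2 * ys + c3
        keep = ~occluded                      # remedy 1: exclude occluded pixels
        n = int(keep.sum())                   # non-occluded pixels in S
        if n == 0:
            return np.inf
        total = sum(pixel_cost(x, y, d) for x, y, d in zip(xs[keep], ys[keep], d_plane[keep]))
        # Supporting pixels: initial disparity agrees with the plane's prediction.
        s = int(np.count_nonzero(np.abs(d0[keep] - d_plane[keep]) <= support_tol))
        # Remedy 2 (assumed form): augment by the fraction of non-supporting pixels.
        return total * (1.0 + (n - s) / n)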

Page 16

Method

A: Color segmentation

B: Disparity plane estimation
  1. Local matching in the pixel domain
  2. Initial plane fitting from a single segment
  3. Refined plane fitting from grouped segments

C: Disparity plane labeling by graph cuts

Page 17

Method.C(1/3)

They describe in detail the formalization of stereo matching as an energy minimization problem in the segment domain and its solution, i.e., labeling each segment with its corresponding disparity plane by graph cuts.

Let R be the set of color segments of the reference image and D the estimated disparity plane set. The goal is to find a labeling f that assigns each segment S ∈ R a corresponding plane f(S) ∈ D, where f is both piecewise smooth and consistent with the observed data.

[11] V. Kolmogorov and R. Zabih, “Computing Visual Correspondence with Occlusions using Graph Cuts,” Proc. Int’l Conf. Computer Vision, 2001.

Page 18

Method.C(2/3)

E(f) = E_data(f) + E_smooth(f)

E_data(f) = Σ_{S ∈ R} C(S, f(S))

E_smooth(f) = Σ_{(S, S') : f(S) ≠ f(S')} u_{S,S'}

where S and S' are neighboring segments and u_{S,S'} is proportional to the common border length between S and S'; the smoothness term contributes u_{S,S'} when f(S) ≠ f(S') and 0 otherwise.
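A small sketch that evaluates this energy for a candidate labeling f. The minimization itself is done with graph cuts (alpha-expansion over the plane labels), which is not reproduced here; standard alpha-expansion implementations handle this Potts-style smoothness term.

    def labeling_energy(f, seg_cost, border_len, neighbor_pairs, u_scale=1.0):
        """f: dict segment -> assigned disparity plane.
        seg_cost(S, P): segment matching cost C(S, P) from Method.B3.
        border_len(S, S2): common border length of two neighboring segments.
        neighbor_pairs: iterable of neighboring segment pairs (S, S2).
        """
        e_data = sum(seg_cost(S, f[S]) for S in f)
        e_smooth = sum(u_scale * border_len(S, S2)      # the u_{S,S'} term
                       for S, S2 in neighbor_pairs if f[S] != f[S2])
        return e_data + e_smooth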

Page 19

Method.C(3/3)

The solution usually converges within 2-3 iterations.

Page 20

Experimental Results(1/4)

Page 21

Experimental Results(2/4)

Page 22

Experimental Results(3/4)

Page 23

Experimental Results(4/4)

Page 24

Conclusions

The segment-based approach works well for images with sharp color discontinuities and slanted disparity surfaces.

The current version of the algorithm cannot handle disparity boundaries that appear inside the initial color segments.

Page 25

A NEW SEGMENT-BASED STEREO MATCHING USING GRAPH CUTS

3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT), 2010, Volume 5

Daolei Wang and Kah Bin Lim, National University of Singapore

Guan-Yu Liu

Page 26

Flow Chart

Page 27

Initial Disparity(1/2)

In this paper, the initial disparity is obtained by a local matching approach, the Sum of Weighted Absolute intensity Differences (SWAD).

Each pixel is assigned a weight w(i, j, d) obtained from a 2D Gaussian function of d_g, the pixel's Euclidean distance from the central pixel, with a constant parameter T_g.

Page 28

Initial Disparity(2/2)

The SWAD cost aggregates the weighted absolute intensity differences over N(i, j), the 5 x 5 window centered at pixel (i, j).
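A sketch of SWAD matching over the 5 x 5 window. The exact Gaussian weight is not legible in the transcript, so the form exp(-d_g^2 / T_g) below, the value of T_g, and the use of the correspondence x' = x + d are illustrative assumptions.

    import numpy as np

    def swad_disparity(I, J, d_min, d_max, T_g=4.0):
        """Initial disparity by Sum of Weighted Absolute Differences (SWAD).

        I, J: rectified grayscale left / right images (2-D float arrays).
        """
        h, w = I.shape
        # 5 x 5 weights from a 2-D Gaussian of the distance to the window centre.
        yy, xx = np.mgrid[-2:3, -2:3]
        weight = np.exp(-(xx ** 2 + yy ** 2) / T_g)
        weight /= weight.sum()

        best_d = np.zeros((h, w), dtype=int)
        best_c = np.full((h, w), np.inf)
        pad = 2
        Ip = np.pad(I, pad, mode='edge')
        for d in range(d_min, d_max + 1):
            shifted = np.roll(J, -d, axis=1)      # aligns J[y, x + d] with I[y, x]
            Jp = np.pad(shifted, pad, mode='edge')
            cost = np.zeros((h, w))
            for dy in range(-2, 3):               # accumulate weighted |I - J| over N(i, j)
                for dx in range(-2, 3):
                    diff = np.abs(Ip[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                                  - Jp[pad + dy:pad + dy + h, pad + dx:pad + dx + w])
                    cost += weight[dy + 2, dx + 2] * diff
            better = cost < best_c
            best_d[better] = d
            best_c[better] = cost[better]
        return best_d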

Page 29

Plane Fitting(1/5)

The plane parameters c = [c1, c2, c3]^T are the least-squares solution of the linear system A·c = B, where the i-th row of A is [Xi, Yi, 1] and the i-th element of B is d(Xi, Yi).

Here, Singular Value Decomposition (SVD) is used to obtain the least-squares solution c = A⁺·B,

where A⁺ is the pseudoinverse of A, computed through SVD; this works irrespective of whether A is singular.
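A minimal numpy version of this step; np.linalg.pinv computes the pseudoinverse through SVD, so the solution is obtained whether or not A is singular.

    import numpy as np

    def fit_plane_svd(xs, ys, ds):
        """Least-squares plane d = c1*x + c2*y + c3 via the SVD pseudoinverse.

        The i-th row of A is [x_i, y_i, 1]; the i-th entry of B is d(x_i, y_i).
        """
        A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)
        B = np.asarray(ds, dtype=float)
        c = np.linalg.pinv(A) @ B    # c = A+ B
        return c                     # (c1, c2, c3)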

Page 30

Plane Fitting(2/5)

Cross-checking is adopted to identify reliable pixels and to filter out occluded pixels and low-texture areas, where disparity estimates tend to be unreliable.

Let D_L be the disparity map from the left image to the right image and D_R the disparity map from the right image to the left image; a pixel is reliable only if the two maps agree at corresponding positions.
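A sketch of the cross-check; the sign convention and the tolerance of one disparity level are assumptions, since the transcript does not show the exact condition.

    import numpy as np

    def cross_check(DL, DR, tol=1):
        """Left-right consistency check producing a per-pixel reliability mask.

        Convention assumed here: DL maps a left pixel x to its right-image match
        x + DL[y, x], and DR maps a right pixel x' back to x' + DR[y, x'], so at
        a consistent match DR = -DL and the two should cancel.
        """
        h, w = DL.shape
        ys, xs = np.mgrid[0:h, 0:w]
        x_match = np.clip(xs + DL.astype(int), 0, w - 1)
        reliable = np.abs(DL + DR[ys, x_match]) <= tol
        return reliable              # False marks occluded / unreliable pixels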

Page 31

Plane Fitting(3/5)

A rule is used to decide whether a region is reliable or unreliable:

let P_I be the ratio between the number of unreliable pixels in a segment and the total number of pixels in that segment;

if P_I exceeds a given threshold, all pixels in the segment are labeled as unreliable.
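A direct implementation of this rule; the threshold value is left as a parameter since the paper's value is not given in the transcript.

    import numpy as np

    def unreliable_pixels(labels, reliable, ratio_threshold=0.5):
        """labels: integer segment-label image; reliable: cross-check mask."""
        out = ~reliable                       # pixel-level unreliability
        for seg in np.unique(labels):
            mask = labels == seg
            # P_I: unreliable pixels in the segment / total pixels in the segment.
            p_i = np.count_nonzero(out & mask) / mask.sum()
            if p_i > ratio_threshold:
                out[mask] = True              # whole segment labeled unreliable
        return out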

Page 32

Plane Fitting(4/5)

After the above steps have filtered out the outliers, the distance from each pixel's previous disparity to the computed disparity plane is measured.

Page 33

Plane Fitting(5/5)

With outliers filtered by the above steps, the estimation of the disparity plane parameters is iterated until a convergence threshold (typically 0.99) is reached.

Page 34

Neighboring Segment Merging(1/2)

Given any two segment regions A and B, each has a fitted plane equation of the form d = c1·x + c2·y + c3.

We can then decide whether the two planes are the same from two conditions:

The angle between the two planes

The distance between the two planes

Page 35

Neighboring Segment Merging(2/2)

A Gaussian function is applied to each of the two conditions, and the results are combined into a similarity measure C.

If C exceeds a constant threshold, the two regions are considered to lie on the same plane and the two segments are merged.
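A sketch of the two conditions and their Gaussian combination. The normal direction, the use of the constant-term difference as the plane distance, the sigma values, and the merge threshold are all illustrative assumptions; the transcript only states that a Gaussian is applied to each condition and the result is compared against a constant threshold.

    import numpy as np

    def plane_similarity(pA, pB, sigma_angle=0.1, sigma_dist=1.0):
        """pA, pB: plane parameters (c1, c2, c3) of segments A and B."""
        # Condition 1: angle between the plane normals (-c1, -c2, 1).
        nA = np.array([-pA[0], -pA[1], 1.0])
        nB = np.array([-pB[0], -pB[1], 1.0])
        cos_angle = nA @ nB / (np.linalg.norm(nA) * np.linalg.norm(nB))
        angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
        # Condition 2: distance between the planes (constant-term difference
        # used as a simple proxy here).
        dist = abs(pA[2] - pB[2])
        # Gaussian of each condition, combined into one similarity score C.
        return np.exp(-(angle / sigma_angle) ** 2) * np.exp(-(dist / sigma_dist) ** 2)

    def should_merge(pA, pB, threshold=0.8):
        # Merge the two segments when the similarity C exceeds the threshold.
        return plane_similarity(pA, pB) > threshold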

Page 36

Energy Function(1/2)

E(f) = E_data(f) + E_smooth(f)

E_data(f) = Σ_{S} ( C(S, f(S)) + w_occ · N_occ(S) ),

where C(S, f(S)) is the segment matching cost, w_occ is the penalty coefficient, and N_occ(S) is the number of detected occluded pixels (which include the unreliable pixels) in segment S.

E_smooth(f) = Σ_{(Si, Sj) ∈ SN : f(Si) ≠ f(Sj)} S_disc(Si, Sj),

where SN represents the set of all adjacent segment pairs, Si and Sj are neighboring segments, and S_disc(Si, Sj) is a discontinuity penalty that incorporates the common border length and the mean color similarity, as proposed in [6].

[6] M. Gong, R. Yang, W. Liang, and M. Gong, “A performance study on different cost aggregation approaches used in real-time stereo matching,” Int. J. Computer Vision, 75(2):283-296, 2007.

Page 37

Energy Function(2/2)

The solution usually converges within 2-5 iterations. In addition, it is extremely insensitive to the initial labeling.

Page 38

Experimental Results(1/2)

Page 39

Experimental Results(2/2)

Page 40

Conclusions

The algorithm permits us to obtain a high-quality dense disparity map of a scene from its initial disparity estimate.

The paper makes three main contributions: robust disparity plane fitting, an improved hierarchical clustering algorithm for merging segments, and graph-cuts optimization of the new energy function.

The mean-shift method is a time-consuming image segmentation algorithm.

Page 41

Q & A

