iccv2009 tutorial: boosting and random forest - part III
Transcript
Page 1 (slide 1)

Copyright 2008, Toshiba Corporation.

Björn Stenger

28 Sep 2009

ICCV 2009, Kyoto

Tutorial – Part 3

Tracking Using Classification and Online Learning

Page 2 (slide 2)

Roadmap

Tracking by classification

On-line Boosting
Multiple Instance Learning

Multi-Classifier Boosting

Online Feature Selection

Adaptive Trees

Ensemble Tracking

Online Random Forest

Combining off-line & on-line

Tracking by optimization

Page 3 (slide 3)

Tracking by Optimization
Example: mean shift tracking

Given: target location in frame t, color distribution q

In frame t+1: minimize the distance d(y) = sqrt(1 - rho[p(y), q]), where rho[p, q] = sum_u sqrt(p_u(y) q_u) is the Bhattacharyya coefficient

p: candidate distribution, q: target distribution, y: location

Mean shift: iterative optimization, finds a local optimum

Extension: down-weight features that are prominent in the background

[Comaniciu et al. 00]
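To make the update concrete, here is a minimal sketch of the mean shift iteration, assuming a single-channel image whose values are already quantized into histogram bins (real trackers use color histograms and a spatial kernel). The names histogram, mean_shift_step, half, and n_bins are illustrative, not from the slides.

    import numpy as np

    def histogram(patch, n_bins):
        """Normalized histogram q_u of a quantized patch."""
        h = np.bincount(patch.ravel(), minlength=n_bins).astype(float)
        return h / h.sum()

    def mean_shift_step(image, y, half, q, n_bins):
        """One mean shift update of the window center y = (row, col)."""
        r0, c0 = y[0] - half, y[1] - half
        patch = image[r0:r0 + 2 * half + 1, c0:c0 + 2 * half + 1]
        p = histogram(patch, n_bins)                         # candidate distribution
        w = np.sqrt(q[patch] / np.maximum(p[patch], 1e-12))  # Bhattacharyya weights
        rows, cols = np.mgrid[r0:r0 + 2 * half + 1, c0:c0 + 2 * half + 1]
        y_new = np.array([(w * rows).sum(), (w * cols).sum()]) / w.sum()
        return np.round(y_new).astype(int)

    # usage: iterate until the window stops moving (local optimum)
    rng = np.random.default_rng(0)
    img = rng.integers(0, 16, size=(120, 160))   # toy pre-quantized image
    q = histogram(img[50:61, 70:81], 16)         # target model from frame t
    y = np.array([48, 68])                       # initial guess in frame t+1
    for _ in range(20):
        y_next = mean_shift_step(img, y, half=5, q=q, n_bins=16)
        if np.all(y_next == y):
            break
        y = y_next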

Page 4 (slide 4)

Support Vector Tracking [Avidan 01]

Combines SVM classifier with optical flow-based tracking

Input:
- initial guess x of the object location in frame t
- SVM classifier (trained on ~10,000 example images)

Algorithm: maximize the SVM classification score over the motion parameters. Combining the SVM equation with a first-order Taylor expansion of the motion yields a linear system.

Prior knowledge from the classifier is used in the tracking process; there is no online update!
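The equations themselves did not survive in this transcript, so the following is only a rough sketch of the linearization idea for a linear SVM and pure translation: the patch is expanded to first order in the displacement u via its image gradients, and the score gradient J^T w gives the update direction. Avidan's full method supports polynomial kernels and solves a linear system per iteration; everything here (svt_step, the damping, the patch size) is an illustrative simplification.

    import numpy as np

    def svt_step(image, x0, y0, size, w, step=0.5):
        """One damped ascent step on score(u) = w . patch(u).

        First-order Taylor: patch(u) ~ patch0 + J u, where the columns
        of J are the x/y image derivatives; the score gradient w.r.t.
        the displacement u is then J^T w.
        """
        patch = image[y0:y0 + size, x0:x0 + size].astype(float)
        gy, gx = np.gradient(patch)
        J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # d(patch)/d(u)
        grad_u = J.T @ w                                # chain rule
        u = step * grad_u / (np.linalg.norm(grad_u) + 1e-12)
        return x0 + int(round(u[0])), y0 + int(round(u[1]))

    rng = np.random.default_rng(1)
    img = rng.standard_normal((100, 100))
    w = rng.standard_normal(24 * 24)   # stand-in linear SVM weights
    x, y = svt_step(img, 30, 40, 24, w)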

Page 5 (slide 7)

Online Selection of Discriminative Features [Collins et al. 03]

Select features that best discriminate between object and background

Feature pool:

Discriminative score: measure separability (variance ratio) of fg/bg

Within-class variance should be small

Total variance should be large
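A sketch of this score, following the log-likelihood-ratio formulation of Collins et al.: the foreground and background value histograms p and q of a candidate feature define L(i) = log(p(i)/q(i)), and the score divides the total variance of L under (p+q)/2 by the sum of the within-class variances. The feature pool below is a toy stand-in.

    import numpy as np

    def variance_ratio(p, q, eps=1e-3):
        """p, q: normalized fg/bg histograms of one candidate feature."""
        L = np.log(np.maximum(p, eps) / np.maximum(q, eps))  # log-likelihood ratio
        def var(a):  # variance of L under distribution a
            return (a * L ** 2).sum() - ((a * L).sum()) ** 2
        # total variance large, within-class variances small
        return var(0.5 * (p + q)) / (var(p) + var(q) + 1e-12)

    # rank a (toy) feature pool by separability
    pool = {"R-G": (np.array([.7, .2, .1]), np.array([.1, .2, .7])),
            "R+G": (np.array([.4, .3, .3]), np.array([.3, .4, .3]))}
    ranked = sorted(pool, key=lambda k: variance_ratio(*pool[k]), reverse=True)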

Page 6 (slide 8)

On-line Feature Selection (2)

Input image: feature ranking according to the variance ratio

[Collins et al. 03]

[Diagram: the top-ranked features each drive a mean shift tracker; the individual estimates are combined by taking the median, giving the new location.]

Page 7 (slide 9)

Ensemble Tracking [Avidan 05]

Use classifiers to distinguish object from background

[Diagram: image pixels are mapped into a feature space and separated into foreground and background.]

The first location is provided manually. All pixels are training data, labeled {+1, -1}.

11-dimensional feature vector per pixel: an 8-bin orientation histogram of the 5x5 neighborhood plus the 3 RGB values (sketched below).
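A sketch of this per-pixel feature, assuming a float RGB image; the bin placement and gradient details are illustrative choices.

    import numpy as np

    def pixel_feature(rgb, r, c):
        """11-d feature: 8-bin orientation histogram (5x5) + RGB."""
        gray = rgb.mean(axis=2)
        gy, gx = np.gradient(gray)
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
        hist = np.zeros(8)
        for dr in range(-2, 3):                   # 5x5 neighborhood
            for dc in range(-2, 3):
                b = int(ang[r + dr, c + dc] / np.pi * 8) % 8
                hist[b] += mag[r + dr, c + dc]
        return np.concatenate([hist, rgb[r, c]])  # 8 + 3 = 11 dims

    img = np.random.default_rng(2).random((40, 40, 3))
    f = pixel_feature(img, 10, 10)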

Page 8 (slide 10)

Ensemble Tracking [Avidan 05]

Confidence map

Train T (=5) weak linear classifiers h.

Combine them into a strong classifier with AdaBoost.

Build a confidence map from the classifier margins; scale the positive margins to [0, 1].

Find the mode of the confidence map using mean shift.

Page 9 (slide 11)

Ensemble Tracking Update [Avidan 05]

For each new frame I_j:
- test pixels x_i using the strong classifier H(x)
- run mean shift on the confidence map
- obtain new pixel labels y_i
- keep the K (=4) best (lowest-error) weak classifiers and update their weights
- train T-K (=1) new weak classifiers (see the sketch below)
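A sketch of that update schedule under simplifying assumptions: least-squares weak learners stand in for Avidan's weak classifiers, and a crude doubling of weights on current mistakes replaces the full AdaBoost reweighting. WeakLinear and update_ensemble are illustrative names.

    import numpy as np

    class WeakLinear:
        """Least-squares linear classifier h(x) = sign(w . x + b)."""
        def fit(self, X, y, w=None):
            A = np.hstack([X, np.ones((len(X), 1))])
            if w is not None:                  # weighted least squares
                A, y = A * np.sqrt(w)[:, None], y * np.sqrt(w)
            self.coef = np.linalg.lstsq(A, y, rcond=None)[0]
            return self
        def predict(self, X):
            return np.sign(np.hstack([X, np.ones((len(X), 1))]) @ self.coef)

    def update_ensemble(weaks, X, y, K=4):
        errs = [np.mean(h.predict(X) != y) for h in weaks]
        new = [weaks[i] for i in np.sort(np.argsort(errs)[:K])]  # keep K best
        H = np.sign(sum(h.predict(X) for h in new))
        w = np.where(H == y, 1.0, 2.0)          # emphasize current mistakes
        w /= w.sum()
        while len(new) < len(weaks):            # train T-K new weak classifiers
            new.append(WeakLinear().fit(X, y, w))
        return new

    rng = np.random.default_rng(5)
    X = rng.standard_normal((200, 11))          # e.g. the 11-d pixel features
    y = np.where(X[:, 0] > 0, 1.0, -1.0)
    weaks = [WeakLinear().fit(X, y) for _ in range(5)]
    weaks = update_ensemble(weaks, X, y, K=4)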

Page 10 (slide 13)

AdaBoost (recap) [Freund, Schapire 97]

Input:
- set of labeled training samples
- weight distribution over samples

Algorithm:
for n = 1 to N  // number of weak classifiers
- train a weak classifier using samples and weight distribution
- calculate error
- calculate classifier weight
- update sample weights
end

Result: a strong classifier, the weighted combination of the weak classifiers (sketched below).

[slide credit H. Grabner]
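The loop above translates almost directly into code. A compact sketch with decision stumps as the weak classifiers (thresholds found by exhaustive search, which is fine for toy data):

    import numpy as np

    def stump_train(X, y, w):
        """Best threshold stump under sample weights w; y in {-1,+1}."""
        best = (np.inf, None)
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (+1, -1):
                    pred = s * np.sign(X[:, f] - t + 1e-12)
                    err = w[pred != y].sum()
                    if err < best[0]:
                        best = (err, (f, t, s))
        return best

    def adaboost(X, y, N=10):
        w = np.full(len(y), 1.0 / len(y))            # weight distribution
        H = []
        for _ in range(N):                           # N weak classifiers
            err, (f, t, s) = stump_train(X, y, w)    # train weak classifier
            err = np.clip(err, 1e-12, 1 - 1e-12)     # calculate error
            alpha = 0.5 * np.log((1 - err) / err)    # classifier weight
            pred = s * np.sign(X[:, f] - t + 1e-12)
            w *= np.exp(-alpha * y * pred)           # update sample weights
            w /= w.sum()
            H.append((alpha, f, t, s))
        return H

    def predict(H, X):
        return np.sign(sum(a * s * np.sign(X[:, f] - t + 1e-12)
                           for a, f, t, s in H))

    rng = np.random.default_rng(6)
    X = rng.standard_normal((100, 3))
    y = np.where(X[:, 0] > 0.2, 1, -1)
    H = adaboost(X, y, N=10)
    accuracy = (predict(H, X) == y).mean()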

Page 11 (slide 15)

From Off-line to On-line Boosting [Oza, Russell 01]

Off-line
Input:
- set of labeled training samples
- weight distribution over samples
Algorithm:
for n = 1 to N
- train a weak classifier using samples and weight distribution
- calculate error
- calculate confidence
- update weight distribution
end

On-line
Input:
- ONE labeled training sample
- strong classifier to update
- initial sample importance
Algorithm:
for n = 1 to N
- update weak classifier using sample and importance
- update error estimate
- update confidence
- update importance
end

[slide credit H. Grabner]

Page 12 (slide 16)

Online Boosting [Oza, Russell 01]

Input:
- ONE labeled training sample
- strong classifier
- initial sample importance

Algorithm:
for n = 1 to N  // number of weak classifiers
- update weak classifier using sample and importance
- update error estimate
- update classifier weight
- update sample importance
end

Result: the updated strong classifier (a sketch follows).

[slide credit H. Grabner]
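A sketch of the on-line variant: the sample importance lambda plays the role of the weight distribution, each weak classifier sees the sample k ~ Poisson(lambda) times, and lambda is rescaled from the running error estimates as in Oza & Russell's update (lambda / 2(1-eps) on a correct prediction, lambda / 2 eps on a mistake). OnlineStump is an illustrative weak learner, not from the paper.

    import numpy as np

    class OnlineStump:
        """Weak learner: running class means on one feature."""
        def __init__(self, f):
            self.f, self.stats = f, {+1: [0.0, 0], -1: [0.0, 0]}
        def update(self, x, y):
            s = self.stats[y]
            s[0] += x[self.f]
            s[1] += 1
        def predict(self, x):
            m = {c: s[0] / max(s[1], 1) for c, s in self.stats.items()}
            return +1 if abs(x[self.f] - m[+1]) < abs(x[self.f] - m[-1]) else -1

    class OnlineBoost:
        def __init__(self, n_features, rng):
            self.h = [OnlineStump(f) for f in range(n_features)]
            self.lc = np.zeros(n_features)   # importance seen when correct
            self.lw = np.zeros(n_features)   # importance seen when wrong
            self.rng = rng
        def update(self, x, y):
            lam = 1.0                                    # initial importance
            for n, h in enumerate(self.h):
                for _ in range(self.rng.poisson(lam)):   # k ~ Poisson(lambda)
                    h.update(x, y)                       # update weak classifier
                correct = h.predict(x) == y
                if correct:
                    self.lc[n] += lam
                else:
                    self.lw[n] += lam
                eps = self.lw[n] / max(self.lc[n] + self.lw[n], 1e-12)
                lam *= 1 / (2 * max(1 - eps, 1e-12)) if correct \
                       else 1 / (2 * max(eps, 1e-12))    # update importance
        def predict(self, x):
            eps = self.lw / np.maximum(self.lc + self.lw, 1e-12)
            alpha = 0.5 * np.log((1 - eps + 1e-12) / (eps + 1e-12))
            return int(np.sign(sum(a * h.predict(x)
                                   for a, h in zip(alpha, self.h))))

    rng = np.random.default_rng(0)
    ob = OnlineBoost(n_features=5, rng=rng)
    for _ in range(300):
        x = rng.standard_normal(5)
        ob.update(x, 1 if x[0] > 0 else -1)   # ONE sample at a time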

Page 13 (slide 19)

Priming can help [Oza 01]

Batch learning on first 200 points, then online

Page 14 (slide 20)

Online Boosting for Feature Selection [Grabner, Bischof 06]

Each feature corresponds to a weak classifier.
Combination of simple features.

Page 15 (slide 21)

Selectors [Grabner, Bischof 06]

A selector chooses one feature/classifier from a pool.

Selectors can themselves be seen as classifiers.

Idea: Perform boosting on selectors, not the features directly.

Page 16 (slide 22)

Online Feature Selection

For each training sample:
- init the sample importance
- for each selector, over a global classifier pool:
  - estimate the errors of all classifiers in the pool
  - select the best weak classifier
  - update its weight
  - update the sample importance
- the selectors together form the current strong classifier (a sketch follows)

[Grabner, Bischof 06]
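A sketch of one selector and of boosting over selectors; it assumes weak classifiers with update/predict methods (e.g. the OnlineStump from the on-line boosting sketch above), and the importance update mirrors the Oza-Russell rule.

    import numpy as np

    class Selector:
        """Keeps error estimates over a shared pool; acts as a classifier."""
        def __init__(self, pool_size):
            self.corr = np.zeros(pool_size)   # importance-weighted correct
            self.wrong = np.zeros(pool_size)  # importance-weighted wrong
            self.best = 0
        def update(self, pool, x, y, lam):
            for m, h in enumerate(pool):      # estimate errors over the pool
                if h.predict(x) == y:
                    self.corr[m] += lam
                else:
                    self.wrong[m] += lam
            errs = self.wrong / np.maximum(self.corr + self.wrong, 1e-12)
            self.best = int(np.argmin(errs))  # select best weak classifier
            return errs[self.best]
        def predict(self, pool, x):
            return pool[self.best].predict(x)

    def train_sample(selectors, pool, x, y):
        for h in pool:                        # global pool updated once
            h.update(x, y)
        lam = 1.0                             # init importance
        for sel in selectors:                 # boosting over the selectors
            eps = sel.update(pool, x, y, lam)
            good = sel.predict(pool, x) == y
            lam *= 1 / (2 * max(1 - eps, 1e-12)) if good \
                   else 1 / (2 * max(eps, 1e-12))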

Page 17 (slide 23)

Tracking Principle [Grabner, Bischof 06]

[slide credit H. Grabner]

Page 18 (slide 24)

Adaptive Tracking [Grabner, Bischof 06]

Page 19 (slide 25)

Limitations [Grabner, Bischof 06]

Page 20 (slide 26)

Multiple Instance Learning (MIL)

Precisely labeled data is expensive; weakly labeled data is easier to collect.

Algorithm for allowing ambiguity in training data: the input consists of (bag of data, label) pairs, with one label per bag.

Bag is positive if one or more of its members is positive.

[Keeler et al. 90, Dietterich et al. 97, Viola et al. 05]

Page 21 (slide 27)

Multiple Instance Learning

Supervised learning training input: individually labeled examples, fed to the classifier.

MIL training input: bags of examples, labeled per bag, fed to the MIL classifier.

[Babenko et al. 09]

Page 22 (slide 28)

Online MIL Boost [Babenko et al. 09]

At time t, get more training data:
1. update all candidate classifiers
2. pick the best K in a greedy fashion

pool of weak classifier candidates

Page 23 (slide 29)

Online MIL Boost

For each pair of frames t, t+1:
- get data (bags)
- update all classifiers in the pool
- greedily add the best K to the strong classifier (a sketch follows)

[Babenko et al. 09]
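A sketch of the selection step. Bag probabilities follow the noisy-OR model p(bag) = 1 - prod_i (1 - p(x_i)) used in MILBoost, and candidates are added greedily by bag log-likelihood; the data layout (candidate_scores[c][b]) is an illustrative choice.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def bag_log_likelihood(bag_scores, labels):
        ll = 0.0
        for scores, y in zip(bag_scores, labels):
            p_inst = sigmoid(scores)                 # instance probabilities
            p_bag = 1.0 - np.prod(1.0 - p_inst)      # noisy-OR bag probability
            p_bag = np.clip(p_bag, 1e-12, 1 - 1e-12)
            ll += np.log(p_bag) if y == 1 else np.log(1.0 - p_bag)
        return ll

    def greedy_select(candidate_scores, labels, K):
        """candidate_scores[c][b]: candidate c's scores on bag b's instances."""
        n_bags = len(labels)
        H = [np.zeros_like(candidate_scores[0][b]) for b in range(n_bags)]
        chosen = []
        for _ in range(K):                           # pick best K greedily
            rest = [c for c in range(len(candidate_scores)) if c not in chosen]
            gain = lambda c: bag_log_likelihood(
                [H[b] + candidate_scores[c][b] for b in range(n_bags)], labels)
            c_best = max(rest, key=gain)
            chosen.append(c_best)
            H = [H[b] + candidate_scores[c_best][b] for b in range(n_bags)]
        return chosen

    rng = np.random.default_rng(7)
    labels = [1, 0, 1]                               # one label per bag
    cands = [[rng.standard_normal(4) for _ in labels] for _ in range(6)]
    best = greedy_select(cands, labels, K=2)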

Page 24 (slide 30)

Tracking Results [Babenko et al. 09]

Page 25 (slide 31)

On-line / Off-line Spectrum

Detection end: general object / any-background detector, fixed training set.
Tracking end: object/background classifier, on-line update.
In between: adaptive detector; tracking with prior.

cf. the Template Update Problem [Matthews et al. 04]

Example strategies:
- run a detector in tandem to verify [Williams et al. 03]
- include a generative model [Woodley et al. 06] [Grabner et al. 07]
- integrate tracker and detector [Okuma et al. 04] [Li et al. 07]

Page 26 (slide 32)

Semi-supervised

Use labeled data as a prior.
Estimate labels & sample importance for unlabeled data.

[Grabner et al. 08]

Page 27 (slide 33)

Tracking Results [Grabner et al. 08]

Page 28 (slide 35)

Beyond Semi-Supervised [Stalder et al. 09]

Recognizer: object-specific "adaptive prior", updated by:
- pos: tracked samples validated by the detector
- neg: background during detection

“too inflexible”

Page 29 (slide 36)

Results [Stalder et al. 09]

Page 30 (slide 38)

Task: Tracking a Fist

Page 31 (slide 39)

Learning to Track with Multiple Observers

[Diagram: labeled training data and a set of observation models feed off-line training of observer combinations, producing the optimal tracker for the task at hand.]

Idea: Learn optimal combination of observers (trackers) in an off-line training stage. Each tracker can be fixed or adaptive.

Given: labeled training data, object detector

[Stenger et al. 09]

Page 32 (slide 40)

Input: Set of observers

Single template:
- [NCC] normalized cross-correlation
- [SAD] sum of absolute differences

Local features:
- [BOF] block-based optical flow
- [KLT] Kanade-Lucas-Tomasi
- [FF] flocks of features
- [RT] randomized templates

Histogram:
- [MS] color-based mean shift
- [C] color probability
- [M] motion probability
- [CM] color and motion probability

On-line classifiers:
- [OB] on-line boosting
- [LDA] linear discriminant analysis (LDA)
- [BLDA] boosted LDA
- [OFS] on-line feature selection

Each returns a location estimate & confidence value.

[Stenger et al. 09]

Page 33 (slide 41)

Combination Schemes

Find good combinations of observers automatically by evaluating all pairs/triplets (using 2 different schemes).


[Stenger et al. 09]

Page 34 (slide 42)

How to Measure Performance?

Run each tracker on all frames (do not stop after the first failure).

Measure the position error; declare loss of track when the error exceeds a threshold, then re-initialize with the detector (sketched below).

[Stenger et al. 09]
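A sketch of this protocol; the threshold value and the one-frame hand-off back to the detector are simplifying assumptions.

    import numpy as np

    def evaluate(track_pos, truth_pos, detect_pos, thresh=20.0):
        """Mean position error and number of track losses over a sequence."""
        errors, losses, tracking = [], 0, True
        for t in range(len(truth_pos)):
            pos = track_pos[t] if tracking else detect_pos[t]  # re-init
            err = np.linalg.norm(np.asarray(pos) - np.asarray(truth_pos[t]))
            errors.append(err)
            if err > thresh:
                losses += 1
                tracking = False   # lost: fall back to the detector
            else:
                tracking = True
        return float(np.mean(errors)), losses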

Page 35 (slide 44)

Results on Hand Data

Single observers

Pairs of observers

[Stenger et al. 09]

Page 36 (slide 45)

Tracking Results [Stenger et al. 09]

Page 37 (slide 46)

Face Tracking Results [Stenger et al. 09]

Page 38 (slide 47)

Multi-Classifier Boosting

Simultaneously learn image clusters and classifiers

[Kim et al. 09]

Compared with AdaBoost: multi-class boosting adds a gating function that softly assigns samples to the individual strong classifiers.

Page 39 (slide 48)

Online Multi-Class Boosting [Kim et al. 09]

Handles multiple poses: take maximum classifier response

Page 40 (slide 49)

And now

Trees

Page 41 (slide 50)

Online Adaptive Decision Trees [Basak 04]

Sigmoidal soft partitioning function at each node: a hyperplane whose sigmoid output is the activation value at node i.

Complete binary trees; the tree structure is maintained; each class corresponds to a subset of the leaves, and leaf nodes are labeled beforehand.

For each training sample, adapt the decision hyperplanes at all inner nodes via gradient descent on an error measure defined on the leaf-node activations (a sketch follows).
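A rough sketch of such a soft tree: a leaf's activation is the product of the soft decisions along its path. For brevity the gradient is taken numerically rather than derived in closed form, and the error measure (the summed activation of the wrong-class leaves) is one plausible choice, not necessarily Basak's exact one.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def leaf_activations(W, x, depth):
        """W[i] = (w, b) of inner node i, breadth-first; soft path products."""
        act = np.ones(2 ** depth)
        for leaf in range(2 ** depth):
            node, lo, hi = 0, 0, 2 ** depth
            for _ in range(depth):
                v = sigmoid(W[node][:-1] @ x + W[node][-1])
                mid = (lo + hi) // 2
                if leaf < mid:               # go left with confidence v
                    act[leaf] *= v
                    node, hi = 2 * node + 1, mid
                else:                        # go right with confidence 1-v
                    act[leaf] *= 1.0 - v
                    node, lo = 2 * node + 2, mid
        return act

    def online_update(W, x, wrong_leaves, depth, lr=0.1, h=1e-5):
        """Gradient-descend the wrong-class leaf activations (numerically)."""
        def err(Wf):
            return leaf_activations(Wf, x, depth)[wrong_leaves].sum()
        for i in range(len(W)):
            for j in range(len(W[i])):
                Wp = [w.copy() for w in W]
                Wp[i][j] += h
                W[i][j] -= lr * (err(Wp) - err(W)) / h
        return W

    depth = 2
    W = [np.random.default_rng(3).standard_normal(3) for _ in range(2 ** depth - 1)]
    W = online_update(W, np.array([0.5, -1.0]), wrong_leaves=[0, 1], depth=depth)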

Page 42 (slide 51)

Adaptive Vocabulary Forests [Yeh et al. 07]

Application: efficient indexing; the leaves represent visual words. Batch learning: hierarchical k-means, cf. [Nister and Stewenius 06] (a sketch follows).

[slide credit T. Yeh]
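A minimal batch-construction sketch (hierarchical k-means with a tiny k-means inside); build_tree, lookup, and the toy descriptors are illustrative.

    import numpy as np

    def kmeans(X, k, rng, iters=10):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
            assign = d.argmin(1)
            for j in range(k):
                if (assign == j).any():
                    centers[j] = X[assign == j].mean(0)
        return centers, assign

    def build_tree(X, k, depth, rng):
        if depth == 0 or len(X) < k:
            return {"leaf": True, "n_points": len(X)}   # a visual word
        centers, assign = kmeans(X, k, rng)
        return {"leaf": False, "centers": centers,
                "children": [build_tree(X[assign == j], k, depth - 1, rng)
                             for j in range(k)]}

    def lookup(tree, x):
        """Descend to the visual-word leaf for descriptor x."""
        path = []
        while not tree["leaf"]:
            j = int(((tree["centers"] - x) ** 2).sum(1).argmin())
            path.append(j)
            tree = tree["children"][j]
        return tuple(path)

    rng = np.random.default_rng(4)
    descs = rng.standard_normal((500, 8))    # toy SIFT-like descriptors
    tree = build_tree(descs, k=3, depth=2, rng=rng)
    word = lookup(tree, descs[0])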

Page 43 (slide 52)

Incremental Building of Vocabulary Tree [Yeh et al. 07]

Page 44 (slide 53)

Tree Growing by Splitting Leaf Nodes [Yeh et al. 07]

Page 45 (slide 54)

Tree Adaptation with Re-Clustering [Yeh et al. 07]

Identify affected neighborhood

Remove existing boundaries

Re-Cluster points

Page 46 (slide 55)

Accuracy drops when adaptation is stopped [Yeh et al. 07]

Recent accuracy: the average of R(j) over the last T = 100 queries, where R(j) = 1 if the top-ranked retrieved image belongs to the same group, and 0 otherwise.

Page 47 (slide 56)

Tree Pruning [Yeh et al. 07]

Limit the number of leaf nodes

Keep a record of the inactivity period at each node; when the limit is reached, remove the least recently used nodes.

Allows for restructuring of heavily populated areas
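The bookkeeping is essentially an LRU cache over leaves; a minimal sketch (class and method names are illustrative):

    class LeafLRU:
        """Cap the leaf count; evict the least recently used leaves."""
        def __init__(self, max_leaves):
            self.max_leaves, self.clock, self.last_used = max_leaves, 0, {}
        def touch(self, leaf_id):              # record activity on each lookup
            self.clock += 1
            self.last_used[leaf_id] = self.clock
        def maybe_prune(self):
            evicted = []
            while len(self.last_used) > self.max_leaves:
                leaf = min(self.last_used, key=self.last_used.get)
                del self.last_used[leaf]       # least recently used goes first
                evicted.append(leaf)
            return evicted

    lru = LeafLRU(max_leaves=2)
    for leaf in ["a", "b", "c", "a"]:
        lru.touch(leaf)
    print(lru.maybe_prune())                   # -> ['b']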

Page 48 (slide 57)

On-line Random Forests [Saffari et al. 09]

Input: new training example
For each tree t:
- update tree t k times, k ~ Poisson
- estimate the out-of-bag (OOB) error
end
Discard tree t and insert a new one with a probability that increases with its OOB error (a sketch follows).
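A sketch of the per-sample forest update. It assumes a tree object with update/predict methods, OOB counters, and a fresh() factory (a compatible leaf-splitting tree is sketched after the next slide); modeling the discard probability as simply proportional to the OOB error is an assumption, since the exact formula did not survive in this transcript.

    import numpy as np

    def forest_update(trees, x, y, rng, discard_scale=0.1):
        """One on-line step for every tree in the forest."""
        for i, t in enumerate(trees):
            k = rng.poisson(1.0)               # on-line bagging
            if k > 0:
                for _ in range(k):
                    t.update(x, y)             # in-bag: grow/update the tree
            else:
                t.oob_n += 1                   # out-of-bag: test only
                t.oob_err += (t.predict(x) != y)
            oob = t.oob_err / max(t.oob_n, 1)
            if rng.random() < discard_scale * oob:
                trees[i] = t.fresh()           # discard and insert a new tree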

Page 49 (slide 58)

Leaf Update and Split [Saffari et al. 09]

Each leaf node maintains a set of random split functions and statistics over the classes k.
Compute the gain of each potential split function.
Split the current leaf node when:
1) the number of samples in the node exceeds a threshold, and
2) the gain of the best split exceeds a threshold.
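A sketch of such a leaf, assuming roughly standardized features (thresholds drawn from a unit normal) and entropy gain as the split criterion; the class and parameter names are illustrative.

    import numpy as np

    def entropy(counts):
        p = counts / max(counts.sum(), 1e-12)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    class OnlineTreeNode:
        def __init__(self, n_feat, n_classes, rng, n_tests=10):
            self.children = None
            self.counts = np.zeros(n_classes)           # class histogram
            # random split candidates: (feature, threshold)
            self.tests = [(int(rng.integers(n_feat)), float(rng.normal()))
                          for _ in range(n_tests)]
            self.stats = [(np.zeros(n_classes), np.zeros(n_classes))
                          for _ in self.tests]          # (left, right) counts
            self.args = (n_feat, n_classes, rng, n_tests)
        def update(self, x, y, min_samples=30, min_gain=0.1):
            if self.children:                            # route to a child
                f, th = self.split
                self.children[bool(x[f] < th)].update(x, y, min_samples, min_gain)
                return
            self.counts[y] += 1
            for (f, th), (l, r) in zip(self.tests, self.stats):
                (l if x[f] < th else r)[y] += 1
            if self.counts.sum() > min_samples:          # condition 1
                n = self.counts.sum()
                gains = [entropy(self.counts)
                         - (l.sum() * entropy(l) + r.sum() * entropy(r)) / n
                         for l, r in self.stats]
                best = int(np.argmax(gains))
                if gains[best] > min_gain:               # condition 2
                    self.split = self.tests[best]
                    self.children = {True: OnlineTreeNode(*self.args),
                                     False: OnlineTreeNode(*self.args)}
        def predict(self, x):
            if self.children:
                f, th = self.split
                return self.children[bool(x[f] < th)].predict(x)
            return int(np.argmax(self.counts))

    rng = np.random.default_rng(8)
    root = OnlineTreeNode(n_feat=4, n_classes=2, rng=rng)
    for _ in range(500):
        x = rng.standard_normal(4)
        root.update(x, int(x[1] > 0))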

Page 50 (slide 59)

Results [Saffari et al. 09]

Convergence of on-line RF classification to batch solution on USPS data set

Tracking error of online RF compared to online boosting

Page 51 (slide 60)

Conclusions

On-line versions exist for Boosting and Random Forests

Experimentally good convergence results (but few theoretical guarantees)

Useful for Tracking via Classification

A lot of code has been made available online by authors

Detection – Tracking Spectrum

Adaptation vs. Drift

Page 52 (slide 61)

References

Avidan, S., Support Vector Tracking, Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Hawaii, 2001.

Avidan, S., Support Vector Tracking, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 26(8), pp. 1064-1072, 2004.

Avidan, S., Ensemble Tracking, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 29(2), pp. 261-271, 2007.

Avidan, S., Ensemble Tracking, Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), San Diego, USA, 2005.

Babenko, B., Yang, M.-H., Belongie, S., Visual Tracking with Online Multiple Instance Learning, Proc. CVPR, 2009.

Basak, J., Online Adaptive Decision Trees, Neural Computation, Vol. 16(9), pp. 1959-1981, September 2004.

Collins, R. T., Liu, Y., Leordeanu, M., On-Line Selection of Discriminative Tracking Features, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 27(10), pp. 1631-1643, October 2005.

Collins, R. T., Liu, Y., On-Line Selection of Discriminative Tracking Features, Proc. ICCV, pp. 346-352, October 2003.

Comaniciu, D., Ramesh, V., Meer, P., Kernel-Based Object Tracking, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 25(5), pp. 564-575, 2003.

Comaniciu, D., Ramesh, V., Meer, P., Real-Time Tracking of Non-Rigid Objects Using Mean Shift, Proc. CVPR, Hilton Head Island, South Carolina, Vol. 2, pp. 142-149, 2000.

Dietterich, T. G., Lathrop, R. H., Lozano-Perez, T., Solving the Multiple Instance Problem with Axis-Parallel Rectangles, Artificial Intelligence, Vol. 89, pp. 31-71, 1997.

Freund, Y., Schapire, R. E., A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences, Vol. 55(1), pp. 119-139, August 1997.

Grabner, H., Leistner, C., Bischof, H., Semi-supervised On-line Boosting for Robust Tracking, Proc. European Conference on Computer Vision (ECCV), 2008.

Grabner, H., Roth, P. M., Bischof, H., Eigenboosting: Combining Discriminative and Generative Information, Proc. CVPR, 2007.

Grabner, H., Grabner, M., Bischof, H., Real-time Tracking via On-line Boosting, Proc. British Machine Vision Conference (BMVC), Vol. 1, pp. 47-56, 2006.

Grabner, H., Bischof, H., On-line Boosting and Vision, Proc. CVPR, Vol. 1, pp. 260-267, 2006.

Keeler, J. D., Rumelhart, D. E., Leow, W.-K., Integrated Segmentation and Recognition of Hand-Printed Numerals, NIPS 3, pp. 557-563, Denver, Colorado, USA, 1990.

Kim, T.-K., Cipolla, R., MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features, Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, December 2008.

Kim, T.-K., Woodley, T., Stenger, B., Cipolla, R., Online Multiple Classifier Boosting for Object Tracking, Technical Report CUED/F-INFENG/TR631, Department of Engineering, University of Cambridge, June 2009.

Page 53 (slide 62)

Li, Y., Ai, H., Lao, S., Kawade, M., Tracking in Low Frame Rate Video: A Cascade Particle Filter with Discriminative Observers of Different Lifespans, Proc. CVPR, 2007.

Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, Proc. BMVC, 2003.

Matthews, I., Ishikawa, T., Baker, S., The Template Update Problem, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), Vol. 26(6), pp. 810-815, June 2004.

Okuma, K., Taleghani, A., De Freitas, N., Little, J., Lowe, D. G., A Boosted Particle Filter: Multitarget Detection and Tracking, Proc. European Conference on Computer Vision (ECCV), May 2004.

Oza, N. C., Online Ensemble Learning, Ph.D. thesis, University of California, Berkeley, 2001.

Oza, N. C., Russell, S., Online Bagging and Boosting, Eighth Int. Workshop on Artificial Intelligence and Statistics, pp. 105-112, Key West, FL, USA, January 2001.

Oza, N. C., Russell, S., Experimental Comparisons of Online and Batch Versions of Bagging and Boosting, Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, San Francisco, California, 2001.

Saffari, A., Leistner, C., Santner, J., Godec, M., Bischof, H., On-line Random Forests, 3rd IEEE ICCV Workshop on On-line Computer Vision, 2009.

Stalder, S., Grabner, H., Van Gool, L., Beyond Semi-Supervised Tracking: Tracking Should Be as Simple as Detection, but not Simpler than Recognition, Proc. ICCV'09 Workshop on On-line Learning for Computer Vision, 2009.

Stenger, B., Woodley, T., Cipolla, R., Learning to Track with Multiple Observers, Proc. CVPR, Miami, June 2009.

Viola, P. A., Platt, J., Zhang, C., Multiple Instance Boosting for Object Detection, Proc. NIPS, 2005.

Williams, O., Blake, A., Cipolla, R., Sparse Bayesian Learning for Efficient Visual Tracking, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), August 2005.

Williams, O., Blake, A., Cipolla, R., A Sparse Probabilistic Learning Algorithm for Real-Time Tracking, Proc. Ninth IEEE Int. Conf. on Computer Vision (ICCV), October 2003.

Woodley, T., Stenger, B., Cipolla, R., Tracking Using Online Feature Selection and a Local Generative Model, Proc. BMVC, Warwick, September 2007.

Yeh, T., Lee, J., Darrell, T., Adaptive Vocabulary Forests for Dynamic Indexing and Category Learning, Proc. ICCV, 2007.

Code:

Severin Stalder, Helmut Grabner: Online Boosting, Semi-supervised Online Boosting, Beyond Semi-Supervised Online Boosting. http://www.vision.ee.ethz.ch/boostingTrackers/index.htm

Boris Babenko: MIL Track. http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml

Amir Saffari: Online Random Forests. http://www.ymer.org/amir/software/online-random-forests/

