Research Article
Object Tracking via 2DPCA and ℓ2-Regularization

Haijun Wang,1,2 Hongjuan Ge,1 and Shengyan Zhang2

1 College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 Aviation Information Technology R&D Center, Binzhou University, Binzhou 256603, China

Correspondence should be addressed to Haijun Wang; whjlym@163.com

Received 10 March 2016; Accepted 13 July 2016

Academic Editor: Jiri Jan

Copyright © 2016 Haijun Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present a fast and robust object tracking algorithm by using 2DPCA and ℓ2-regularization in a Bayesian inference framework. Firstly, we model the challenging appearance of the tracked object using 2DPCA bases, which exploit the strength of subspace representation. Secondly, we adopt ℓ2-regularization to solve the proposed representation model and remove the trivial templates used by sparse tracking methods, which yields faster tracking. Finally, we present a novel likelihood function that considers the reconstruction error, which is derived from the orthogonal left-projection matrix and the orthogonal right-projection matrix. Experimental results on several challenging image sequences demonstrate that the proposed method achieves more favorable performance than state-of-the-art tracking algorithms.

1. Introduction

Visual tracking is one of the fundamental topics in computer vision and plays an important role in numerous research areas and practical applications such as surveillance, human-computer interaction, robotics, and traffic control. Existing object tracking algorithms can be divided into two categories, that is, discriminative or generative.
Discriminative methods treat tracking as a binary classification problem with local search, which estimates the decision boundary between an object image patch and the background. Babenko et al. [1] proposed an online multiple instance learning (MIL) method, which treats ambiguous positive and negative samples as bags to learn a discriminative classifier. Zhang et al. [2] propose a fast compressive tracking algorithm which employs nonadaptive random projections that preserve the structure of the image feature.

Generative methods typically learn a model to represent the target object and incrementally update the appearance model to search for the image region with minimal reconstruction error. Inspired by the success of sparse representation in face recognition [3], super-resolution [4], and inpainting [5], sparse representation based visual tracking [6-9] has recently attracted increasing interest. Mei and Ling [10] first extended sparse representation to object tracking, casting the tracking problem as determining the likeliest patch with a sparse representation of templates. The method can handle partial occlusion by treating the error term as sparse noise. However, it requires solving a series of complicated ℓ1-norm minimization problems many times, and the time complexity is quite significant. Although some modified ℓ1-norm methods have been proposed to speed up the tracker, they are still far from real time.

Recently, many object tracking algorithms have been proposed to exploit the power of subspace representation from different points of view. Ross et al. [11] present a tracking method that incrementally learns a low-dimensional PCA subspace representation, efficiently adapting online to changes in the appearance of the target. However, this method is sensitive to partial occlusion. Zhong et al.
[8] proposed a robust object tracking algorithm via a sparse collaborative appearance model that exploits both holistic templates and local representations to account for appearance changes. Zhuang et al. [12] cast the tracking problem as finding the candidate that scores the highest in an evaluation model based upon a matrix called the discriminative sparse similarity map. Qian et al. [13] exploit an appearance model based on extended incremental non-negative matrix factorization for visual tracking.

(Hindawi Publishing Corporation, Journal of Electrical and Computer Engineering, Volume 2016, Article ID 7975951, 7 pages; http://dx.doi.org/10.1155/2016/7975951)

Wang and Lu [14] present a novel online object tracking algorithm by using 2DPCA and ℓ1-regularization. This method can achieve


good performance in many scenes. However, the coefficients and the sparse error matrix used in this method must be computed by an iterative algorithm, and the space and time complexity are too high for real-time tracking.

Motivated by the aforementioned work, this paper presents a robust and fast ℓ2-norm tracking algorithm with an adaptive appearance model. The contributions of this work are threefold: (1) we exploit the strength of 2DPCA subspace representation using ℓ2-regularization; (2) we remove the trivial templates from the sparse tracking method; (3) we present a novel likelihood function that considers the reconstruction error, derived from the orthogonal left-projection matrix and the orthogonal right-projection matrix. Both qualitative and quantitative evaluations on video sequences demonstrate that the proposed method can handle occlusion, illumination changes, scale changes, and nonrigid appearance changes effectively with lower computational complexity, and can run in real time.

2. Object Representation via 2DPCA and ℓ2-Regularization

Principal component analysis (PCA) is a classical feature extraction and data representation technique widely used in the areas of pattern recognition and computer vision. Compared with PCA, two-dimensional principal component analysis (2DPCA) [15] is based on 2D matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector beforehand. That is, the extraction of image features is computationally more efficient using 2DPCA than PCA. In this paper, we represent the object by using 2D basis matrices. Given a series of image matrices [B_1, B_2, ..., B_K], the projection coefficient matrices [A_1, A_2, ..., A_K] can be obtained by solving the following function:

\[
\min_{U, V, A_1, \dots, A_K} \sum_{i=1}^{K} \left\| B_i - U A_i V^{T} \right\|_F^2 , \tag{1}
\]

where \(\|\cdot\|_F\) denotes the Frobenius norm, U represents the orthogonal left-projection matrix, and V represents the orthogonal right-projection matrix.
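The excerpt does not show how U and V are fitted (the training procedure is in [15]). As a hedged sketch, one standard construction is bilateral 2DPCA, taking the top eigenvectors of the row and column covariance sums; the function and variable names below are ours, not the paper's:

```python
import numpy as np

def fit_2dpca_bases(images, d1, d2):
    """Bilateral-2DPCA sketch: U from the top eigenvectors of sum_i Bi Bi^T,
    V from the top eigenvectors of sum_i Bi^T Bi (mean-centered images).
    One common construction; not necessarily the authors' exact procedure."""
    mu = np.mean(images, axis=0)
    centered = [B - mu for B in images]
    row_cov = sum(C @ C.T for C in centered)   # m x m
    col_cov = sum(C.T @ C for C in centered)   # n x n
    # eigh returns eigenvalues in ascending order; keep the last d columns.
    _, U = np.linalg.eigh(row_cov)
    _, V = np.linalg.eigh(col_cov)
    return mu, U[:, -d1:], V[:, -d2:]
```

The columns returned by `np.linalg.eigh` are orthonormal, which matches the orthogonality of U and V assumed by the closed-form solution below.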

The cost function is set as an ℓ2-regularized quadratic function:

\[
\begin{aligned}
J(A) &= \frac{1}{2}\left\| B - U A V^{T} \right\|_F^2 + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\,\mathrm{tr}\!\left[ \left( B - U A V^{T} \right)^{T} \left( B - U A V^{T} \right) \right] + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\,\mathrm{tr}\!\left[ \left( B^{T} - V A^{T} U^{T} \right) \left( B - U A V^{T} \right) \right] + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\,\mathrm{tr}\!\left[ B^{T} B - B^{T} U A V^{T} - V A^{T} U^{T} B + V A^{T} U^{T} U A V^{T} \right] + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\,\mathrm{tr}\left( B^{T} B \right) - \mathrm{tr}\left( B^{T} U A V^{T} \right) + \frac{1}{2}\,\mathrm{tr}\left( V A^{T} U^{T} U A V^{T} \right) + \lambda \left\| A \right\|_F^2 .
\end{aligned} \tag{2}
\]

Here λ is a constant. The solution of (2) is easily derived as follows:

\[
\begin{aligned}
\frac{\partial J(A)}{\partial A} &= -U^{T} B V + U^{T} U A V^{T} V + 2\lambda A = 0 \\
&\Downarrow \\
U^{T} U A V^{T} V + 2\lambda A &= U^{T} B V \\
U^{T} U A V^{T} V + 2\lambda I_1 A I_2 &= U^{T} B V \\
\left( V^{T} V \otimes U^{T} U + I_2^{T} \otimes 2\lambda I_1 \right) \mathrm{vec}(A) &= \mathrm{vec}\left( U^{T} B V \right) \\
\mathrm{vec}(A) &= \left( V^{T} V \otimes U^{T} U + I_2^{T} \otimes 2\lambda I_1 \right)^{-1} \mathrm{vec}\left( U^{T} B V \right) .
\end{aligned} \tag{3}
\]

Here I_1 and I_2 denote identity matrices, ⊗ stands for the Kronecker product, and vec(A) denotes the vectorized version of the matrix A. Therefore, we can obtain the projection coefficient matrix A. Let \(P = \left( V^{T} V \otimes U^{T} U + I_2^{T} \otimes 2\lambda I_1 \right)^{-1}\). Obviously, the projection matrix P is independent of B, so we can precalculate it once per frame, before looping over all candidates. When a new candidate comes, we simply calculate \(P\,\mathrm{vec}(U^{T} B V)\) to obtain vec(A), which makes the proposed method very fast.
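The precalculation described above can be sketched in NumPy as follows (names are ours; column-major `order="F"` is used so the flattening matches the Kronecker/vec identity in (3)):

```python
import numpy as np

def precompute_projection(U, V, lam):
    """P = (V^T V kron U^T U + 2*lam*I)^(-1), independent of B (eq. (3)).
    Computed once per frame, before looping over candidates."""
    d1, d2 = U.shape[1], V.shape[1]
    K = np.kron(V.T @ V, U.T @ U) + 2.0 * lam * np.eye(d1 * d2)
    return np.linalg.inv(K)

def solve_coefficients(P, U, V, B):
    """Coefficient matrix A for one candidate: vec(A) = P vec(U^T B V)."""
    d1, d2 = U.shape[1], V.shape[1]
    rhs = (U.T @ B @ V).reshape(-1, order="F")   # column-major vec(.)
    return (P @ rhs).reshape(d1, d2, order="F")
```

Per candidate, the cost is one small matrix product and one matrix-vector product, which is what makes the closed-form ℓ2 solution so much cheaper than iterative ℓ1 solvers.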

Here we abandon the trivial templates completely, which allows the target to be represented fully by the 2DPCA subspace. After we get the projection coefficient matrix A from (3), the error matrix can be obtained by the following equation:

\[
E = B - U A V^{T} . \tag{4}
\]

So the error matrix can be calculated in a single step.

3. Tracking Framework Based on 2DPCA and ℓ2-Regularization

Visual tracking is treated as a Bayesian inference task in a Markov model with hidden state variables. Given a series of image matrices \(B_{1:t} = [B_1, B_2, \dots, B_t]\), we aim to estimate the hidden state variable x_t recursively:

\[
p\left( x_t \mid B_{1:t} \right) \propto p\left( B_t \mid x_t \right) \int p\left( x_t \mid x_{t-1} \right) p\left( x_{t-1} \mid B_{1:t-1} \right) \mathrm{d}x_{t-1} , \tag{5}
\]

where \(p(x_t \mid x_{t-1})\) is the motion model that represents the state transition between two consecutive states, and \(p(B_t \mid x_t)\) is the observation model, which indicates the likelihood function.

Motion Model. We apply an affine image warp to model the target motion between consecutive states. Six parameters of the affine transform are used to model \(p(x_t \mid x_{t-1})\) of a tracked target. Let \(x_t = \{x_t, y_t, \theta_t, s_t, \alpha_t, \phi_t\}\), where \(x_t\), \(y_t\), \(\theta_t\), \(s_t\), \(\alpha_t\), and \(\phi_t\) denote the x and y translations, rotation angle, scale, aspect ratio, and skew, respectively. The state transition is formulated by a random walk, that is, \(p(x_t \mid x_{t-1}) = N(x_t; x_{t-1}, \Sigma)\), where Σ is a diagonal covariance matrix which indicates the variances of the affine parameters.
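The random-walk proposal above is straightforward to sketch. The diagonal variances below are placeholders of our own choosing, since the paper does not report its Σ values:

```python
import numpy as np

# Hypothetical per-parameter variances for [x, y, theta, s, alpha, phi];
# the paper does not list the Sigma it actually uses.
SIGMA = np.array([4.0, 4.0, 0.01, 0.01, 0.002, 0.001])

def propagate_particles(particles, rng):
    """Random-walk motion model: p(x_t | x_{t-1}) = N(x_t; x_{t-1}, Sigma).

    particles: (N, 6) array of affine states, one row per particle.
    """
    noise = rng.standard_normal(particles.shape) * np.sqrt(SIGMA)
    return particles + noise
```

With the paper's setting of 600 particles, `particles` would be a (600, 6) array propagated once per frame.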

Observation Model. If no occlusion occurs, an image observation B_t can be generated by a 2DPCA subspace (spanned by U and V and centered at μ). Here we consider partial occlusion in the appearance model for robust tracking. Thus we assume that the centered image matrices \(\bar{B}_t\) (\(\bar{B}_t = B_t - \mu\)) can be represented by a linear combination of the projection matrices U and V. Then we draw N candidates in the state x_t. For each of the observed image matrices, we solve an ℓ2-regularization problem:

\[
\min_{A^{i}} \left\| \bar{B}^{i} - U A^{i} V^{T} \right\|_F^2 + \lambda \left\| A^{i} \right\|_F^2 , \tag{6}
\]

where i denotes the i-th sample of the state x. Thus we obtain \(A^{i}\), and the likelihood can be measured by the reconstruction error:

\[
p\left( B^{i} \mid x^{i} \right) = \exp\left( - \left\| \bar{B}^{i} - U A^{i} V^{T} \right\|_F^2 \right) . \tag{7}
\]

However, penalizing the reconstruction error alone is not enough to locate the tracked target precisely. Therefore, we present a novel likelihood function which considers both the reconstruction error and the level of the error matrix:

\[
p\left( B^{i} \mid x^{i} \right) = \exp\left( - \left\| \bar{B}^{i} - U A^{i} V^{T} - E^{i} \right\|_F^2 - \lambda \left\| E^{i} \right\|_1 \right) , \tag{8}
\]

where \(E^{i}\) can be calculated by

\[
E^{i} = \bar{B}^{i} - U A^{i} V^{T} . \tag{9}
\]

Here \(A^{i}\) is calculated by (3).
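Taken at face value, substituting (9) into (8) makes the Frobenius term vanish, so the printed likelihood reduces to exp(-λ‖E‖₁); the sketch below (our own naming) implements the equations exactly as printed and notes this:

```python
import numpy as np

def candidate_likelihood(B_centered, U, V, A, lam=0.05):
    """Likelihood of one centered candidate patch, per eqs. (8)-(9).

    Note: with E defined by (9), the residual (B - UAV^T - E) is exactly
    zero, so as printed the score reduces to exp(-lam * ||E||_1), i.e. a
    penalty on the total magnitude of the reconstruction error matrix.
    """
    recon = U @ A @ V.T
    E = B_centered - recon                 # eq. (9)
    resid = B_centered - recon - E         # zero by construction here
    score = -np.linalg.norm(resid, "fro") ** 2 - lam * np.abs(E).sum()
    return np.exp(score)
```

The tracker then selects, among the N propagated particles, the candidate with the largest likelihood value.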

Online Update. In order to handle appearance changes of the tracked target, it is necessary to update the observation model. If imprecise samples are used for the update, the tracked model may degrade. Therefore, we present an occlusion-ratio-based update mechanism. After obtaining the best candidate state of each frame, we compute the corresponding error matrix and the occlusion ratio γ. Two thresholds, thr_1 = 0.1 and thr_2 = 0.6, are introduced to define the degree of occlusion. If γ < thr_1, the tracked target is not occluded, or only a small part of it is occluded by noise; therefore, the model is updated with the sample directly. If thr_1 < γ < thr_2, the tracked target is partially occluded; the occluded part is replaced by the average observation, and the recovered candidate is used for the update. If γ > thr_2, most of the tracked target is occluded; therefore, the sample is discarded without update. After we accumulate enough samples, we use an incremental 2DPCA algorithm to update the tracker (the left- and right-projection matrices).
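The three-way update rule can be sketched as follows. The pixel-level occlusion test (thresholding the residual magnitude) is our assumption, since the excerpt does not define how γ is computed from the error matrix:

```python
import numpy as np

THR1, THR2 = 0.1, 0.6   # occlusion-degree thresholds from the paper

def select_update_sample(B, E, mu, occ_thresh=0.1):
    """Occlusion-ratio-based update rule. A pixel is treated as occluded
    when its residual magnitude exceeds `occ_thresh` (our assumption).

    Returns the sample to feed the incremental-2DPCA update,
    or None when the frame is too occluded to be usable.
    """
    occluded = np.abs(E) > occ_thresh
    gamma = occluded.mean()                  # occlusion ratio
    if gamma < THR1:                         # (almost) unoccluded: use as-is
        return B
    if gamma < THR2:                         # partial: inpaint with the mean
        recovered = B.copy()
        recovered[occluded] = mu[occluded]
        return recovered
    return None                              # heavily occluded: discard
```

Accepted samples would be buffered and, every few frames (the paper uses 5), passed to the incremental 2DPCA update of U and V.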

4. Experiments

The proposed tracking algorithm is implemented in MATLAB and runs on a computer with an Intel i5-3210 CPU (2.5 GHz) and 4 GB memory. The regularization constant λ is set to 0.05. Each image observation is resized to a fixed pixel size for the proposed 2DPCA representation. For each sequence, the location of the tracked target object is manually labeled in the first frame. 600 particles are adopted for the proposed algorithm, accounting for the trade-off between effectiveness and speed. Our tracker is incrementally updated every 5 frames.

To demonstrate the effectiveness of the proposed tracking algorithm, we select six state-of-the-art trackers for comparison: the ℓ1 tracker [10], the PN tracker [16], the VTD tracker [17], the MIL tracker [1], the Frag tracker [18], and the 2DPCA-ℓ1 tracker [14]. The comparison is run on several challenging image sequences, including Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming. The challenging factors include severe occlusion, pose change, motion blur, illumination variation, and background clutter.

4.1. Qualitative Evaluation

Severe Occlusion. We test four sequences (Occlusion 1, DavidOutdoor, Caviar 2, and Girl) with long-time partial or heavy occlusion and scale change. Figure 1(a) demonstrates that the ℓ1 algorithm, the Frag algorithm, the 2DPCA-ℓ1 algorithm, and our algorithm perform better, since these methods take partial occlusion into account. The ℓ1, 2DPCA-ℓ1, and our algorithms can handle occlusion by avoiding updating occluded pixels into the PCA basis and the 2DPCA basis, respectively. The Frag algorithm can work well on some simple occlusion cases (e.g., Figure 1(a), Occlusion 1) via its part-based representation. However, this method performs poorly on some more challenging videos (e.g., Figure 1(b), DavidOutdoor). The MIL tracker is not able to track the occluded target in DavidOutdoor and Caviar 2, since the Haar-like features the MIL method adopts are less effective in distinguishing similar objects. For the Girl video, the in- and out-of-plane rotation, partial occlusion, and scale change make it difficult to track. It can be seen that Frag and the proposed tracker work better than the other methods.

Illumination Change. Figures 1(e)-1(g) present tracking results on the Car 4, Car 11, and Singer 1 sequences, with significant changes of illumination and scale as well as background clutter. The ℓ1 tracker, the 2DPCA-ℓ1 tracker, and the proposed tracker perform well in the Car 4 sequence, whereas the other trackers drift away when the target vehicle goes underneath the overpass or the trees. For the Car 11 sequence, the 2DPCA-ℓ1 tracker and the proposed tracker achieve robust tracking results, whereas the other trackers drift away when drastic illumination change occurs or when a similar object appears. In the Singer 1 sequence, the drastic illumination and scale changes make it difficult to track. It can be seen that the proposed tracker performs better than the other methods.

Motion Blur. It is difficult for tracking algorithms to predict the location of the tracked object when the target moves abruptly. Figures 1(h) and 1(i) demonstrate the tracking results on the Deer and Jumping sequences.

[Figure 1: Sampled tracking results of the evaluated algorithms on ten challenging sequences, panels (a)-(j). Legend: ℓ1, PN, VTD, MIL, Frag, 2DPCA-ℓ1, Ours.]

In the Deer sequence, the animal's appearance is almost indistinguishable due to the fast motion, and most methods lose the target right at the beginning of the video. At frame 53, the PN tracker locates a similar deer instead of the right object. From the results we can see that the VTD tracker and our tracker perform better than the other algorithms; the 2DPCA-ℓ1 tracker may be able to track the target again by chance after failure. The appearance changes in the Jumping sequence are drastic, such that the ℓ1, Frag, and VTD trackers drift away from the object. Our tracker successfully keeps track of the object with small errors, whereas the MIL, PN, and 2DPCA-ℓ1 trackers can track the target only in some frames.

Background Clutter. Figure 1(j) illustrates the tracking results on the Lemming sequence, with scale and pose change as well as severe occlusion in a cluttered background. The Frag tracker loses the target object at the beginning of the sequence, and when the target object moves quickly or rotates, the VTD tracker fails too. In contrast, the proposed method can adapt to the heavy occlusion, in-plane rotation, and scale change.

[Figure 2: Center location error (in pixels) versus frame number on the ten sequences (Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming), comparing ℓ1, PN, VTD, MIL, Frag, 2DPCA-ℓ1, and our tracker.]

[Figure 3: Overlap rate versus frame number on the ten sequences, comparing ℓ1, PN, VTD, MIL, Frag, 2DPCA-ℓ1, and our tracker.]

4.2. Quantitative Evaluation. To conduct quantitative comparisons between the proposed tracking method and the other state-of-the-art trackers, we compute the center location error (in pixels) between the predicted and the ground-truth locations, and the overlap rate; these are the most widely used measures in quantitative evaluation. The center location error is defined as the Euclidean distance between the center locations of the tracked objects and their corresponding labeled ground truth. Figure 2 shows the center error plots, where a smaller center error means a more accurate result in each frame. The overlap rate is defined as \(\mathrm{score} = \mathrm{area}(R_t \cap R_g) / \mathrm{area}(R_t \cup R_g)\), where \(R_t\) is the tracked bounding box of each frame and \(R_g\) is the corresponding ground-truth bounding box. Figure 3 shows the overlap rates of each tracking algorithm for all sequences. Generally speaking, our tracker performs favorably against the other methods.

4.3. Computational Complexity. The most time-consuming part of a generative tracking algorithm is computing the coefficients using the basis vectors. For the ℓ1 tracker, the computation of the coefficients using the LASSO algorithm is O(d^2 + dk), where d is the dimension of the subspace and k is the number of basis vectors. The load of the 2DPCA-ℓ1 tracker [14] with ℓ1-regularization is O(mdk), where m stands for the number of iterations (e.g., 10 on average). For our tracker, the trivial templates are abandoned and square templates are not used, so the load of our tracker is O(dk). The tracking speeds of the ℓ1 tracker, the 2DPCA-ℓ1 tracker, and our method are 0.25 fps, 2.2 fps, and 5.2 fps, respectively (fps: frames per second). Therefore, our tracker is more effective and much faster than the aforementioned trackers.

5. Conclusion

In this paper we present a fast and effective tracking algorithm. We first clarify the benefits of utilizing 2DPCA basis vectors. Then we formulate the tracking process with ℓ2-regularization. Finally, we update the appearance model accounting for partial occlusion. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed method outperforms several state-of-the-art trackers.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This project is supported by the Shandong Provincial Natural Science Foundation, China (no. ZR2015FL009).

References

[1] B. Babenko, M.-H. Yang, and S. Belongie, "Visual tracking with online multiple instance learning," in Proceedings of the 22nd IEEE Conference on Computer Vision and Pattern Recognition, pp. 983-990, San Francisco, Calif, USA, 2009.

[2] K. H. Zhang, L. Zhang, and M.-H. Yang, "Real-time compressive tracking," in Proceedings of the 12th European Conference on Computer Vision, pp. 864-877, Florence, Italy, 2012.

[3] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210-227, 2009.

[4] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861-2873, 2010.

[5] J. Mairal, M. Elad, and G. Sapiro, "Sparse representation for color image restoration," IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 53-69, 2008.

[6] Y. Wu, J. Lim, and M.-H. Yang, "Online object tracking: a benchmark," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 2411-2418, Portland, Ore, USA, June 2013.

[7] X. Jia, H. Lu, and M.-H. Yang, "Visual tracking via adaptive structural local sparse appearance model," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1822-1829, Providence, RI, USA, June 2012.

[8] W. Zhong, H. Lu, and M.-H. Yang, "Robust object tracking via sparsity-based collaborative model," in Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1838-1845, Providence, RI, USA, June 2012.

[9] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, "Robust visual tracking via multi-task sparse learning," in Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition, pp. 2042-2049, Providence, RI, USA, 2012.

[10] X. Mei and H. Ling, "Robust visual tracking using ℓ1 minimization," in Proceedings of the 12th IEEE International Conference on Computer Vision, pp. 1436-1443, Kyoto, Japan, September 2009.

[11] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, "Incremental learning for robust visual tracking," International Journal of Computer Vision, vol. 77, no. 1-3, pp. 125-141, 2008.

[12] B. H. Zhuang, H. Lu, Z. Y. Xiao, and D. Wang, "Visual tracking via discriminative sparse similarity map," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1872-1881, 2014.

[13] C. Qian, Y. B. Zhuang, and Z. Z. Xu, "Visual tracking with structural appearance model based on extended incremental non-negative matrix factorization," Neurocomputing, vol. 136, pp. 327-336, 2014.

[14] D. Wang and H. Lu, "Object tracking via 2DPCA and ℓ1-regularization," IEEE Signal Processing Letters, vol. 19, no. 11, pp. 711-714, 2012.

[15] D. Wang, H. Lu, and X. Li, "Two dimensional principal components of natural images and its application," Neurocomputing, vol. 74, no. 17, pp. 2745-2753, 2011.

[16] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N learning: bootstrapping binary classifiers by structural constraints," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 49-56, San Francisco, Calif, USA, June 2010.

[17] J. Kwon and K. M. Lee, "Visual tracking decomposition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1269-1276, San Francisco, Calif, USA, June 2010.

[18] A. Adam, E. Rivlin, and I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proceedings of the 19th IEEE Conference on Computer Vision and Pattern Recognition, pp. 798-805, New York, NY, USA, 2006.


2 Journal of Electrical and Computer Engineering

good performance in many scenes. However, the coefficients and the sparse error matrix used in this method need an iterative algorithm to compute, and the space and time complexity are too large to meet real-time tracking requirements.

Motivated by the aforementioned work, this paper presents a robust and fast ℓ2-norm tracking algorithm with an adaptive appearance model. The contributions of this work are threefold: (1) we exploit the strength of 2DPCA subspace representation using ℓ2-regularization; (2) we remove the trivial templates from the sparse tracking method; (3) we present a novel likelihood function that considers the reconstruction error, which is derived from the orthogonal left-projection matrix and the orthogonal right-projection matrix. Both qualitative and quantitative evaluations on video sequences demonstrate that the proposed method can handle occlusion, illumination changes, scale changes, and nonrigid appearance changes effectively at a lower computational complexity and can run in real time.

2. Object Representation via 2DPCA and ℓ2-Regularization

Principal component analysis (PCA) is a classical feature extraction and data representation technique widely used in the areas of pattern recognition and computer vision. Compared with PCA, two-dimensional principal component analysis (2DPCA) [15] is based on 2D matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector beforehand. That is, the extraction of image features is computationally more efficient using 2DPCA than PCA. In this paper, we represent the object by 2D basis matrices. Given a series of image matrices [B_1, B_2, ..., B_K], the projection coefficient matrices [A_1, A_2, ..., A_K] can be obtained by solving the following function:

\[
\min_{U,V,A} \frac{1}{K} \sum_{i=1}^{K} \left\| B_i - U A_i V^{T} \right\|_F^2, \tag{1}
\]

where ‖·‖_F denotes the Frobenius norm, U represents the orthogonal left-projection matrix, and V represents the orthogonal right-projection matrix.

The cost function is set as an ℓ2-regularized quadratic function:

\[
\begin{aligned}
J(A) &= \frac{1}{2}\left\| B - U A V^{T} \right\|_F^2 + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\operatorname{tr}\!\left[\left(B - U A V^{T}\right)^{T}\left(B - U A V^{T}\right)\right] + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\operatorname{tr}\!\left[\left(B^{T} - V A^{T} U^{T}\right)\left(B - U A V^{T}\right)\right] + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\operatorname{tr}\!\left[B^{T}B - B^{T} U A V^{T} - V A^{T} U^{T} B + V A^{T} U^{T} U A V^{T}\right] + \lambda \left\| A \right\|_F^2 \\
&= \frac{1}{2}\operatorname{tr}\!\left(B^{T}B\right) - \operatorname{tr}\!\left(B^{T} U A V^{T}\right) + \frac{1}{2}\operatorname{tr}\!\left(V A^{T} U^{T} U A V^{T}\right) + \lambda \left\| A \right\|_F^2 .
\end{aligned} \tag{2}
\]

Here λ is a constant. The solution of (2) is easily derived as follows:

\[
\begin{gathered}
\frac{\partial J(A)}{\partial A} = -U^{T} B V + U^{T} U A V^{T} V + 2\lambda A = 0 \\
\Downarrow \\
U^{T} U A V^{T} V + 2\lambda A = U^{T} B V \\
U^{T} U A V^{T} V + 2\lambda I_1 A I_2 = U^{T} B V \\
\left(V^{T} V \otimes U^{T} U + I_2^{T} \otimes 2\lambda I_1\right) \operatorname{vec}(A) = \operatorname{vec}\!\left(U^{T} B V\right) \\
\operatorname{vec}(A) = \left(V^{T} V \otimes U^{T} U + I_2^{T} \otimes 2\lambda I_1\right)^{-1} \operatorname{vec}\!\left(U^{T} B V\right).
\end{gathered} \tag{3}
\]

Here I_1 and I_2 denote identity matrices, ⊗ stands for the Kronecker product, and vec(A) denotes the vectorized version of the matrix A. Therefore, we can obtain the projection coefficient matrix A. Let P = (VᵀV ⊗ UᵀU + I_2ᵀ ⊗ 2λI_1)⁻¹. Obviously, the matrix P is independent of B, so we can precompute it once per frame, before looping over the candidates. When a new candidate comes, we simply calculate P vec(UᵀBV) to obtain vec(A), which makes the proposed method very fast.
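To make the precomputation concrete, the following is a minimal NumPy sketch of solving (3) (our own illustration, not the authors' MATLAB code); the function names are hypothetical, and vec(·) is taken as column-major stacking:

```python
import numpy as np

def precompute_P(U, V, lam):
    # P = (V^T V (x) U^T U + I_2^T (x) 2*lam*I_1)^(-1) depends only on the
    # subspace bases, so it is computed once per frame, before the candidate loop.
    m, n = U.shape[1], V.shape[1]
    M = np.kron(V.T @ V, U.T @ U) + 2.0 * lam * np.eye(m * n)
    return np.linalg.inv(M)

def solve_coefficients(P, U, V, B):
    # vec(A) = P vec(U^T B V); vec() stacks columns (Fortran order).
    rhs = (U.T @ B @ V).flatten(order="F")
    return (P @ rhs).reshape((U.shape[1], V.shape[1]), order="F")
```

Note that for orthonormal U and V the solution reduces to A = UᵀBV / (1 + 2λ), which is why each candidate costs only a small matrix product.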

Here we abandon the trivial templates completely, so that the target is represented entirely by the 2DPCA subspace. After obtaining the projection coefficient matrix A from (3), the error matrix can be computed in a single step:

\[
E = B - U A V^{T}. \tag{4}
\]

3. Tracking Framework Based on 2DPCA and ℓ2-Regularization

Visual tracking is treated as a Bayesian inference task in a Markov model with hidden state variables. Given a series of image matrices B_{1:t} = [B_1, B_2, ..., B_t], we aim to estimate the hidden state variable x_t recursively:

\[
p\left(x_t \mid B_{1:t}\right) \propto p\left(B_t \mid x_t\right) \int p\left(x_t \mid x_{t-1}\right) p\left(x_{t-1} \mid B_{1:t-1}\right) dx_{t-1}, \tag{5}
\]

where p(x_t | x_{t−1}) is the motion model that represents the state transition between two consecutive states, and p(B_t | x_t) is the observation model, which defines the likelihood function.

Motion Model. We apply an affine image warp to model the target motion between consecutive states. Six parameters of the affine transform are used to model p(x_t | x_{t−1}) of a tracked target. Let x_t = {x_t, y_t, θ_t, s_t, α_t, φ_t}, where x_t, y_t, θ_t, s_t, α_t, and φ_t denote the x and y translations, rotation angle, scale, aspect ratio, and skew, respectively. The state transition is formulated as a random walk, that is, p(x_t | x_{t−1}) = N(x_t; x_{t−1}, Σ), where Σ is a diagonal covariance matrix whose entries are the variances of the affine parameters.
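As an illustration of this random-walk transition, the particle set can be propagated as below (a sketch with assumed standard-deviation values; the paper does not list its Σ):

```python
import numpy as np

def propagate_particles(particles, sigma, rng):
    """Random-walk transition p(x_t | x_{t-1}) = N(x_t; x_{t-1}, Sigma).

    particles : (N, 6) array of affine states [x, y, theta, s, alpha, phi]
    sigma     : length-6 standard deviations (square roots of diag(Sigma))
    """
    noise = rng.normal(size=particles.shape) * np.asarray(sigma)
    return particles + noise
```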

Observation Model. If no occlusion occurs, an image observation B_t can be generated by a 2DPCA subspace (spanned by U and V and centered at μ). Here we account for partial occlusion in the appearance model for robust tracking. Thus, we assume that the centered image matrix B̄_t (B̄_t = B_t − μ) can be represented by a linear combination determined by the projection matrices U and V. We then draw N candidates of the state x_t. For each of the observed image matrices, we solve an ℓ2-regularized problem:

\[
\min_{A^{i}} \left\| \bar{B}^{i} - U A^{i} V^{T} \right\|_F^2 + \lambda \left\| A^{i} \right\|_F^2, \tag{6}
\]

where i denotes the i-th sample of the state x. Thus we obtain A^i, and the likelihood can be measured by the reconstruction error:

\[
p\left(B^{i} \mid x^{i}\right) = \exp\left(-\left\| \bar{B}^{i} - U A^{i} V^{T} \right\|_F^2\right). \tag{7}
\]

However, it is worth noting that penalizing the reconstruction error alone does not fully benefit the precise localization of the tracked target. Therefore, we present a novel likelihood function that considers both the reconstruction error and the level of the error matrix:

\[
p\left(B^{i} \mid x^{i}\right) = \exp\left(-\left\| \bar{B}^{i} - U A^{i} V^{T} - E^{i} \right\|_F^2 - \lambda \left\| E^{i} \right\|_1\right), \tag{8}
\]

where E^i is calculated by

\[
E^{i} = \bar{B}^{i} - U A^{i} V^{T}. \tag{9}
\]

Here A^i is calculated by (3).
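A NumPy sketch of the likelihood (8) follows (our own naming; `B_centered` stands for the mean-subtracted candidate B̄ⁱ). Note that when Eⁱ is taken exactly as in (9), the residual term vanishes and the likelihood reduces to exp(−λ‖Eⁱ‖₁):

```python
import numpy as np

def likelihood(B_centered, U, V, A, E, lam=0.05):
    # Eq. (8): residual after subtracting the reconstruction and the error
    # matrix, plus an L1 penalty on the error matrix itself.
    resid = B_centered - U @ A @ V.T - E
    return float(np.exp(-np.linalg.norm(resid, "fro") ** 2
                        - lam * np.abs(E).sum()))
```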

Online Update. In order to handle the appearance change of the tracked target, it is necessary to update the observation model. However, if imprecise samples are used for updating, the tracked model may degrade. Therefore, we present an occlusion-ratio-based update mechanism. After obtaining the best candidate state of each frame, we compute the corresponding error matrix and the occlusion ratio γ. Two thresholds, thr_1 = 0.1 and thr_2 = 0.6, are introduced to define the degree of occlusion. If γ < thr_1, the tracked target is not occluded, or only a small part of it is corrupted by noise; the model is then updated with the sample directly. If thr_1 < γ < thr_2, the tracked target is partially occluded; the occluded part is replaced by the average observation, and the recovered candidate is used for the update. If γ > thr_2, most of the tracked target is occluded, and the sample is discarded without updating. After we accumulate enough samples, we use an incremental 2DPCA algorithm to update the tracker (the left- and right-projection matrices).
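The three-way update rule can be sketched as follows (a hypothetical per-pixel threshold `tau` decides which entries of E count as occluded; the paper specifies only thr₁ and thr₂):

```python
import numpy as np

def update_decision(B, E, mu, thr1=0.1, thr2=0.6, tau=0.1):
    """Occlusion-ratio-based update (sketch): B is the best candidate
    observation, E its error matrix from (4), mu the average observation."""
    occluded = np.abs(E) > tau
    gamma = occluded.mean()            # occlusion ratio
    if gamma < thr1:                   # (almost) no occlusion
        return "update", B
    if gamma < thr2:                   # partial occlusion: recover pixels
        return "update_recovered", np.where(occluded, mu, B)
    return "discard", None             # heavy occlusion: skip this sample
```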

4. Experiments

The proposed tracking algorithm is implemented in MATLAB and runs on a computer with an Intel i5-3210 CPU (2.5 GHz) and 4 GB of memory. The regularization parameter λ is set to 0.05. The image observation is resized to a fixed template size for the proposed 2DPCA representation. For each sequence, the location of the tracked target object is manually labeled in the first frame. 600 particles are adopted for the proposed algorithm, accounting for the trade-off between effectiveness and speed. Our tracker is incrementally updated every 5 frames.

To demonstrate the effectiveness of the proposed tracking algorithm, we select six state-of-the-art trackers for comparison: the ℓ1 tracker [10], the PN tracker [16], the VTD tracker [17], the MIL tracker [1], the Frag tracker [18], and the 2DPCA-ℓ1 tracker [14]. They are evaluated on several challenging image sequences, including Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming. The challenging factors include severe occlusion, pose change, motion blur, illumination variation, and background clutter.

4.1. Qualitative Evaluation

Severe Occlusion. We test four sequences (Occlusion 1, DavidOutdoor, Caviar 2, and Girl) with long-time partial or heavy occlusion and scale change. Figure 1(a) demonstrates that the ℓ1 algorithm, the Frag algorithm, 2DPCA-ℓ1, and our algorithm perform better, since these methods take partial occlusion into account. The ℓ1 algorithm, 2DPCA-ℓ1, and our algorithm can handle occlusion by avoiding updating occluded pixels into the PCA basis and the 2DPCA basis, respectively. The Frag algorithm can work well on some simple occlusion cases (e.g., Figure 1(a), Occlusion 1) via its part-based representation. However, this method performs poorly on some more challenging videos (e.g., Figure 1(b), DavidOutdoor). The MIL tracker is not able to track the occluded target in DavidOutdoor and Caviar 2, since the Haar-like features the MIL method adopts are less effective in distinguishing similar objects. For the Girl video, the in- and out-of-plane rotation, partial occlusion, and scale change make it difficult to track. It can be seen that Frag and the proposed tracker work better than the other methods.

Illumination Change. Figures 1(e)–1(g) present tracking results on the Car 4, Car 11, and Singer 1 sequences, with significant changes of illumination and scale as well as background clutter. The ℓ1 tracker, the 2DPCA-ℓ1 tracker, and the proposed tracker perform well in the Car 4 sequence, whereas the other trackers drift away when the target vehicle goes underneath the overpass or the trees. For the Car 11 sequence, 2DPCA-ℓ1 and the proposed tracker achieve robust tracking results, whereas the other trackers drift away when drastic illumination change occurs or when a similar object appears. In the Singer 1 sequence, the drastic illumination and scale changes make it difficult to track. It can be seen that the proposed tracker performs better than the other methods.

Motion Blur. It is difficult for tracking algorithms to predict the location of the tracked object when the target moves abruptly. Figures 1(h) and 1(i) demonstrate the tracking results on the Deer and Jumping sequences. In the Deer sequence,

Figure 1: Sampled tracking results of the evaluated algorithms (PN, VTD, MIL, Frag, 2DPCA-ℓ1, and ours) on ten challenging sequences.

the animal appearance is almost indistinguishable due to the fast motion, and most methods lose the target right at the beginning of the video. At frame 53, the PN tracker locates a similar deer instead of the right object. From the results, we can see that the VTD tracker and our tracker perform better than the other algorithms; the 2DPCA-ℓ1 tracker may be able to track the target again by chance after failure. The appearance changes in the Jumping sequence are so drastic that the ℓ1, Frag, and VTD trackers drift away from the object. Our tracker successfully keeps track of the object with small errors, whereas the MIL, PN, and 2DPCA-ℓ1 trackers can track the target only in some frames.

Background Clutter. Figure 1(j) illustrates the tracking results on the Lemming sequence, with scale and pose change as well as severe occlusion in a cluttered background. The Frag tracker loses the target object at the beginning of the sequence, and when the target object moves quickly or rotates, the VTD tracker fails too. In contrast, the proposed method can adapt to heavy occlusion, in-plane rotation, and scale change.

Figure 2: Center location error (in pixels) versus frame number on the ten sequences (Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming).

Figure 3: Overlap rate versus frame number on the ten sequences (Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming).


4.2. Quantitative Evaluation

To conduct quantitative comparisons between the proposed tracking method and the other state-of-the-art trackers, we compute the center location error (in pixels) between the predicted and the ground-truth locations, and the overlap rate; these are the most widely used measures in quantitative evaluation. The center location error is usually defined as the Euclidean distance between the center location of the tracked object and the corresponding labeled ground truth. Figure 2 shows the center error plots, where a smaller center error means a more accurate result in each frame. The overlap rate is defined as score = area(R_t ∩ R_g)/area(R_t ∪ R_g), where R_t is the tracked bounding box of each frame and R_g is the corresponding ground-truth bounding box. Figure 3 shows the overlap rates of each tracking algorithm for all sequences. Generally speaking, our tracker performs favorably against the other methods.
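For axis-aligned bounding boxes, the overlap score can be computed as below (a standard sketch; the (x, y, w, h) box convention is our assumption):

```python
def overlap_rate(rect_t, rect_g):
    """score = area(Rt intersect Rg) / area(Rt union Rg).

    Rectangles are (x, y, w, h) with (x, y) the top-left corner.
    """
    xt, yt, wt, ht = rect_t
    xg, yg, wg, hg = rect_g
    ix = max(0.0, min(xt + wt, xg + wg) - max(xt, xg))  # intersection width
    iy = max(0.0, min(yt + ht, yg + hg) - max(yt, yg))  # intersection height
    inter = ix * iy
    union = wt * ht + wg * hg - inter
    return inter / union if union > 0 else 0.0
```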

4.3. Computational Complexity

The most time-consuming part of a generative tracking algorithm is computing the coefficients with respect to the basis vectors. For the ℓ1 tracker, the computation of the coefficients using the LASSO algorithm costs O(d² + dk), where d is the dimension of the subspace and k is the number of basis vectors. The load of the 2DPCA-ℓ1 tracker [14] with ℓ1-regularization is O(mdk), where m stands for the number of iterations (e.g., 10 on average). For our tracker, the trivial templates are abandoned and square templates are not used, so the load of our tracker is O(dk). The tracking speeds of the ℓ1 tracker, 2DPCA-ℓ1, and our method are 0.25 fps, 22 fps, and 52 fps, respectively (fps: frames per second). Therefore, our tracker is more effective and much faster than the aforementioned trackers.

5. Conclusion

In this paper, we present a fast and effective tracking algorithm. We first clarify the benefits of utilizing 2DPCA basis vectors. Then we formulate the tracking process with ℓ2-regularization. Finally, we update the appearance model while accounting for partial occlusion. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed method outperforms several state-of-the-art trackers.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This project is supported by the Shandong Provincial Natural Science Foundation, China (no. ZR2015FL009).

References

[1] B. Babenko, M.-H. Yang, and S. Belongie, "Visual tracking with online multiple instance learning," in Proceedings of the 22nd IEEE Conference on Computer Vision and Pattern Recognition, pp. 983–990, San Francisco, Calif, USA, 2009.

[2] K. H. Zhang, L. Zhang, and M.-H. Yang, "Real-time compressive tracking," in Proceedings of the 12th European Conference on Computer Vision, pp. 864–877, Florence, Italy, 2012.

[3] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[4] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.

[5] J. Mairal, M. Elad, and G. Sapiro, "Sparse representation for color image restoration," IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 53–69, 2008.

[6] Y. Wu, J. Lim, and M.-H. Yang, "Online object tracking: a benchmark," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 2411–2418, Portland, Ore, USA, June 2013.

[7] X. Jia, H. Lu, and M.-H. Yang, "Visual tracking via adaptive structural local sparse appearance model," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1822–1829, Providence, RI, USA, June 2012.

[8] W. Zhong, H. Lu, and M.-H. Yang, "Robust object tracking via sparsity-based collaborative model," in Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1838–1845, Providence, RI, USA, June 2012.

[9] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, "Robust visual tracking via multi-task sparse learning," in Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition, pp. 2042–2049, Providence, RI, USA, 2012.

[10] X. Mei and H. Ling, "Robust visual tracking using ℓ1 minimization," in Proceedings of the 12th IEEE International Conference on Computer Vision, pp. 1436–1443, Kyoto, Japan, September 2009.

[11] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, "Incremental learning for robust visual tracking," International Journal of Computer Vision, vol. 77, no. 1–3, pp. 125–141, 2008.

[12] B. H. Zhuang, H. Lu, Z. Y. Xiao, and D. Wang, "Visual tracking via discriminative sparse similarity map," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1872–1881, 2014.

[13] C. Qian, Y. B. Zhuang, and Z. Z. Xu, "Visual tracking with structural appearance model based on extended incremental non-negative matrix factorization," Neurocomputing, vol. 136, pp. 327–336, 2014.

[14] D. Wang and H. Lu, "Object tracking via 2DPCA and ℓ1-regularization," IEEE Signal Processing Letters, vol. 19, no. 11, pp. 711–714, 2012.

[15] D. Wang, H. Lu, and X. Li, "Two dimensional principal components of natural images and its application," Neurocomputing, vol. 74, no. 17, pp. 2745–2753, 2011.

[16] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N learning: bootstrapping binary classifiers by structural constraints," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 49–56, San Francisco, Calif, USA, June 2010.

[17] J. Kwon and K. M. Lee, "Visual tracking decomposition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1269–1276, San Francisco, Calif, USA, June 2010.

[18] A. Adam, E. Rivlin, and I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proceedings of the 19th IEEE Conference on Computer Vision and Pattern Recognition, pp. 798–805, New York, NY, USA, 2006.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 3: Research Article Object Tracking via 2DPCA and -Regularizationdownloads.hindawi.com/journals/jece/2016/7975951.pdftracking as a binary classi ... presents a robust and fast 2 norm

Journal of Electrical and Computer Engineering 3

119901(x119905| x119905minus1) = 119873(x

119905 x119905minus1 Σ) where Σ is a diagonal covariance

matrix which indicates the variances of affine parameters

Observation Model If no occlusion occurs an image obser-vation B

119905can be generated by a 2DPCA subspace (spanned

by U and V and centered at 120583) Here we consider thepartial occlusion in the appearancemodel for robust trackingThus we assume that the centered image matrices B

119905(B119905=

B119905minus 120583) can be represented by the linear combination of the

projection matricesU and V Then we draw119873 candidates inthe state x

119905 For each of the observed imagematrices we solve

a ℓ2-regularization problem

minA119894

100381710038171003817100381710038171003817B119894 minus UA119894VT100381710038171003817100381710038171003817

2

119865

+ 12058210038171003817100381710038171003817A119894100381710038171003817100381710038172

2 (6)

where 119894 denotes the 119894th sample of the state x Thus we obtainA119894 and the likelihood can be measured by the reconstructionerror

119901 (B119894 | B119894) = exp(minus100381710038171003817100381710038171003817B119894 minus UA119894VT100381710038171003817100381710038171003817

2

119865

) (7)

However it is noted that just by penalizing error levelthe precise location of the tracked target can be benefitedTherefore we present a novel likelihood function whichconsiders both the reconstruction error and the level of errormatrix

119901 (B119894 | B119894) = exp(minus100381710038171003817100381710038171003817B119894 minus UA119894VT

minus E119894100381710038171003817100381710038171003817

2

119865

minus 12058210038171003817100381710038171003817E119894100381710038171003817100381710038171) (8)

where E119894 can be calculated by (9)

E119894 = B119894 minus UA119894VT (9)

Here A119894 is calculated by (3)

Online Update In order to handle the appearance changeof tracked target it is necessary to update the observa-tion model If some imprecise samples are used to updatethe tracked model may degrade Therefore we present anocclusion-radio-based update mechanism After obtainingthe best candidate state of each frame we compute thecorresponding error matrix and the occlusion ratio 120574 Twothresholds thr

1= 01 and thr

2= 06 are introduced to define

the degree of occlusion If 120574 lt thr1 the tracked target

is not occluded or a small part of it is occluded by thenoise Therefore the model with sample is updated directlyIf thr1lt 120574 lt thr

2 the tracked target is partially occluded

The occluded part is replaced by the average observationand the recovered candidate is used for update If 120574 gt

thr2 most part of the tracked target is occluded Therefore

the sample is discarded without update After we cumulateenough samples we use an incremental 2DPCA algorithm toupdate the tracker (left- and right-projection matrices)

4 Experiments

The proposed tracking algorithm is implemented in MAT-LAB which runs on a computer with Intel i5-3210 CPU

(25GHz) with 4GB memory The regularization 120582 is setto 005 The image observation is resized to pixels for theproposed 2DPCA representation For each sequence thelocation of the tracked target object is manually labeled inthe first frame 600 particles are adopted for the proposedalgorithm accounting for the trade-off between effectivenessand speed Our tracker is incrementally updated every 5frames

To demonstrate the effectiveness of the proposed trackingalgorithm we select six state-of-the-art trackers the ℓ

1

tracker [10] the PN tracker [16] the VTD tracker [17] theMIL tracker [1] the Frag tracker [18] and the 2DPCAℓ

1

tracker [14] for comparison on several challenging imagesequences including Occlusion 1 David Outdoor Caviar 2Girl Car 4 Car 11 Singer 1Deer Jumping and Lemming Thechallenging factors include severe occlusion pose changemotion blur illumination variation and background clutter

41 Qualitative Evaluation

Severe Occlusion We test four sequences (Occlusion 1 David-Outdoor Caviar 2 and Girl) with long time partial or heavyocclusion and scale change Figure 1(a) demonstrates that ℓ

1

algorithm Frag algorithm 2DPCAℓ1 and our algorithms

perform better since these methods take partial occlusioninto account ℓ

1algorithm 2DPCAℓ

1 and our algorithms

can handle occlusion by avoiding updating occluded pixelsinto the PCA basis and 2DPCA basis separately Frag algo-rithm can work well on some simple occlusion cases (egFigure 1(a) Occlusion 1) via the part-based representationHowever this method performs poorly on some more chal-lenging videos (eg Figure 1(b)DavidOutdoor) MIL trackeris not able to track the occluded target in DavidOutdoorand Caviar 2 since the Harr-like features the MIL methodadopted are less effective in distinguishing the similar objectsFor the Girl video the in- and out-of-plane rotation partialocclusion and the scale change make it difficult to track Itcan be seen that the Frag and the proposed tracker workbetter than the other methods

Illumination Change Figures 1(e) and 1(f) present trackingresults using the Car 4 Car 11 and Singer 1 sequenceswith significant change of illumination and scale as well asbackground clutter The ℓ

1tracker 2DPCAℓ

1tracker and

the proposed tracker perform well in the Car 4 sequenceswhereas the other trackers drift away when the target vehiclegoes underneath the overpass or the trees For Car 11sequences 2DPCAℓ

1and the proposed tracker can achieve

robust tracking results whereas the other trackers drift awaywhen drastic illumination change occurs or when similarobject appears In the Singer 1 sequence the drastic illumi-nation and scale change make it difficult to track It can beseen that the proposed tracker performs better than the othermethods

Motion Blur It is difficult for tracking algorithms to predictthe location of the tracked objects when the target movesabruptly Figures 1(h) and 1(i) demonstrate the trackingresults in theDeer and Jumping sequences InDeer sequences

4 Journal of Electrical and Computer Engineering

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(a)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(b)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(c)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(d)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(e)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(f)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(g)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(h)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(i)

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

(j)

Figure 1 Sampled tracking results of partial evaluated algorithms on ten challenging sequences

the animal appearance is almost indistinguishable due to thefast motion and most methods lost the target right at thebeginning of the video At frame 53 PN tracker locates thesimilar deer instead of the right object From the results wecan see that the VTD tracker and our tracker perform betterthan the other algorithms 2DPCAℓ

1tracker may be able to

track the target again by chance after failure The appearancechanges of the Jumping sequences are drastic such that theℓ1 Frag and VTD tracker drift away from the object Our

tracker successfully keeps track of the object with small errors

whereas the MIL PN and 2DPCAℓ1can track the target in

some frames

Background Clutter Figure 1(j) illustrates the tracking resultsin the Lemming sequences with scale and pose change as wellas severe occlusion in cluttered background Frag tracker lostthe target object at the beginning of the sequence and whenthe target object moves quickly or rotates the VTD trackerfails too In contrast the proposed method can adapt to theheavy occlusion in-plane rotation and scale change

Journal of Electrical and Computer Engineering 5

[Figure 2: Center location error (in pixels) evaluation. Ten panels, one per sequence (x-axis: frame number; y-axis: center error): Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming. Legend: PN, VTD, MIL, Frag, 2DPCAℓ1, Ours.]


[Figure 3: Overlap rate evaluation. Ten panels, one per sequence (x-axis: frame number; y-axis: overlap rate): Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming. Legend: PN, VTD, MIL, Frag, 2DPCAℓ1, Ours.]


4.2. Quantitative Evaluation. To conduct quantitative comparisons between the proposed tracking method and the other state-of-the-art trackers, we compute the two most widely used evaluation criteria: the center location error (in pixels) between the predicted and ground-truth locations, and the overlap rate. The center location error is defined as the Euclidean distance between the center of the tracked bounding box and the center of the corresponding labeled ground truth. Figure 2 shows the center error plots, where a smaller center error means a more accurate result in each frame. The overlap rate is defined as score = area(R_t ∩ R_g) / area(R_t ∪ R_g), where R_t is the tracked bounding box in each frame and R_g is the corresponding ground-truth bounding box. Figure 3 shows the overlap rates of each tracking algorithm for all sequences. Generally speaking, our tracker performs favorably against the other methods.
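The two criteria above are straightforward to compute from axis-aligned boxes. The following sketch illustrates them; the function names and the (x, y, w, h) box convention are illustrative choices, not taken from the paper's code.

```python
# Evaluation metrics from Section 4.2, for boxes given as (x, y, w, h).
import math

def center_error(box_t, box_g):
    """Euclidean distance (in pixels) between the centers of the
    tracked and ground-truth bounding boxes."""
    cx_t, cy_t = box_t[0] + box_t[2] / 2.0, box_t[1] + box_t[3] / 2.0
    cx_g, cy_g = box_g[0] + box_g[2] / 2.0, box_g[1] + box_g[3] / 2.0
    return math.hypot(cx_t - cx_g, cy_t - cy_g)

def overlap_rate(box_t, box_g):
    """score = area(R_t ∩ R_g) / area(R_t ∪ R_g)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_t[0], box_g[0])
    y1 = max(box_t[1], box_g[1])
    x2 = min(box_t[0] + box_t[2], box_g[0] + box_g[2])
    y2 = min(box_t[1] + box_t[3], box_g[1] + box_g[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = box_t[2] * box_t[3] + box_g[2] * box_g[3] - inter
    return inter / union if union > 0 else 0.0
```

A perfect track gives an overlap rate of 1 and a center error of 0; a track that misses the object entirely gives an overlap rate of 0.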

4.3. Computational Complexity. The most time-consuming part of a generative tracking algorithm is computing the representation coefficients over the basis vectors. For the ℓ1 tracker, computing the coefficients with the LASSO algorithm costs O(d² + dk), where d is the dimension of the subspace and k is the number of basis vectors. The load of the 2DPCAℓ1 tracker [14] with ℓ1-regularization is O(mdk), where m is the number of iterations (e.g., 10 on average). In our tracker, the trivial templates are abandoned and square templates are not used, so the load is only O(dk). The tracking speeds of the ℓ1 tracker, the 2DPCAℓ1 tracker, and our method are 0.25 fps, 2.2 fps, and 5.2 fps, respectively (fps: frames per second). Therefore, our tracker is both more effective and much faster than the aforementioned trackers.
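The speed advantage comes from the ℓ2-regularized solve having a closed form, so no iterative sparse solver is needed. Below is a minimal one-dimensional sketch of that solve (the paper's actual model works with left- and right-projection matrices on 2D image patches; the names and the vectorized formulation here are illustrative assumptions):

```python
# Ridge (l2-regularized) coefficient solve: min_z ||y - U z||^2 + lam ||z||^2.
import numpy as np

def ridge_coefficients(U, y, lam=0.1):
    """Closed form z = (U^T U + lam I)^{-1} U^T y.
    One linear solve, no iterations (unlike the LASSO used by l1 trackers)."""
    k = U.shape[1]
    return np.linalg.solve(U.T @ U + lam * np.eye(k), U.T @ y)

def ridge_coefficients_orthonormal(U, y, lam=0.1):
    """When the basis has orthonormal columns (U^T U = I), as with
    PCA/2DPCA bases, the solve collapses to a projection and a scalar
    division: z = U^T y / (1 + lam), i.e., O(dk) work."""
    return (U.T @ y) / (1.0 + lam)
```

For orthonormal bases both routines return the same coefficients, which is why dropping the trivial templates reduces the per-frame cost to a single O(dk) projection.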

5 Conclusion

In this paper, we present a fast and effective tracking algorithm. We first clarify the benefits of utilizing 2DPCA basis vectors. Then we formulate the tracking process with ℓ2-regularization. Finally, we update the appearance model to account for partial occlusion. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed method outperforms several state-of-the-art trackers.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This project is supported by the Shandong Provincial Natural Science Foundation, China (no. ZR2015FL009).

References

[1] B. Babenko, M.-H. Yang, and S. Belongie, "Visual tracking with online multiple instance learning," in Proceedings of the 22nd IEEE Conference on Computer Vision and Pattern Recognition, pp. 983-990, San Francisco, Calif, USA, 2009.

[2] K. H. Zhang, L. Zhang, and M.-H. Yang, "Real time compressive tracking," in Proceedings of the 12th European Conference on Computer Vision, pp. 864-877, Florence, Italy, 2012.

[3] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210-227, 2009.

[4] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861-2873, 2010.

[5] J. Mairal, M. Elad, and G. Sapiro, "Sparse representation for color image restoration," IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 53-69, 2008.

[6] Y. Wu, J. Lim, and M.-H. Yang, "Online object tracking: a benchmark," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 2411-2418, Portland, Ore, USA, June 2013.

[7] X. Jia, H. Lu, and M.-H. Yang, "Visual tracking via adaptive structural local sparse appearance model," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1822-1829, Providence, RI, USA, June 2012.

[8] W. Zhong, H. Lu, and M.-H. Yang, "Robust object tracking via sparsity-based collaborative model," in Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1838-1845, Providence, RI, USA, June 2012.

[9] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, "Robust visual tracking via multi-task sparse learning," in Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition, pp. 2042-2049, Providence, RI, USA, 2012.

[10] X. Mei and H. Ling, "Robust visual tracking using ℓ1 minimization," in Proceedings of the 12th IEEE International Conference on Computer Vision, pp. 1436-1443, Kyoto, Japan, September 2009.

[11] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, "Incremental learning for robust visual tracking," International Journal of Computer Vision, vol. 77, no. 1-3, pp. 125-141, 2008.

[12] B. H. Zhuang, H. Lu, Z. Y. Xiao, and D. Wang, "Visual tracking via discriminative sparse similarity map," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1872-1881, 2014.

[13] C. Qian, Y. B. Zhuang, and Z. Z. Xu, "Visual tracking with structural appearance model based on extended incremental non-negative matrix factorization," Neurocomputing, vol. 136, pp. 327-336, 2014.

[14] D. Wang and H. Lu, "Object tracking via 2DPCA and ℓ1-regularization," IEEE Signal Processing Letters, vol. 19, no. 11, pp. 711-714, 2012.

[15] D. Wang, H. Lu, and X. Li, "Two dimensional principal components of natural images and its application," Neurocomputing, vol. 74, no. 17, pp. 2745-2753, 2011.

[16] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N learning: bootstrapping binary classifiers by structural constraints," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 49-56, San Francisco, Calif, USA, June 2010.

[17] J. Kwon and K. M. Lee, "Visual tracking decomposition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1269-1276, San Francisco, Calif, USA, June 2010.

[18] A. Adam, E. Rivlin, and I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proceedings of the 19th IEEE Conference on Computer Vision and Pattern Recognition, pp. 798-805, New York, NY, USA, 2006.



Journal of Electrical and Computer Engineering 5

0 100 200 300 400 500 600 700 800 9000

20406080

100120

Frame number

Cen

ter e

rror

Occlusion 1

0 50 100 150 200 250 3000

50100150200250300350400450

Frame number

Cen

ter e

rror

DavidOutdoor

0 50 100 150 200 250 300 350 400 450 5000

20406080

100120140160

Frame number

Cen

ter e

rror

Caviar 2

0 100 200 300 400 500 6000

50

100

150

200

250

Frame numberC

ente

r err

or

Girl

0 100 200 300 400 500 600 7000

50100150200250300350400

Frame number

Cen

ter e

rror

Car 4

Frame number

Cen

ter e

rror

0 50 100 150 200 250 300 350 4000

20406080

100120140160

Car 11

Frame number

Cen

ter e

rror

0 50 100 150 200 250 300 3500

20406080

100120

Singer 1

Frame number

Cen

ter e

rror

0 10 20 30 40 50 60 70 800

50100150200250300350

Deer

Frame number

Cen

ter e

rror

0 50 100 150 200 250 300 3500

20406080

100120140160180200

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

Jumping

Frame number

Cen

ter e

rror

0 200 400 600 800 1000 1200 14000

50100150200250300350400450500

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

Lemming

Figure 2 Center location error (in pixels) evaluation

6 Journal of Electrical and Computer Engineering

0 100 200 300 400 500 600 700 800 9000

010203040506070809

1

Ove

rlap

rate

Frame number

Occlusion 1

0 50 100 150 200 250 3000

010203040506070809

1

Ove

rlap

rate

Frame number

DavidOutdoor

0 50 100 150 200 250 300 350 400 450 5000

010203040506070809

1

Ove

rlap

rate

Frame number

Caviar 2

0 100 200 300 400 500 6000

010203040506070809

1

Ove

rlap

rate

Frame number

Girl

0 100 200 300 400 500 600 7000

010203040506070809

1

Ove

rlap

rate

Frame number

Car 4

0 50 100 150 200 250 300 350 4000

010203040506070809

1

Ove

rlap

rate

Frame number

Car 11

0 50 100 150 200 250 300 3500

010203040506070809

1

Ove

rlap

rate

Frame number

Singer 1

0 10 20 30 40 50 60 70 800

010203040506070809

1

Ove

rlap

rate

Frame number

Deer

0 50 100 150 200 250 300 3500

010203040506070809

1

Ove

rlap

rate

Frame number

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

Jumping

0 200 400 600 800 1000 1200 14000

010203040506070809

1

Ove

rlap

rate

Frame number

PNVTDMIL

Frag2DPCA9857471

Ours

9857471

Lemming

Figure 3 Overlap rate evaluation

Journal of Electrical and Computer Engineering 7

42 Quantitative Evaluation To conduct quantitative com-parisons between the proposed tracking method and theother sate-of-the-art trackers we compute the differencebetween the predicted and the ground truth center locationerror in pixels and overlap rates which are most widelyused in quantitative evaluation The center location error isusually defined as the Euclidean distance between the centerlocations of tracked objects and their corresponding labeledground truth Figure 2 demonstrates the center error plotswhere a smaller center error means a more accurate resultin each frame Overlap rate score is defined as score =

area(119877119905cap 119877119892)area(119877

119905cup 119877119892) 119877119905is the tracked bounding

box of each frame and 119877119892is the corresponding ground

truth bounding box Figure 3 shows the overlap rates of eachtracking algorithm for all sequences Generally speaking ourtracker performs favorably against the other methods

43 Computational Complexity The most time consumingpart of the generative tracking algorithm is to compute thecoefficients using the basis vectors For the ℓ

1tracker the

computation of the coefficients using the LASSO algorithmis 119874(1198892 + 119889119896) 119889 is the dimension of subspace and 119896 is thenumber of basis vectorsThe load of the 2DPCAℓ

1tracker [10]

with ℓ1regularization is 119874(119898119889119896) 119898 stands for the number

of iterations (eg 10 on average) For our tracker the trivialtemplates are abandoned and square templates are not usedSo the load of our tracker is 119889119896 The tracking speed of ℓ

1

2DPCAℓ1 and our method is 025 fps 22 fps and 52 fps

separately (fps frame per second) Therefore we can getthat our tracker is more effective and much faster than theaforementioned trackers

5 Conclusion

In this paper we present a fast and effective tracking algo-rithm We first clarify the benefits of the utilizing 2DPCAbasis vectors Then we formulate the tracking process withthe ℓ

2-regularization Finally we update the appearance

model accounting for the partial occlusion Both qualitativeand quantitative evaluations on challenging image sequencedemonstrate that the proposed method outperforms severalstate-of-the-art trackers

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

This project is supported by the Shandong Provincial NaturalScience Foundation China (no ZR2015FL009)

References

[1] B Babenko M H Yang and S Belongie ldquoVisual tracking withonline multiple instance learningrdquo in Proceedings of the 22thIEEE Conference on Computer Vision and Pattern Recognitioninpp 983ndash990 San Francisco Calif USA 2009

[2] K H Zhang L Zhang andMH Yang ldquoReal time compressivetrackingrdquo in Proceedings of 12th European Conference on Com-puter Vision pp 864ndash877 Florence Italy 2012

[3] JWright A Y Yang A Ganesh S S Sastry and YMa ldquoRobustface recognition via sparse representationrdquo IEEE Transactionson Pattern Analysis and Machine Intelligence vol 31 no 2 pp210ndash227 2009

[4] J Yang J Wright T S Huang and Y Ma ldquoImage super-resolution via sparse representationrdquo IEEE Transactions onImage Processing vol 19 no 11 pp 2861ndash2873 2010

[5] J Mairal M Elad and G Sapiro ldquoSparse representation forcolor image restorationrdquo IEEETransactions on Image Processingvol 17 no 1 pp 53ndash69 2008

[6] Y Wu J Lim and M-H Yang ldquoOnline object tracking abenchmarkrdquo in Proceedings of the 26th IEEE Conference onComputer Vision and Pattern Recognition (CVPR rsquo13) pp 2411ndash2418 Portland Ore USA June 2013

[7] X Jia H Lu and M-H Yang ldquoVisual tracking via adaptivestructural local sparse appearance modelrdquo in Proceedings ofthe 2012 IEEE Conference on Computer Vision and PatternRecognition (CVPR rsquo12) pp 1822ndash1829 Providence RI USAJune 2012

[8] W Zhong H Lu and M-H Yang ldquoRobust object tracking viasparsity-based collaborative modelrdquo in Proceedings of the 25thIEEE Conference on Computer Vision and Pattern Recognition(CVPR rsquo12) pp 1838ndash1845 Providence RI USA June 2012

[9] T Zhang B Ghanem S Liu and N Ahuja ldquoRobust visualtracking via multi-task sparse learningrdquo in Proceedings ofthe 25th IEEE Conference on Computer Vision and PatternRecognition pp 2042ndash2049 Providence RI USA 2012

[10] X Mei and H Ling ldquoRobust visual tracking using ℓ1minimiza-

tionrdquo in Proceedings of 12th IEEE International Conference onComputer Vision pp 1436ndash1443 Kyoto Japan September 2009

[11] D A Ross J Lim R-S Lin and M-H Yang ldquoIncrementallearning for robust visual trackingrdquo International Journal ofComputer Vision vol 77 no 1ndash3 pp 125ndash141 2008

[12] B H Zhuang H Lu Z Y Xiao and D Wang ldquoVisual trackingvia discriminative sparse similarity maprdquo IEEE Transactions onImage Processing vol 23 no 4 pp 1872ndash1881 2014

[13] C Qian Y B Zhuang and Z Z Xu ldquoVisual tracking withstructural appearance model based on extended incrementalnon-negative matrix factorizationrdquo Neurocomputing vol 136pp 327ndash336 2014

[14] D Wang and H Lu ldquoObject tracking via 2DPCA and l1-regularizationrdquo IEEE Signal Processing Letters vol 19 no 11 pp711ndash714 2012

[15] DWang H Lu and X Li ldquoTwo dimensional principal compo-nents of natural images and its applicationrdquo Neurocomputingvol 74 no 17 pp 2745ndash2753 2011

[16] Z Kalal JMatas andKMikolajczyk ldquoP-N learning bootstrap-ping binary classifiers by structural constraintsrdquo in Proceedingsof the IEEE Computer Society Conference on Computer Visionand Pattern Recognition (CVPR rsquo10) pp 49ndash56 San FranciscoCalif USA June 2010

[17] J Kwon and K M Lee ldquoVisual tracking decompositionrdquoin Proceedings of the IEEE Computer Society Conference onComputer Vision and Pattern Recognition (CVPR rsquo10) pp 1269ndash1276 IEEE San Francisco Calif USA June 2010

[18] A Adam E Rivlin and I Shimshoni ldquoRobust fragments-basedtracking using the integral histogramrdquo in Proceedings of the 19thIEEE Conference on Computer Vision and Pattern Recognitionpp 798ndash805 New York NY USA 2006


6 Journal of Electrical and Computer Engineering

[Figure 3 shows per-frame overlap rate curves (overlap rate versus frame number) for the Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming sequences, comparing the PN, VTD, MIL, Frag, and 2DPCAℓ1 trackers with ours.]

Figure 3: Overlap rate evaluation.


4.2. Quantitative Evaluation. To conduct quantitative comparisons between the proposed tracking method and the other state-of-the-art trackers, we compute the center location error in pixels and the overlap rate, which are the most widely used criteria in quantitative evaluation. The center location error is defined as the Euclidean distance between the center of the tracked bounding box and the center of the corresponding labeled ground truth. Figure 2 shows the center error plots, where a smaller center error means a more accurate result in each frame. The overlap rate is defined as score = area(R_t ∩ R_g) / area(R_t ∪ R_g), where R_t is the tracked bounding box of each frame and R_g is the corresponding ground truth bounding box. Figure 3 shows the overlap rates of each tracking algorithm for all sequences. Generally speaking, our tracker performs favorably against the other methods.
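Both metrics are straightforward to compute from (x, y, w, h) bounding boxes. A minimal sketch (function and argument names are ours, for illustration; they do not come from the paper):

```python
import math

def center_error(rect_t, rect_g):
    """Euclidean distance between the centers of two (x, y, w, h) boxes."""
    cx_t, cy_t = rect_t[0] + rect_t[2] / 2, rect_t[1] + rect_t[3] / 2
    cx_g, cy_g = rect_g[0] + rect_g[2] / 2, rect_g[1] + rect_g[3] / 2
    return math.hypot(cx_t - cx_g, cy_t - cy_g)

def overlap_rate(rect_t, rect_g):
    """score = area(Rt ∩ Rg) / area(Rt ∪ Rg) for two (x, y, w, h) boxes."""
    x1 = max(rect_t[0], rect_g[0])
    y1 = max(rect_t[1], rect_g[1])
    x2 = min(rect_t[0] + rect_t[2], rect_g[0] + rect_g[2])
    y2 = min(rect_t[1] + rect_t[3], rect_g[1] + rect_g[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # intersection area (0 if disjoint)
    union = rect_t[2] * rect_t[3] + rect_g[2] * rect_g[3] - inter
    return inter / union if union > 0 else 0.0
```

A perfect prediction gives an overlap rate of 1, disjoint boxes give 0, and the per-sequence curves in Figure 3 are these scores plotted frame by frame.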

4.3. Computational Complexity. The most time-consuming part of a generative tracking algorithm is computing the coefficients with respect to the basis vectors. For the ℓ1 tracker, computing the coefficients with the LASSO algorithm costs O(d² + dk), where d is the dimension of the subspace and k is the number of basis vectors. The load of the 2DPCAℓ1 tracker [14] with ℓ1 regularization is O(mdk), where m stands for the number of iterations (e.g., 10 on average). In our tracker the trivial templates are abandoned and square templates are not used, so its load is only O(dk). The tracking speeds of the ℓ1 tracker, the 2DPCAℓ1 tracker, and our method are 0.25 fps, 2.2 fps, and 5.2 fps, respectively (fps: frames per second). Therefore, our tracker is more effective and much faster than the aforementioned trackers.
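The O(dk) figure comes from the closed form of the ℓ2-regularized least-squares problem: the coefficients are a = (UᵀU + λI)⁻¹Uᵀy, and when the basis U has orthonormal columns (as PCA/2DPCA bases do), UᵀU = I and the solution collapses to a single scaled projection Uᵀy/(1 + λ). A minimal NumPy sketch of this idea (all sizes, variable names, and the random stand-in basis are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 32, 16        # d: observation dimension, k: number of basis vectors
lam = 0.1            # ℓ2 regularization weight (illustrative)

# Orthonormal basis as a stand-in for the learned 2DPCA basis vectors.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
y = rng.standard_normal(d)   # vectorized candidate patch

# General ridge solution: a = (UᵀU + λI)⁻¹ Uᵀ y
a_general = np.linalg.solve(U.T @ U + lam * np.eye(k), U.T @ y)

# With UᵀU = I this reduces to one O(dk) matrix-vector product per candidate,
# with no iterative sparse solver at all.
a_fast = (U.T @ y) / (1.0 + lam)

assert np.allclose(a_general, a_fast)
```

This is why dropping the trivial templates and the ℓ1 penalty turns the per-candidate cost into a single matrix-vector product, which accounts for the speedup reported above.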

5. Conclusion

In this paper we present a fast and effective tracking algorithm. We first clarify the benefits of utilizing 2DPCA basis vectors. Then we formulate the tracking process with ℓ2-regularization. Finally, we update the appearance model to account for partial occlusion. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed method outperforms several state-of-the-art trackers.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This project is supported by the Shandong Provincial Natural Science Foundation, China (no. ZR2015FL009).

References

[1] B. Babenko, M.-H. Yang, and S. Belongie, "Visual tracking with online multiple instance learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 983–990, 2009.

[2] K. H. Zhang, L. Zhang, and M.-H. Yang, "Real-time compressive tracking," in Proceedings of the 12th European Conference on Computer Vision (ECCV '12), pp. 864–877, Florence, Italy, 2012.

[3] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.

[4] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.

[5] J. Mairal, M. Elad, and G. Sapiro, "Sparse representation for color image restoration," IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 53–69, 2008.

[6] Y. Wu, J. Lim, and M.-H. Yang, "Online object tracking: a benchmark," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 2411–2418, Portland, Ore, USA, June 2013.

[7] X. Jia, H. Lu, and M.-H. Yang, "Visual tracking via adaptive structural local sparse appearance model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1822–1829, Providence, RI, USA, June 2012.

[8] W. Zhong, H. Lu, and M.-H. Yang, "Robust object tracking via sparsity-based collaborative model," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 1838–1845, Providence, RI, USA, June 2012.

[9] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, "Robust visual tracking via multi-task sparse learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 2042–2049, Providence, RI, USA, 2012.

[10] X. Mei and H. Ling, "Robust visual tracking using ℓ1 minimization," in Proceedings of the 12th IEEE International Conference on Computer Vision (ICCV '09), pp. 1436–1443, Kyoto, Japan, September 2009.

[11] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, "Incremental learning for robust visual tracking," International Journal of Computer Vision, vol. 77, no. 1–3, pp. 125–141, 2008.

[12] B. H. Zhuang, H. Lu, Z. Y. Xiao, and D. Wang, "Visual tracking via discriminative sparse similarity map," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1872–1881, 2014.

[13] C. Qian, Y. B. Zhuang, and Z. Z. Xu, "Visual tracking with structural appearance model based on extended incremental non-negative matrix factorization," Neurocomputing, vol. 136, pp. 327–336, 2014.

[14] D. Wang and H. Lu, "Object tracking via 2DPCA and ℓ1-regularization," IEEE Signal Processing Letters, vol. 19, no. 11, pp. 711–714, 2012.

[15] D. Wang, H. Lu, and X. Li, "Two-dimensional principal components of natural images and its application," Neurocomputing, vol. 74, no. 17, pp. 2745–2753, 2011.

[16] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N learning: bootstrapping binary classifiers by structural constraints," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 49–56, San Francisco, Calif, USA, June 2010.

[17] J. Kwon and K. M. Lee, "Visual tracking decomposition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1269–1276, San Francisco, Calif, USA, June 2010.

[18] A. Adam, E. Rivlin, and I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '06), pp. 798–805, New York, NY, USA, 2006.




