
EXPERIMENTAL RESULTS FOR MOVING OBJECT STRUCTURE ESTIMATION USING AN UNKNOWN INPUT OBSERVER APPROACH

Sujin Jang, Ashwin P. Dani, Carl D. Crane III and Warren E. Dixon∗

Department of Mechanical and Aerospace Engineering
University of Florida
Gainesville, Florida 32611
Email: [email protected]; [email protected]; [email protected]; [email protected]

ABSTRACT

An application and experimental verification of an online structure from motion (SFM) method is presented to estimate the position of a moving object using a moving camera. An unknown input observer is implemented for position estimation of a moving object attached to a two-link robot and observed by a moving camera attached to a PUMA robot. The velocity of the object is considered as an unknown input to the perspective dynamical system. A series of experiments is performed with different camera and object motions. The method is used to estimate the position of a static object as well as a moving object. The position estimates are compared with ground-truth data computed using the forward kinematics of the PUMA and the two-link robot. The observer gain design problem is formulated as a convex optimization problem to obtain an optimal observer gain.

INTRODUCTION

Recovering the structure of a moving object using a moving camera has been well studied in the literature over the past decade [1–8]. In [6], a batch algorithm is developed by approximating the trajectories of a moving object using a linear combination of discrete cosine transform (DCT) basis vectors. Batch algorithms use an algebraic relationship between the 3D coordinates of points in the camera coordinate frame and the corresponding 2D projections on the image frame, collected over n images, to estimate the structure. Hence, batch algorithms are not useful in real-time control algorithms. For visual servo control or video-based surveillance tasks, online structure estimation algorithms are required.

Recently, a causal algorithm is presented in [7] to estimate the structure and motion of objects moving with constant linear velocities, observed by a moving camera with known camera motions.

∗Address all correspondence to this author.

A new method based on an unknown input observer (UIO) is developed in [8] to estimate the structure of an object moving with time-varying velocities using a moving camera with known velocities.

The contribution of this work is to experimentally verify the unknown input observer in [8] for structure estimation of a moving object. A series of experiments is conducted on a PUMA 560 and a two-link robot. A camera is attached to the PUMA, and the target is attached to the moving two-link robot. The camera images are processed to track a feature point, while the camera velocities are measured using the joint encoders. To obtain the ground-truth data, the distance between the origin of the PUMA and the origin of the two-link robot is measured, and the positions of the camera and the moving object with respect to the respective origins are obtained using the forward kinematics of the robots. The estimated position of the object is compared with the ground-truth data. The experiments are conducted to estimate the structure of a static as well as a moving object while keeping the same observer structure. The experiments demonstrate an advantage of the observer: a priori knowledge of the object state (static or moving) is not required. A convex optimization problem is solved to compute an observer gain that reduces the effects of noise and disturbances.

PERSPECTIVE CAMERA MODEL

In this section, the kinematic relationship between the moving camera and the object, and the geometric relationship of image formation, are briefly described.


Figure 1. A PERSPECTIVE PROJECTION AND KINEMATIC CAMERA MODEL.

Kinematic Modeling

Considering a moving camera observing an object, define an inertially fixed reference frame, $\mathcal{F} : \{o;\, E_x, E_y, E_z\}$, and a reference frame fixed to the camera, $\mathcal{C} : \{o_c;\, e_x, e_y, e_z\}$, as shown in Fig. 1. The position of a point p relative to the point o is denoted by $r_p$. The position of $o_c$ (the origin of $\mathcal{C}$) relative to the point o (the origin of $\mathcal{F}$) is denoted by $r_{o_c}$. In the following development, every vector and tensor is expressed in terms of the basis $\{e_x, e_y, e_z\}$ fixed in $\mathcal{C}$.¹ The position of p measured relative to the point $o_c$ is expressed as

$$ r_{p/o_c} = r_p - r_{o_c} = \begin{bmatrix} X(t) & Y(t) & Z(t) \end{bmatrix}^T \qquad (1) $$

where $X(t), Y(t), Z(t) \in \mathbb{R}$. The linear velocities of the object and the camera as viewed by an observer in the inertial reference frame are given by

$$ {}^{\mathcal{F}}V_p = \frac{{}^{\mathcal{F}}d}{dt}(r_p) = \begin{bmatrix} v_{px} & v_{py} & v_{pz} \end{bmatrix}^T \in V_p \subset \mathbb{R}^3, \qquad (2) $$

$$ {}^{\mathcal{F}}V_{o_c} = \frac{{}^{\mathcal{F}}d}{dt}(r_{o_c}) = \begin{bmatrix} v_{cx} & v_{cy} & v_{cz} \end{bmatrix}^T \in V_c \subset \mathbb{R}^3. \qquad (3) $$

Using Eqs. (1)-(3), the velocity of p as viewed by an observer in $\mathcal{C}$ is given by

$$ {}^{\mathcal{C}}V_{p/o_c} = \frac{{}^{\mathcal{C}}d}{dt}(r_p - r_{o_c}) = \begin{bmatrix} \dot{X}(t) & \dot{Y}(t) & \dot{Z}(t) \end{bmatrix}^T, $$

$$ {}^{\mathcal{C}}V_{p/o_c} = {}^{\mathcal{C}}V_p - {}^{\mathcal{C}}V_{o_c} = {}^{\mathcal{F}}V_p - {}^{\mathcal{F}}V_{o_c} + {}^{\mathcal{C}}\omega^{\mathcal{F}} \times (r_p - r_{o_c}) \qquad (4) $$

where ${}^{\mathcal{C}}\omega^{\mathcal{F}}$ denotes the angular velocity of $\mathcal{F}$ as viewed by an observer in $\mathcal{C}$. The angular velocity of the camera relative to $\mathcal{F}$

¹ In the entire development, the subscript $\{\cdot\}_e$ is omitted in vector representations, where $\{\cdot\}_e$ denotes the column-vector representation of the vector in the basis $\{e_x, e_y, e_z\}$ (i.e., ${}^{\mathcal{F}}V_p = \{{}^{\mathcal{F}}V_p\}_e$).

is expressed as ${}^{\mathcal{F}}\omega^{\mathcal{C}} = \begin{bmatrix} \omega_x & \omega_y & \omega_z \end{bmatrix}^T$. Since ${}^{\mathcal{C}}\omega^{\mathcal{F}}$ and ${}^{\mathcal{F}}\omega^{\mathcal{C}}$ are related as ${}^{\mathcal{C}}\omega^{\mathcal{F}} = -{}^{\mathcal{F}}\omega^{\mathcal{C}}$,

$$ {}^{\mathcal{C}}\omega^{\mathcal{F}} = \begin{bmatrix} -\omega_x & -\omega_y & -\omega_z \end{bmatrix}^T. \qquad (5) $$

Substituting Eqs. (1) - (3) into Eq. (4) yields

$$ \begin{bmatrix} \dot{X}(t) \\ \dot{Y}(t) \\ \dot{Z}(t) \end{bmatrix} = \begin{bmatrix} v_{px} - v_{cx} \\ v_{py} - v_{cy} \\ v_{pz} - v_{cz} \end{bmatrix} + \begin{bmatrix} 0 & \omega_z & -\omega_y \\ -\omega_z & 0 & \omega_x \\ \omega_y & -\omega_x & 0 \end{bmatrix} \begin{bmatrix} X(t) \\ Y(t) \\ Z(t) \end{bmatrix} = \begin{bmatrix} v_{px} - v_{cx} + \omega_z Y(t) - \omega_y Z(t) \\ v_{py} - v_{cy} + \omega_x Z(t) - \omega_z X(t) \\ v_{pz} - v_{cz} + \omega_y X(t) - \omega_x Y(t) \end{bmatrix} \qquad (6) $$

The inhomogeneous coordinates of Eq. (1), $m(t) = \begin{bmatrix} m_1(t) & m_2(t) & 1 \end{bmatrix}^T \in \mathbb{R}^3$, are defined as

$$ m(t) \triangleq \begin{bmatrix} \dfrac{X(t)}{Z(t)} & \dfrac{Y(t)}{Z(t)} & 1 \end{bmatrix}^T. \qquad (7) $$

Considering the subsequent development, the state vector $x(t) = \begin{bmatrix} x_1(t) & x_2(t) & x_3(t) \end{bmatrix}^T \in \mathcal{Y} \subset \mathbb{R}^3$ is defined as

$$ x(t) \triangleq \begin{bmatrix} \dfrac{X}{Z} & \dfrac{Y}{Z} & \dfrac{1}{Z} \end{bmatrix}^T. \qquad (8) $$
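For reference, the map in Eq. (8) and its inverse can be written as a short sketch; recovering the Euclidean coordinates from a converged state estimate amounts to inverting Eq. (8). The helper names below are illustrative, not from the paper.

```python
import numpy as np

def euclidean_to_state(r):
    """Map Euclidean coordinates [X, Y, Z] to the SFM state of Eq. (8)."""
    X, Y, Z = r
    return np.array([X / Z, Y / Z, 1.0 / Z])

def state_to_euclidean(x):
    """Invert Eq. (8): recover [X, Y, Z] from x = [X/Z, Y/Z, 1/Z]."""
    Z = 1.0 / x[2]
    return np.array([x[0] * Z, x[1] * Z, Z])
```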

Using Eqs. (6) and (8), the dynamics of the state x(t) can be expressed as

$$ \dot{x}_1 = \Omega_1 + f_1 + v_{px} x_3 - x_1 v_{pz} x_3, $$
$$ \dot{x}_2 = \Omega_2 + f_2 + v_{py} x_3 - x_2 v_{pz} x_3, $$
$$ \dot{x}_3 = v_{cz} x_3^2 + (x_2 \omega_x - x_1 \omega_y) x_3 - v_{pz} x_3^2, $$
$$ y = \begin{bmatrix} x_1 & x_2 \end{bmatrix}^T \qquad (9) $$

where $\Omega_1(u,y), \Omega_2(u,y), f_1(u,x), f_2(u,x), f_3(u,x) \in \mathbb{R}$ are defined as

$$ \Omega_1(u,y) \triangleq x_1 x_2 \omega_x - \omega_y - x_1^2 \omega_y + x_2 \omega_z, $$
$$ \Omega_2(u,y) \triangleq \omega_x + x_2^2 \omega_x - x_1 x_2 \omega_y - x_1 \omega_z, $$
$$ f_1(u,x) \triangleq (x_1 v_{cz} - v_{cx}) x_3, $$
$$ f_2(u,x) \triangleq (x_2 v_{cz} - v_{cy}) x_3, $$
$$ f_3(u,x) \triangleq v_{cz} x_3^2 + (x_2 \omega_x - x_1 \omega_y) x_3. $$
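To make the role of these terms concrete, the following is a minimal sketch of Eq. (9) as a right-hand-side function for numerical integration; the argument layout (camera linear/angular velocity and object velocity as separate vectors) is an assumption of this sketch.

```python
import numpy as np

def sfm_dynamics(x, vc, omega, vp):
    """Right-hand side of Eq. (9).
    x: state [X/Z, Y/Z, 1/Z]; vc, omega: camera linear/angular
    velocity expressed in the camera frame; vp: object velocity."""
    x1, x2, x3 = x
    vcx, vcy, vcz = vc
    wx, wy, wz = omega
    vpx, vpy, vpz = vp
    # Omega terms depend only on the measurable output and input.
    O1 = x1 * x2 * wx - wy - x1**2 * wy + x2 * wz
    O2 = wx + x2**2 * wx - x1 * x2 * wy - x1 * wz
    # f terms depend on the (unmeasurable) inverse depth x3.
    f1 = (x1 * vcz - vcx) * x3
    f2 = (x2 * vcz - vcy) * x3
    dx1 = O1 + f1 + vpx * x3 - x1 * vpz * x3
    dx2 = O2 + f2 + vpy * x3 - x2 * vpz * x3
    dx3 = vcz * x3**2 + (x2 * wx - x1 * wy) * x3 - vpz * x3**2
    return np.array([dx1, dx2, dx3])
```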

Assumption 1: The velocities of the camera and the object are assumed to be upper and lower bounded by constants.

Assumption 2: Since the states $x_1(t)$ and $x_2(t)$ are equivalent to the pixel coordinates in the image plane, and the size of the image plane is bounded by known constants, it can be assumed that $x_1(t)$ and $x_2(t)$ are also bounded by $\underline{W} \le x_1(t) \le \overline{W}$ and $\underline{H} \le x_2(t) \le \overline{H}$, where $\underline{W}, \overline{W}$ and $\underline{H}, \overline{H}$ are obtained from the width and height of the image plane.

Camera Model and Geometric Image Formation

To describe the image formation process, the geometric perspective projection is commonly used, as depicted in Fig. 1. The projection model consists of an image plane $\Pi$, a center of projection $o_c$, a center of the image plane $o_I$, and the distance between $\Pi$ and $o_c$ (the focal length). The two-dimensional pixel coordinates of the projected point q in the image plane $\Pi$ are given by $\bar{m}(t) = \begin{bmatrix} u(t) & v(t) & 1 \end{bmatrix}^T \in \mathcal{I} \subset \mathbb{R}^3$. The three-dimensional coordinates $m(t)$ are related to the pixel coordinates $\bar{m}(t)$ by the following relationship [9]

$$ \bar{m}(t) = K_c\, m(t) \qquad (10) $$

where $K_c \in \mathbb{R}^{3\times 3}$ is an invertible, upper-triangular intrinsic camera matrix given by

$$ K_c = \begin{bmatrix} f & 0 & c_x \\ 0 & \alpha f & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

where $\alpha$ is the image aspect ratio, $f$ is the focal length, and $(c_x, c_y)$ denotes the optical center $o_I$ expressed in pixel coordinates. To simplify the derivation of the perspective projection matrix, the projection center is assumed to coincide with the origin of the camera reference frame (Assumption 3), and the optical axis is aligned with the z-axis of the coordinate system fixed in the camera (Assumption 4).
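In implementation, Eq. (10) is inverted to map tracked pixel coordinates to the normalized coordinates whose first two entries form the measured output $y = [x_1\; x_2]^T$. A minimal sketch follows, with illustrative intrinsic values standing in for the calibrated ones.

```python
import numpy as np

# Illustrative intrinsics; the paper's values come from the MATLAB
# camera calibration toolbox [14].
f, alpha, cx, cy = 800.0, 1.0, 320.0, 240.0
Kc = np.array([[f,   0.0,       cx],
               [0.0, alpha * f, cy],
               [0.0, 0.0,       1.0]])

def pixel_to_output(u, v):
    """Invert Eq. (10): pixel coordinates -> normalized coordinates,
    whose first two entries are the measured output y = [x1, x2]^T."""
    m = np.linalg.solve(Kc, np.array([u, v, 1.0]))
    return m[:2]
```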

DESIGN OF AN UNKNOWN INPUT OBSERVER

An unknown input observer for a class of nonlinear systems is designed in [8] to estimate the position of the tracked object. Based on Eqs. (6) and (8), the following nonlinear system can be constructed:

$$ \dot{x} = f(x,u) + g(y,u) + Dd, $$
$$ y = Cx \qquad (11) $$

where $x(t) \in \mathbb{R}^3$ is the state of the system, $y(t) \in \mathbb{R}^2$ is the output of the system, $u(t) \in \mathbb{R}^6$ is a measurable input, $d(t) \in \mathbb{R}$ is an unmeasurable input, $g(y,u) = \begin{bmatrix} \Omega_1 & \Omega_2 & 0 \end{bmatrix}^T$ is nonlinear in $y(t)$ and $u(t)$, and $f(x,u) = \begin{bmatrix} f_1 & f_2 & f_3 \end{bmatrix}^T$ is nonlinear in $x(t)$ and $u(t)$, satisfying the Lipschitz condition $\| f(x,u) - f(\hat{x},u) \| \le \gamma_1 \| x - \hat{x} \|$ where $\gamma_1 \in \mathbb{R}^+$. A full-row-rank matrix $C \in \mathbb{R}^{2\times 3}$ is selected as

$$ C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, $$

and a full-column-rank matrix $D \in \mathbb{R}^{3\times 1}$ is chosen as $D = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^T$ or $\begin{bmatrix} 0 & 1 & 0 \end{bmatrix}^T$. The system in Eq. (11) can be written in the following form:

$$ \dot{x} = Ax + \bar{f}(x,u) + g(y,u) + Dd, $$
$$ y = Cx \qquad (12) $$

where $\bar{f}(x,u) = f(x,u) - Ax$ and $A \in \mathbb{R}^{3\times 3}$. The function $\bar{f}(x,u)$ satisfies the Lipschitz condition [10, 11]

$$ \| f(x,u) - f(\hat{x},u) - A(x - \hat{x}) \| \le (\gamma_1 + \gamma_2) \| x - \hat{x} \| \qquad (13) $$

where $\gamma_2 \in \mathbb{R}^+$. A nonlinear unknown input observer for Eq. (12) is designed as

$$ \dot{z} = Nz + Ly + M\bar{f}(\hat{x},u) + Mg(y,u), $$
$$ \hat{x} = z - Ey \qquad (14) $$

where $\hat{x}(t) \in \mathbb{R}^3$ is an estimate of the unknown state $x(t)$, $z(t) \in \mathbb{R}^3$ is an auxiliary signal, and the matrices $N \in \mathbb{R}^{3\times 3}$, $L \in \mathbb{R}^{3\times 2}$, $E \in \mathbb{R}^{3\times 2}$, $M \in \mathbb{R}^{3\times 3}$ are designed as [12]

$$ M = I_3 + EC, $$
$$ N = MA - KC, $$
$$ L = K(I_2 + CE) - MAE, \qquad (15) $$
$$ E = -D(CD)^{\dagger} + Y\left( I_2 - (CD)(CD)^{\dagger} \right) $$

where $(CD)^{\dagger}$ denotes the generalized pseudo-inverse of the matrix $CD$. The gain matrix $K \in \mathbb{R}^{3\times 2}$ and the matrix $Y \in \mathbb{R}^{3\times 2}$ are selected such that

$$ Q \triangleq N^T P + PN + (\gamma_1^2 + \gamma_2^2) PMM^T P + 2I_3 < 0 \qquad (16) $$

where $P \in \mathbb{R}^{3\times 3}$ is a positive-definite, symmetric matrix. The stability result of the observer is stated in the following theorem.

Theorem: The nonlinear unknown input observer in Eq. (14) is exponentially stable if Eq. (16) is satisfied.

Proof: See Theorem 1 of [8].

To find E, K and P, Eq. (16) is reformulated in terms of a linear matrix inequality (LMI) as [8]

$$ \begin{bmatrix} X_{11} & \beta X_{12} \\ \beta X_{12}^T & -I_3 \end{bmatrix} < 0 \qquad (17) $$


where $F \triangleq -D(CD)^{\dagger}$ and $G \triangleq I_2 - (CD)(CD)^{\dagger}$ (so that $E = F + YG$ in Eq. (15)), and

$$ X_{11} = A^T (I_3 + FC)^T P + P(I_3 + FC)A + A^T C^T G^T P_Y^T + P_Y GCA - C^T P_K^T - P_K C + 2I_3, $$
$$ X_{12} = P + PFC + P_Y GC, $$
$$ P_Y = PY, \quad P_K = PK, \quad \beta = \sqrt{\gamma_1^2 + \gamma_2^2}. $$

To solve the LMI feasibility problem in Eq. (17), the CVX toolbox in MATLAB is used [13]. Using $P$, $P_K$ and $P_Y$ obtained from Eq. (17), $K$ and $Y$ are computed as $K = P^{-1}P_K$ and $Y = P^{-1}P_Y$.
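The paper performs this step in MATLAB/CVX [13]; the following is a rough Python/cvxpy analogue of the feasibility problem in Eq. (17) together with the gain and observer-matrix recovery of Eqs. (14)-(15). The A, C, D values are those quoted later for Experiment II, Set 1, while the Lipschitz bounds gamma1, gamma2 are assumed for illustration, not values from the paper.

```python
import numpy as np
import cvxpy as cp

# Plant matrices as quoted for Experiment II, Set 1.
A = np.array([[0.00, -0.05,  0.00],
              [0.05,  0.00, -0.30],
              [0.00,  0.00,  0.00]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
D = np.array([[1.0], [0.0], [0.0]])

gamma1, gamma2 = 1.0, 0.5            # assumed Lipschitz bounds (illustrative)
beta = np.sqrt(gamma1**2 + gamma2**2)

CD_pinv = np.linalg.pinv(C @ D)
F = -D @ CD_pinv                     # F = -D (CD)^+
G = np.eye(2) - (C @ D) @ CD_pinv    # G = I2 - (CD)(CD)^+

P = cp.Variable((3, 3), symmetric=True)
PK = cp.Variable((3, 2))             # P_K = P K
PY = cp.Variable((3, 2))             # P_Y = P Y

M0A = (np.eye(3) + F @ C) @ A
X11 = (M0A.T @ P + P @ M0A
       + A.T @ C.T @ G.T @ PY.T + PY @ (G @ C @ A)
       - C.T @ PK.T - PK @ C + 2.0 * np.eye(3))
X12 = P + P @ (F @ C) + PY @ (G @ C)
LMI = cp.bmat([[X11, beta * X12],
               [beta * X12.T, -np.eye(3)]])

eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(3), LMI << -eps * np.eye(6)])
prob.solve(solver=cp.SCS)            # if infeasible, the assumed
                                     # Lipschitz bounds are too large

K = np.linalg.solve(P.value, PK.value)   # K = P^{-1} P_K
Y = np.linalg.solve(P.value, PY.value)   # Y = P^{-1} P_Y

# Observer matrices of Eqs. (14)-(15).
E = F + Y @ G
M = np.eye(3) + E @ C
N = M @ A - K @ C
L = K @ (np.eye(2) + C @ E) - M @ A @ E
```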

If the number of outputs $n_y$ is equal to the number of unknown inputs $n_d$, then the unknown disturbance can be represented as $Dd(t) = D_1 d_1(t) + D_2 d_2(t)$, where $d_1(t)$ includes $(n_d - 1)$ of the unknown inputs, $d_2(t)$ includes the remaining unknown input, and $D_1, D_2 \in \mathbb{R}^{3\times 1}$ are full column rank. In this case, the estimation error will be uniformly ultimately bounded, with the bound proportional to the norm of the disturbance $d_2(t)$. The gains $K$ and $Y$ can be selected optimally to minimize the effects of sensor noise and the external disturbance by solving the following LMI.

$$ \min \; \delta \gamma^2 - (1 - \delta)\lambda_{\min}(P), $$
$$ \text{subject to} \quad P > 0, \; \gamma \ge 0, \quad \begin{bmatrix} X_{11} & \beta X_{12} & X_{13} \\ \beta X_{12}^T & -I_3 & 0 \\ X_{13}^T & 0 & -\gamma^2 \end{bmatrix} < 0 \qquad (18) $$

where $X_{13} = PD_2$, $\delta \in [0,1]$, $\lambda_{\min}$ denotes the minimum eigenvalue of a matrix, and $\gamma$ denotes the $\mathcal{L}_2$ norm of the estimation error with respect to the disturbance $d_2(t)$.
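Continuing the cvxpy sketch above, the trade-off in Eq. (18) can be posed by introducing a scalar variable standing in for $\gamma^2$; the disturbance channel D2 and the weight delta below are assumptions for illustration.

```python
# Continues the sketch above (reuses P, PK, PY, X11, X12, beta, eps).
D2 = np.array([[0.0], [1.0], [0.0]])   # assumed second disturbance channel
delta = 0.5                            # assumed noise/convergence trade-off
g2 = cp.Variable(nonneg=True)          # stands in for gamma^2

X13 = P @ D2
LMI18 = cp.bmat([[X11, beta * X12, X13],
                 [beta * X12.T, -np.eye(3), np.zeros((3, 1))],
                 [X13.T, np.zeros((1, 3)), -cp.reshape(g2, (1, 1))]])

objective = cp.Minimize(delta * g2 - (1.0 - delta) * cp.lambda_min(P))
prob18 = cp.Problem(objective, [P >> eps * np.eye(3),
                                LMI18 << -eps * np.eye(7)])
prob18.solve(solver=cp.SCS)
```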

EXPERIMENTS AND RESULTS

To verify the designed unknown input observer [8] for real-time implementation, two sets of experiments are conducted on a PUMA 560 serial manipulator and a two-link planar robot. The first set is performed for relative position estimation of a static object using a moving camera. The second set is performed for position estimation of a moving object. A schematic overview of the experimental configuration is illustrated in Fig. 2.

Testbed Setup

The testbed consists of five components: (1) robot manipulators, (2) a camera, (3) an image processing workstation (main), (4) robot control workstations (PUMA and two-link), and (5) serial communication. Figure 3 shows the experimental platforms. A camera is rigidly fixed to the end-effector of the PUMA 560.

Figure 2. AN OVERVIEW OF EXPERIMENTAL CONFIGURATION.

Figure 3. PLATFORMS.

The PUMA and the two-link robot are rigidly attached to a work table. Experiments are conducted to estimate the position of the static as well as the moving object. A fiduciary marker is used as the object in all the experiments. For experiments involving a static object, the object is fixed to the work table. For experiments involving a moving object, the object is fixed to the end-effector of the two-link robot, which follows a desired trajectory. The PUMA 560 is used to move the camera while observing the static or moving object.

A mvBlueFox-120a color USB camera is used to capture images. The camera is calibrated using the MATLAB camera calibration toolbox [14]. A Core2-Duo 2.53 GHz laptop (main workstation) running Windows 7 is used to carry out the image processing and to store data transmitted from the PUMA workstation. The image processing algorithms are written in C/C++ and developed in Microsoft Visual Studio 2008. The OpenCV and MATRIX-VISION API libraries are used to capture the images and to implement a KLT feature point tracker [15, 16]. The sub-workstations (PUMA and two-link) are two Pentium 2.8 GHz PCs operating under QNX. These two computers host the control algorithms for the PUMA 560 and the two-link robot via Qmotor 3.0 [17]. A PID controller is employed to control the six joints of the PUMA 560. A RISE-based controller [18] is applied to control the two-link robot. Control implementation and data acquisition for the two robots operate at 1.0 kHz using the ServoToGo I/O board. The forward velocity kinematics [19, 20] are used to obtain the position and velocity of the camera and the tracked point. The camera velocities computed on the PUMA workstation are transmitted to the main workstation via serial communication at 30 Hz. The pose (position and orientation) of the tracked point and the camera are computed and stored on the sub-workstations at 1.0 kHz. The positions of the camera and the point are used to compute the ground-truth distance between the camera and the object as

$$ \left\{ r_{obj/cam} \right\}_e = \left[ R^{E}_{e} \right]^{-1} \left\{ r_{obj} - r_{cam} \right\}_E $$

where $R^{E}_{e} \in \mathbb{R}^{3\times 3}$ is the rotation matrix of the camera with respect to the inertial reference frame.
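A minimal numpy sketch of this ground-truth computation, assuming the rotation matrix and positions are supplied by the robots' forward kinematics (the helper name is illustrative):

```python
import numpy as np

def ground_truth_relative_position(R_cam, r_obj, r_cam):
    """Object position expressed in the camera frame.
    R_cam: 3x3 rotation of the camera w.r.t. the inertial frame;
    r_obj, r_cam: object/camera positions in the inertial frame."""
    return R_cam.T @ (r_obj - r_cam)   # for rotations, R^{-1} = R^T
```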

Experiment I: Moving camera with a static point

In this section, the structure estimation algorithm is implemented for a static object observed using a moving camera. A tracked point on the static object is observed by a downward-looking camera. Since $v_{px}$, $v_{py}$ and $v_{pz}$ are zero for a static object, the unmeasurable disturbance input $d(t)$ is zero. The experiment set is designed to test the observer with time-varying camera velocities in the Y and Z directions. Figures 4 and 5 show the linear and angular camera velocities. The matrices A, C and D are selected to be

$$ A = \begin{bmatrix} 0.00 & -0.05 & -1.00 \\ 0.05 & 0.00 & 0.00 \\ 0.00 & 0.00 & 0.00 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}. $$

The matrix Y and the gain matrix K are computed using the CVX toolbox in MATLAB [13] as

$$ K = \begin{bmatrix} 1.3149 & 0.0000 \\ 0.0000 & 1.3149 \\ 0.0590 & -0.1256 \end{bmatrix}, \quad Y = \begin{bmatrix} -1.0000 & 0.0000 \\ 0.0000 & 0.0000 \\ 2.5125 & 0.0000 \end{bmatrix}. $$

The estimation results are illustrated in Figs. 6 and 7. The steady-state RMS errors of the position estimates in the X, Y and Z coordinates are 0.0029 m, 0.0210 m and 0.0412 m, respectively.

Figure 4. CAMERA ANGULAR VELOCITY.

Figure 5. CAMERA LINEAR VELOCITY.

Figure 6. COMPARISON OF THE ACTUAL (DASH) AND ESTIMATED (SOLID) POSITION OF A STATIC POINT WITH RESPECT TO A MOVING CAMERA.


Figure 7. POSITION ESTIMATION ERROR FOR A STATIC POINT.

Experiment II: Moving camera with a moving object

In this section, the observer is used to estimate the position of a moving object using a moving camera. A downward-looking camera observes a moving point fixed to the moving two-link robot arm. In this case, the object is moving in the X-Y plane with unknown velocities $v_{px}(t)$ and $v_{py}(t)$. The estimation results using the observer gain obtained from the method in [8] and from the convex optimization method are compared. In experiment Set 2, the camera's linear velocity has two time-varying components to test the observer with a more general trajectory of the moving camera.

Set 1. In this experiment set, the observer is tested with a time-varying linear velocity of the camera along the X direction. The camera velocities are shown in Figs. 8 and 9. The matrices A, C and D are given by

$$ A = \begin{bmatrix} 0.00 & -0.05 & 0.00 \\ 0.05 & 0.00 & -0.30 \\ 0.00 & 0.00 & 0.00 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}. $$

The matrix Y and the gain matrix K are computed using the CVX toolbox in MATLAB and are given as

$$ K = \begin{bmatrix} 1.2204 & 0.0000 \\ 0.0000 & 1.2204 \\ 0.3731 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 7.4618 \end{bmatrix}. $$

The estimation results are illustrated in Figs. 10 and 11. The steady-state RMS errors and peak errors of the estimated position are given in Tab. 1.

Figure 8. CAMERA ANGULAR VELOCITY.

Figure 9. CAMERA LINEAR VELOCITY.

Figure 10. COMPARISON OF THE ACTUAL (DASH) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA.


Figure 11. POSITION ESTIMATION ERROR FOR A MOVING POINT.

Set 1 with the convex optimization approach. In this experiment, a comparison between the gain computations using the LMIs in Eqs. (17) and (18) is given. The matrix Y and the gain matrix K for the convex optimization approach are computed using the CVX toolbox as

$$ K = \begin{bmatrix} 0.0003 & 0.0000 \\ 0.0000 & 0.3886 \\ 0.0000 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 3.0000 \end{bmatrix}. $$

The estimation results are illustrated in Figs. 12 and 13. Table 1 shows the steady-state RMS errors and peak errors of the position estimates using the convex optimization approach.

Figure 12. COMPARISON OF THE ACTUAL (DASH) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA.

Figure 13. POSITION ESTIMATION ERROR FOR A MOVING POINT.

Table 1. POSITION ESTIMATION ERRORS.

                     w/o optimization    w/ optimization
RMS in x (m)             0.0345              0.0227
RMS in y (m)             0.0031              0.0016
RMS in z (m)             0.0740              0.0416
peak error in x (m)      0.1273              0.0566
peak error in y (m)      0.0093              0.0036
peak error in z (m)      0.2385              0.0857

Set 2. In this experiment set, the linear camera velocities along the X and Y directions are time-varying, and the angular camera velocity is constant. The camera velocities are depicted in Figs. 14 and 15. The matrices A, C and D are given by

$$ A = \begin{bmatrix} 0.00 & -0.05 & -1.00 \\ 0.05 & 0.00 & -0.30 \\ 0.00 & 0.00 & 0.00 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}. $$

The matrix Y and gain matrix K are computed as

$$ K = \begin{bmatrix} 1.2892 & 0.0000 \\ 0.0000 & 1.2892 \\ 0.2313 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 4.6261 \end{bmatrix}. $$

The estimation results are depicted in Figs. 16 and 17. The steady-state RMS errors and peak errors of the position estimates in the X, Y and Z coordinates are given in Tab. 2.


Figure 14. CAMERA ANGULAR VELOCITY.

Figure 15. CAMERA LINEAR VELOCITY.

Figure 16. COMPARISON OF THE ACTUAL (DASH) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA.

Figure 17. POSITION ESTIMATION ERROR FOR A MOVING POINT.

Set 2 with the convex optimization approach. In this experiment, a comparison between the gain computations using the LMIs in Eqs. (17) and (18) is given. The matrix Y and the gain matrix K for the convex optimization approach are computed using the CVX toolbox as

$$ K = \begin{bmatrix} 0.0003 & 0.0000 \\ 0.0000 & 0.3886 \\ 0.0000 & 0.0000 \end{bmatrix}, \quad Y = \begin{bmatrix} 0.0000 & 0.0000 \\ 0.0000 & -1.0000 \\ 0.0000 & 3.0000 \end{bmatrix}. $$

The estimation results are illustrated in Figs. 18 and 19. Table 2 shows the steady-state RMS errors and peak errors of the position estimates using the convex optimization approach.

Figure 18. COMPARISON OF THE ACTUAL (DASH) AND ESTIMATED (SOLID) POSITION OF A MOVING POINT WITH RESPECT TO A MOVING CAMERA.


Figure 19. POSITION ESTIMATION ERROR FOR A MOVING POINT.

Table 2. POSITION ESTIMATION ERRORS.

                     w/o optimization    w/ optimization
RMS in x (m)             0.0067              0.0044
RMS in y (m)             0.0216              0.0208
RMS in z (m)             0.0645              0.0730
peak error in x (m)      0.0258              0.0172
peak error in y (m)      0.0334              0.0249
peak error in z (m)      0.2257              0.1330

CONCLUSION

In this work, the unknown input observer in [8] is implemented to estimate the position of a static and a moving object. For the static object case, the number of unknown inputs is less than the number of measured outputs; thus, the observer exponentially converges to the true state. For the moving object case, when the number of disturbance inputs is equal to the number of outputs, the observer yields a uniformly ultimately bounded result (cf. Set 2 of Experiment II). Even so, the observer yields RMS errors within 0.075 m in the presence of sensor noise in the feature tracking and camera velocities. The observer performs well in estimating the coordinates of static as well as moving objects, as seen from the RMS errors in each experiment. As illustrated in Tabs. 1 and 2, the convex optimization approach reduces the effect of noise in the state estimation at the cost of an increased transient time. The convex optimization method can be used to obtain an optimal observer gain for more general camera and object motions.

ACKNOWLEDGMENT

This research is supported in part by NSF award numbers 0547448, 0901491, 1161260, and a contract with the Air Force Research Laboratory, Munitions Directorate at Eglin AFB. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring agency.

References
[1] Avidan, S., and Shashua, A., 2000. "Trajectory triangulation: 3D reconstruction of moving points from a monocular image sequence". IEEE Trans. Pattern Anal. Mach. Intell., 22(4), Apr., pp. 348–357.
[2] Kaminski, J., and Teicher, M., 2004. "A general framework for trajectory triangulation". J. Math. Imag. Vis., 21(1), pp. 27–41.
[3] Han, M., and Kanade, T., 2004. "Reconstruction of a scene with multiple linearly moving objects". Int. J. Comput. Vision, 59(3), pp. 285–300.
[4] Vidal, R., Ma, Y., Soatto, S., and Sastry, S., 2006. "Two-view multibody structure from motion". Int. J. Comput. Vision, 68(1), pp. 7–25.
[5] Yuan, C., and Medioni, G., 2006. "3D reconstruction of background and objects moving on ground plane viewed from a moving camera". In Comput. Vision Pattern Recognit., Vol. 2, pp. 2261–2268.
[6] Park, H., Shiratori, T., Matthews, I., and Sheikh, Y., 2010. "3D reconstruction of a moving point from a series of 2D projections". In Euro. Conf. on Comp. Vision, Vol. 6313, pp. 158–171.
[7] Dani, A., Kan, Z., Fischer, N. R., and Dixon, W. E., 2010. "Structure and motion estimation of a moving object using a moving camera". In Proc. Am. Control Conf., pp. 6962–6967.
[8] Dani, A. P., Kan, Z., Fischer, N. R., and Dixon, W. E., 2011. "Structure estimation of a moving object using a moving camera: An unknown input observer approach". In IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), pp. 5005–5010.
[9] Szeliski, R., 2010. Computer Vision: Algorithms and Applications. Springer.
[10] Yaz, E., and Azemi, A., 1993. "Observer design for discrete and continuous nonlinear stochastic systems". Int. J. Syst. Sci., 24(12), pp. 2289–2302.
[11] Xie, L., and Khargonekar, P. P., 2010. "Lyapunov-based adaptive state estimation for a class of nonlinear stochastic systems". In Proc. of American Controls Conf., pp. 6071–6076.
[12] Darouach, M., Zasadzinski, M., and Xu, S., 1994. "Full-order observers for linear systems with unknown inputs". IEEE Trans. Autom. Control, 39(3), Mar., pp. 606–609.

[13] Grant, M., and Boyd, S., 2005. CVX: MATLAB software for disciplined convex programming. On the WWW. URL http://cvxr.com/cvx/.

[14] Bouguet, J., 2010. Camera calibration toolbox for MATLAB. On the WWW. URL http://www.vision.caltech.edu/bouguetj/.
[15] Tomasi, C., and Kanade, T., 1991. Detection and tracking of point features. Tech. rep., Carnegie Mellon University.
[16] Shi, J., and Tomasi, C., 1994. "Good features to track". In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 593–600.
[17] Loffler, M., Costescu, N., and Dawson, D., 2002. "Qmotor 3.0 and the Qmotor robotic toolkit - an advanced PC-based real-time control platform". IEEE Contr. Syst. Mag., 22(3), pp. 12–26.
[18] Patre, P. M., Dupree, K., MacKunis, W., and Dixon, W. E., 2008. "A new class of modular adaptive controllers, part II: Neural network extension for non-LP systems". In Proc. Am. Control Conf., pp. 1214–1219.
[19] Spong, M., and Vidyasagar, M., 1989. Robot Dynamics and Control. John Wiley & Sons Inc., New York.
[20] Crane, C. D., and Duffy, J., 1998. Kinematic Analysis of Robot Manipulators. Cambridge.
