Design of Schmidt-Kalman Filter for Target Tracking with Navigation Errors

Chun Yang Sigtem Technology, Inc. San Mateo, CA 94402 [email protected]

Erik Blasch Air Force Research Lab/RYAA

WPAFB, OH 45433 [email protected]

Phil Douville Air Force Research Lab/RYAA

WPAFB, OH 45433 [email protected]

Abstract—In most target tracking formulations, the tracking sensor location is assumed to be perfectly known. Without accounting for the navigation errors of the sensor platform, regular Kalman filters tend to be optimistic (i.e., the covariance matrix falls far below the actual mean squared errors). In this paper, the Schmidt-Kalman filter (SKF) is formulated for target tracking with navigation errors. The SKF does not estimate the navigation errors explicitly but rather takes into account (i.e., considers) the navigation error covariance provided by an on-board navigation unit in the tracking filter formulation. Including the navigation errors leads to the so-called “consider covariance.” By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters. Monte Carlo simulation results are presented in the paper to demonstrate the operation and performance of the SKF for target tracking in the presence of navigation errors.1,2

1 978-1-4244-3888-4/10/$25.00 ©2010 IEEE
2 IEEEAC paper #1081, Version 3, Updated January 5, 2010

TABLE OF CONTENTS
1. INTRODUCTION
2. TARGET TRACKING FORMULATION
3. CONSIDER COVARIANCE IN TRACKING FILTER
4. RECURSIVE CALCULATION OF METRICS
5. SIMULATION RESULTS AND ANALYSIS
6. CONCLUSIONS
ACKNOWLEDGEMENT
REFERENCES
BIOGRAPHY

1. INTRODUCTION

Navigation errors are a performance-limiting factor for target tracking as ranging measurements increase in accuracy. In the past, the sensor platform has typically been assumed to be at a perfectly known location when formulating the measurement equations for tracking filters. Such an assumption is reasonable when the tracking goal is to determine the relative location of a target, as for a homing missile. However, for other applications such as sensor data fusion or weapons cueing, the sensor/platform location accuracy is critical for high-speed maneuvering target tracking, because the target's relative location has to be transferred into a global reference frame and the sensor navigation errors are magnified in the target position solution. Tracking mission success therefore requires techniques such as the Schmidt-Kalman Filter (SKF) [9] to account for navigation errors.

Without accounting for navigation errors, regular Kalman filters tend to be optimistic in that the covariance matrix reaches a steady-state value (if it does not diverge) far below the actual mean squared errors. Heuristic techniques such as the α-β-γ filter are used to prevent divergence and boost the filter gain by adjusting the Q and R covariance parameters.

In [6, 7], it was shown that residual biases in radar measurements could be mitigated using the Schmidt-Kalman filter. In the setting of tracking ballistic missiles in midcourse, ground-based electronically scanned array (ESA) radars were considered. The range and off-boresight angles of a target measured in the radar frame are transformed into relative position measurements in a local east-north-up (ENU) frame. It was assumed that the measurement process is subject to frame transformation errors (in terms of Euler angles) and unknown but constant measurement errors (in terms of biases and scale factors) in addition to random noise. It was further assumed that a bias estimator was available to remove the angular and measurement biases. Zero-mean Gaussian residual biases could then be mitigated so long as the residual bias covariance matrix was provided by the bias estimator.

In this paper, the Schmidt-Kalman filter (SKF) is formulated for target tracking with navigation errors. The SKF does not estimate the navigation errors explicitly (it could if external reference data were available) but rather takes into account (i.e., considers) the navigation error covariance, provided by an on-board navigation unit, in the tracking filter formulation. By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters.

The paper is organized as follows. The conventional tracking filter is formulated in Section 2. The design of the SKF using the consider covariance in the presence of navigation errors is presented in Section 3. The recursive calculation of performance metrics in terms of root mean squared (RMS) errors and root sample covariance trace (RSCT) is described in Section 4. Monte Carlo simulation results are presented in Section 5 to demonstrate the operation and performance of the SKF for target tracking in the presence of navigation errors. Finally, Section 6 concludes the paper with an outline of future work.

2. TARGET TRACKING FORMULATION

Consider a linear time-invariant discrete-time dynamic system together with its measurement as

$x_{k+1} = A x_k + B u_k + w_k$    (1a)

$y_k = H x_k + v_k$    (1b)

where the subscript k is the time index, x is the state vector, u is a known input, y is the measurement, and w and v are the state and measurement noise processes, respectively. It is implied that all vectors and matrices have compatible dimensions, which are omitted for simplicity. The goal is to find an estimate, denoted by $\hat{x}_k$, of $x_k$ given the measurements up to time k, denoted by $Y_k = \{y_0, \ldots, y_k\}$. Under the assumptions that the state and measurement noises are uncorrelated zero-mean white Gaussian with $w \sim N(0, Q)$ and $v \sim N(0, R)$, where Q and R are positive semi-definite covariance matrices, the Kalman filter provides an optimal estimator in the form of $\hat{x}_k = E\{x_k \mid Y_k\}$ [1, 2, 3, 4]. Starting from an initial estimate $\hat{x}_0 = E\{x_0\}$ and its estimation error covariance $P_0 = E\{(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T\}$, where the superscript T stands for matrix transpose, the Kalman filter equations specify the propagation of $\hat{x}_k$ and $P_k$ over time as well as their update by the measurement $y_k$ as

$\bar{x}_{k+1} = A \hat{x}_k + B u_k$    (2a)

$\bar{P}_{k+1} = A P_k A^T + Q$    (2b)

$\hat{x}_{k+1} = \bar{x}_{k+1} + K_{k+1}(y_{k+1} - H \bar{x}_{k+1})$    (2c)

$P_{k+1} = (I - K_{k+1} H)\bar{P}_{k+1}$    (2d)

$K_{k+1} = \bar{P}_{k+1} H^T S_{k+1}^{-1}$    (2e)

$S_{k+1} = H \bar{P}_{k+1} H^T + R$    (2f)

where $\bar{x}_{k+1}$ and $\bar{P}_{k+1}$ are the predicted state and prediction error covariance, respectively, and $S_{k+1}$ is the measurement prediction error covariance.
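To make the recursion in (2a)-(2f) concrete, the following is a minimal Python/NumPy sketch of one propagation and update cycle; the function and variable names simply mirror the symbols above and are not taken from the original paper.

```python
import numpy as np

def kf_predict(x_hat, P, A, B, u, Q):
    """Time propagation, Eqs. (2a)-(2b)."""
    x_bar = A @ x_hat + B @ u                 # (2a) predicted state
    P_bar = A @ P @ A.T + Q                   # (2b) prediction error covariance
    return x_bar, P_bar

def kf_update(x_bar, P_bar, y, H, R):
    """Measurement update, Eqs. (2c)-(2f)."""
    S = H @ P_bar @ H.T + R                   # (2f) measurement prediction error covariance
    K = P_bar @ H.T @ np.linalg.inv(S)        # (2e) Kalman gain
    x_hat = x_bar + K @ (y - H @ x_bar)       # (2c) updated state
    P = (np.eye(len(x_bar)) - K @ H) @ P_bar  # (2d) updated covariance
    return x_hat, P
```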

In practice, however, the target measurements are nonlinear in nature, such as range and bearing. Consider a general nonlinear measurement taken from the sensor at $x_s$ to a target at $x_t$ as

$z = f(x_t, x_s) + v$    (3)

where $f(\cdot, \cdot)$ is the nonlinear measurement equation and v is the measurement error, assumed to be zero-mean Gaussian $N(0, \sigma^2)$. Assume that the sensor location $x_s$ is known perfectly and an initial estimate of the unknown target location, denoted by $x_0$, is available. The nonlinear measurement can be linearized around this initial estimate as

$z \approx f(x_s, x_0) + h^T (x_t - x_0) + v$    (4a)

$h^T = \left[ \dfrac{\partial z}{\partial x_{t,1}} \; \dfrac{\partial z}{\partial x_{t,2}} \; \cdots \; \dfrac{\partial z}{\partial x_{t,n}} \right]_{x_0}$    (4b)

In terms of the measurement prediction error, the equation can be further written in terms of $x = x_t - x_0$ as

$\tilde{z} = z - f(x_s, x_0) = h^T x + v$    (5)

For M measurements, the linearized measurements can be put into vector form as

$\tilde{z} = H x + v$    (6a)

$\tilde{z} = [\tilde{z}_1 \; \tilde{z}_2 \; \cdots \; \tilde{z}_M]^T$    (6b)

$v = [v_1 \; v_2 \; \cdots \; v_M]^T$    (6c)

$H = [h_1 \; h_2 \; \cdots \; h_M]^T$    (6d)

The use of the linearized measurements (6a) is conformal with the linear formulation (1b), and applying them in (2) leads to the extended Kalman filter (EKF) [1, 2, 3, 4].

In the two-dimensional setting with ranging measurements, let x and y be the coordinate axes. Then $x_t = [x_t, y_t]^T$ and $x_s = [x_s, y_s]^T$. The nonlinear equation (3) for range is (ignoring the noise term)

$r = f(x_t, x_s) = \sqrt{(x_t - x_s)^2 + (y_t - y_s)^2}$    (7a)

The corresponding linearized equation around an initial estimate $x_0$ of $x_t$ is

$r \approx f(x_s, x_0) + h^T (x_t - x_0)$    (7b)

with

$h = \begin{bmatrix} (x_0 - x_s)/r \\ (y_0 - y_s)/r \end{bmatrix} = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}$    (7c)

where θ is the angle relative to the x-axis. Eq. (7c) affords an important interpretation: the observation matrix is made of the line of sight (LOS) vector from the sensor to the target [12].

3. CONSIDER COVARIANCE IN TRACKING FILTER

In the above formulation, the sensor location $x_s$ is assumed to be known perfectly, which is approximately true when the ranging errors are much larger than the sensor positioning errors. However, when the ranging errors are comparable to the sensor positioning errors, the navigation errors must be accounted for explicitly to ensure tracking performance.

Assume that an onboard navigation system provides the sensor's location $\bar{x}_s$, which is subject to an unknown instantaneous error $\delta x_s$ and has a location error covariance $P_s$. Further assume that the sensor provides range measurements given by

$r = \| x_t - x_s \|_2 + n_r$    (8)

where $x_t$ is the target location, $\|\cdot\|_2$ stands for the 2-norm, and $n_r$ is the measurement noise, assumed to be of zero mean with a variance of $\sigma^2$. Given the measured sensor location $\bar{x}_s$ and the predicted target location $\bar{x}_t$, the predicted range is obtained as

$\bar{r} = \| \bar{x}_t - \bar{x}_s \|_2$    (9)

The range equation is expanded around the measured sensor location and the predicted target location as

$r = \bar{r} + \left(\dfrac{\partial r}{\partial x_t}\right)^T (x_t - \bar{x}_t) + \left(\dfrac{\partial r}{\partial x_s}\right)^T (x_s - \bar{x}_s) + n_r$    (10a)

$r = \bar{r} + \left(\dfrac{\partial r}{\partial x_t}\right)^T \delta x_t + \left(\dfrac{\partial r}{\partial x_s}\right)^T \delta x_s + n_r$    (10b)

where the target position error is $\delta x_t$.

3.1 Without Sensor Position Errors

In the conventional EKF formulation as in (4a) and (7a), the sensor position errors are ignored. The resulting linearized measurement equation is

$\delta r = r - \bar{r} = h_t^T \delta x_t + n_r$    (11a)

$h_t = \dfrac{\partial r}{\partial x_t}$    (11b)
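As a small illustration of (7c) and the linearized range measurement (11), the sketch below builds the range Jacobian from the line-of-sight vector and forms the innovation variance when the sensor position errors are ignored; the numerical positions and covariances are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def range_jacobian(x_target, x_sensor):
    """Range Jacobian w.r.t. the target position, Eqs. (7c)/(11b): h_t = (x_t - x_s)/r."""
    d = np.asarray(x_target, float) - np.asarray(x_sensor, float)
    r = np.linalg.norm(d)                        # predicted range
    return d / r, r                              # unit line-of-sight vector and range

# Hypothetical 2-D positions and covariances for illustration only
x_t0 = np.array([0.0, 19000.0])                  # predicted target position (placeholder)
x_s = np.array([-8000.0, 0.0])                   # reported sensor position (placeholder)
h_t, r_pred = range_jacobian(x_t0, x_s)

P_t = np.diag([100.0**2, 100.0**2])              # target position error covariance (placeholder)
sigma_r2 = 10.0**2                               # range measurement noise variance
S_conventional = h_t @ P_t @ h_t + sigma_r2      # innovation variance with sensor errors ignored, cf. (11)
```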

3.2 With Sensor Position Errors

To account for the sensor positioning errors, consider three cases: zero-mean errors, a constant bias, and non-stationary errors.

Case 1: Zero Mean

A well-behaved navigation system provides a navigation solution whose error is uncorrelated (with itself over time and with the target state estimation error), with zero mean $E\{\delta x_s\} = 0$ and covariance $P_s$. The concept of the consider covariance as used in [5, 10] is the central idea of the Schmidt-Kalman filter. The consider covariance for the measurement equation (10) is given by

$S = h_t^T P_t h_t + h_s^T P_s h_s + R$    (12)

where $P_t$ is the target state error covariance, $P_s$ is the sensor location error covariance, R is the measurement error covariance, and $h_s = \partial r/\partial x_s$ from (10b).

Case 2: Constant Bias

If the measurement exhibits a constant bias, the bias can be estimated directly when an external reference measurement is available. Alternatively, it can be estimated by a separate bias estimator. Any residual bias is accounted for by the consider covariance as suggested in [6, 7]. To derive the Schmidt-Kalman filter, the bias is taken as an augmented state in the filter formulation for the state and covariance propagation. However, the update equations for the augmented states, as nuisance parameters, are discarded. More specifically, the augmented equations are given by

$\begin{bmatrix} x_{k+1} \\ \beta_{k+1} \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x_k \\ \beta_k \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u_k + \begin{bmatrix} w_k \\ 0 \end{bmatrix}$    (13a)

$y_k = \begin{bmatrix} H_t & H_s \end{bmatrix} \begin{bmatrix} x_k \\ \beta_k \end{bmatrix} + v_k$    (13b)

The bias $\beta_k$ is not estimated, but its effect on $x_k$ is accounted for through a guess about the bias in terms of its covariance matrix $\mathrm{Cov}\{\beta_k\} = P^s_k$ and its correlation with the target state estimate, defined as

$P^{ts}_k = E\{(x_k - \hat{x}_k)\beta_k^T\}, \quad P^{ts}_0 = 0$    (14)

The augmented state covariance matrix is given by

$\mathcal{P}_k = \begin{bmatrix} P^t_k & P^{ts}_k \\ (P^{ts}_k)^T & P^s_k \end{bmatrix}$    (15)

The Schmidt-Kalman filter is then obtained by applying the Kalman filter to the augmented equations (13) with the augmented covariance (15). The time propagation equations of the target state and its error covariance remain the same as (2a) and (2b). The bias covariance propagates through the identity: $P^s_{k+1} = P^s_k$. The time propagation of the cross-correlation is given by

$\bar{P}^{ts}_{k+1} = A P^{ts}_k$    (16)

The measurement update equation of the target state still has the same form as (2c), but the gain matrix differs from (2e) as given below:

$K_{k+1} = (\bar{P}^t_{k+1} H_t^T + \bar{P}^{ts}_{k+1} H_s^T) S_{k+1}^{-1}$    (17a)

where the predicted measurement error covariance $S_{k+1}$ is given by

$S_{k+1} = H_t \bar{P}^t_{k+1} H_t^T + H_s (\bar{P}^{ts}_{k+1})^T H_t^T + H_t \bar{P}^{ts}_{k+1} H_s^T + H_s P^s_{k+1} H_s^T + R_{k+1}$    (17b)

However, the measurement update equation of the target state error covariance is given by

$P^t_{k+1} = (I - K_{k+1} H_t)\bar{P}^t_{k+1} - K_{k+1} H_s (\bar{P}^{ts}_{k+1})^T$    (18a)

The measurement update of the cross-correlation is given by

$P^{ts}_{k+1} = \bar{P}^{ts}_{k+1} - K_{k+1}(H_t \bar{P}^{ts}_{k+1} + H_s P^s_{k+1})$    (18b)

From (17b), it can be seen that if there is no correlation, that is, $P^{ts}_k = 0$, it reduces to (12), which is a form of covariance inflation. This may occur when the sensor position is subject to zero-mean uncorrelated errors. In general, however, even when starting with the null initialization $P^{ts}_0 = 0$ as in (14), the cross-correlation for a non-zero bias does not remain zero, as seen from (18b).

Case 3: Non-Stationary Errors

For many practical navigation systems, however, the error characteristics of the navigation solution are non-stationary and correlated. This is mainly because most aided navigation systems use a Kalman filter to integrate an inertial navigation solution with various navaids such as GPS to curb the solution error growth while calibrating inertial sensor errors.

For an unaided solution over the short term, the position error grows quadratically with time (~t²) under accelerometer bias and cubically (~t³) under gyro drift. The horizontal position errors oscillate at the Schuler period of 84.4 min (Schuler frequency $\omega_s = \sqrt{g/R_0} \approx 0.00124$ rad/s) near the earth's surface, whereas the vertical position (height) errors grow unbounded. Navaids such as GPS and a baro altimeter are employed to limit the navigation solution errors. The raw data rates of inertial sensors are typically 100 to 200 Hz, whereas the navaids are available at 10 Hz to 1 Hz or less frequently. Between two navaid updates, the errors grow. The updated error covariance exhibits a sawtooth pattern, as exemplified in Figures 1 and 2 for the position and velocity components, respectively.

Fig. 1 Integrated Inertial: Position Errors

Fig. 2 Integrated Inertial: Velocity Errors

In the steady state, if the update rate of the target tracking algorithm is commensurate with that of the navaid update, a uniform quality of the integrated navigation solution can be expected. If the update interval of the target tracking algorithm is much longer than that of the navaid update, the navigation errors can be considered uncorrelated in time. Otherwise, variable quality can occur, as indicated by the navaid integration filter's covariance matrix $P_s$, and the navigation errors may be correlated if the target tracking update interval is comparable to or shorter than the navaid update interval. To deal with variable and/or correlated errors, one approach is to introduce a first-order Gauss-Markov model for the bias, which is no longer constant but slowly varying. This aspect will be investigated as part of our future work.
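The consider (Schmidt-Kalman) recursion of (16)-(18) can be sketched as follows in Python/NumPy; the variable names mirror the symbols $P^t$, $P^{ts}$, $P^s$, $H_t$, and $H_s$ above, and this is an illustrative reading of the equations rather than the authors' implementation.

```python
import numpy as np

def skf_time_update(x_hat, P_t, P_ts, A, B, u, Q):
    """Time propagation: target state/covariance as in (2a)-(2b), cross term per (16).

    The bias covariance P_s is not propagated here because it stays constant in Case 2.
    """
    x_bar = A @ x_hat + B @ u
    P_t_bar = A @ P_t @ A.T + Q
    P_ts_bar = A @ P_ts                                            # (16)
    return x_bar, P_t_bar, P_ts_bar

def skf_meas_update(x_bar, P_t_bar, P_ts_bar, P_s, y, H_t, H_s, R):
    """Consider (Schmidt-Kalman) measurement update, Eqs. (17a)-(18b)."""
    S = (H_t @ P_t_bar @ H_t.T + H_s @ P_ts_bar.T @ H_t.T
         + H_t @ P_ts_bar @ H_s.T + H_s @ P_s @ H_s.T + R)         # (17b)
    K = (P_t_bar @ H_t.T + P_ts_bar @ H_s.T) @ np.linalg.inv(S)    # (17a)
    x_hat = x_bar + K @ (y - H_t @ x_bar)                          # same form as (2c)
    I = np.eye(P_t_bar.shape[0])
    P_t = (I - K @ H_t) @ P_t_bar - K @ H_s @ P_ts_bar.T           # (18a)
    P_ts = P_ts_bar - K @ (H_t @ P_ts_bar + H_s @ P_s)             # (18b)
    return x_hat, P_t, P_ts
```

Note that the bias estimate itself is never updated; only its covariance and cross-correlation shape the gain and the updated target covariance.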

4. RECURSIVE CALCULATION OF METRICS

In Section 5, Monte Carlo simulation will be used to evaluate the performance of the Schmidt-Kalman filter versus a conventional Kalman filter in the presence of navigation errors. The performance metrics will be the root mean squared (RMS) errors when the state truth is known and the root sample covariance trace (RSCT) when the state truth is not known.

Assume that a vector quantity $x(k) \sim N(\mu, \Sigma)$ at time k is estimated N times as in a Monte Carlo simulation. The estimate is denoted by $\hat{x}_i(k)$ for i = 1, ..., N. There are two ways to characterize the estimation errors. One way is to consider the sample mean and sample covariance, defined as

$m_N(k) = \dfrac{1}{N}\sum_{i=1}^{N} \hat{x}_i(k)$    (19a)

$S_N(k) = \dfrac{1}{N}\sum_{i=1}^{N} [\hat{x}_i(k) - m_N(k)][\hat{x}_i(k) - m_N(k)]^T$    (19b)

If the denominator in (19b) is (N−1), the subsequent equations need to change accordingly; however, the difference is small for large N. The sample covariance $S_N(k)$ does not tell how accurate the estimates are (i.e., how far they are from the true value) but rather indicates the spread of the estimates around the sample mean $m_N(k)$ (i.e., their precision). Consider the bias in the estimate

$b_N(k) = m_N(k) - x(k)$    (20)

The ellipses drawn with $\{m_N(k), S_N(k)\}$ are often called confidence ellipses. The statistic

$T^2 = n(m - \mu)^T P^{-1}(m - \mu)$    (21)

follows the so-called Hotelling's distribution with n−1 degrees of freedom [8], where n is the dimension of x. It can easily be shown that the sample mean and covariance can be calculated recursively inside a Monte Carlo simulation loop as

$m_n(k) = \dfrac{1}{n}\sum_{i=1}^{n} \hat{x}_i(k) = \left(1 - \dfrac{1}{n}\right) m_{n-1}(k) + \dfrac{1}{n}\hat{x}_n(k)$    (22)

$S_n(k) = \dfrac{1}{n}\sum_{i=1}^{n} [\hat{x}_i(k) - m_n(k)][\hat{x}_i(k) - m_n(k)]^T$    (23a)

$S_n(k) = \left(1 - \dfrac{1}{n}\right) S_{n-1}(k) + \dfrac{1}{n}[\hat{x}_n(k) - m_n(k)][\hat{x}_n(k) - m_n(k)]^T + \left(1 - \dfrac{1}{n}\right)[m_n(k) - m_{n-1}(k)][m_n(k) - m_{n-1}(k)]^T$    (23b)

In contrast to (19), another widely used covariance is calculated with respect to the true vector x(k), if available, instead of the sample mean:

$P_N(k) = \dfrac{1}{N}\sum_{i=1}^{N} [\hat{x}_i(k) - x(k)][\hat{x}_i(k) - x(k)]^T$    (24)

The square root of the trace of P is the so-called root mean squared (RMS) error, defined as

$RMS_N(k) = \sqrt{\mathrm{trace}\{P_N(k)\}} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} [\hat{x}_i(k) - x(k)]^T[\hat{x}_i(k) - x(k)]}$    (25)

In calculating (24) and (25), the truth x(k) is assumed to be known, which is the preferred choice for performance evaluation in simulation. Without modeling errors, the Kalman filter provides the estimation error covariance as defined in (24). The error covariance is frequently used to validate a Kalman filter implementation and as a performance indicator. The ellipses drawn with $\{x(k), P_N(k)\}$ are called error ellipses. Similarly, inside a Monte Carlo simulation loop, the RMS errors can be calculated recursively as

$P_n(k) = \left(1 - \dfrac{1}{n}\right) P_{n-1}(k) + \dfrac{1}{n}[\hat{x}_n(k) - x(k)][\hat{x}_n(k) - x(k)]^T$    (26)

$RMS_n^2(k) = \left(1 - \dfrac{1}{n}\right) RMS_{n-1}^2(k) + \dfrac{1}{n}[\hat{x}_n(k) - x(k)]^T[\hat{x}_n(k) - x(k)]$    (27)

It is easy to show that the estimation error covariance $P_N(k)$ is related to the sample covariance $S_N(k)$ and the bias vector $b_N(k)$ by

$P_N(k) = S_N(k) + b_N(k) b_N(k)^T$    (28)

Without knowing the truth x(k), the bias vector cannot be pre-calculated using (20). As a result, the sample mean $m_N(k)$ and sample covariance $S_N(k)$ cannot indicate the estimation accuracy. With the truth x(k), the two methods are linked to each other via (28) and are thus equivalent.
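Identity (28) is easy to check numerically; the short sketch below, using synthetic (hypothetical) estimates, verifies that the truth-centered covariance equals the sample covariance plus the outer product of the bias vector.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])                                 # hypothetical truth x(k)
est = x_true + 0.5 + 0.3 * rng.standard_normal((1000, 2))      # biased Monte Carlo estimates x_hat_i(k)

m = est.mean(axis=0)                                  # sample mean, Eq. (19a)
S = (est - m).T @ (est - m) / len(est)                # sample covariance, Eq. (19b)
b = m - x_true                                        # bias vector, Eq. (20)
P = (est - x_true).T @ (est - x_true) / len(est)      # truth-centered covariance, Eq. (24)

assert np.allclose(P, S + np.outer(b, b))             # Eq. (28) holds to machine precision
```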

Instead of calculating the sample covariance itself, its trace may be evaluated, leading to the root sample covariance trace (RSCT), a scalar quantity similar to the RMS error:

$RSCT_n(k) = \sqrt{\mathrm{trace}\{S_n(k)\}}$    (29a)

$RSCT_n^2(k) = \left(1 - \dfrac{1}{n}\right) RSCT_{n-1}^2(k) + \dfrac{1}{n}[\hat{x}_n(k) - m_n(k)]^T[\hat{x}_n(k) - m_n(k)] + \left(1 - \dfrac{1}{n}\right)[m_n(k) - m_{n-1}(k)]^T[m_n(k) - m_{n-1}(k)]$    (29b)
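A minimal sketch of the recursive bookkeeping in (22), (27), and (29b), as it might be used inside a Monte Carlo loop; the class and attribute names are illustrative assumptions, and one instance would be kept per time step k.

```python
import numpy as np

class RecursiveMetrics:
    """Recursive sample mean, truth-centered RMS, and RSCT per Eqs. (22), (27), (29b)."""

    def __init__(self, dim):
        self.n = 0
        self.m = np.zeros(dim)   # running sample mean m_n(k), Eq. (22)
        self.rms2 = 0.0          # running truth-centered mean squared error, Eq. (27)
        self.rsct2 = 0.0         # running trace of the sample covariance, Eq. (29b)

    def add(self, x_hat, x_true=None):
        """Fold in one Monte Carlo estimate; pass x_true only when the truth is known."""
        self.n += 1
        n = self.n
        m_prev = self.m.copy()
        self.m = (1 - 1 / n) * m_prev + x_hat / n                      # (22)
        if x_true is not None:
            e = x_hat - x_true
            self.rms2 = (1 - 1 / n) * self.rms2 + (e @ e) / n          # (27)
        d = x_hat - self.m
        dm = self.m - m_prev
        self.rsct2 = ((1 - 1 / n) * self.rsct2 + (d @ d) / n
                      + (1 - 1 / n) * (dm @ dm))                       # (29b)

    @property
    def rms(self):
        return np.sqrt(self.rms2)    # Eq. (25)

    @property
    def rsct(self):
        return np.sqrt(self.rsct2)   # Eq. (29a)
```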

5. SIMULATION RESULTS AND ANALYSIS

Three simulation examples are presented in this section, wherein a more general case with both range and range rate measurements is considered:

$\rho = \sqrt{x^2 + y^2}$    (30a)

$\dot{\rho} = \dfrac{x\dot{x} + y\dot{y}}{\rho}$    (30b)

Taking the partial derivatives of the range and range rate measurements (30a) and (30b) with respect to $x_s$ and $x_t$, and retaining only the linear components, yields the observation matrices $H_s$ and $H_t$, which assume the same form. Without the subscript, it is given by

$H = \begin{bmatrix} \dfrac{\partial \rho}{\partial x} & \dfrac{\partial \rho}{\partial \dot{x}} & \dfrac{\partial \rho}{\partial y} & \dfrac{\partial \rho}{\partial \dot{y}} \\ \dfrac{\partial \dot{\rho}}{\partial x} & \dfrac{\partial \dot{\rho}}{\partial \dot{x}} & \dfrac{\partial \dot{\rho}}{\partial y} & \dfrac{\partial \dot{\rho}}{\partial \dot{y}} \end{bmatrix}$    (31)

where

$\dfrac{\partial \rho}{\partial x} = \dfrac{x}{\rho}$    (32a)

$\dfrac{\partial \rho}{\partial y} = \dfrac{y}{\rho}$    (32b)

$\dfrac{\partial \rho}{\partial \dot{x}} = 0$    (32c)

$\dfrac{\partial \rho}{\partial \dot{y}} = 0$    (32d)

$\dfrac{\partial \dot{\rho}}{\partial x} = \dfrac{\dot{x}}{\rho} - \dfrac{x(x\dot{x} + y\dot{y})}{\rho^3}$    (32e)

$\dfrac{\partial \dot{\rho}}{\partial y} = \dfrac{\dot{y}}{\rho} - \dfrac{y(x\dot{x} + y\dot{y})}{\rho^3}$    (32f)

$\dfrac{\partial \dot{\rho}}{\partial \dot{x}} = \dfrac{x}{\rho}$    (32g)

$\dfrac{\partial \dot{\rho}}{\partial \dot{y}} = \dfrac{y}{\rho}$    (32h)
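The partial derivatives (32a)-(32h) translate directly into code. The sketch below assumes a state ordering of [x, ẋ, y, ẏ] in target-minus-sensor relative coordinates and returns the 2×4 observation matrix of (31); the ordering and function name are assumptions for illustration.

```python
import numpy as np

def range_rangerate_H(rel_state):
    """2x4 observation matrix of Eq. (31) for the measurements [rho; rho_dot].

    rel_state = [x, xdot, y, ydot] is the target-minus-sensor relative state (assumed ordering).
    """
    x, xd, y, yd = rel_state
    rho = np.hypot(x, y)                              # Eq. (30a)
    rho_dot = (x * xd + y * yd) / rho                 # Eq. (30b)
    row_rho = [x / rho, 0.0, y / rho, 0.0]                            # (32a)-(32d)
    row_rho_dot = [xd / rho - x * rho_dot / rho**2, x / rho,
                   yd / rho - y * rho_dot / rho**2, y / rho]          # (32e)-(32h)
    return np.array([row_rho, row_rho_dot]), rho, rho_dot
```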

5.1 Simulation with Comparable Navigation Errors

To analyze the effects of sensor location errors (navigation errors) on targeting accuracy, consider the sensor-target crossing scenario shown in Figure 3. The sensor moves along the x-axis from (−10000 m, 0) at a constant speed of 100 m/s while the target moves along the y-axis from (0, 20000 m) at a constant speed of −12 m/s. The initial states are

$x_s = [x \; \dot{x} \; y \; \dot{y}]^T = [-10000 \text{ m} \;\; 100 \text{ m/s} \;\; 0 \text{ m} \;\; 0 \text{ m/s}]^T$    (33a)

$x_t = [x \; \dot{x} \; y \; \dot{y}]^T = [0 \text{ m} \;\; 0 \text{ m/s} \;\; 20000 \text{ m} \;\; -12 \text{ m/s}]^T$    (33b)

The sensor provides range and range rate measurements with measurement error variances of 10² m² and 0.1² (m/s)², respectively. The sensor navigation errors are assumed to be comparable to the measurement errors, with variances of 10² m² and 0.1² (m/s)², respectively. This represents a rather extreme case chosen to emphasize the effects. A regular extended Kalman filter (EKF) is implemented based on the following discrete-time second-order kinematic model (nearly constant velocity):

$x_{k+1} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} T^2/2 & 0 \\ T & 0 \\ 0 & T^2/2 \\ 0 & T \end{bmatrix} w_k$    (34)

where the process noise $w \sim N(0, Q)$ is zero-mean Gaussian. The covariance matrix $Q = \mathrm{diag}([\sigma_x^2 \; \sigma_y^2])$ uses the particular values $\sigma_x = \sigma_y = 0.1$ m/s² for both the x- and y-axes in the first simulation. The initial state is drawn from the initial estimation error covariance of 100² m² and 5² (m/s)², respectively, for the position and velocity in the x- and y-axes:

$P_0 = \mathrm{diag}([100^2 \; 5^2 \; 100^2 \; 5^2])$    (35)
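For reference, the scenario of (33)-(35) and the kinematic model (34) can be set up as in the sketch below. The 2 s update interval is taken from the figure annotations, and mapping the acceleration-level noise through the input matrix of (34) is one common choice; the actual simulation code behind the paper may differ.

```python
import numpy as np

T = 2.0                                              # update interval, dt = 2 s (from the figure annotations)
A = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)            # Eq. (34) state transition matrix
G = np.array([[T**2 / 2, 0],
              [T,        0],
              [0, T**2 / 2],
              [0,        T]])                        # Eq. (34) process-noise input matrix
sigma_a = 0.1                                        # m/s^2, first simulation
Q = G @ np.diag([sigma_a**2, sigma_a**2]) @ G.T      # acceleration noise mapped into the state

x_s0 = np.array([-10000.0, 100.0, 0.0, 0.0])         # Eq. (33a) initial sensor state [x, xdot, y, ydot]
x_t0 = np.array([0.0, 0.0, 20000.0, -12.0])          # Eq. (33b) initial target state
P0 = np.diag([100.0**2, 5.0**2, 100.0**2, 5.0**2])   # Eq. (35) initial estimation error covariance
```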

Under the same initial conditions, a Schmidt-EKF is implemented to calculate the predicted measurement covariance. One hundred Monte Carlo runs are used to calculate the root mean squared (RMS) errors.

Figure 4 shows the details of the sample sensor trajectory, with the blue-colored truth (dashed line) and the green-colored navigation solution (solid line). Figure 5 shows the details of a sample target trajectory, where the red-colored curve is the truth (dashed line), the cyan-colored curve is the regular Kalman filter estimate (cross line style), and the purple-colored curve is the consider filter estimate (solid line). Figure 6 shows another sample target trajectory with the same conventions.

Figure 7 shows the RMS errors and standard deviations for the position estimates produced by the regular EKF and the consider EKF, respectively (the term consider EKF, or consider filter, is used interchangeably with Schmidt-EKF). The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve), whereas the standard deviations of the regular EKF (the dashed blue curve) are smaller than those of the consider EKF (the red curve with cross marks ×). This indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF. Additionally, the consider filter provides a better performance bound than the optimistic regular EKF.

Figure 8 shows the RMS errors and standard deviations for the velocity estimates produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve). The standard deviations of the regular EKF (the dashed blue curve) and those of the consider EKF (the red curve with cross marks ×) are comparable. Again, this indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF and that the consider filter provides a better performance bound than the optimistic regular EKF.

The discrepancy between the RMS errors and the standard deviations of the consider EKF shown in Figures 7 and 8 may be explained by the fact that the standard deviations are calculated from the propagated and updated covariances with a rather large process noise covariance Q, whereas both the target and sensor are moving at constant speed.

5.2 Simulation with Different Process Noise

In the next simulation, the process noise covariance Q is reduced to 0.001² (m/s²)². The sensor navigation error variances are also reduced, to 5² m² and 0.05² (m/s)² (a factor of two better). Compared to Figures 7 and 8, the errors in Figures 9 and 10 are all smaller due to the use of smaller Q and P_s. More importantly, the standard deviations calculated from the propagated and updated covariances of the consider EKF (CEKF) now match well with the actual RMS errors. This match indicates that the consider covariance can better predict the targeting performance for target tracking, sensor fusion, and resource management.

Comparing Figures 9 and 10, one may ask why the RMS position errors of the regular EKF and the consider EKF differ so much in Figure 9, whereas their RMS velocity errors agree fairly well in Figure 10. Note that in the simulation, navigation errors are introduced only in the sensor position, not in the sensor velocity, and the sensor position errors are translated directly into the target position errors. Typically, the velocity components of an integrated navigation solution are more accurate than the position counterparts, but significant sensor velocity errors could also be added in the simulation. Furthermore, the measurements used by both filters include both range and range rate as in (30). The range rate measurements, subject only to random noise, help produce the relatively good velocity estimates shown in Figure 10.

In the past, heuristic approaches have been used to adjust the Q and R of a Kalman filter to prevent divergence due to un-estimated error terms (which are totally ignored) and to match the filter's covariance matrix with the data-driven RMS. Such "blind" adjustments do not exploit the inherent structure of the actual errors. In contrast, even without estimating the error terms, the consider filter explicitly takes into account the distribution and structure of the errors, thus providing better results.

5.3 Evaluation with Unknown Truth

In the following simulation, the sample mean and covariance are calculated from the Monte Carlo runs without knowing the target state truth. The estimated-mean-centered RMS is calculated, which is the root sample covariance trace (RSCT) defined in Section 4. Note that the estimated-mean-centered RMS has a different meaning than the truth-centered RMS. It does not indicate how close the estimates are to the truth but how dispersed the estimates are. It may reveal how consistent the estimates are, but the estimates may be biased. So two aspects need to be shown: one is the bias and the other is the spread around the bias. If the target state estimate is unbiased, the sample-based values are close to the RMS error values.

[Figure: sensor and target trajectories in the x-y plane (m); legend: sensor position truth, sensor position estimate, target position truth, regular KF estimate, CKF estimate]
Fig. 3 Sensor-Target Crossing Scenario

[Figure: sample sensor trajectory, x-y plane (m)]
Fig. 4 Sample Sensor Trajectory

[Figure: sample target trajectory, x-y plane (m)]
Fig. 5 Sample Target Trajectory (Run 1)

[Figure: sample target trajectory, x-y plane (m)]
Fig. 6 Sample Target Trajectory (Run 2)

[Figure: position errors per axis (m) vs. time (dt = 2 s); regular vs. consider EKF with navigation errors; P0 ~ 100, 5; Q ~ 0.1, 0.1; R ~ 10, 0.1; Ps ~ 10, 0.1; mc = 100; curves: std and rms for regular and consider EKF]
Fig. 7 Comparison of Position Errors

[Figure: velocity errors per axis (m/s) vs. time (dt = 2 s); same settings as Fig. 7]
Fig. 8 Comparison of Velocity Errors

[Figure: position errors per axis (m) vs. time (dt = 2 s); P0 ~ 100, 5; Q ~ 0.001, 0.001; R ~ 10, 0.1; Ps ~ 5, 0.05; mc = 50]
Fig. 9 Position Errors with Small Q

[Figure: velocity errors per axis (m/s) vs. time (dt = 2 s); same settings as Fig. 9]
Fig. 10 Velocity Errors with Small Q

[Figure: position errors per axis (m) vs. time (dt = 2 s); std and rms for regular and consider EKF]
Fig. 11 Truth-Centered Position Errors

[Figure: velocity errors per axis (m/s) vs. time (dt = 2 s); std and rms for regular and consider EKF]
Fig. 12 Truth-Centered Velocity Errors

Figure 11 shows the RMS errors and standard deviations for the position estimates produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve), whereas the standard deviations of the regular EKF (the dashed blue curve) are smaller than those of the consider EKF (the red curve with cross marks ×). This indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF. Besides, the consider filter provides a better performance bound than the regular EKF, which is too optimistic.

Figure 12 shows the RMS errors and standard deviations for the velocity estimates produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks +) are larger than those of the consider EKF (the solid cyan curve). The standard deviations of the regular EKF (the dashed blue curve) and those of the consider EKF (the red curve with cross marks ×) are comparable. Again, this indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF and that the consider filter provides a better performance bound than the optimistic regular EKF.

Figure 13 shows the position biases (i.e., the difference between the sample mean and the state truth) of the regular Kalman filter and the consider Kalman filter. The CKF biases are comparable to, but slightly better than, the regular Kalman filter biases. Figure 14 shows the velocity biases (i.e., the difference between the sample mean and the state truth) of the regular and consider Kalman filters, which are comparable.

[Figure: position bias per axis (m) vs. time (dt = 2 s); regular vs. consider EKF with navigation errors, mc = 100]
Fig. 13 Position Bias

[Figure: velocity bias per axis (m/s) vs. time (dt = 2 s); regular vs. consider EKF with navigation errors, mc = 100]
Fig. 14 Velocity Bias

[Figure: sample standard deviation of position per axis (m) vs. time (dt = 2 s); regular vs. consider EKF, mc = 100]
Fig. 15 Sample Mean-Centered Position Errors

[Figure: sample standard deviation of velocity per axis (m/s) vs. time (dt = 2 s); regular vs. consider EKF, mc = 100]
Fig. 16 Sample Mean-Centered Velocity Errors

[Figure: predicted range measurement errors (m) vs. time (dt = 2 s); std for regular and consider EKF]
Fig. 17 Predicted Range Errors (CEKF anticipates navigation errors whereas REKF is optimistic)

[Figure: predicted range rate measurement errors (m/s) vs. time (dt = 2 s); std for regular and consider EKF]
Fig. 18 Predicted Range Rate Errors (CEKF anticipates navigation errors whereas REKF is optimistic)


Figure 15 shows the position errors in terms of the sample-mean-centered RMS error values, or the root sample covariance trace (RSCT), of the regular Kalman filter (RKF) and the consider (Schmidt) Kalman filter (CKF). From Figure 15, it is clear that the CKF is much better than the RKF. Because of the small position bias shown in Figure 13, the sample-mean-centered RMS (RSCT) of Figure 15 is expected to be close to the truth-centered RMS shown in Figure 11. This is indeed the case. Similarly, Figure 16 shows the velocity errors in terms of the sample-mean-centered RMS error values (RSCT) of the regular and consider Kalman filters. The consider Kalman filter outperforms the regular Kalman filter. Because of the small velocity bias shown in Figure 14, the sample-mean-centered RMS (RSCT) of Figure 16 is very close to the truth-centered RMS shown in Figure 12.

Finally, comparison of (12) and (17b) to (2f) reveals an extra time-varying term that is a function of the underlying sensor navigation error covariance $P_s$ and the observation matrix $H_s$ (and, in the general case, extra time-varying terms that are functions of the cross-correlation $P^{ts}_k$ and the observation matrices $H_t$ and $H_s$). Note that $H_s$ is geometry-dependent, and so is $H_t$. Figures 17 and 18 show the square roots of the two diagonal elements of the predicted measurement error covariance. The regular EKF's predicted measurement errors are small; it cannot describe the large errors caused by the navigation errors and is thus optimistic. In contrast, the consider EKF faithfully produces large errors because it anticipates the navigation errors. The net effect of incorporating navigation errors in the tracking filter is a time-varying, adaptive adjustment of the measurement error covariance.

6. CONCLUSIONS

In this paper, we investigated the effect of sensor location errors (navigation errors) on target tracking performance. Without accounting for such errors, a regular Kalman filter produces optimistic results. In contrast, the Schmidt-Kalman filter (also known as the consider Kalman filter), which incorporates the navigation error covariance, was shown not only to be more consistent but also to produce smaller estimation errors. One aspect of our ongoing work is to extend the formulation to maneuvering targets using such algorithms as the interacting multiple model (IMM) estimator. Another effort is to apply the consider covariance for track-to-track fusion. Yet another effort is to develop a track bias calibration and removal algorithm. Finally, we will investigate the proactive approach to sensor management with consider covariance in the presence of navigation errors [13].

ACKNOWLEDGEMENT

Research supported in part under Contract No. FA8650-08-C-1407, which is greatly appreciated.

REFERENCES

[1] Y. Bar-Shalom and X.R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, YBS Publishing, Storrs, CT, 1995.

[2] S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House, Boston, 1999.

[3] A. Farina and F.A. Studer, Radar Data Processing (Vol. I & II), Wiley, New York, 1985.

[4] P. Maybeck, Stochastic Models, Estimation, and Control, Volume 1, Academic Press, Inc, 1979.

[5] O. Montenbruck and E. Gill, “Section 8.1.4: Consider Parameters” in Satellite Orbits: Models, Methods, and Applications, Springer, 2000.

[6] R.Y. Novoselov, S.M. Herman, S.M. Gadaleta, and A.B. Poore, “Mitigating The Effects of Residual Biases with Schmidt-Kalman Filtering,” Proc. of Fusion’2005. July 2005.

[7] R. Paffenroth, R. Novoselov, S. Danford, M. Teixeira, S. Chan, and A. Poore, “Mitigation of Biases Using the Schmidt-Kalman Filter,” Proc. SPIE, Vol. 6699, 2007.

[8] L.R. Paradowski, “Uncertainty Ellipses and Their Application to Interval Estimation of Emitter Position,” IEEE Trans. on Aerospace and Electronic Systems, 33(1), January 1997, 126-133.

[9] S. Schmidt, “Applications of State-Space Methods to Navigation Problems,” in Advances in Control Systems, Vol. 6, C. Leondes (Ed.), 293-340, Academic Press, New York, 1966.

[10] B.D. Tapley, B.E. Schutz, and G.H. Born, “Chapter 6: Consider Covariance Analysis” in Statistical Orbit Determination, Elsevier, Burlington, MA, 2004.

[11] C. Yang, Performance Monitoring and Prediction for Active Management of Distributed Sensors Fusion in Target Tracking, Technical Report (FA8650-08-C-1407), November 2008.

[12] C. Yang, E. Blasch, and I. Kadar, “Geometric Factor in Target Positioning and Tracking,” Fusion’2009, Seattle, WA, July 2009.

[13] C. Yang, M. Miller, E. Blasch, and T. Nguyen, “Proactive Radio Navigation and Target Tracking,” ION-GNSS’2009, Savannah, GA, September 2009.


BIOGRAPHY

Chun Yang received his Bachelor of Engineering from Northeastern University, Shenyang, China, in 1984 and his title of Docteur en Science from Université de Paris, Orsay, France, in 1989. After two years of postdoctoral research at the University of Connecticut, Storrs, CT, he moved on to his industrial R&D career. Since 1994, he has been with Sigtem Technology, Inc., working on adaptive array and baseband signal processing for GNSS receivers and radar systems as well as on nonlinear state estimation with applications in target tracking, integrated inertial navigation, and information fusion. Dr. Yang is an Adjunct Professor of Electrical and Computer Engineering at Miami University. He is a member of the ION, IEEE, ISIF, and SPIE.

Erik Blasch is a Fusion Evaluation Tech Lead for the Assessment & Integration Branch, Sensors Directorate, Air Force Research Laboratory, WPAFB, OH. He is an adjunct professor at Wright State University (WSU), the University of Dayton, and AFIT, and a Reserve Major at the Air Force Office of Scientific Research (AFOSR). He received his Ph.D. in Electrical Engineering from WSU in 1999, an MSEE from WSU in 1997, an MS in Mechanical Engineering (1994) and an MS in Industrial Engineering (1995) from Georgia Tech, and a BSME from MIT in 1992, among other advanced degrees in engineering, health science, economics, and business administration. He is a past President of the International Society of Information Fusion (ISIF), an SPIE Fellow, and active in IEEE, AIAA, and ION. His research interests include target tracking, sensor and information fusion, automatic target recognition, biologically-inspired robotics, and controls.

Philip Lavern Douville graduated from the University of Detroit with a B.E.E. in 1972, the Air Force Institute of Technology with an M.S.E.E. in 1981, and the University of Dayton with a Ph.D. in Electrical Engineering in 1996. At the Air Force Research Laboratory (AFRL), he is currently working in the Sensors ATR Technology Division (RYA) of the Sensors Directorate (RY).

