Skf Final Documentation

    Chapter-1

    1. Introduction

    We humans have been filtering things for virtually our entire history. Water

    filtering is a simple example. We can filter impurities from water as simply as using our

    hands to skim dirt and leaves off the top of the water. Another example is filtering out

    noise from our surroundings. If we paid attention to all the little noises around us we would

    go crazy. We learn to ignore superfluous sounds (traffic, appliances, etc.) and focus on

    important sounds, like the voice of the person we're speaking with.

    There are also many examples in engineering where filtering is desirable. Radio

    communications signals are often corrupted with noise. A good filtering algorithm can

    remove the noise from electromagnetic signals while still retaining the useful information.

Another example is line voltage. Many countries require in-home filtering of line voltages in order to power personal computers and peripherals. Without filtering, the power fluctuations would drastically shorten the useful lifespan of the devices.

    1.1 Literature Survey

    Kalman filtering is a relatively recent (1960) development in filtering, although it

    has its roots as far back as Gauss (1795). Kalman filtering has been applied in areas as

    diverse as aerospace, marine navigation, nuclear power plant instrumentation, demographic

    modeling, manufacturing, and many others. In the field of motion estimation for video

coding, many techniques have been applied. It is now quite common to see the Kalman filtering technique and some of its extensions used for the estimation of motion

within image sequences. Particularly in the pixel-recursive approaches, which suit the Kalman formulation very well, one finds various ways of applying this estimation technique both in the time and frequency domains. From a very general perspective, we find

use of the Kalman filter (KF), which implies linear state-space representations, and the

    extended Kalman filter (EKF), which uses the linearized expressions of non-linear state-


space formulations. Moreover, the parallel extended Kalman filter (PEKF), which consists

    of a parallel bank of EKFs, is often encountered in practice.

    Navigation errors are a performance-limiting factor for target tracking as ranging

measurements increase in accuracy. In the past, a sensor platform has typically been assumed to

    be at a perfectly known location when formulating the measurement equations for tracking

    filters. Such an assumption is reasonable when the tracking goal is to determine the relative

    location of a target for such applications as a homing missile. However, for other

    applications such as sensor data fusion or weapons cueing, the sensor/platform location

accuracy is critical for high-speed maneuvering target tracking, as the target's relative

    location has to be transferred into a global reference frame and the sensor navigation errors

get magnified into the target position solution. Tracking mission success therefore requires the use of such techniques as the Schmidt-Kalman Filter (SKF) to account for navigation

    errors. Without accounting for navigation errors, regular Kalman filters tend to be

    optimistic in that the covariance matrix reaches a steady-state value (if not diverged) far

below the actual mean squared errors. Heuristic techniques such as the α-β filter are used to prevent divergence and boost filter gain by adjusting the Q and R covariance parameters. It has been shown that residual biases in radar measurements can be mitigated using the Schmidt-Kalman filter. In the setting of tracking ballistic missiles in midcourse, ground-

    based electronically scanned array radars (ESA) were considered.

    The range and off-boresight angles of a target measured in the radar frame are

    transformed into relative position measurements in a local east-north-up (ENU) frame. It

    was assumed that the measurement process is subject to frame transformation errors (in

    terms of Euler angles) and unknown but constant measurement errors (in terms of biases

    and scale factors) in addition to random noise. It was further assumed that a bias estimator

    was available to remove the angular and measurement biases. Further, zero-mean Gaussian

    residual biases could be mitigated so long as the residual bias covariance matrix was

    provided by the bias estimator.


    The Schmidt-Kalman filter (SKF) is formulated for target tracking with navigation errors.

The Schmidt-Kalman filter does not estimate the navigation errors explicitly (it could if external reference data were available) but rather takes the navigation error covariance, provided by an on-board navigation unit, into account (i.e., "considers" it) in the tracking filter formulation. By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters.

    Chapter-2

    Adaptive Filter

    2.0 Introduction


    An adaptive filter is a filter that self-adjusts its transfer function according to an

    optimization algorithm driven by an error signal. Because of the complexity of the

    optimization algorithms, most adaptive filters are digital filters. By way of contrast, a non-

    adaptive filter has a static transfer function. [16]

    Adaptive filters are required for some applications because some parameters of the

    desired processing operation (for instance, the locations of reflective surfaces in a

    reverberant space) are not known in advance. The adaptive filter uses feedback in the form

    of an error signal to refine its transfer function to match the changing parameters.

    Generally speaking, the adaptive process involves the use of a cost function, which

    is a criterion for optimum performance of the filter, to feed an algorithm, which determines

how to modify the filter transfer function to minimize the cost on the next iteration. [16]

    As the power of digital signal processors has increased, adaptive filters have

    become much more common and are now routinely used in devices such as mobile phones

    and other communication devices, camcorders and digital cameras, and medical monitoring

    equipment.[16]

    2.1 Block diagram

Fig 2.1 Block diagram of adaptive filter

    To start the discussion of the block diagram we take the following assumptions:

    The input signal is the sum of a desired signal d(n) and interfering noise v(n)

    x(n) = d(n) + v(n) (2.1)


The variable filter has a Finite Impulse Response (FIR) structure. For such structures the impulse response is equal to the filter coefficients. The coefficients for a filter of order p are defined as [17]

    w_n = [w_n(0), w_n(1), ..., w_n(p)]^T.  (2.2)

The error signal or cost function is the difference between the desired and the estimated signal

    e(n) = d(n) - d̂(n).  (2.3)

The variable filter estimates the desired signal by convolving the input signal with the impulse response. In vector notation this is expressed as [17]

    d̂(n) = w_n^T x(n),  (2.4)

where

    x(n) = [x(n), x(n-1), ..., x(n-p)]^T  (2.5)

is an input signal vector. Moreover, the variable filter updates the filter coefficients at every time instant

    w_{n+1} = w_n + Δw_n,  (2.6)

where Δw_n is a correction factor for the filter coefficients. The adaptive algorithm generates this correction factor based on the input and error signals. LMS and RLS define two different coefficient update algorithms. [17]
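As a concrete illustration of this update loop, here is a minimal LMS sketch in Python/NumPy. It is only a sketch: the function name, the step size mu, and the filter order default are illustrative assumptions, not taken from the text above.

```python
import numpy as np

def lms_filter(x, d, p=8, mu=0.01):
    """Minimal LMS adaptive filter sketch (illustrative, not from the text).

    x  : input signal fed to the variable filter
    d  : desired signal
    p  : filter order, giving p+1 coefficients as in (2.2)
    mu : step size scaling the correction factor of (2.6)
    """
    w = np.zeros(p + 1)                    # coefficients w_n, eq. (2.2)
    y = np.zeros(len(x))                   # filter output, eq. (2.4)
    e = np.zeros(len(x))                   # error signal, eq. (2.3)
    for n in range(p, len(x)):
        x_vec = x[n - p:n + 1][::-1]       # input vector x(n), eq. (2.5)
        y[n] = w @ x_vec                   # estimate of the desired signal
        e[n] = d[n] - y[n]                 # error drives the adaptation
        w = w + 2 * mu * e[n] * x_vec      # LMS correction Δw_n, eq. (2.6)
    return y, e, w
```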

    2.2 Applications of Adaptive Filters

    2.2.1 System Identification


    One common adaptive filter application is to use adaptive filters to identify an

    unknown system, such as the response of an unknown communications channel or the

    frequency response of an auditorium, to pick fairly divergent applications. Other

    applications include echo cancellation and channel identification.[17]

Fig 2.2 System Identification Model

    Clearly, when e(k) is very small, the adaptive filter response is close to the response

of the unknown system. In this case the same input feeds both the adaptive filter and the unknown system. If, for example, the unknown system is a modem, the input often represents

    white noise, and is a part of the sound you hear from your modem when you log in to your

    Internet service provider.[18]

    2.2.2 Inverse System Identification

    By placing the unknown system in series with your adaptive filter, your filter adapts

    to become the inverse of the unknown system as e(k) becomes very small. As shown in the


    figure the process requires a delay inserted in the desired signal d(k) path to keep the data

    at the summation synchronized. Adding the delay keeps the system causal.

    Fig 2.3 Determining an Inverse Response to an Unknown System

Without the delay, the adaptive filter would have to anticipate the input signal; including the delay to account for the delay caused by the unknown system prevents this condition. [18]

    Plain old telephone systems (POTS) commonly use inverse system identification to

    compensate for the copper transmission medium. When you send data or voice over

telephone lines, the copper wires behave like a filter, having a response that rolls off at higher frequencies (or data rates) and having other anomalies as well.

    Adding an adaptive filter that has a response which is the inverse of the wire

    response, and configuring the filter to adapt in real time, lets the filter compensate for the

    rolloff and anomalies, increasing the available frequency output range and data rate for the

    telephone system.[18]

2.2.3 Noise or Interference Cancellation

    In noise cancellation, adaptive filters let you remove noise from a signal in real

    time. Here, the desired signal, the one to clean up, combines noise and desired information.


    To remove the noise, feed a signal n'(k) to the adaptive filter that represents noise that is

    correlated to the noise to remove from the desired signal. [19]

    Fig 2.4 Using an Adaptive Filter to Remove Noise from an Unknown System

    So long as the input noise to the filter remains correlated to the unwanted noise

    accompanying the desired signal, the adaptive filter adjusts its coefficients to reduce the

    value of the difference between y(k) and d(k), removing the noise and resulting in a clean

    signal in e(k). Notice that in this application, the error signal actually converges to the input

    data signal, rather than converging to zero.[18]
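As a hedged usage sketch of the configuration in Fig 2.4, reusing the hypothetical lms_filter above (all signals and parameter values are made up for illustration): the noise reference is fed to the filter while the corrupted signal serves as d(k), so the error output approximates the clean signal.

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.arange(2000)
clean = np.sin(2 * np.pi * 0.01 * k)                  # desired information
noise = rng.normal(size=k.size)                       # unwanted noise
d = clean + noise                                     # signal to clean up, d(k)
n_ref = 0.8 * noise + 0.1 * rng.normal(size=k.size)   # correlated reference n'(k)

# y(k) converges to the noise component of d(k); e(k) converges to the
# clean signal rather than to zero, as described above.
y, e, w = lms_filter(n_ref, d, p=8, mu=0.005)
```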

    2.2.4 Signal Prediction

    Predicting signals requires that you make some key assumptions. Assume that the

    signal is either steady or slowly varying over time, and periodic over time as well.


    Fig 2.5 Predicting Future Values of a Periodic Signal

Accepting these assumptions, the adaptive filter must predict the future values of the desired signal based on past values. When s(k) is periodic and the filter is long enough

    to remember previous values, this structure with the delay in the input signal, can perform

    the prediction. You might use this structure to remove a periodic signal from stochastic

    noise signals. [19]

2.3 Conclusion

    Finally, it can be noticed that most systems of interest contain elements of more

    than one of the four adaptive filter structures. Carefully reviewing the real structure may be

    required to determine what the adaptive filter is adapting to. Also, for clarity in the figures,

    the analog-to-digital (A/D) and digital-to-analog (D/A) components do not appear. Since

the adaptive filters are assumed to be digital in nature and many of the problems produce analog data, converting the input signals to and from the analog domain is usually necessary. These models are used in the design of adaptive filters, which are further used

    in various applications of advanced communication.


    Chapter-3

    Kalman Filter

    3.0 Introduction

    In statistics, the Kalman filter is a mathematical method named after Rudolf E.

    Kalman. Its purpose is to use measurements observed over time, containing noise (random

    variations) and other inaccuracies, and produce values that tend to be closer to the true

    values of the measurements and their associated calculated values. The Kalman filter has

    many applications in technology, and is an essential part of space and military technology

    development. A very simple example and perhaps the most commonly used type of

    Kalman filter is the phase-locked loop, which is now ubiquitous in FM radios and most

    electronic communications equipment. Extensions and generalizations to the method have

    also been developed.[14]

    The Kalman filter produces estimates of the true values of measurements and their

    associated calculated values by predicting a value, estimating the uncertainty of the

    predicted value, and computing a weighted average of the predicted value and the

    measured value. The most weight is given to the value with the least uncertainty. The

estimates produced by the method tend to be closer to the true values than the original measurements because the weighted average has a better estimated uncertainty than either

    of the values that went into the weighted average. [14]

    From a theoretical standpoint, the Kalman filter is an algorithm for efficiently doing

    exact inference in a linear dynamical system, which is a Bayesian model similar to a hidden

    Markov model but where the state space of the latent variables is continuous and where all

    latent and observed variables have a Gaussian distribution (often a multivariate Gaussian

    distribution).


    3.1 Advantages of Kalman Filter

Below are some advantages of the Kalman filter, compared with another famous filter known as the Wiener filter. The information below is obtained from [20].

1. The Kalman filter algorithm is implementable on a digital computer, which replaced the analog circuitry used for estimation and control when the Kalman filter was first introduced. This implementation may be slower than Wiener's analog filters; however, it is capable of much greater accuracy.

2. The Kalman filter does not require the deterministic dynamics or the random processes to be stationary. Many applications of importance involve nonstationary stochastic processes.

3. The Kalman filter is compatible with the state-space formulation of optimal controllers for dynamic systems. It proves useful for the dual problems of estimation and control for these systems.

    4. The Kalman filter requires less additional mathematical preparation to learn for the

    modern control engineering student, compared to the Wiener filter.

5. The Kalman filter provides the necessary information for mathematically sound, statistically based decision methods for detecting and rejecting anomalous measurements.


    3.2 The Filtering Problem

    This section formulates the general filtering problem and explains the conditions

    under which the general filter simplifies to a Kalman filter (KF).

    Figure 3.1: Typical application of the Kalman Filter

Figure 3.1, reproduced from [21], illustrates the application context in which the Kalman

    Filter is used. A physical system, (e.g., a mobile robot, a chemical process, a satellite) is

    driven by a set of external inputs or controls and its outputs are evaluated by measuring

devices or sensors, such that knowledge of the system's behavior is given solely by the

    inputs and the observed outputs. The observations convey the errors and uncertainties in

    the process, namely the sensor noise and the system errors. [21]

    Based on the available information (control inputs and observations) it is required

to obtain an estimate of the system's state that optimizes a given criterion. This is the role

    played by a filter. In particular situations, explained in the following sections, this filter is a

    Kalman Filter.[21]


    3.3 Overview of the calculation

    The Kalman filter uses a system's dynamics model (i.e., physical laws of motion),

    known control inputs to that system, and measurements (such as from sensors) to form an

    estimate of the system's varying quantities (its state) that is better than the estimate

    obtained by using any one measurement alone. As such, it is a common sensor fusion

    algorithm.[15]

    All measurements and calculations based on models are estimates to some degree.

    Noisy sensor data, approximations in the equations that describe how a system changes,

    and external factors that are not accounted for introduce some uncertainty about the

    inferred values for a system's state. The Kalman filter averages a prediction of a system's

    state with a new measurement using a weighted average. The purpose of the weights is that

    values with better (i.e., smaller) estimated uncertainty are "trusted" more. [16]

    The weights are calculated from the covariance, a measure of the estimated

    uncertainty of the prediction of the system's state. The result of the weighted average is a

    new state estimate that lies in between the predicted and measured state, and has a better

    estimated uncertainty than either alone. This process is repeated every time step, with the

    new estimate and its covariance informing the prediction used in the following iteration.

    This means that the Kalman filter works recursively and requires only the last "best guess"

    - not the entire history - of a system's state to calculate a new state.[16]

    When performing the actual calculations for the filter (as discussed below), the

    state estimate and covariances are coded into matrices to handle the multiple dimensions

    involved in a single set of calculations. This allows for representation of linear

    relationships between different state variables (such as position, velocity, and acceleration)

    in any of the transition models or covariances. [16]
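As a toy sketch of this covariance-weighted averaging for a single scalar state (an illustration, not code from the underlying references; the function name is made up):

```python
def fuse(pred, var_pred, meas, var_meas):
    """Covariance-weighted average of a scalar prediction and measurement."""
    K = var_pred / (var_pred + var_meas)   # weight given to the measurement
    est = pred + K * (meas - pred)         # weighted average of the two values
    var = (1 - K) * var_pred               # fused uncertainty, smaller than either
    return est, var

# Example: fuse(1.0, 4.0, 2.0, 1.0) returns (1.8, 0.8): the noisier
# prediction is pulled strongly toward the more trusted measurement.
```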

    3.4 Kalman filter in computer vision

    Data fusion using a Kalman filter can assist computers to track objects in videos

    with low latency (not to be confused with a low number of latent variables). The tracking


    of objects is a dynamic problem, using data from sensor and camera images that always

    suffer from noise. This can sometimes be reduced by using higher quality cameras and

    sensors but can never be eliminated, so it is often desirable to use a noise reduction method.

    The iterative predictor-corrector nature of the Kalman filter can be helpful, because

    at each time instance only one constraint on the state variable need be considered. This

    process is repeated, considering a different constraint at every time instance. All the

    measured data are accumulated over time and help in predicting the state. Video can also

    be pre-processed, perhaps using a segmentation technique, to reduce the computation and

    hence latency.[20]

    3.5 Underlying Dynamic System Model

    Kalman filters are based on linear dynamic systems discretized in the time domain.

    They are modelled on a Markov chain built on linear operators perturbed by Gaussian

noise. The state of the system is represented as a vector of real numbers. At each discrete

    time increment, a linear operator is applied to the state to generate the new state, with some

    noise mixed in, and optionally some information from the controls on the system if they are

    known. Then, another linear operator mixed with more noise generates the observed

    outputs from the true ("hidden") state. The Kalman filter may be regarded as analogous to

    the hidden Markov model, with the key difference that the hidden state variables take

    values in a continuous space (as opposed to a discrete state space as in the hidden Markov

    model). Additionally, the hidden Markov model can represent an arbitrary distribution for

    the next value of the state variables, in contrast to the Gaussian noise model that is used for

    the Kalman filter. There is a strong duality between the equations of the Kalman Filter and

    those of the hidden Markov model. [20]

    In order to use the Kalman filter to estimate the internal state of a process given

    only a sequence of noisy observations, one must model the process in accordance with the

framework of the Kalman filter. This means specifying the following matrices: F_k, the state-transition model; H_k, the observation model; Q_k, the covariance of the process noise;


R_k, the covariance of the observation noise; and sometimes B_k, the control-input model, for each time-step k, as described below.

Fig 3.2 The Kalman filter model assumes the true state at time k is evolved from the state at (k-1)

    3.6 The Process to be estimated

The Kalman filter addresses the general problem of trying to estimate the state x ∈ R^n of a discrete-time controlled process that is governed by the linear stochastic difference equation [22]

    x_k = A x_{k-1} + B u_{k-1} + w_{k-1},  (3.1)

with a measurement z ∈ R^m that is

    z_k = H x_k + v_k.  (3.2)


The random variables w_k and v_k represent the process and measurement noise (respectively). They are assumed to be independent (of each other), white, and with normal probability distributions

    p(w) ~ N(0, Q),  (3.3)
    p(v) ~ N(0, R).  (3.4)

In practice, the process noise covariance Q and measurement noise covariance R matrices might change with each time step or measurement; however, here we assume they are constant.

The n×n matrix A in the difference equation (3.1) relates the state at the previous time step k-1 to the state at the current step k, in the absence of either a driving function or process noise. Note that in practice A might change with each time step, but here we assume it is constant. The n×l matrix B relates the optional control input u ∈ R^l to the state x. The m×n matrix H in the measurement equation (3.2) relates the state to the measurement z_k. In practice H might change with each time step or measurement, but here we assume it is constant. [22]
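As a concrete, hypothetical instance of these matrices, a one-dimensional constant-velocity model with state x = [position, velocity]^T sampled every dt seconds could be set up as follows; the numerical noise levels are illustrative assumptions only:

```python
import numpy as np

dt = 0.1                         # sample period in seconds (assumed)
A = np.array([[1.0, dt],         # position advances by velocity * dt
              [0.0, 1.0]])       # velocity carries over unchanged
B = np.array([[0.5 * dt**2],     # optional acceleration control input u
              [dt]])
H = np.array([[1.0, 0.0]])       # only position is measured
Q = 0.01 * np.eye(2)             # process noise covariance (assumed)
R = np.array([[0.25]])           # measurement noise covariance (assumed)
```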

    3.7 The Computational Origins of the Filter

We define x̂_k^- ∈ R^n (note the "super minus") to be our a priori state estimate at step k given knowledge of the process prior to step k, and x̂_k ∈ R^n to be our a posteriori state estimate at step k given measurement z_k. We can then define a priori and a posteriori estimate errors as

    e_k^- ≡ x_k - x̂_k^-,  and
    e_k ≡ x_k - x̂_k.


The a priori estimate error covariance is then

    P_k^- = E[e_k^- (e_k^-)^T],  (3.5)

and the a posteriori estimate error covariance is

    P_k = E[e_k e_k^T].  (3.6)

In deriving the equations for the Kalman filter, we begin with the goal of finding an equation that computes an a posteriori state estimate x̂_k as a linear combination of an a priori estimate x̂_k^- and a weighted difference between an actual measurement z_k and a measurement prediction H x̂_k^-, as shown below in (3.7). Some justification for (3.7) is given in "The Probabilistic Origins of the Filter" found below.

    x̂_k = x̂_k^- + K (z_k - H x̂_k^-)  (3.7)

The difference (z_k - H x̂_k^-) in (3.7) is called the measurement innovation, or the residual. The residual reflects the discrepancy between the predicted measurement H x̂_k^- and the actual measurement z_k. A residual of zero means that the two are in complete agreement. [22]

The n×m matrix K in (3.7) is chosen to be the gain or blending factor that minimizes the a posteriori error covariance (3.6). This minimization can be accomplished by first substituting (3.7) into the above definition for e_k, substituting that into (3.6), performing the indicated expectations, taking the derivative of the trace of the result with respect to K, setting that result equal to zero, and then solving for K:

    K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}.  (3.8)

Looking at (3.8) we see that as the measurement error covariance R approaches zero, the gain K weights the residual more heavily. Specifically,

    lim_{R→0} K_k = H^{-1}.

On the other hand, as the a priori estimate error covariance P_k^- approaches zero, the gain K weights the residual less heavily:

    lim_{P_k^-→0} K_k = 0.

Another way of thinking about the weighting by K is that as the measurement error covariance R approaches zero, the actual measurement z_k is trusted more and more, while the predicted measurement H x̂_k^- is trusted less and less. On the other hand, as the a priori estimate error covariance P_k^- approaches zero, the actual measurement z_k is trusted less and less, while the predicted measurement H x̂_k^- is trusted more and more.

3.8 The Probabilistic Origins of the Filter

The justification for (3.7) is rooted in the probability of the a priori estimate x̂_k^- conditioned on all prior measurements z_k (Bayes' rule). For now let it suffice to point out that the Kalman filter maintains the first two moments of the state distribution: [22]

    E[x_k] = x̂_k,
    E[(x_k - x̂_k)(x_k - x̂_k)^T] = P_k.

The a posteriori state estimate (3.7) reflects the mean (the first moment) of the state distribution; it is normally distributed if the conditions of (3.3) and (3.4) are met. The a posteriori estimate error covariance (3.6) reflects the variance of the state distribution (the second non-central moment). In other words,

    p(x_k | z_k) ~ N(E[x_k], E[(x_k - x̂_k)(x_k - x̂_k)^T]) = N(x̂_k, P_k).

    3.9 The Discrete Kalman Filter Algorithm

    We will begin this section with a broad overview, covering the high-level

operation of one form of the discrete Kalman filter. After

    presenting this high-level view, we will narrow the focus to the specific equations and their

    use in this version of the filter.[20]

    The Kalman filter estimates a process by using a form of feedback control: the filter

    estimates the process state at some time and then obtains feedback in the form of (noisy)

    measurements. As such, the equations for the Kalman filter fall into two groups: time

    update equations and measurement update equations. The time update equations are

    responsible for projecting forward (in time) the current state and error covariance estimates

to obtain the a priori estimates for the next time step. The measurement update equations are responsible for the feedback, i.e. for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate. [20]

The time update equations can also be thought of as predictor equations, while the measurement update equations can be thought of as corrector equations. Indeed the final

    estimation algorithm resembles that of a predictor-corrector algorithm for solving

    numerical problems as shown below in Figure 3.3.


    Figure 3.3. The ongoing discrete Kalman filter cycle.

The time update projects the current state estimate ahead in time. The measurement update adjusts the projected estimate by an actual measurement at that time.

    The specific equations for the time and measurement updates are presented below in Table

    3.1 and Table 3.2.

    x̂_k^- = A x̂_{k-1} + B u_{k-1}  (3.9)
    P_k^- = A P_{k-1} A^T + Q  (3.10)

Table 3.1: Discrete Kalman filter time update equations

Again notice how the time update equations in Table 3.1 project the state and covariance estimates forward from time step k-1 to step k. A and B are from (3.1), while Q is from (3.3). Initial conditions for the filter are discussed in the earlier references. [20]

    K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}  (3.11)
    x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-)  (3.12)
    P_k = (I - K_k H) P_k^-  (3.13)

Table 3.2: Discrete Kalman filter measurement update equations

The first task during the measurement update is to compute the Kalman gain, K_k. Notice that the equation given here as (3.11) is the same as (3.8). The next step is to actually measure the process to obtain z_k, and then to generate an a posteriori state estimate by incorporating the measurement as in (3.12). Again (3.12) is simply (3.7) repeated here

    for completeness. The final step is to obtain an a posteriori error covariance estimate via

    (3.13).[20]


    After each time and measurement update pair, the process is repeated with the previous a

posteriori estimates used to project or predict the new a priori estimates. This recursive nature is one of the very appealing features of the Kalman filter: it makes practical implementations much more feasible than an implementation of a Wiener filter, which is designed to operate on all of the data directly for each estimate. The Kalman filter instead

    recursively conditions the current estimate on all of the past measurements. Figure 3.4

    below offers a complete picture of the operation of the filter, combining the high-level

    diagram of Figure 3.3 with the equations from Table 3.1 and Table 3.2. [20]
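A minimal sketch of one predict/correct cycle in Python/NumPy, assuming the constant A, B, H, Q, R of Section 3.6 (the function and variable names are illustrative):

```python
import numpy as np

def kf_step(x, P, u, z, A, B, H, Q, R):
    """One discrete Kalman filter cycle: time update, then measurement update."""
    # Time update (predictor), eqs. (3.9)-(3.10)
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    # Measurement update (corrector), eqs. (3.11)-(3.13)
    S = H @ P_prior @ H.T + R                     # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)          # Kalman gain, eq. (3.11)
    x_post = x_prior + K @ (z - H @ x_prior)      # blend prediction and z, eq. (3.12)
    P_post = (np.eye(len(x)) - K @ H) @ P_prior   # shrink uncertainty, eq. (3.13)
    return x_post, P_post
```

Only the latest (x, P) pair is carried from one call to the next, which is exactly the recursive property described above.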

    3.10 Filter Parameters and Tuning

    In the actual implementation of the filter, the measurement noise covariance R is

    usually measured prior to operation of the filter. Measuring the measurement error

    covariance R is generally practical (possible) because we need to be able to measure the

    process anyway (while operating the filter) so we should generally be able to take some

    off-line sample measurements in order to determine the variance of the measurement noise.

    [8]

    The determination of the process noise covariance Q is generally more difficult as

    we typically do not have the ability to directly observe the process we are estimating.

    Sometimes a relatively simple (poor) process model can produce acceptable results if one

    injects enough uncertainty into the process via the selection of Q. Certainly in this case

    one would hope that the process measurements are reliable.

    In either case, whether or not we have a rational basis for choosing the parameters,

oftentimes superior filter performance (statistically speaking) can be obtained by tuning the filter parameters Q and R. The tuning is usually performed off-line, frequently with the

    help of another (distinct) Kalman filter in a process generally referred to as system

    identification.[7]

In closing we note that under conditions where Q and R are in fact constant, both the estimation error covariance P_k and the Kalman gain K_k will stabilize quickly and then remain constant (see the filter update equations in Figure 3.4). If this is the case, these


parameters can be pre-computed by either running the filter off-line, or determining the steady-state value of P_k.

Figure 3.4 A complete picture of the operation of the Kalman filter, combining the high-level diagram of Figure 3.3 with the equations from Table 3.1 and Table 3.2.

    It is frequently the case however that the measurement error (in particular) does not

    remain constant. For example, when sighting beacons in our optoelectronic tracker ceiling

    panels, the noise in measurements of nearby beacons will be smaller than that in far-away

beacons. Also, the process noise Q is sometimes changed dynamically during filter operation, becoming Q_k, in order to adjust to different dynamics. For example, in the case of tracking the head of a user of a 3D virtual environment we might reduce the magnitude of Q_k if the user seems to be moving slowly, and increase the magnitude if the dynamics start changing rapidly. In such cases Q_k might be chosen to account for both uncertainty about the user's intentions and uncertainty in the model. [7]
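A sketch of the off-line pre-computation mentioned above (one illustrative way to do it, not prescribed by the text): iterate the covariance recursion until P stops changing and keep the corresponding gain.

```python
import numpy as np

def steady_state_gain(A, H, Q, R, tol=1e-9, max_iter=10000):
    """Iterate eqs. (3.10), (3.11), (3.13) until the covariance converges."""
    n = A.shape[0]
    P = np.eye(n)                                   # arbitrary starting covariance
    for _ in range(max_iter):
        P_prior = A @ P @ A.T + Q                   # time update
        K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
        P_new = (np.eye(n) - K @ H) @ P_prior       # measurement update
        if np.max(np.abs(P_new - P)) < tol:
            break                                   # P_k has reached steady state
        P = P_new
    return K, P_new
```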

3.11 Conclusion

    Filtering refers to estimating the state vector at the current time, based upon all past

    measurements. Prediction refers to estimating the state at a future time. Smoothing means

    estimating the value of the state at some prior time, based on all measurements taken up to


the current time. An estimate, x̂, is the computed value of a quantity, x, based upon a set of measurements, y.

    Chapter-4

    The Extended Kalman Filter (EKF)

    4.0 Introduction

In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about the current mean and covariance. In the extended

    Kalman filter (EKF), the state transition and observation models need not be linear

    functions of the state but may instead be non-linear functions.[17]

    4.1 The Process to be estimated

The Kalman filter addresses the general problem of trying to estimate the state x ∈ R^n of a discrete-time controlled process that is governed by a linear stochastic difference equation. But what happens if the process to be estimated and (or) the measurement relationship to the process is non-linear? Some of the most interesting and successful applications of Kalman filtering have been such situations. A Kalman filter that linearizes about the current mean and covariance is referred to as an extended Kalman filter or EKF. [17]

    In something akin to a Taylor series, we can linearize the estimation around the

    current estimate using the partial derivatives of the process and measurement functions to

    compute estimates even in the face of non-linear relationships. To do so, we must begin by

modifying some of the material presented in the Kalman filter chapter. Let us assume that our process again has a state vector x ∈ R^n, but that the process is now governed by the non-linear stochastic difference equation [17]

    x_k = f(x_{k-1}, u_{k-1}, w_{k-1}),  (4.1)

with a measurement z ∈ R^m that is


    z_k = h(x_k, v_k),  (4.2)

where the random variables w_k and v_k again represent the process and measurement noise as in (3.3) and (3.4). In this case the non-linear function f in the difference equation (4.1) relates the state at the previous time step k-1 to the state at the current time step k. It includes as parameters any driving function u_{k-1} and the zero-mean process noise w_{k-1}. The non-linear function h in the measurement equation (4.2) relates the state x_k to the measurement z_k. [17]

In practice of course one does not know the individual values of the noise w_k and v_k at each time step. However, one can approximate the state and measurement vector without them as

    x̃_k = f(x̂_{k-1}, u_{k-1}, 0),  (4.3)

and

    z̃_k = h(x̃_k, 0),  (4.4)

where x̂_{k-1} is some a posteriori estimate of the state (from the previous time step k-1).

    It is important to note that a fundamental flaw of the EKF is that the distributions

    (or densities in the continuous case) of the various random variables are no longer normal

after undergoing their respective nonlinear transformations. The EKF is simply an ad hoc state estimator that only approximates the optimality of Bayes' rule by linearization. [17]

    4.2 The Computational Origins of the Filter

To estimate a process with non-linear difference and measurement relationships, we begin by writing new governing equations that linearize an estimate about (4.3) and (4.4):

    x_k ≈ x̃_k + A (x_{k-1} - x̂_{k-1}) + W w_{k-1},  (4.5)
    z_k ≈ z̃_k + H (x_k - x̃_k) + V v_k,  (4.6)

where

x_k and z_k are the actual state and measurement vectors,
x̃_k and z̃_k are the approximate state and measurement vectors from (4.3) and (4.4),
x̂_k is an a posteriori estimate of the state at step k, and
the random variables w_k and v_k represent the process and measurement noise as in (3.3) and (3.4). [24]

A is the Jacobian matrix of partial derivatives of f with respect to x, that is

    A[i,j] = ∂f[i]/∂x[j] (x̂_{k-1}, u_{k-1}, 0),

W is the Jacobian matrix of partial derivatives of f with respect to w,

    W[i,j] = ∂f[i]/∂w[j] (x̂_{k-1}, u_{k-1}, 0),

H is the Jacobian matrix of partial derivatives of h with respect to x,

    H[i,j] = ∂h[i]/∂x[j] (x̃_k, 0),

V is the Jacobian matrix of partial derivatives of h with respect to v,

    V[i,j] = ∂h[i]/∂v[j] (x̃_k, 0).

Note that for simplicity in the notation we do not use the time step subscript k with the Jacobians A, W, H, and V, even though they are in fact different at each time step. [24]
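Since deriving these Jacobians by hand is error-prone, one practical shortcut (an illustrative sketch, not part of the original text) is a finite-difference approximation:

```python
import numpy as np

def numerical_jacobian(func, x0, eps=1e-6):
    """Approximate the Jacobian of func at x0 by forward differences."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.asarray(func(x0))
    J = np.zeros((f0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (np.asarray(func(x0 + dx)) - f0) / eps   # column j ≈ ∂f/∂x_j
    return J
```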

Now we define a new notation for the prediction error,

    ẽ_{x_k} ≡ x_k - x̃_k,  (4.7)

and the measurement residual,

    ẽ_{z_k} ≡ z_k - z̃_k.  (4.8)

Remember that in practice one does not have access to x_k in (4.7); it is the actual state vector, i.e. the quantity one is trying to estimate. On the other hand, one does have access to z_k in (4.8); it is the actual measurement that one is using to estimate x_k. Using (4.7) and (4.8) we can write governing equations for an error process as [24]

    ẽ_{x_k} ≈ A (x_{k-1} - x̂_{k-1}) + ε_k,  (4.9)
    ẽ_{z_k} ≈ H ẽ_{x_k} + η_k,  (4.10)

where ε_k and η_k represent new independent random variables having zero mean and covariance matrices W Q W^T and V R V^T, with Q and R as in (3.3) and (3.4) respectively. [24]

Notice that the equations (4.9) and (4.10) are linear, and that they closely resemble the difference and measurement equations (3.1) and (3.2) from the discrete Kalman filter. This motivates us to use the actual measurement residual ẽ_{z_k} in (4.8) and a second (hypothetical) Kalman filter to estimate the prediction error ẽ_{x_k} given by (4.9). This estimate, call it ê_k, could then be used along with (4.7) to obtain the a posteriori state estimates for the original non-linear process as

    x̂_k = x̃_k + ê_k.  (4.11)

The random variables of (4.9) and (4.10) have approximately the following probability distributions: [24]

    p(ẽ_{x_k}) ~ N(0, E[ẽ_{x_k} ẽ_{x_k}^T]),
    p(ε_k) ~ N(0, W Q_k W^T),
    p(η_k) ~ N(0, V R_k V^T).

Given these approximations and letting the predicted value of ê_k be zero, the Kalman filter equation used to estimate ê_k is

    ê_k = K_k ẽ_{z_k}.  (4.12)

By substituting (4.12) back into (4.11) and making use of (4.8) we see that we do not actually need the second (hypothetical) Kalman filter:

    x̂_k = x̃_k + K_k ẽ_{z_k} = x̃_k + K_k (z_k - z̃_k).  (4.13)

Equation (4.13) can now be used for the measurement update in the extended Kalman filter, with x̃_k and z̃_k coming from (4.3) and (4.4), and the Kalman gain K_k coming from (3.11) with the appropriate substitution for the measurement error covariance. [24]

The complete set of EKF equations is shown below in Table 4.1 and Table 4.2. Note that we have substituted x̂_k^- for x̃_k to remain consistent with the earlier "super minus" a priori notation, and that we now attach the subscript k to the Jacobians A, W, H, and V, to reinforce the notion that they are different at (and therefore must be recomputed at) each time step.

    x̂_k^- = f(x̂_{k-1}, u_{k-1}, 0)  (4.14)
    P_k^- = A_k P_{k-1} A_k^T + W_k Q_{k-1} W_k^T  (4.15)

Table 4.1: Extended Kalman filter time update equations

    As with the basic discrete Kalman filter, the time update equations in Table 4.1

    project the state and covariance estimates from the previous time step k-1 to the current


time step k. Again f in (4.14) comes from (4.3), A_k and W_k are the process Jacobians at step k, and Q_k is the process noise covariance (3.3) at step k. [24]

    K_k = P_k^- H_k^T (H_k P_k^- H_k^T + V_k R_k V_k^T)^{-1}  (4.16)
    x̂_k = x̂_k^- + K_k (z_k - h(x̂_k^-, 0))  (4.17)
    P_k = (I - K_k H_k) P_k^-  (4.18)

Table 4.2: Extended Kalman filter measurement update equations

As with the basic discrete Kalman filter, the measurement update equations in Table 4.2 correct the state and covariance estimates with the measurement z_k. Again h in (4.17) comes from (4.4), H_k and V_k are the measurement Jacobians at step k, and R_k is the measurement noise covariance (3.4) at step k. (Note we now subscript R, allowing it to change with each measurement.)

    The basic operation of the EKF is the same as the linear discrete Kalman filter as

    shown in Figure 3.3. Figure 4.1 below offers a complete picture of the operation of the

EKF, combining the high-level diagram of Figure 3.3 with the equations from Table 4.1

    and Table 4.2.


Figure 4.1 A complete picture of the operation of the extended Kalman filter, combining the high-level diagram of Figure 3.3 with the equations from Table 4.1 and Table 4.2.

An important feature of the EKF is that the Jacobian H_k in the equation for the Kalman gain K_k serves to correctly propagate or "magnify" only the relevant component of the measurement information. For example, if there is not a one-to-one mapping between the measurement and the state via h, the Jacobian H_k affects the Kalman gain so that it only magnifies the portion of the residual z_k - h(x̂_k^-, 0) that does affect the state. Of course if over all measurements there is not a one-to-one mapping between the measurement z_k and the state via h, then as you might expect the filter will quickly diverge. In this case the process is unobservable. [24]
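Putting Tables 4.1 and 4.2 together, a minimal EKF sketch in Python/NumPy might look as follows. It assumes additive noise (so W_k and V_k are identity) and reuses the hypothetical numerical_jacobian helper sketched earlier; none of these names come from the text.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, Q, R):
    """One EKF cycle for models x_k = f(x, u) + w and z_k = h(x) + v."""
    # Linearize the process model about the current estimate (Section 4.2)
    A = numerical_jacobian(lambda s: f(s, u), x)
    # Time update, eqs. (4.14)-(4.15) with W_k = I
    x_prior = f(x, u)
    P_prior = A @ P @ A.T + Q
    # Linearize the measurement model about the predicted state
    H = numerical_jacobian(h, x_prior)
    # Measurement update, eqs. (4.16)-(4.18) with V_k = I
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    x_post = x_prior + K @ (z - h(x_prior))
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post
```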

    4.3 Conclusion:

The nonlinear extended Kalman filter (EKF) uses the standard EKF formulation to achieve nonlinear state estimation. Internally, it uses the complex-step Jacobian to linearize the nonlinear dynamic system. The linearized matrices are then used in the Kalman filter calculation.


    Chapter-5

    Schmidt-Kalman Filter (SKF)

    5.0 Introduction

    Navigation errors are a performance-limiting factor for target tracking as ranging

measurements increase in accuracy. In the past, a sensor platform has typically been assumed to be at a perfectly known location when formulating the measurement equations for tracking filters. Such an assumption is reasonable when the tracking goal is to determine the relative

    location of a target for such applications as a homing missile. However, for other

    applications such as sensor data fusion or weapons cueing, the sensor/platform location

accuracy is critical for high-speed maneuvering target tracking, as the target's relative

    location has to be transferred into a global reference frame and the sensor navigation errors

get magnified into the target position solution. Tracking mission success therefore requires the use of such techniques as the Schmidt-Kalman Filter (SKF) [9] to account for navigation

    errors.

    Without accounting for navigation errors, regular Kalman filters tend to be

    optimistic in that the covariance matrix reaches a steady-state value (if not diverged) far

below the actual mean squared errors. Heuristic techniques such as the α-β filter are used to prevent divergence and boost filter gain by adjusting the Q and R covariance parameters.

    It has been seen that residual biases in radar measurements could be mitigated

    using the Schmidt-Kalman filter. In the setting of tracking ballistic missiles in midcourse,

    ground-based electronically scanned array radars (ESA) were considered. The range and

off-boresight angles of a target measured in the radar frame are transformed into relative

    position measurements in a local east-north-up (ENU) frame. It was assumed that the


    measurement process is subject to frame transformation errors (in terms of Euler angles)

    and unknown but constant measurement errors (in terms of biases and scale factors) in

    addition to random noise. It was further assumed that a bias estimator was available to

    remove the angular and measurement biases. Further, zero-mean Gaussian residual biases

    could be mitigated so long as the residual bias covariance matrix was provided by the bias

    estimator.[6,7]

    The Schmidt-Kalman filter (SKF) is formulated for target tracking with navigation

errors. The Schmidt-Kalman filter does not estimate the navigation errors explicitly (it could if external reference data were available) but rather takes the navigation error covariance, provided by an on-board navigation unit, into account (i.e., "considers" it) in the tracking filter formulation. By exploiting the structure of the navigation errors, the SKF is not only more consistent but also produces smaller mean squared errors than regular Kalman filters. [6]
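The distinctive part of the SKF is its measurement update, which computes a gain only for the target states while "considering" the covariance of the unestimated navigation-error states. Below is a minimal sketch of the standard consider update; the partitioning and variable names are illustrative, not taken verbatim from this text. The joint covariance is partitioned into the target block P_xx, the navigation block P_pp, and their cross-covariance P_xp, and the measurement model is z = H_x x + H_p p + v.

```python
import numpy as np

def skf_update(x, P_xx, P_xp, P_pp, z, H_x, H_p, R):
    """Schmidt-Kalman (consider) measurement update sketch."""
    # Innovation covariance includes the considered navigation-error terms
    S = (H_x @ P_xx @ H_x.T + H_x @ P_xp @ H_p.T
         + H_p @ P_xp.T @ H_x.T + H_p @ P_pp @ H_p.T + R)
    K = (P_xx @ H_x.T + P_xp @ H_p.T) @ np.linalg.inv(S)   # gain for x only
    x_post = x + K @ (z - H_x @ x)              # navigation errors are zero-mean
    I_KH = np.eye(len(x)) - K @ H_x
    P_xx_post = I_KH @ P_xx - K @ H_p @ P_xp.T  # stays honest about nav errors
    P_xp_post = I_KH @ P_xp - K @ H_p @ P_pp    # cross-covariance update
    return x_post, P_xx_post, P_xp_post         # P_pp itself is left unchanged
```

Because the extra terms in S inflate the innovation covariance, the gain is smaller than a regular Kalman filter's, which is what keeps the SKF's covariance consistent with the actual mean squared errors.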

The conventional tracking filter is formulated in Section 1. The design of the SKF using the consider covariance in the presence of navigation errors is presented in Section 2. The recursive calculation of performance metrics in terms of root mean squared (RMS) errors and root sample covariance trace (RSCT) is described in Section 3. The Monte Carlo simulation is presented in Section 4, and the results of the SKF for target tracking in the presence of navigation errors in Section 5. Chapter 5 concludes the thesis with an outline of future work. [7]

The steps of the Schmidt-Kalman filter algorithm are:

    1. Target Tracking Formulation

    2. Consider Covariance in Target Filter

    3. Recursive Calculation of Matrices

    4. Simulation of SKF

    5. Results of SKF for Target Tracking in the presence of Navigation Errors


    5.1 Target Tracking Formulation

The block diagram of the tracking system is shown in Figure 5.1. It comprises three

    modules: adaptive background, measurement and Kalman filtering. The adaptive

    background module produces the foreground pixels of each video frame and passes this

    evidence to the measurement module. The measurement module associates the foreground

    pixels to targets, initializes new ones if necessary and manipulates existing targets by

    merging or splitting them based on an analysis of the foreground evidence. The existing or

    new target information is passed to the Kalman filtering module to update the state of the

tracker, i.e. the position, velocity and size of the targets. The output of the tracker is the state information, which is also fed back to the adaptive background module to guide the spatio-temporal adaptation of the algorithm. [7]


    Figure 5.1: Block diagram of the complete feedback tracker architecture.

    5.1.1 Adaptive Background Module

    The targets of the proposed system (vehicles and humans) are mostly moving. The

    changes in the video frames due to the movement are used to identify and segment the

    foreground (pixels of the moving targets) from the background (pixels without movement).

If a background image were available, this segmentation would simply be the difference of the

    current frame from the background image. The foreground pixels thus obtained are readily

    grouped into target regions. A static image of the empty scene viewed by the camera can be

    used for background. Unfortunately this is not practical and adaptive background

    approaches are adopted primarily for two reasons: First, such an empty scene image might

    not be available due to system setup. Secondly and most importantly, background

    (outdoors and indoors) also changes: Natural light conditions change slowly as time goes

by; the wind causes swaying movements of flexible background objects (e.g. foliage); fluorescent light flickers at the power supply frequency. All these changes need to be learned

    into an adaptive background model. [7]

5.1.2 Stauffer's Algorithm

Stauffer's adaptive background algorithm is capable of learning changes with such different speeds by learning into the background any pixel whose color in the current frame resembles the colors that this pixel often has. So no changes, periodic

    changes or changes that occurred in the distant past lead to pixels that are considered

    background. To do so, a number of weighted Gaussians model the appearance of different

    colors in each pixel. The weights indicate the amount of time the modeled color is active in

    that particular pixel. The mean is a three dimensional vector indicating the color modeled

for that pixel, while the covariance matrix indicates the extent around the mean that a color

    of that pixel is to be considered as similar to the one modeled. Colors in any given pixel

    similar to that modeled by any of the Gaussians of that pixel lead to an update of that

    Gaussian, an increase of its weight and a decrease of all the weights of the other Gaussians

    of that pixel. Colors not matching any of the Gaussians of that pixel lead to the introduction


of a new Gaussian with minimum weight. Hence the possible updates of the weight of the i-th Gaussian of the pixel located at (x,y) at time t are

    w_{i,t}(x,y) = (1 - a) w_{i,t-1}(x,y) + a M_{i,t}(x,y),

where a is the learning rate and M_{i,t}(x,y) is 1 for the Gaussian matching the current color and 0 otherwise.

The original algorithm and most of its modifications lead to a binary decision for each pixel: foreground or background. In this system the PPM is used instead. This is a map of the same dimensions as the frames, with a value at each location (x,y) equal to the weight of the Gaussian matching the current color of the pixel at (x,y). Small PPM values indicate foreground objects, while large values indicate background. The foreground/background threshold is left unspecified though. [7]
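A per-pixel sketch of this weight bookkeeping in Python (an illustration under the description above; `weights` holds one pixel's Gaussian weights and `matched` is the index of the Gaussian matching the current color, or None):

```python
import numpy as np

def update_weights(weights, matched, a=0.01, w_init=0.05):
    """Stauffer-style weight update for the Gaussians of a single pixel."""
    if matched is None:
        # No Gaussian matches: introduce a new one with minimum weight
        weights[np.argmin(weights)] = w_init
    else:
        # The matched Gaussian gains weight; all the others decay
        weights = (1 - a) * weights
        weights[matched] += a
    return weights / weights.sum()             # keep the weights normalized
```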

The drawback of the existing Stauffer's algorithm is that stationary foreground objects tend to fade into the background at a rate set by the learning rate. Small rates fade foreground objects slowly, but are also slow in adapting to background changes, like the motion of a chair.

    Large rates favor background adaptation but tend to fade a target into the background when

    it stops. This fading progressively destroys the region of the tracked object, deforms its

perceived shape and finally leads to losing track of the object altogether. When the target

    resumes moving, foreground pixels will be marked only at the locations not previously

    occupied by the stationary target. When the target has fairly uniform coloration, this can

    lead to track loss even in the presence of movement. [7]

A feedback tracking architecture is proposed in order to address these problems.

    The state of the Kalman tracker contains the ellipse that describes every target. The

    learning rate is modified in elliptical regions around these targets. Thus instead of a

constant value, a spatio-temporal adaptation of the learning rate is used.


This delays fading of the targets and, depending on the selection of the small learning rate and the motion of the targets, can be sufficient. In some cases though where

    targets stay put for very long periods, even the small learning rate will gradually fade them

    into the background. If this starts happening (the target becomes smaller while its mobility

    is small), the normal weight update mechanism is bypassed. The weight of the current

    Gaussian is decreased and that of all the rest is increased with a rate that is inversely

    proportional to the mobility of the target, as this is estimated from the state of the Kalman

    tracker for this particular target. This fading prevention mechanism is not always in effect;

    it is only activated when targets are small and rather immobile, since the tampering of the

    weights is very forceful and affects the whole elliptical disk around the target, regardless if

    the pixel is actually foreground or not. [7]

The second major proposed modification of Stauffer's algorithm addresses extreme flickering situations often encountered in night vision cameras. In such scenes the PPM needs to be bounded by a very low threshold in order not to consider flickering pixels as foreground. The threshold, on the other hand, tends to discard actual foreground pixels as well. The proposed solution is to adapt the threshold T in a spatio-temporal fashion, similar to the adaptation of the learning rate described above. This way flickering pixels are avoided far from the targets, while the targets themselves are not affected. The penalty of this strategy is the delayed detection of new, very small targets. [7]

These proposed feedback mechanisms on the learning rate lead to robust foreground regions regardless of the flickering in the images or the lack of target mobility, while they do not affect the adaptation of the background around the targets. When such


flickering and mobility conditions occur, the resulting PPM is more suitable for target region forming than that of the original algorithm. The forming of target regions is the goal of the measurement module, detailed next. [7]

    5.1.3 Measurement Module

The measurement module finds foreground segments, assigns them to known targets or initializes new ones, and checks targets for possible merging or splitting. The information for new targets or targets to be updated is passed to the Kalman module. The measurement process begins by processing the adaptively thresholded PPM to obtain foreground segments. This involves shadow detection, dilation, filling of any holes in the segments, and erosion. The obtained segments are checked for possible merging based on their Mahalanobis distance and are further considered only if they are large enough. These segments are associated with targets based on their Mahalanobis distance from the targets. Non-associated segments generate new target requests to the Kalman module.

The targets are subsequently checked for possible merging based on how similar they are. Since we are using a Kalman tracker, the targets are described by two-dimensional Gaussians. If two such Gaussians are too similar, the targets are merged. Finally, very large targets are checked for splitting. [7]

This is necessary as, for example, two monitored people can be walking together and then separate, splitting their tracks. Splitting is performed using the k-means algorithm on the pixels of the foreground segment comprising the target. Two parts are requested from the k-means algorithm. These parts are subsequently checked to determine if they are distinct. For this, the minimum Mahalanobis distance of the one with respect to the other is used. If the two parts are found distinct, then they form two targets. One part of the foreground evidence is used to update the existing target, while the other part is used to request a new target from the Kalman tracker. A sketch of this splitting test is given below.
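The following Python sketch illustrates the splitting test under stated assumptions: the distance threshold and the use of SciPy's k-means are illustrative choices, not the exact implementation of [7].

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def mahalanobis(point, cluster):
        # Mahalanobis distance of a point from a cluster of pixels
        mean = cluster.mean(axis=0)
        cov = np.cov(cluster.T) + 1e-6 * np.eye(2)  # regularized covariance
        diff = point - mean
        return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

    def try_split(pixels, dist_threshold=3.0):
        # pixels: (N, 2) array of foreground pixel coordinates of one target
        _, labels = kmeans2(pixels.astype(float), 2, minit='++')
        part_a, part_b = pixels[labels == 0], pixels[labels == 1]
        if len(part_a) < 3 or len(part_b) < 3:
            return None
        # Minimum Mahalanobis distance of the one part w.r.t. the other
        d = min(mahalanobis(part_a.mean(axis=0), part_b),
                mahalanobis(part_b.mean(axis=0), part_a))
        return (part_a, part_b) if d > dist_threshold else None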

All the found targets are then processed to identify the number of bodies in them and to detect the heads. This is done by processing the height of the target as a function of its column number. The height is measured from the bottom of the box bounding the target. The processing identifies peaks that correspond to heads and valleys that correspond to


points at which the target can be split into more than one body. The process is illustrated in Figure 5.2 and works well with upright people. Finally, heads are found by examining the smoothed derivative of the width of the detected peaks. As the width of the body increases rapidly at the shoulders, this point can be easily detected. If the lighting conditions are normal, the face position can be refined inside the head region using skin color histograms. [7]

    Fig. 5.2: Processing a target to extract bodies and heads. (a) Scene of a target with the

    two associated bodies and heads marked. (b) PPM of the target and 2D Gaussian

    approximation. (c) Target height profile (blue line) used to identify the peaks and

    valleys that correspond to the head tops (red circles) and body splitting points

    (vertical red lines) respectively. Head estimates are also marked with black lines.

    5.1.4 Kalman Tracking Module

The Kalman module maintains the states of the targets. It creates new targets should it receive a request from the measurement module, and performs the measurement update based on the foreground segments associated with the targets. The states of the targets are fed back to the adaptive background module to adapt the learning rate and the threshold for the PPM binarization. Every target is approximated by an elliptical disc, i.e. it can be described by a single Gaussian. This facilitates the use of a Kalman tracker. The target states are seven-


dimensional; they comprise the mean of the Gaussian describing the target (horizontal and vertical components), the velocity of the mean (horizontal and vertical components) and the three independent terms of the covariance matrix.

The prediction step uses a loose dynamic model of constant velocity for the update of the mean position and velocity. As for the update of the three covariance terms, their exact model is non-linear and hence cannot be used with the Kalman tracker; instead of using linearization and an extended Kalman tracker, the covariance terms are modeled as constant. The variations of the velocity and the covariance terms are permitted by the state update variance term (see the sketch below). This loose dynamic model permits arbitrary movement of the targets. It is very different from the more elaborate models used for tracking aircraft. [7]
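A minimal sketch of this seven-dimensional state and its constant-velocity prediction follows; the process noise values are illustrative assumptions, not values from [7].

    import numpy as np

    # State: [mx, my, vx, vy, cxx, cxy, cyy]
    #   (mx, my): mean of the target Gaussian (ellipse center)
    #   (vx, vy): velocity of the mean
    #   (cxx, cxy, cyy): the three independent covariance (shape) terms
    T = 1.0  # frame period

    F = np.eye(7)
    F[0, 2] = T  # mx advances by vx * T
    F[1, 3] = T  # my advances by vy * T
    # Rows 4-6 stay identity: the shape terms are modeled as constant.

    # Variations of velocity and shape are absorbed by the state update
    # (process) noise; the values below are illustrative.
    Q = np.diag([1.0, 1.0, 4.0, 4.0, 2.0, 2.0, 2.0])

    def predict(x, P):
        # Constant-velocity time update for one target
        return F @ x, F @ P @ F.T + Q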

Aircraft can perform a limited set of maneuvers that can be learned and expected by the tracking system. Further, flying aircraft can be modeled as rigid bodies, thus strict and multiple dynamic models are appropriate and have been used extensively in Interacting Multiple Model Kalman trackers. Unlike aircraft, street vehicles and especially humans have more degrees of freedom in their movement, which includes, apart from speed and direction changes, the ability to stop or maneuver around obstacles arbitrarily, rendering the learning of a strict dynamic model impractical. A strict dynamic model in this case can mislead a tracker to a particular track even in the presence of contradicting evidence. [7]

5.1.5 Target Tracking Formulation Derivation

Consider a linear time-invariant discrete-time dynamic system together with its measurement as

$$x_{k+1} = A x_k + B u_k + w_k \qquad (5.1a)$$

$$y_k = H x_k + v_k \qquad (5.1b)$$

where the subscript k is the time index, x is the state vector, u is a known input, y is the measurement, and w and v are the state and measurement noise processes, respectively. It is implied that all vectors and matrices have compatible dimensions, which are omitted for simplicity. [23]


The goal is to find an estimate, denoted by $\hat{x}_k$, of $x_k$ given the measurements up to time k, denoted by $y^k = \{y_0, \ldots, y_k\}$. Under the assumptions that the state and measurement noises are uncorrelated zero-mean white Gaussian with $w \sim N(0, Q)$ and $v \sim N(0, R)$, where Q and R are positive semi-definite covariance matrices, the Kalman filter provides an optimal estimator in the form of $\hat{x}_k = E\{x_k \mid y^k\}$ [1, 2, 3, 4].

Starting from an initial estimate $\hat{x}_0 = E\{x_0\}$ and its estimation error covariance $P_0 = E\{(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T\}$, where the superscript T stands for matrix transpose, the Kalman filter equations specify the propagation of $\hat{x}_k$ and $P_k$ over time as well as the update of $\hat{x}_k$ and $P_k$ by measurement $y_k$ as

$$\bar{x}_{k+1} = A \hat{x}_k + B u_k \qquad (5.2a)$$

$$\bar{P}_{k+1} = A P_k A^T + Q \qquad (5.2b)$$

$$\hat{x}_{k+1} = \bar{x}_{k+1} + K_{k+1}\left(y_{k+1} - H \bar{x}_{k+1}\right) \qquad (5.2c)$$

$$P_{k+1} = \left(I - K_{k+1} H\right)\bar{P}_{k+1} \qquad (5.2d)$$

$$K_{k+1} = \bar{P}_{k+1} H^T S_{k+1}^{-1} \qquad (5.2e)$$

$$S_{k+1} = H \bar{P}_{k+1} H^T + R \qquad (5.2f)$$

where $\bar{x}_{k+1}$ and $\bar{P}_{k+1}$ are the predicted state and prediction error covariance, respectively, and $S_{k+1}$ is the measurement prediction error covariance. [24]
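As a sketch, (5.2) translates directly into NumPy:

    import numpy as np

    def kf_predict(x, P, A, B, u, Q):
        # Time update, Eqs. (5.2a)-(5.2b)
        x_pred = A @ x + B @ u                 # (5.2a)
        P_pred = A @ P @ A.T + Q               # (5.2b)
        return x_pred, P_pred

    def kf_update(x_pred, P_pred, y, H, R):
        # Measurement update, Eqs. (5.2c)-(5.2f)
        S = H @ P_pred @ H.T + R               # (5.2f)
        K = P_pred @ H.T @ np.linalg.inv(S)    # (5.2e)
        x = x_pred + K @ (y - H @ x_pred)      # (5.2c)
        P = (np.eye(len(x)) - K @ H) @ P_pred  # (5.2d)
        return x, P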


In practice, however, the target measurements are nonlinear in nature, such as range and bearing. Consider a general nonlinear measurement taken from the sensor at $x^s$ to a target at $x^t$ as

$$z = f(x^t, x^s) + v \qquad (5.3)$$

where $f(\cdot, \cdot)$ is the nonlinear measurement equation and v is the measurement error, assumed to be zero-mean Gaussian $N(0, \sigma^2)$.

Assume that the sensor location $x^s$ is known perfectly and that an initial estimate of the unknown target location, denoted by $\bar{x}_0$, is available. The nonlinear measurement can be linearized around this initial estimate as

$$z \approx f(\bar{x}_0, x^s) + h^T\left(x^t - \bar{x}_0\right) + v \qquad (5.4a)$$

$$h^T = \left[\frac{\partial z}{\partial x^t_1}\ \ \frac{\partial z}{\partial x^t_2}\ \ \cdots\ \ \frac{\partial z}{\partial x^t_n}\right]_{x^t = \bar{x}_0} \qquad (5.4b)$$

In terms of the measurement prediction error, the equation can be further written in terms of $\tilde{x} = x^t - \bar{x}_0$ as

$$\tilde{z} = z - f(\bar{x}_0, x^s) = h^T \tilde{x} + v \qquad (5.5)$$

For M measurements, the linearized measurements can be put into a vector format as

$$\tilde{z} = H \tilde{x} + v \qquad (5.6a)$$

$$\tilde{z}^T = \left[\tilde{z}_1\ \ \tilde{z}_2\ \ \cdots\ \ \tilde{z}_M\right] \qquad (5.6b)$$

$$v^T = \left[v_1\ \ v_2\ \ \cdots\ \ v_M\right] \qquad (5.6c)$$


$$H = \begin{bmatrix} h_1^T \\ h_2^T \\ \vdots \\ h_M^T \end{bmatrix} \qquad (5.6d)$$

The use of the linearized measurements (5.6a) is consistent with the linear formulation (5.1b), and their application to (5.2) leads to the extended Kalman filter (EKF) [1, 2, 3, 4].

In the two-dimensional setting with ranging measurements, let x and y be the coordinate axes. Then $x^t = [x_t,\ y_t]^T$ and $x^s = [x_s,\ y_s]^T$. The nonlinear equation (5.3) for range is (ignoring the noise terms) [24]

$$r = f(x^t, x^s) = \sqrt{(x_t - x_s)^2 + (y_t - y_s)^2} \qquad (5.7a)$$

The corresponding linearized equation around an initial estimate $\bar{x}_0$ of $x^t$ is

$$r \approx f(\bar{x}_0, x^s) + h^T\left(x^t - \bar{x}_0\right) \qquad (5.7b)$$

with

$$h = \begin{bmatrix} \dfrac{x_0 - x_s}{r} \\[6pt] \dfrac{y_0 - y_s}{r} \end{bmatrix} = \begin{bmatrix} \cos(\theta) \\ \sin(\theta) \end{bmatrix} \qquad (5.7c)$$

where $\theta$ is the angle relative to the x-axis. Eq. (5.7c) affords an important interpretation: the observation matrix is made of the line of sight (LOS) vector from the sensor to the target. [12]
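As a short sketch, the predicted range (5.7a) and the LOS observation vector (5.7c) in NumPy:

    import numpy as np

    def range_and_los(x_t, x_s):
        # x_t: estimated target position [x, y]; x_s: sensor position [x, y]
        diff = np.asarray(x_t, dtype=float) - np.asarray(x_s, dtype=float)
        r = np.linalg.norm(diff)  # predicted range, Eq. (5.7a)
        h = diff / r              # LOS vector [cos(theta), sin(theta)], Eq. (5.7c)
        return r, h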


    5.2 Consider Covariance in Tracking Filter

In the above formulation, the sensor location $x^s$ is assumed to be known perfectly, which is approximately true when the ranging errors are much larger than the sensor positioning errors. However, when the ranging errors are comparable to the sensor positioning errors, it is required to account for the navigation errors explicitly to ensure tracking performance. [24]

Assume that an onboard navigation system provides the sensor's location at $\bar{x}^s$, which is subject to an unknown instantaneous error $\tilde{x}^s$ and a location error covariance $P^s$. Further assume that the sensor provides range measurements given by

$$r = \left\|x^t - x^s\right\|_2 + n_r \qquad (5.8)$$

where $x^t$ is the target location, $\|\cdot\|_2$ stands for the 2-norm, and $n_r$ is the measurement noise, which is assumed to be of zero mean with a variance of $\sigma^2$.

Given the measured sensor location $\bar{x}^s$ and predicted target location $\bar{x}^t$, the predicted range is obtained as

$$\bar{r} = \left\|\bar{x}^t - \bar{x}^s\right\|_2 \qquad (5.9)$$

The range equation is expanded around the measured sensor location and predicted target location as

$$r = \bar{r} + \left(\frac{\partial r}{\partial x^t}\right)^T\left(x^t - \bar{x}^t\right) + \left(\frac{\partial r}{\partial x^s}\right)^T\left(x^s - \bar{x}^s\right) + n_r \qquad (5.10a)$$

$$= \bar{r} + \left(\frac{\partial r}{\partial x^t}\right)^T \tilde{x}^t + \left(\frac{\partial r}{\partial x^s}\right)^T \tilde{x}^s + n_r \qquad (5.10b)$$


    5.2.1 Without Sensor Position Errors

In the conventional EKF formulation, as in (5.4a) and (5.7a), the sensor position errors are ignored. The resulting linearized measurement equation is

$$\tilde{r} = r - \bar{r} = \left(h^t\right)^T \tilde{x}^t + n_r \qquad (5.11a)$$

$$h^t = \frac{\partial r}{\partial x^t} \qquad (5.11b)$$

    5.2.2 With Sensor Position Errors

To account for the sensor positioning errors, consider three cases: zero mean, constant bias, and non-stationary. [23]

Case 1: Zero Mean

(a) Zero-mean normalization

When zero-mean normalization is applied, all the data in each profile are slid vertically so that their average is zero:

Fig 5.3 Zero-Mean Normalization


The blue profile was obtained from the red one by a vertical translation so that the average of the profile is zero. [23]

(b) Unit-norm normalization

When unit-norm normalization is applied, all the data in each profile are multiplied by a factor so that the length of the associated vector equals 1. The length of the vector is the square root of the sum of squares of all values.

    Fig 5.4 Normalization toward 1

A well-behaved navigation system provides a navigation solution whose error is uncorrelated (with itself over time and with the target state estimation error), with zero mean $E\{\tilde{x}^s\} = 0$ and covariance $P^s$. The concept of consider covariance is the central idea of the Schmidt-Kalman filter. The consider covariance for the measurement equation (5.10) is given by

$$S = \left(h^t\right)^T P^t h^t + \left(h^s\right)^T P^s h^s + R \qquad (5.12)$$

where $P^t$ is the target state error covariance, $P^s$ is the sensor location error covariance, and R is the measurement error covariance. [23]
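Numerically, (5.12) simply inflates the innovation covariance by the sensor-error term; a minimal sketch (the function name is illustrative):

    import numpy as np

    def consider_innovation_var(h_t, P_t, h_s, P_s, R):
        # h_t, h_s: observation vectors w.r.t. target and sensor states
        # P_t, P_s: target state and sensor location error covariances
        # R: measurement noise variance (scalar for a single range measurement)
        return h_t @ P_t @ h_t + h_s @ P_s @ h_s + R  # Eq. (5.12)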


Case 2: Constant Bias

However, if the measurement exhibits a constant bias, the bias can be estimated directly if an external reference measurement is available. Alternatively, it can be estimated by a separate bias estimator. Any residual bias is accounted for by the consider covariance, as suggested in [6, 7].

To derive the Schmidt-Kalman filter, the bias is taken as an augmented state in the filter formulation for the state and covariance propagation. However, the update equations for the augmented state, being nuisance parameters, are discarded. [6]

More specifically, the augmented equations are given by

$$\begin{bmatrix} x_{k+1} \\ \beta_{k+1} \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} x_k \\ \beta_k \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix} u_k + \begin{bmatrix} w_k \\ 0 \end{bmatrix} \qquad (5.13a)$$

$$y_k = \begin{bmatrix} H^t & H^s \end{bmatrix}\begin{bmatrix} x_k \\ \beta_k \end{bmatrix} + v_k \qquad (5.13b)$$

The bias $\beta_k$ is not estimated, but its effect on $x_k$ is accounted for through a guess about the bias in terms of its covariance matrix $\mathrm{Cov}\{\beta_k\} = P^s_k$ and its correlation with the target state estimate, defined as

$$P^{ts}_k = E\left\{(x_k - \hat{x}_k)\,\beta_k^T\right\}, \qquad P^{ts}_0 = 0 \qquad (5.14)$$

The augmented state covariance matrix is given by

$$P_k = \begin{bmatrix} P^t_k & P^{ts}_k \\ \left(P^{ts}_k\right)^T & P^s_k \end{bmatrix} \qquad (5.15)$$

    The Schmidt-Kalman filter is then obtained by applying the Kalman filter to the

    augmented equations (5.13) with the augmented covariance (5.15). The time propagation


equations of the target state and its error covariance remain the same as (5.2a) and (5.2b). The time propagation of the bias covariance is the identity, $P^s_{k+1} = P^s_k$, and the time propagation of the cross-correlation is given by

$$P^{ts}_{k+1} = A\, P^{ts}_k \qquad (5.16)$$

The measurement update equation of the target state still has the same form as (5.2c), but the gain matrix (5.2e) is different, as given below:

$$K_{k+1} = \left(\bar{P}^t_{k+1}\left(H^t\right)^T + \bar{P}^{ts}_{k+1}\left(H^s\right)^T\right) S_{k+1}^{-1} \qquad (5.17a)$$

where the predicted measurement error covariance $S_{k+1}$ is given by

$$S_{k+1} = H^t \bar{P}^t_{k+1}\left(H^t\right)^T + H^t \bar{P}^{ts}_{k+1}\left(H^s\right)^T + H^s \left(\bar{P}^{ts}_{k+1}\right)^T\left(H^t\right)^T + H^s P^s_{k+1}\left(H^s\right)^T + R_{k+1} \qquad (5.17b)$$

However, the measurement update equation of the target state error covariance is given by

$$P^t_{k+1} = \left(I - K_{k+1} H^t\right)\bar{P}^t_{k+1} - K_{k+1} H^s \left(\bar{P}^{ts}_{k+1}\right)^T \qquad (5.18a)$$

The measurement update of the cross-correlation is given by

$$P^{ts}_{k+1} = \bar{P}^{ts}_{k+1} - K_{k+1}\left(H^t \bar{P}^{ts}_{k+1} + H^s P^s_{k+1}\right) \qquad (5.18b)$$

From (5.17b), it can be seen that if there is no correlation, that is, $P^{ts}_k = 0$, it reduces to (5.12), which is a form of covariance inflation. This may occur when the sensor position is subject to zero-mean uncorrelated errors. [24]

In general, however, even starting with a null initialization, that is, $P^{ts}_0 = 0$ as in (5.14), the cross-correlation for a non-zero bias is not zero, as seen from (5.18b).
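Putting (5.16)-(5.18) together, one Schmidt-Kalman measurement update can be sketched in Python as follows (a minimal sketch; the time propagation is the standard (5.2a)-(5.2b) together with (5.16)):

    import numpy as np

    def skf_update(x_pred, Pt_pred, Pts_pred, Ps, y, y_pred, Ht, Hs, R):
        # x_pred: predicted target state; y, y_pred: actual/predicted measurements
        # Pt_pred: predicted target error covariance
        # Pts_pred: predicted target/bias cross-correlation, from Eq. (5.16)
        # Ps: sensor (bias) error covariance, not updated by the filter
        # Ht, Hs: observation matrices w.r.t. target state and sensor bias
        S = (Ht @ Pt_pred @ Ht.T + Ht @ Pts_pred @ Hs.T
             + Hs @ Pts_pred.T @ Ht.T + Hs @ Ps @ Hs.T + R)          # (5.17b)
        K = (Pt_pred @ Ht.T + Pts_pred @ Hs.T) @ np.linalg.inv(S)    # (5.17a)
        x = x_pred + K @ (y - y_pred)                                # as (5.2c)
        Pt = (np.eye(len(x)) - K @ Ht) @ Pt_pred - K @ Hs @ Pts_pred.T  # (5.18a)
        Pts = Pts_pred - K @ (Ht @ Pts_pred + Hs @ Ps)               # (5.18b)
        return x, Pt, Pts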


    5.3 Recursive Calculation of Matrices

    Monte Carlo simulation will be used to evaluate the performance of a Schmidt-

    Kalman filter vs. a conventional Kalman filter in the presence of navigation errors. The

    performance metrics will be root mean squared (RMS) errors when the state truth is known

    and the root sample covariance trace (RSCT) when the state truth is not known. [24]

Assume that a vector quantity $x(k) \sim N(\mu, \Sigma)$ at time k is estimated N times, as in a Monte Carlo simulation. The estimate is denoted by $\hat{x}_i(k)$ for $i = 1, \ldots, N$. There are two ways to characterize the estimation errors. One way is to consider the sample mean and sample covariance, defined as

$$m_N(k) = \frac{1}{N}\sum_{i=1}^{N}\hat{x}_i(k) \qquad (5.19a)$$

$$S_N(k) = \frac{1}{N}\sum_{i=1}^{N}\left[\hat{x}_i(k) - m_N(k)\right]\left[\hat{x}_i(k) - m_N(k)\right]^T \qquad (5.19b)$$

(Strictly speaking, an unbiased sample covariance would use the factor $1/(N-1)$ instead of $1/N$; however, the approximation error is small for large N.) The sample covariance $S_N(k)$ does not tell how accurate the estimates are (i.e., how far away they are from the true value) but rather provides an indication of the spread of the estimates around the sample mean $m_N(k)$ (i.e., their precision). Consider the bias in the estimate [24]

$$b_N(k) = m_N(k) - x(k) \qquad (5.20)$$

The ellipses drawn with $\{m_N(k), S_N(k)\}$ are often called confidence ellipses. The statistic

$$T^2 = n\,(m - u)^T P^{-1}(m - u) \qquad (5.21)$$

follows the so-called Hotelling's distribution with n-1 degrees of freedom [8], where n is the dimension of x. It can be easily shown that the sample mean and covariance can be calculated recursively inside a Monte Carlo simulation loop as


$$m_n(k) = \frac{1}{n}\sum_{i=1}^{n}\hat{x}_i(k) = \frac{n-1}{n}\,m_{n-1}(k) + \frac{1}{n}\,\hat{x}_n(k) \qquad (5.22)$$

$$S_n(k) = \frac{1}{n}\sum_{i=1}^{n}\left[\hat{x}_i(k) - m_n(k)\right]\left[\hat{x}_i(k) - m_n(k)\right]^T \qquad (5.23a)$$

$$S_n(k) = \frac{n-1}{n}\,S_{n-1}(k) + \frac{1}{n}\left[\hat{x}_n(k) - m_n(k)\right]\left[\hat{x}_n(k) - m_n(k)\right]^T + \frac{n-1}{n}\left[m_n(k) - m_{n-1}(k)\right]\left[m_n(k) - m_{n-1}(k)\right]^T \qquad (5.23b)$$
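These recursions allow the statistics to be accumulated inside the Monte Carlo loop without storing all n estimates; a sketch:

    import numpy as np

    def update_stats(m_prev, S_prev, x_hat, n):
        # Recursive sample mean (5.22) and sample covariance (5.23b);
        # n is the 1-based index of the current Monte Carlo run
        m = (n - 1) / n * m_prev + x_hat / n          # (5.22)
        dm = np.outer(m - m_prev, m - m_prev)
        dx = np.outer(x_hat - m, x_hat - m)
        S = (n - 1) / n * (S_prev + dm) + dx / n      # (5.23b)
        return m, S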

In contrast to (5.19), instead of using the sample mean, another widely used covariance is calculated with respect to the true vector x(k), if available, as

$$P_N(k) = \frac{1}{N}\sum_{i=1}^{N}\left[\hat{x}_i(k) - x(k)\right]\left[\hat{x}_i(k) - x(k)\right]^T \qquad (5.24)$$

The square root of the trace of P is the so-called root mean squared (RMS) error, defined as

$$\mathrm{RMS}_N(k) = \sqrt{\mathrm{trace}\{P_N(k)\}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[\hat{x}_i(k) - x(k)\right]^T\left[\hat{x}_i(k) - x(k)\right]} \qquad (5.25)$$

The estimation error covariance and the RMS errors can likewise be calculated recursively as

$$P_n(k) = \frac{n-1}{n}\,P_{n-1}(k) + \frac{1}{n}\left[\hat{x}_n(k) - x(k)\right]\left[\hat{x}_n(k) - x(k)\right]^T \qquad (5.26)$$

$$\mathrm{RMS}_n^2(k) = \frac{n-1}{n}\,\mathrm{RMS}_{n-1}^2(k) + \frac{1}{n}\left[\hat{x}_n(k) - x(k)\right]^T\left[\hat{x}_n(k) - x(k)\right] \qquad (5.27)$$


It is easy to show that the estimation error covariance $P_N(k)$ is related to the sample covariance $S_N(k)$ and the bias vector $b_N(k)$ by

$$P_N(k) = S_N(k) + b_N(k)\,b_N(k)^T \qquad (5.28)$$

Without knowing the truth x(k), the bias vector cannot be pre-calculated using (5.20). As a result, the sample mean $m_N(k)$ and sample covariance $S_N(k)$ cannot indicate the estimation accuracy. With the truth x(k), the two methods can be linked to each other via (5.28), thus being equivalent. [24]

Instead of calculating the sample covariance itself, its trace may be evaluated, leading to the root sample covariance trace (RSCT), a scalar quantity similar to the RMS errors, as

$$\mathrm{RSCT}_n(k) = \sqrt{\mathrm{trace}\{S_n(k)\}} \qquad (5.29a)$$

$$\mathrm{RSCT}_n^2(k) = \frac{n-1}{n}\,\mathrm{RSCT}_{n-1}^2(k) + \frac{1}{n}\left[\hat{x}_n(k) - m_n(k)\right]^T\left[\hat{x}_n(k) - m_n(k)\right] + \frac{n-1}{n}\left[m_n(k) - m_{n-1}(k)\right]^T\left[m_n(k) - m_{n-1}(k)\right] \qquad (5.29b)$$

5.4 Simulation Results and Analysis

Three simulation examples are presented in this section, wherein a more general case with both range and range rate measurements is considered:

$$r = \sqrt{x^2 + y^2} \qquad (5.30a)$$


$$\dot{r} = \frac{x\dot{x} + y\dot{y}}{r} \qquad (5.30b)$$

Taking partial derivatives of the range and range rate measurements (5.30a) and (5.30b) with respect to $x^s$ and $x^t$, and retaining only the linear components, yields the observation matrices $H^s$ and $H^t$, which assume the same form. Without the subscript, it is given by [24]

$$H = \begin{bmatrix} \dfrac{\partial r}{\partial x} & \dfrac{\partial r}{\partial y} & \dfrac{\partial r}{\partial \dot{x}} & \dfrac{\partial r}{\partial \dot{y}} \\[8pt] \dfrac{\partial \dot{r}}{\partial x} & \dfrac{\partial \dot{r}}{\partial y} & \dfrac{\partial \dot{r}}{\partial \dot{x}} & \dfrac{\partial \dot{r}}{\partial \dot{y}} \end{bmatrix} \qquad (5.31)$$

where

$$\frac{\partial r}{\partial x} = \frac{x}{r} \qquad (5.32a)$$

$$\frac{\partial r}{\partial y} = \frac{y}{r} \qquad (5.32b)$$

$$\frac{\partial r}{\partial \dot{x}} = 0 \qquad (5.32c)$$

$$\frac{\partial r}{\partial \dot{y}} = 0 \qquad (5.32d)$$

$$\frac{\partial \dot{r}}{\partial x} = \frac{\dot{x}}{r} - \frac{x\left(x\dot{x} + y\dot{y}\right)}{r^3} \qquad (5.32e)$$

$$\frac{\partial \dot{r}}{\partial y} = \frac{\dot{y}}{r} - \frac{y\left(x\dot{x} + y\dot{y}\right)}{r^3} \qquad (5.32f)$$

$$\frac{\partial \dot{r}}{\partial \dot{x}} = \frac{x}{r} \qquad (5.32g)$$

$$\frac{\partial \dot{r}}{\partial \dot{y}} = \frac{y}{r} \qquad (5.32h)$$
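In code, the observation matrix (5.31)-(5.32) for a relative state ordered as $[x,\ \dot{x},\ y,\ \dot{y}]$ (matching the simulation state of (6.1) below; the ordering is an assumption) can be sketched as:

    import numpy as np

    def obs_matrix(rel):
        # rel: relative state [x, xdot, y, ydot] (target minus sensor)
        x, xd, y, yd = rel
        r = np.hypot(x, y)            # range, Eq. (5.30a)
        rdot = (x * xd + y * yd) / r  # range rate, Eq. (5.30b)
        return np.array([
            [x / r, 0.0, y / r, 0.0],              # (5.32a)-(5.32d)
            [xd / r - x * rdot / r**2, x / r,
             yd / r - y * rdot / r**2, y / r],      # (5.32e)-(5.32h)
        ])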


6. Results and Discussions

6.1 Simulation with Comparable Navigation Errors

To analyze the effects of sensor location errors (navigation errors) on targeting accuracy, consider a sensor-target crossing scenario as shown in Figure 6.1. The sensor moves along the x-axis from (-10000 m, 0) at a constant speed of 100 m/s, while the target moves along the y-axis from (0, 20000 m) at a constant speed of -12 m/s.

Fig. 6.1 Sensor-Target Crossing Scenario

The initial states are

$$x^s = \left[x\ \ \dot{x}\ \ y\ \ \dot{y}\right]^T = \left[-10000\ \mathrm{m}\ \ 100\ \mathrm{m/s}\ \ 0\ \mathrm{m}\ \ 0\ \mathrm{m/s}\right]^T \qquad (6.1a)$$

$$x^t = \left[x\ \ \dot{x}\ \ y\ \ \dot{y}\right]^T = \left[0\ \mathrm{m}\ \ 0\ \mathrm{m/s}\ \ 20000\ \mathrm{m}\ \ -12\ \mathrm{m/s}\right]^T \qquad (6.1b)$$

The sensor provides range and range rate measurements with measurement error variances of $10^2\ \mathrm{m}^2$ and $0.1^2\ \mathrm{(m/s)^2}$, respectively.


The sensor navigation errors are assumed to be comparable to the measurement errors, with variances of $10^2\ \mathrm{m}^2$ and $0.1^2\ \mathrm{(m/s)^2}$, respectively. This represents a rather extreme case to emphasize the effects. A regular extended Kalman filter (EKF) is implemented based on the following discrete-time second-order kinematic model (nearly constant velocity):

$$x_{k+1} = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} \frac{T^2}{2} & 0 \\ T & 0 \\ 0 & \frac{T^2}{2} \\ 0 & T \end{bmatrix} w_k \qquad (6.2)$$

where the process noise $w \sim N(0, Q)$ is a zero-mean Gaussian noise. The covariance matrix $Q = \mathrm{diag}\left(\left[\sigma_x^2\ \ \sigma_y^2\right]\right)$ uses the particular values $\sigma_x = \sigma_y = 0.1\ \mathrm{m/s^2}$ for both the x and y axes in the first simulation. The initial state estimate is drawn from the initial estimation error covariance of $100^2\ \mathrm{m}^2$ and $5^2\ \mathrm{(m/s)^2}$ for the position and velocity in the x and y axes, respectively:

$$P_0 = \mathrm{diag}\left(100^2\ \ 5^2\ \ 100^2\ \ 5^2\right) \qquad (6.3)$$

Under the same initial conditions, a Schmidt-EKF is implemented to calculate the predicted measurement covariance. 100 Monte Carlo runs are used to calculate the root mean squared (RMS) errors.
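The setup of (6.1)-(6.3) can be sketched as follows (the sampling interval T is an assumption, as it is not stated explicitly in the text):

    import numpy as np

    T = 1.0  # assumed sampling interval

    # Nearly-constant-velocity model (6.2), state ordered [x, xdot, y, ydot]
    F = np.array([[1, T, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, T],
                  [0, 0, 0, 1]], dtype=float)
    G = np.array([[T**2 / 2, 0],
                  [T,        0],
                  [0, T**2 / 2],
                  [0,        T]])
    sigma = 0.1                                   # m/s^2 per axis, first run
    Q = G @ np.diag([sigma**2, sigma**2]) @ G.T   # full process noise covariance

    # Initial truths (6.1) and initial estimation error covariance (6.3)
    x_sensor = np.array([-10000.0, 100.0,     0.0,   0.0])
    x_target = np.array([     0.0,   0.0, 20000.0, -12.0])
    P0 = np.diag([100.0**2, 5.0**2, 100.0**2, 5.0**2])

    # Range and range-rate measurement noise variances
    R = np.diag([10.0**2, 0.1**2])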


    Fig. 6.2 Sample Sensor Trajectory

Figure 6.2 shows the details of the sample sensor trajectory, with the blue-colored truth (dashed line) and the green-colored navigation solution (solid line).

Fig. 6.3 Sample Target Trajectory (Run 1)

Figure 6.3 shows the details of the sample target trajectory, where the red-colored curve is the truth (dashed line), the cyan-colored curve is the regular Kalman filter estimate (with cross line style), and the purple-colored curve is the navigation solution (solid line).


    Fig. 6.4 Sample Target Trajectory (Run 2)

Figure 6.4 is another sample target trajectory, where the red-colored curve is the truth (dashed line), the cyan-colored curve is the regular Kalman filter estimate (with cross line style), and the purple-colored curve is the navigation solution (solid line).

    Fig. 6.5 Comparison of Position Errors


Figure 6.5 shows the RMS errors and standard deviations for the position estimate produced by the regular EKF and the consider EKF (the term consider EKF, or consider filter, is used interchangeably with Schmidt-EKF), respectively. The RMS errors of the regular EKF (the green curve with plus marks '+') are larger than those of the consider EKF (the solid cyan curve), whereas the standard deviations of the regular EKF (the dashed blue curve) are smaller than those of the consider EKF (the red curve with cross marks '×'). This indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the regular EKF. Additionally, the consider filter provides a better performance bound than the optimistic regular EKF.

    Fig. 6.6 Comparison of Velocity Errors

Figure 6.6 shows the RMS errors and standard deviations for the velocity estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks '+') are larger than those of the consider EKF (the solid cyan curve). The standard deviations of the regular EKF (the dashed blue curve) and those of the consider EKF (the red curve with cross marks '×') are comparable. Again, this indicates that the actual estimation errors of the consider EKF are smaller (better) than those of the


regular EKF, and that the consider filter provides a better performance bound than the optimistic regular EKF.

The discrepancy between the RMS errors and the standard deviations of the consider EKF, as shown in Figures 6.5 and 6.6, may be explained by the fact that the standard deviations are calculated from the propagated and updated covariances with a rather large process noise covariance Q, whereas both the target and sensor are moving at constant speed.

6.2 Simulation with Different Process Noise

In the next simulation, the process noise covariance Q is reduced to $0.01^2\ \mathrm{(m/s^2)^2}$. The sensor navigation errors are also reduced, to $5^2\ \mathrm{m}^2$ and $0.05^2\ \mathrm{(m/s)^2}$ (twice better). Compared to Figures 6.5 and 6.6, the errors in Figures 6.7 and 6.8 are all smaller due to the use of the smaller Q and $P^s$. More importantly, the standard deviations calculated from the propagated and updated covariance of the consider EKF (CEKF) now match well with the actual RMS errors. The CEKF-to-RMS error match indicates that the consider covariance can better predict the targeting performance for target tracking, sensor fusion, and resource management.

    Fig. 6.7 Position Errors with Small Q


    Fig. 6.8 Velocity Errors with Small Q

By comparing Figures 6.7 and 6.8, one may ask why the RMS position errors of the regular EKF and the consider EKF differ so much in Figure 6.7, whereas their RMS velocity errors agree fairly well in Figure 6.8. Note that in the simulation, navigation errors are introduced only in the sensor position, not in the sensor velocity, and the sensor position errors are translated directly into the target position errors. Typically, the velocity components of an integrated navigation solution are more accurate than their position counterparts, but significant sensor velocity errors could be added in the simulation. Furthermore, the measurements used by both filters include both range and range rate as in (5.30). The range rate measurements, subject only to random noise, help produce relatively good velocity estimates,

as shown in Figure 6.6.

In the past, heuristic approaches have been used to adjust Q and R of a Kalman filter to prevent divergence due to unestimated error terms (which are totally ignored) and to match the filter's covariance matrix with the data-driven RMS errors. Such blind adjustments do not explore the inherent structure of the actual errors. In contrast, even without estimating the error terms, the consider filter explicitly takes into account the distribution and structure of the errors, thus providing better results.


6.3 Evaluation with Unknown Truth

In the following simulation, the sample mean and covariance are calculated by the Monte Carlo simulation without knowing the target state truth. The estimated-mean centered RMS is calculated, which is the root sample covariance trace (RSCT) defined in Section 5.3. Note that the estimated-mean centered RMS has a different meaning than the truth-centered RMS. It does not indicate how close the estimates are to the truth but how dispersed the estimates are. It may reveal how consistent the estimates are, but the estimates may be biased. So two aspects need to be shown: one is the bias and the other is the spread around the bias. If the target state estimate is unbiased, the sample estimates are close to the RMS error values.

    Fig. 6.9 Truth-Centered Position Errors

Figure 6.9 shows the RMS errors and standard deviations for the position estimate produced by the regular and consider EKF, respectively. The RMS errors of the regular EKF (the green curve with plus marks '+') are larger than those of the consider EKF (the


    solid cyan curve) whereas the standard deviations of the regular EKF (the dashed blue

    curve) ar

